Events

Off the Lip 2016 – Bizarre Bazaar

22 October 2016

CogNovo Bizarre Bazaar

On 22 October 2016, between 11am and 3pm, the ground floor of Plymouth University's iconic Roland Levinsky Building will be transformed into a meeting place for ideas and knowledge, for research and craft, and for robots and people (that means you). On this first day of half term we would like to show you some results of our research and that of our colleagues. Since the campus is close to Plymouth's city centre, it could easily be one of your stops that day – offering what is most likely the most valuable showcase of research in the Southwest of England, and possibly the most entertainment as well. If you have participated in one of the workshops during the CogNovo Manufactory, you might want to present a result yourself, for example by showing off your robot programming skills, singing as part of the Fantasy Orchestra, exhibiting what you have drawn, or amazing others with a virtual world you have created. In addition, you will have the chance to interact with any of the following projects.

Events

Most events listed below are interactive installations that run throughout the day. In addition, the following events take place on the main stage:

  • 11:00 Sound Games
  • 11:30 Experiments in Sonified Magic
  • 12:00 Tuning In
  • 13:00 Throwing salt over your shoulder: How can we combat causal illusions?
  • 13:30 Experiments in Sonified Magic
  • 14:00 The Sound of Chocolate
  • 14:30 Sound Games

Finger Music

While playing air guitar requires years of practice and doesn't produce any sound, this installation is the opposite: you can learn how to play in just a few seconds and, with a little practice, delight your audience. While you wear a pair of gloves, every movement of your hands and fingers is detected and translated into sound. You will be amazed how quickly your hands learn the sounds of different patterns and how quickly meaningful melodies emerge. Let your imagination and fingers go wild – in the end it might look like air guitar, but it will certainly sound better.

Movement produces sound, and all sound is movement. Cello players, for example, move their hands across strings and bow. This movement makes the strings vibrate, which in turn moves the air between the string and your ear – and that moving air is what you hear. In this installation we take several shortcuts: there is no need to learn an instrument and practise it for many years. Any kind of hand movement is detected by a pair of gloves. Inside a computer these movements are then translated into sounds and played back through loudspeakers. In the end, what you hear is the movement of air between these speakers and your ear, controlled by your finger movements.
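
As a rough sketch of this movement-to-sound idea (the installation's actual mapping is not described here), hypothetical glove readings between 0 and 1 could be mapped onto the notes of a scale:

    # A minimal sketch, not the installation's code: a hypothetical
    # finger-flex reading between 0.0 and 1.0 selects a note from a
    # C major pentatonic scale.
    PENTATONIC = [261.6, 293.7, 329.6, 392.0, 440.0]  # frequencies in Hz

    def finger_to_frequency(flex):
        """Map a flex reading (0.0 = straight, 1.0 = fully bent) to a note."""
        index = min(int(flex * len(PENTATONIC)), len(PENTATONIC) - 1)
        return PENTATONIC[index]

    # Example: a half-bent finger selects the middle note of the scale.
    print(finger_to_frequency(0.5))  # 329.6 Hz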


Would you trust this robot?

Do you think you can trust a robot? Especially the small, cute one over there at the gambling table who is waving at you? Well, you might want to find out! Our little friend would like to play a game with you. But don't worry, we will provide the fake money and you can decide how much of it you give to NAO – he might return much more, and you will both benefit. On the other hand, spare parts are getting more and more expensive these days, so he might just keep all of it. At some point we should tell him that he can only buy candy with these coins.

Surprisingly, this game was not developed for fun, nor by the robots to earn some money; it is used in research as an implicit measure of trust. When asked directly, many people would find it very difficult to say how much they trust another person, a company, or a robot. On the other hand, they can often intuitively decide how much they would invest during a game with that other person or robot. While the investment game has its roots in game theory, a branch of mathematics developed in the 1940s, it was first applied to speech in human-robot interaction here at Plymouth University as part of the CogNovo project.
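
A minimal sketch of one round of the investment game; the tripling factor and endowment below are the textbook defaults, not necessarily the values used at this table:

    # One round of the investment (trust) game. The experimenter
    # triples whatever the investor sends; the trustee then decides
    # how much to return.
    def play_round(endowment, invested, returned_fraction):
        assert 0 <= invested <= endowment
        received = 3 * invested                    # transfer is tripled
        returned = returned_fraction * received    # trustee's decision
        investor_total = endowment - invested + returned
        trustee_total = received - returned
        return investor_total, trustee_total

    # Example: investing 5 of 10 coins in a trustee who returns half.
    print(play_round(10, 5, 0.5))  # (12.5, 7.5) -> both sides profit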

presented by: Ilaria Torre


Sound Games

Are you a famous composer, currently stuck with writer's block? Even if you are not, this collective improvisation is a great way to compose a musical piece as part of a group. Just grab your smartphone, install the app, and start moving around our space. You will receive mystical instruments, learn how to play them on your smartphone, interact with your fellow musicians – and play a game at the same time. So even if you don't manage to compose the next "Ode to Joy", you might win a prize at the end.

As interesting and engaging as the music might sound, this setup is another tool developed here at Plymouth University to understand how people interact. In particular, it is used to understand the extent to which music is a collaborative activity, and how much individual contributions shape the nature of physical and musical interactions. The installation was developed at the Interdisciplinary Centre for Computer Music Research here at Plymouth University, but is a collaboration itself – with a researcher from CogNovo.

presented by: Marcelo Gimenes, Frank Loesche


Tuning In

Singing in the shower does not involve a lot of coordination, except maybe keeping your balance. Singing in a choir, on the other hand, is a very difficult process. While the singing itself requires a lot of training, doing it in synchrony with all the other singers seems nearly impossible. Nevertheless, humans, with just a little bit of practice, are very good at it. Some members of this choir had never met before their first workshop evening this Thursday, but they will prove to you that coordination can easily be learned. Two of them will even let you have a look inside their brains to see how much their brain rhythms are synchronised.

Electroencephalography, often abbreviated as EEG, will be used to detect the brain rhythms of two singers. EEG is a very sensitive technology that is usually only used inside laboratories. Over the past years several researchers have attempted to use EEG outside these environments as well, controlling for disturbing factors through software instead of through the setting. Together with a researcher from Oxford, who has pioneered some of the mobile EEG work, this installation will show a visualisation of the brain patterns of two participants who are engaged in the same activity – singing.
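
One common way to quantify how synchronised two brain rhythms are is the phase-locking value; whether this installation uses that exact measure is an assumption on our part:

    # Phase-locking value (PLV) between two equal-length signals:
    # 1 = perfectly phase-locked, 0 = no consistent phase relationship.
    import numpy as np
    from scipy.signal import hilbert

    def phase_locking_value(x, y):
        phase_x = np.angle(hilbert(x))
        phase_y = np.angle(hilbert(y))
        return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

    # Two noisy 10 Hz signals (alpha band), sampled at 250 Hz:
    t = np.arange(0, 2, 1 / 250)
    singer_a = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
    singer_b = np.sin(2 * np.pi * 10 * t + 0.3) + 0.5 * np.random.randn(t.size)
    print(phase_locking_value(singer_a, singer_b))  # high for locked rhythms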

In addition, the film "Cortical Songs, Meduallan Visions" (2014) by Dan Paolantonio and Allister Gall will be screened.

presented by: Sue Denham, Shaun Lewin, Jack McKay Fletcher


The Sound of Chocolate

Did you ever notice that your food tastes so much better when you are happy? And do you have a song that makes you happy? Why not combine the two? We know it is cheating a little, since chocolate makes most people happy anyway, but we invited two guys from the land of chocolate – Belgium – to prove this idea to you. They have designed a combination of sounds and chocolate that will make you want the Bizarre Bazaar to last forever. And it will make you happy.

Combining two or more senses into one experience is the topic these researchers are working on. Since a lot of research on sound, touch, and visuals is happening in Plymouth, we held a summer school on this topic a few years back. One of the former participants is now doing his research in Belgium and was obviously inspired by the taste of chocolate in that country. He has previously shown that chips and wine taste different when consumed while listening to a certain sound environment, and he has designed a special chocolate box with Belgian chocolatiers that will be launched in Belgium later this year. Luckily he agreed to give you an avant-première here in Plymouth.


Distorted Dimension

With Virtual Reality sparking a surge in immersive and interactive experiences, it is not surprising that the technology now turns up everywhere. Once written about in books and seen in movies, here at the Bizarre Bazaar we want you to experience what Virtual Reality is really like. You will not only encounter the technology but also experience its ability to distort the boundaries separating reality and fiction; right and wrong; emotion and reason. We like to think of ourselves as rational beings, but when faced with a virtual dilemma, how will you react?

This installation demonstrates the work of several researchers at CogNovo using both Virtual Reality and interactive sculpture to generate life-like moral dilemmas. Imagine yourself faced with an impossible situation, surrounded by sights and sounds. Are you ready to experience a virtual world like no other?


A space to wonder - collective improvisation with sound and movement

Up in the clouds, or rather on the third floor of the Roland Levinsky Building, a special interactive attraction is waiting for you. You will be asked to wear a sensor – and then you can enter an empty space. As you enter the room, sounds will begin to appear as well. Listen to yourself, to your body, and to the sounds created through the sensors – and notice how this resonates with the environment, with others, and with your own thoughts. This project aims at bringing together physical and mental space.


presented by: Klara Łucznik, Abigail Jackson, Eugenia Stamboliev, Ali Northcott


Close and personal with social robots

After their first appearance in fiction in the 1920s, humanoid machines have fuelled the imagination of generations. Robots have helped produce the goods we consume for a few decades already, but they have mostly been locked away in factories. Now they increasingly populate the same space as we do – self-driving cars are essentially robots on wheels, other robots help in hospitals or support children with special requirements. For robots to interact with us humans, we have to learn what we expect from these machines and how to program them accordingly. Plymouth is one of the leading centres for social robotics in the world – and we would like to introduce you to Pepper, Nao, and Johnny, to name just a few of the robots here at the University.

During the Bizarre Bazaar you will have an opportunity to meet social robots and learn how building them gives us insight into how the human mind works. The human mind, for all its complexity and amazing feats, is sometimes an almost automatic machine. This is certainly so when dealing with the social world: our brain automatically latches on to all things social, including robots. Roboticists will be on hand to demonstrate a range of robots and to run experiments with you.

presented by: Tony Belpaeme, Thomas Colin


HAPLÓS/Bisensorial EXHIBIT: A Speculative, Adaptive, and Wearable Technology for Inducing Mental States Using Touch

How might cognitive wellness and therapeutic practices look in the future? Can mental discord be treated autonomously? Given that people are different and will therefore need different interventions, how can autonomous, brain-based therapy technologies be tailored to suit the needs of individuals at any given time? Have a look at one potential future of therapeutic treatment – and at what this future looks like right now.

HAPLÓS/Bisensorial is a speculative design concept and functioning prototype of wearable technology based on research in embodied cognition, somatic learning, and the effect of sound on cognitive processes. It uses binaural sound and tactile vibration on your back to induce mental states such as calm. A genetic algorithm generates patterns of auditory and tactile stimuli, based on readings provided by an EEG headset. The result is intended to be an optimised and personalised soundscape and ‘touchscape’ that adjusts to your needs. HAPLÓS/Bisensorial is being developed at Plymouth University as part of the CogNovo programme. The technology was open to interested users during the two Manufactory days on Thursday and Friday – maybe you have already signed up for the workshop?
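
A minimal genetic-algorithm loop of the kind described above; the fitness function is a stand-in, since the real system scores patterns using live EEG readings, which a sketch cannot reproduce:

    # Evolve vibration patterns towards a hypothetical "calming" score.
    import random

    def fitness(pattern):
        # Stand-in score: patterns with gentler transitions rate higher.
        return -sum(abs(a - b) for a, b in zip(pattern, pattern[1:]))

    def evolve(population, generations=50, mutation_rate=0.1):
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            survivors = population[: len(population) // 2]
            children = [
                [min(1.0, max(0.0, g + random.gauss(0, mutation_rate)))
                 for g in parent]
                for parent in survivors
            ]
            population = survivors + children
        return max(population, key=fitness)

    # Start from 20 random 8-step vibration-intensity patterns (0-1):
    pop = [[random.random() for _ in range(8)] for _ in range(20)]
    print(evolve(pop))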


presented by: Diego S. Maranan, Agi Haines, Jack McKay Fletcher, Sean Clarke, Kim Jansen, Claire delle Luche


Listen to yourself looking

Our eyes scan the world, extracting information from the visual scene, often without our conscious control. In this project we turn those 'saccadic' eye movements into sounds. We invite you to view pictures on a screen and listen to the sounds of your eye movements. The sounds you hear will depend on when and where you look, and on the visual patterns your eyes focus on when they pause.
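
An illustrative gaze-to-sound mapping (the actual sonification used in the project is not specified here): horizontal gaze position chooses the pitch, vertical position the loudness, and longer fixations sustain the note:

    # Map a gaze sample to (frequency in Hz, amplitude 0-1, sustain).
    def gaze_to_sound(x, y, fixation_ms, screen_w=1920, screen_h=1080):
        frequency = 220 + (x / screen_w) * 660   # 220-880 Hz, left to right
        amplitude = 1.0 - (y / screen_h) * 0.8   # louder towards the top
        sustain = fixation_ms > 200              # pauses become held notes
        return frequency, amplitude, sustain

    # A 350 ms fixation near the centre of the screen:
    print(gaze_to_sound(960, 540, fixation_ms=350))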

presented by: Matthew Dunn, Sue Denham


Virtual Reality visualisation of a simulated tadpole spinal cord

In this hi-tech pond-dipping experiment, get closer to nature through data. Watch and interact with our Virtual Reality visualisation of a biologically realistic anatomical and functional model of an African clawed frog tadpole's spinal cord. Our model, created by computational neuroscientists here at Plymouth, is based on the thousands of electrical impulses that are carried through the tadpole's nervous system and trigger the familiar wriggling movements of its tail.

This new visualisation of the anatomy and spiking activity in a tadpole spinal cord model is based on virtual reality and provides a new perspective, bringing the viewer closer to the data and allowing them to interact with specific elements in order to access contextualised information in an intuitive and exciting way. The dataset we are visualising consists of the axonal trajectories and physiological activity of approximately 1,500 neurons and 80,000 axons from a model of the Xenopus tadpole.


presented by: Marius Varga, Bob Merrison-Hort, Paul Watson, Roman Borisyuk, Dan Livingstone


KlingKlangKlong

Step into a mixed reality world with your smartphone and play KlingKlangKlong, a location-based, multi-player audio experience. While you are walking around campus with other players, KlingKlangKlong translates your location into sound. By moving through the physical space, you are also interacting with virtual players who react to your movements. Together, you are creating a dynamic soundscape. KlingKlangKlong does not have explicit rules or inherent goals. To play, you need to bring a modern smartphone (iPhone or Android).


presented by: Michael Straeubig


The Exciting Synesthesia Machine

People who have synesthesia often experience words and numbers as colours, elaborate structures, or even scents. Step inside one of the machine booths, and two players can experience a union of the senses from two separate locations using interactive visual and audio technology. Each player will hear a sonification of one player's view of an image in the left ear, and of the second player's view in the right ear. Harmony indicates similarities between the two views.

In this 'machine', the user in booth one wears one-eyed paper-cup glasses, a head-mounted camera, a belt bag, and headphones. In the booth the user can view two posters of colourful graphics, of which he or she can only see a small part at any one time; this restricted 'view' is recorded by the camera. The visual information is sent via a Raspberry Pi to a computer that sonifies the data. In booth two, the second player is equipped with headphones and a tablet. The tablet displays the same colourful graphics. Using one finger, the second player can touch points on the graphical display to trigger sounds from those fixed points. Each player can hear the sonification of the data.
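
A sketch of the kind of colour-to-sound mapping such a sonification might use (the installation's actual mapping is not documented here): hue picks the pitch, brightness picks the volume:

    # Map an 8-bit RGB pixel to (frequency in Hz, amplitude 0-1).
    import colorsys

    def sonify_pixel(r, g, b):
        hue, lightness, _ = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
        frequency = 200 * 2 ** (hue * 2)   # two octaves across the colour wheel
        amplitude = lightness
        return frequency, amplitude

    print(sonify_pixel(255, 0, 0))      # a red pixel: low note, mid volume
    print(sonify_pixel(120, 120, 255))  # a pale blue pixel: higher note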


Artificial Smiling Voice

Communication is so much more than just words: it consists of grammar, posture, tone – and smiles. Just by listening, most people are able to tell whether the other person is smiling. Just try that the next time you talk on the phone :-) And since nowadays quite a few phones are able to talk back to you with their artificial voice, wouldn't it be nice to hear it smile every once in a while? Would you be able to hear an artificial voice smiling? At what point would it sound artificial or even creepy? Would you like to be woken up by a smiling voice in the future?

This exhibition will allow you to listen to a number of manipulated artificial voices and see how much smiling you would attribute to them. The voices are part of a collaborative research project between one of the CogNovo researchers and the University of Mons in Belgium. It is a good example of two parties contributing their knowledge to an outcome that is bigger than the sum of its parts.

presented by: Ilaria Torre, Kevin El Haddad


Drone with Desires

Watch out for our winged intelligence model – a drone with desires – that decides where to travel using an artificial neural network based on a magnetic resonance image scan of artist Agatha Haines' brain. Using a mathematical algorithm, the floating representation of the brain learns about its anatomy and surrounding environment as it moves around the Bazaar.

In choosing its direction of travel, the Drone with Desires tells us how the flexibility of the human brain might change if it were placed in a completely different anatomical structure. The drone makes decisions based on comfort and curiosity, moving its wings to navigate. Connections inside the brain alter their strength to replicate learning behaviour as it develops in humans. The sounds you will hear in the gallery represent the most active nodes in the neural network and a live feed from the drone as it chooses where to go.
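
The phrase "connections alter their strength" suggests a Hebbian-style rule; the classic formulation is sketched below as one plausible reading, not as the drone's actual algorithm:

    # Hebbian learning: strengthen connections between co-active neurons,
    # delta_w = rate * post * pre (an outer product).
    import numpy as np

    def hebbian_update(weights, pre, post, learning_rate=0.01):
        return weights + learning_rate * np.outer(post, pre)

    # Three presynaptic and two postsynaptic activations:
    w = np.zeros((2, 3))
    pre = np.array([1.0, 0.0, 0.5])
    post = np.array([0.8, 0.2])
    print(hebbian_update(w, pre, post))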


presented by: Agi Haines, Jack McKay Fletcher, Christos Melidis, Sean Clarke, Marcel de Jeu, Jos van der Geest, Vaibhav Tyagi, Marcel Helmer


Conversation Piece

Conversations are amazing! Although we usually find the experience enjoyable and even relaxing, if you think about what is involved, then the pleasures of conversation may seem rather surprising. We manage to communicate with each other without knowing quite what will happen next. We quickly manufacture quite precisely timed sounds and gestures on the fly, which we exchange with each other without clashing – even managing to slip in some imitations as we go along! Usually meaning is all we really notice. In "Conversation Piece", we will transform your conversations into music so you can hear the amazing world of sound you create when talking with someone else.

So how does it all work? Social communication is arguably the most important human cognitive function of all, providing a basis for social bonding, information exchange and learning from others. Through the exchange of tightly coordinated multi-sensory signals people in conversation create a shared mental world. In essence we believe that a virtual social brain is co-constructed through well-timed sounds (speech as well as filler utterances with communicative function), gestures, facial expression, gaze, postural changes, and so on. Here we hope to provide new insights into this remarkable phenomenon.


Throwing salt over your shoulder: How can we combat causal illusions?

Sometimes it is clear when one event causes another. However, we are all susceptible to causal illusions – beliefs that one event causes another when in fact the two are unrelated. We can see this all around us, from the widespread use of certain types of homeopathy to belief in supernatural causes of natural events. It arises because we tend to focus on how likely the effect is to happen in the presence of the cause, but often ignore what happens when the potential cause is absent. This talk will discuss how we can use evidence from studies on this topic to try to minimise our susceptibility to causal illusions.
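
The point about ignoring the cause-absent cases can be made concrete with the contingency measure ΔP = P(effect | cause) − P(effect | no cause); the numbers below are invented for illustration:

    # Delta-P contrasts how often the effect occurs with and without
    # the candidate cause.
    def delta_p(effect_with_cause, total_with_cause,
                effect_without_cause, total_without_cause):
        return (effect_with_cause / total_with_cause
                - effect_without_cause / total_without_cause)

    # A headache goes away after a sugar pill in 16 of 20 cases -- which
    # looks impressive until we check the cases without the pill: 16 of
    # 20 again.
    print(delta_p(16, 20, 16, 20))  # 0.0 -> no causal relationship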

The research behind this presentation is carried out here at Plymouth University within the Psychology department and is part of CogNovo. It focuses on learning and attention for irrelevant cues and explores how we make decisions. Many different experiments exploring the basis for notable effects in associative learning have been run as part of this PhD project, and Tara, the researcher behind it, is happy to talk you through some of her findings and their possible implications for everyday life.

presented by: Tara Zaksaite


FIT with NAO

Have you ever seen a robot work out? While this would be quite easy to program – and mostly useless for the robot – it is much more interesting, difficult, and useful to have a robot help you work out. And this is exactly what FIT with NAO is about: our researchers not only have a robot help you strengthen your positive imagery, they also know why this works from a scientific point of view. So please come and have a look at this demonstration video of FIT.

Functional Imagery Training (FIT) is a new theory-based, manualised intervention that trains positive goal imagery, developed by David Kavanagh (Queensland University of Technology) together with Jackie Andrade and Jon May (Plymouth University). A NAO robot will deliver a video session of FIT with mental imagery exercises to help you increase your physical activity levels. This particular intervention is being developed here at Plymouth University as part of the CogNovo research programme.

presented by: Joana Galvão, Leonie Cooper, Lloyd Taylor, Jackie Andrade, Jon May, Tony Belpaeme, David Kavanagh


Cheap Teleoperated Robot Arm Project (CHAP)

Robots are very useful – and very expensive. Building a robot can easily cost many thousands or even millions of pounds, especially if it is not just going to look cute but will actually assist you or do some useful work. Moreover, most robot companies keep their robot designs secret and (re-)develop everything instead of sharing their knowledge. CHAP addresses both of these problems: it aims to develop a design for a tele-operated mobile manipulator costing less than £5,000. The design is being developed at Plymouth University and will be open source for anyone to build and improve.

presented by: Guido Bugmann


Fitting a Population's Visual Preferences into an Artificial Intelligence

How can a computer possibly learn which images look nice and which pictures are just … not so? Sending each of them to art school might not do the trick, and their hardware would be outdated by the time they graduate anyway. So our researcher takes a quarter of a million photographs off the internet, shows each of them to an Artificial Intelligence, and also tells it what real people thought about the image. The AI learns from these examples – and as a result it can correctly guess the aesthetic appeal of images it has never seen before.

The algorithm is inspired by neuroscientific research and detects colours, edge orientations, symmetry, and other features in images. The resulting classification is correct for 78.81% of photographs, which is a very good result for an Artificial Intelligence. The algorithm is being developed as part of the international CogNovo research programme at Plymouth University.
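
A minimal sketch of the feature-based approach described above; the features, classifier, and data below are stand-ins, not the researcher's actual pipeline:

    # Extract simple visual features and train a classifier on
    # people's pleasing / not-pleasing ratings.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def extract_features(image):
        # Toy features from an HxWx3 array: mean colour per channel
        # plus a crude left/right symmetry score.
        mean_rgb = image.mean(axis=(0, 1))
        symmetry = -np.abs(image - image[:, ::-1]).mean()
        return np.append(mean_rgb, symmetry)

    # Placeholder images and ratings (1 = pleasing, 0 = not):
    rng = np.random.default_rng(0)
    images = rng.random((100, 32, 32, 3))
    labels = rng.integers(0, 2, 100)
    X = np.array([extract_features(im) for im in images])
    model = LogisticRegression().fit(X, labels)
    print(model.predict(X[:5]))  # predictions for the first five images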

presented by: Francois Lemarchand


Senti-Mental

Senti-Mental is a ludic brain model derived from a melding of an MRI scan and real-time Twitter feeds scraped from the collective mind of the Off the Lip event.

Tweets run through a sentiment analysis filter, which affects the colour and activity of a low-polygon volumetric model extracted from an MRI scan of a head and brain. Looking more like a crystal cave than grey jelly, Senti-Mental is illuminated by the hive mind of visitors to the event.

Senti-Mental incorporates aspects of i-DAT's Quorum Sensing project, which explores cultural computation, ludic data, and playful experimentation with creative technology. Quorum is an algorithmic system that feeds off data generated by material and virtual environments and by the physical and social behaviour of audiences. It incorporates bio-inspired, decentralised swarm decision-making processes to generate a dynamic and evolving collective behaviour.
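
A sketch of how a sentiment score might drive colour; the mapping used by Senti-Mental itself is not specified here:

    # Map a sentiment score (-1 = negative, +1 = positive) to RGB,
    # blending from cold blue through grey to warm red.
    def sentiment_to_rgb(score):
        t = (score + 1) / 2                 # rescale to [0, 1]
        red = int(255 * t)
        blue = int(255 * (1 - t))
        green = int(64 * (1 - abs(score)))  # a little grey around neutral
        return red, green, blue

    print(sentiment_to_rgb(0.9))    # a very positive tweet: warm red
    print(sentiment_to_rgb(-0.7))   # a negative tweet: cold blue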


presented by: Mike Phillips, Luke Christison


Experiments in Sonified Magic

In the time-honoured tradition of intellectual trading between the art of magic and the science of psychology, we have invited magician Stuart Nolan to collaborate with CogNovo researchers and faculty on an experiment in sonified magic. We'll be trying out some mobile sensors developed in this year's CogNovo summer school.

presented by: Stuart Nolan, Hannah Drayson, Thomas Wennekers, Sue Denham