13 February 2014

Touch, feel, see and hear the data


It is now possible to sense scientific data: by applying the brain's subconscious processing to big data analysis, researchers hope to tame the mountains of information we face in our environment.

Imagine that data could be transposed into a tactile experience. This is precisely what the EU-funded CEEDs project promises. It uses integrated technologies to support human experience when attempting to make sense of very large datasets. Jonathan Freeman, Professor of Psychology at Goldsmiths, University of London, UK, talks to youris.com about how the project can present data better depending on the feedback participants give while exploring it, via close monitoring of their explicit responses, such as eye movements, and their inner reactions, like heart rate.

What inspired you to get involved in data representation?
I felt there was a disconnect between our online and offline experience. For example, when you shop online or search for a product, maybe a pair of jeans, the webpage you land on receives information about your previous searches via your cookies. It can then make inferences about you and target content appropriately. In the physical environment of a shop, there just isn't that level of insight and information provided to the environment. One big driver is to ask whether there are ways we can serve content that better suits a user's needs. And it does not have to be in a commercial environment.

What solution do you suggest?
We realised that humans have to deal with all this data, and the problem is that our ability to analyse and understand it is a massive bottleneck. At the same time, the brain does an awful lot of processing that goes unused: we are not consciously aware of it, and it does not show up in our behaviour. Our idea was therefore to marry the two and apply human subconscious processing to the analysis of big data.

Could you provide a specific example of how this could be of benefit?
Take a scientist analysing, say, a huge neuroscience dataset in our project's experience induction machine. We apply measurements that tell us whether they are getting fatigued or overloaded with information. If that is the case, the system does one of two things. It either changes the visualisations to simplify them so as to reduce the cognitive load [a measure of brain workload], keeping the user less stressed and better able to focus. Or it implicitly guides the person to areas of the data representation that are less dense in information. We can use subliminal cues to do this, such as arrows that flash so quickly that people are not aware of them.
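In rough pseudocode terms, the adaptive behaviour Freeman describes could look like the minimal sketch below. It is purely illustrative, not the CEEDs implementation: the threshold value and the helper names (estimate_cognitive_load, simplify_view, cue_towards_sparser_region) are assumptions introduced for the example.

import random

LOAD_THRESHOLD = 0.7  # assumed cut-off beyond which the user counts as overloaded


def estimate_cognitive_load() -> float:
    """Stand-in for the real physiological sensing pipeline; random value for the demo."""
    return random.random()


def simplify_view():
    print("Reducing the detail level of the visualisation")


def cue_towards_sparser_region():
    print("Flashing a brief subliminal arrow towards a less dense part of the data")


def adapt_step():
    load = estimate_cognitive_load()
    if load > LOAD_THRESHOLD:
        # Either simplify the display or nudge the user elsewhere; chosen arbitrarily here.
        if random.random() < 0.5:
            simplify_view()
        else:
            cue_towards_sparser_region()
    else:
        print("User is comfortable; leave the visualisation as it is")


if __name__ == "__main__":
    for _ in range(3):
        adapt_step()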

Part of your approach involves watching the watchers use data. So what kind of technology do you rely on?
We devised an immersive setup in which the user is continuously monitored. We use a Kinect [motion-sensing device] to track people's posture and body responses. A glove tracks hand movements in more detail and measures galvanic skin response, which is an indicator of stress. An eye tracker tells us whereabouts in the data the user is focusing, and it also measures pupil dilation as an index of cognitive work rate. In parallel, a camera system analyses facial expressions and a voice analysis system measures the emotional characteristics of the voice. Finally, in this mixed reality space, called the CEEDs eXperience Induction Machine (XIM), users can wear a vest developed within the project that measures their heart rate.
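One simple way to picture how such sensor channels could be combined into a single stress or cognitive-load score is a weighted average of normalised readings, as in the sketch below. The channel names, weights and the fusion rule are assumptions for illustration only, not the XIM's actual model.

WEIGHTS = {
    "pupil_dilation": 0.35,    # from the eye tracker
    "skin_conductance": 0.25,  # galvanic skin response from the glove
    "heart_rate": 0.25,        # from the sensing vest
    "voice_arousal": 0.15,     # from the voice analysis system
}


def fuse_load(signals: dict) -> float:
    """Combine normalised sensor readings (each in 0..1) into a single 0..1 load score."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)


if __name__ == "__main__":
    sample = {
        "pupil_dilation": 0.8,
        "skin_conductance": 0.6,
        "heart_rate": 0.5,
        "voice_arousal": 0.4,
    }
    print(f"Estimated cognitive load: {fuse_load(sample):.2f}")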

Is the visual part of the project important?
Visualisation technologies in the experience induction machine are important because people are in an immersive 3D environment. But the representations we use for the data are not just visual. There are also audio representations: spatialisation of audio and sonification of data, so that users can hear the data. For example, the activity flowing through a part of the brain can be represented so that more activity sounds louder or higher in pitch to the neuroscientists studying those flows. There are also tactile actuators in the glove that allow users to grab data and feel feedback in their fingertips.
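A toy example of sonification in the spirit described here is a mapping from a normalised activity level to pitch and loudness. The frequency range and scaling below are invented for the sketch and do not come from the project.

MIN_FREQ_HZ = 220.0  # quiet regions
MAX_FREQ_HZ = 880.0  # highly active regions


def sonify(activity: float) -> tuple:
    """Map a normalised activity level (0..1) to (frequency in Hz, amplitude 0..1)."""
    activity = max(0.0, min(1.0, activity))
    frequency = MIN_FREQ_HZ + activity * (MAX_FREQ_HZ - MIN_FREQ_HZ)
    amplitude = 0.2 + 0.8 * activity
    return frequency, amplitude


if __name__ == "__main__":
    for level in (0.1, 0.5, 0.9):
        freq, amp = sonify(level)
        print(f"activity {level:.1f} -> {freq:.0f} Hz at amplitude {amp:.2f}")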

What is so novel about this approach?
The Kinect is readily available. But never before has anyone put all these components in one place and trialled them together. And never before has such an advanced set of sensors been assembled with the goal of optimising human understanding of big data. This is novel, cutting edge and ambitious. It is not simple product development; it is about pushing the boundaries and taking risks.

Who will this technology be useful to?
Initially, those who deal with massive datasets, such as economists, neuroscientists and data analysts, will benefit. But the general public will benefit too. We are all bombarded with information, and there are real benefits in having systems that respond to your implicit cues as a consumer or as a person. It does not have to be in a consumption context.

Could you give an example of an application outside the commercial domain?
Imagine you are an archaeologist working in the field and you come across a piece of pottery. You look at it and say it comes from the 4th century and from such and such an object. It takes years of experience for an archaeologist to be able to do that. In our project, we are measuring how expert archaeologists look at objects and evaluate them. We then feed this interpretation into a database to speed up the potential matching of pottery pieces. It makes the machine better, speeding up the predictive search powers of the technology.
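One way to read this idea is that the features an expert dwells on, recorded by eye tracking, could be weighted more heavily when ranking candidate matches in a catalogue. The sketch below is a hypothetical illustration of that reading: the feature names, catalogue entries and scoring rule are all invented, not the project's method.

CATALOGUE = {
    "amphora_4th_c": {"rim_shape": 0.9, "glaze": 0.2, "handle_curve": 0.7},
    "bowl_2nd_c": {"rim_shape": 0.3, "glaze": 0.8, "handle_curve": 0.1},
}


def rank_matches(fragment: dict, gaze_weights: dict) -> list:
    """Rank catalogue entries by gaze-weighted similarity to the observed fragment."""
    def score(entry: dict) -> float:
        return sum(
            gaze_weights.get(f, 0.0) * (1.0 - abs(fragment[f] - entry.get(f, 0.0)))
            for f in fragment
        )
    return sorted(CATALOGUE, key=lambda name: score(CATALOGUE[name]), reverse=True)


if __name__ == "__main__":
    fragment = {"rim_shape": 0.85, "glaze": 0.3, "handle_curve": 0.65}
    # Normalised dwell times from eye tracking: the expert looked mostly at the rim.
    gaze_weights = {"rim_shape": 0.6, "glaze": 0.1, "handle_curve": 0.3}
    print(rank_matches(fragment, gaze_weights))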
