
Interaction in action

Article by Peter Barr

If we were designing computers from scratch, using all the different interfaces and other technologies available now, the mouse and the keyboard would probably still be very popular options, along with touch-screens and speech recognition, but our mobile phones, laptops and desktop computers might also be all-singing, all-dancing, all-smelling systems – plugged into our brains...

With multimodal human–computer interaction, “you start with the person, not the computer,” says Professor Stephen Brewster. And people can now interact with computers in ways that would have been hard to imagine a few years ago – not just with all of their senses but also with their brains. 

Brewster is SICSA’s theme leader for multimodal interaction, Professor of Human–Computer Interaction in the Department of Computing Science at the University of Glasgow, and a key member of the Glasgow Interactive Systems Group – one of the biggest research groups of its kind in the UK or Europe. Other universities in Scotland also do research in human–computer interaction (HCI), including Abertay (focusing on games), Dundee (accessibility for older and disabled people), Edinburgh Napier (artificial companions), Strathclyde (usability and user experience) and St Andrews (multi-touch and surface displays), while the University of Edinburgh is a leader in speech recognition and synthesis. But Glasgow leads the way in HCI, exploring what Brewster describes as “unusual ways of interaction,” including novel visual and 3D audio systems, tactile and haptic (touch-based) systems, interfaces which connect directly with the brain by detecting changes in its electrical activity, and even experiments with smell-based interfaces. In recent years, the emphasis has also been on mobile applications, taking advantage of the rich technologies now integrated into mobile devices and the fact that mobile computing is becoming ubiquitous – part of our everyday lives.

Many academics define multimodal interaction as simply “different types of interaction,” but Brewster prefers to define it as how human beings interact with computers, sending and receiving information via eyes, ears and skin or using gestures – an area of HCI where Glasgow is doing some leading research. Brewster also explains that researchers in Scotland are interested in interfaces which work across the spectrum of human abilities, including people with cognitive, hearing and vision impairments. Theoretically, if researchers make further progress with brain–computer interfaces, even somebody with “locked-in syndrome” may be able to interact via electrodes attached to the scalp, moving a cursor around on a screen just by thinking, without moving a single muscle.

CULTURAL TECHNOLOGY
One of the most intriguing projects going on in Glasgow is research into gestures, looking not just at how to use gestures to interact with mobile devices but also at the social and cultural acceptability of different gestures – e.g. rotating your wrist or tapping parts of your body.  Brewster says the group are asking very basic questions, such as “would people use these gestures?” in a normal environment (in the street or an office), without feeling embarrassed or being misunderstood by observers.  Would they look weird or feel weird? Are the gestures too complicated or even too simple? Are there any universal gestures which would work in any country without causing problems? Do these innocent-seeming gestures mean something rude in other cultures? 

The researchers in Glasgow are still at the prototype stage, sending students into the real world to experiment with different gestures, recording the reactions of the people around them as well as those of the users themselves. Another study is comparing the reactions of people in the UK and India to explore the cultural issues involved. “This is where signal processing meets social computing,” says Brewster.
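
To make this concrete, here is a minimal sketch, in Python, of how a wrist-rotation gesture might be recognised from a phone’s built-in gyroscope. The article does not describe the Glasgow group’s actual algorithms, so the function name, sampling rate and threshold below are purely illustrative assumptions.

    # Illustrative sketch only - not the Glasgow group's code.
    # Detect a wrist rotation by integrating angular velocity around
    # the forearm axis (rad/s) sampled at a fixed rate.

    def detect_wrist_rotation(gyro_x, sample_rate_hz=50.0,
                              angle_threshold_rad=1.5):
        """Report a rotation gesture once the accumulated twist
        passes the threshold. Positive values = clockwise."""
        dt = 1.0 / sample_rate_hz
        angle = 0.0
        for omega in gyro_x:
            angle += omega * dt            # rectangular integration
            if angle > angle_threshold_rad:
                return "clockwise"         # e.g. mapped to "volume up"
            if angle < -angle_threshold_rad:
                return "anticlockwise"     # e.g. "volume down"
        return None

    # Example: a burst of clockwise twist, then stillness
    samples = [2.0] * 40 + [0.0] * 20      # rad/s at 50 Hz
    print(detect_wrist_rotation(samples))  # -> "clockwise"

A real recogniser would also have to reject everyday arm movements, which is exactly where the social questions above meet the signal processing.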

BRAINWAVES, SMELLS AND TOUCHY-FEELY SYSTEMS
By rotating your arm in a clockwise direction, you may be instructing the system to perform a particular task, like turning up the volume. But Rod Murray-Smith of the University of Glasgow is also interested in the enormous potential of brain–computer interfaces. He is working on interaction techniques which detect a “motor-imagery” event in the brain, such as an imagined hand or foot movement, filter its signal out from other electrical disturbances (such as those caused by blinking and facial movements), and use it to control a computer. “The big problem,” says Brewster, “is that input is still very slow,” but researchers are confident of future progress – not so much reading the mind as allowing users to control software by imagining certain actions.
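
For readers curious how a “motor-imagery” event can be picked out of the EEG, the sketch below shows one widely used generic approach – comparing mu-band (8–12 Hz) power against a resting baseline, since imagined movement suppresses that rhythm over the motor cortex. This is an illustration of the general technique, not Murray-Smith’s pipeline; the sampling rate and threshold ratio are assumptions.

    # Generic motor-imagery detection sketch (illustrative values).
    # Imagined hand/foot movement suppresses 8-12 Hz ("mu") power over
    # the motor cortex, so a drop below baseline signals an event.
    import numpy as np
    from scipy.signal import butter, filtfilt

    def mu_band_power(eeg, fs=250.0, band=(8.0, 12.0)):
        """Mean power of one EEG channel in the mu band."""
        b, a = butter(4, [f / (fs / 2) for f in band], btype="band")
        return float(np.mean(filtfilt(b, a, np.asarray(eeg)) ** 2))

    def imagery_detected(eeg_window, baseline_power, fs=250.0, ratio=0.6):
        """True if mu power falls well below the resting baseline."""
        return mu_band_power(eeg_window, fs) < ratio * baseline_power

Blink and facial-movement artefacts sit mostly at other frequencies and are typically removed before a stage like this – the “filtering” step described above.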

If brain interfaces seem strange, then a smell interface may appear even stranger. Researchers in Glasgow have studied the possibilities of such a system, looking at some basic applications like using smells as “fire!” alerts or as reminders to perform tasks such as “eat” or “take medication.” One idea was to use smells to aid memories by creating links to digital photos – e.g. the smell of seaweed may prompt a search for photos of a day at the beach.  The major problem with “smell interfaces,” however, is that they are hard to synthesise – “there is no RGB of smell,” says Brewster.

What kicks off these ideas is the fact that human beings offer so many practical options for non-verbal interaction with computers, allowing researchers to “take advantage of the richness of our bodies and brains,” using everything from skin and smell to muscular movements and “brainwaves.” 

Haptic computing, including the use of “force-feedback” devices which create the illusion of touching and manipulating real three-dimensional objects, is another key area in HCI, and the subject of extensive research in Glasgow and elsewhere. According to Brewster, haptics have a key role to play in creating interfaces that feel “real” to users – e.g. adding a third dimension to displays in museums, which is particularly useful for people with visual impairments. Haptics have also been used to teach physical skills, from making jewellery to simulating veterinary procedures.
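
The illusion of solid contact in force-feedback devices is commonly produced by penalty-based rendering: whenever the device tip penetrates a virtual surface, the device pushes back with a spring force proportional to the penetration depth. Here is a minimal sketch of that idea, with an illustrative stiffness value – a generic textbook technique, not a description of the Glasgow systems.

    # Penalty-based haptic rendering sketch (illustrative values).

    def contact_force(tip_z, surface_z=0.0, stiffness_n_per_m=800.0):
        """Upward force (newtons) to command to the device; tip_z below
        surface_z means the tip has penetrated the virtual surface."""
        penetration = surface_z - tip_z
        if penetration <= 0.0:
            return 0.0                            # free space: no force
        return stiffness_n_per_m * penetration    # Hooke's-law push-back

    print(contact_force(-0.002))  # 2 mm penetration -> 1.6 N

In practice this loop runs at around 1 kHz so that contact feels stiff and stable rather than spongy.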

THE MOBILE DIMENSION
Brewster has been interested in HCI since he was a student, writing his PhD thesis, “Providing a structured method for integrating non-speech audio into human–computer interfaces”, at the University of York in the early 1990s, when he focused primarily on non-speech sound and, in particular, on earcons (structured sound patterns used to represent specific items or events). At that time, mobile phones were crude devices which could just about handle a phone call and not much else, but dramatic advances in mobile technology in recent years have made them the focus of research in countries all over the world – as HCI embraces the development of novel multimodal interfaces for mobile devices.
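
To give a flavour of what an earcon is, the short Python sketch below synthesises a rising three-note motif and saves it as a WAV file; the motif itself is invented for illustration, not taken from Brewster’s thesis.

    # Synthesise a simple (invented) earcon: three rising sine tones.
    import math, struct, wave

    def write_earcon(path="earcon.wav", freqs=(440.0, 554.0, 659.0),
                     note_s=0.12, rate=44100):
        frames = bytearray()
        for f in freqs:                        # one short tone per note
            for n in range(int(note_s * rate)):
                amp = 0.5 * math.sin(2 * math.pi * f * n / rate)
                frames += struct.pack("<h", int(amp * 32767))
        with wave.open(path, "wb") as w:
            w.setnchannels(1)                  # mono
            w.setsampwidth(2)                  # 16-bit samples
            w.setframerate(rate)
            w.writeframes(bytes(frames))

    write_earcon()  # different motifs can stand for different events

Because the pattern is structured – rhythm, pitch and timbre can each carry meaning – families of related earcons can represent families of related items or events.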

According to Brewster, mobile devices are just beginning to get enough processing power to enter the next stage of HCI – ubiquitous computing – when embedded computing devices become second nature, available anywhere at any time, enabling people to use them discreetly while carrying out other tasks. Computer scientists may say “you ain't seen nothing yet” with these powerful mobile devices, but you could also say “you ain't seen/heard/touched/smelled/thought nothing yet” when it comes to their long-term potential.

ENABLING DISABILITY
Mobile devices will be one of the major technologies used in the home to deliver “telehealthcare” in the future. According to a recent survey, people in their sixties are becoming increasingly comfortable with mobile devices, so “techno-phobia” is less of a barrier to widespread adoption.

The prime motivation for developing the new home-care systems is simply the fact that the population is ageing, making it more cost-effective to enable people to remain at home until they're much older. As we age, we also develop multiple “small impairments,” while growing numbers of people have more serious disabilities. As well as health care, older people need to keep in touch with friends and family and access different types of information, so overcoming social isolation is another priority.

Brewster and colleagues at the University of Edinburgh are interested in the development of various options for multimodal interaction, ranging from very simple ambient devices – e.g. lights which change colour to signal or prompt an event – to more complex solutions delivered via mobile devices which users carry around with them all of the time. This diversity is needed to cope with different levels of cognitive and physical ability, starting at the most basic.

For example, some people may benefit from a device which reminds them to eat or take medication, via visual, auditory or tactile interaction, or even an appropriate smell. Others may benefit more from communications systems tailored to their personal needs, delivered via mobile devices.  The possibilities are endless, but scientists are also aware of the need for discretion. “It's important to develop unobtrusive devices and deliver information and reminders in the most appropriate way,” says Brewster.  As well as developing high-tech solutions, researchers have also done studies to look at the disruptiveness of signals or alerts, so they can make them more timely as well as appropriate. “If no one uses the system,” says Brewster, “it's not a good system,” and that's why usability is also extremely important. 
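
One way such a system might pick “the most appropriate way” is to choose the least obtrusive modality the user can still perceive, escalating only if a reminder is ignored. The sketch below is entirely illustrative – a hypothetical policy, not a description of any deployed telehealthcare system.

    # Hypothetical reminder-modality policy (illustrative only).
    PREFERENCE_ORDER = ["ambient_light", "vibration", "sound", "speech"]

    def choose_modality(user_abilities, already_ignored=()):
        """user_abilities: modalities this user can perceive, e.g.
        {"vibration", "sound", "speech"} for a visually impaired user."""
        for modality in PREFERENCE_ORDER:     # least obtrusive first
            if modality in user_abilities and modality not in already_ignored:
                return modality
        return "speech"                       # last resort: most explicit

    print(choose_modality({"vibration", "sound", "speech"}))        # vibration
    print(choose_modality({"vibration", "sound"}, ("vibration",)))  # sound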

BACK TO THE FUTURE
Ten years ago, Brewster and other HCI specialists were beginning to turn more attention to mobile devices as the integrated technologies began to mature, taking advantage of open platforms and multiple functions (vibration and cameras, etc.) to try to “do things with them that they don't already do.”  Tablet computers now bridge the gap between mobile and desktop devices, and social media have led to an explosion in activity. Interfaces have also developed in every direction, and social, educational and business needs have also evolved.  So what about ten years from now? 

Brewster anticipates more focus on usage than on user devices, and different forms of interaction with embedded devices as we enter the era of ubiquitous networking. In terms of applications, home care will be high on the agenda, including remote monitoring of health care as well as psychological/emotional support. He also thinks that information systems will be able to “monitor cognitive workload,” so that if you are stressed, doing too much at once, the computer will reorganise your workload.

Will systems also read our minds and know what we think before we do? Even Brewster can't answer this question – yet.

"Interaction in action". Science Scotland (Issue Ten)

Science Scotland is a science & technology publication brought to you by The Royal Society of Edinburgh (www.rse.org.uk).