CAL-laborate Volume 4 June 2000

Touching a Virtual World: What can it teach us?

Barry Richardson
Department of Psychology, Monash University, Australia

Virtual reality displays of haptic information may force a rethink about the nature of touch, and the importance of externalization for perceptual realism. Also, a closer look at externalization may cast some light on the basis of self-awareness.

The race is on to find ways of delivering virtual touch. Virtual reality systems have so far concentrated on vision and hearing, but some set-ups now include sensors on gloves that allow monitoring, and in some cases production, of movements. There are also systems that, with feedback from remote sites, allow surgery across thousands of miles via telerobotics. One such system is based in Perth and was recently demonstrated to Queen Elizabeth II. She was reportedly "amazed". However, there is a long way to go before she or any of us can enjoy the experiences available on the holodecks of Federation starships depicted in the TV series Star Trek.

Impressive as some virtual reality systems are with respect to vision and hearing, creating virtual touch is a challenge we have hardly begun to face. We don't even agree on what touch is. Some of us use the word to include only the skin (cutaneous) sensations such as pressure, shear, tickle, itch, temperature change, and chemical reactions. Others include in touch the position senses of kinesthesis and proprioception, terms which are themselves used to mean different things by different people.

To achieve realism in a virtual system we shall need to capture all the interacting features of our senses, regardless of what we call them. In view of this, I will side-step labelling issues for now and use the term 'haptics' to cover all senses to do with the reception or production of cutaneous and movement/position information.

What determines how real a percept is? One feature important for realism is common to all senses and is called externalization.

Externalization of percepts

No aspect of perception is more remarkable than our ability to perceive objects and events as 'out there', remote from us, or distal. This trick works despite having to depend on receptors that are well and truly proximal.

The surface of the human eye's retina is curved like the inside of a tennis ball but this is not what gives rise to our perception of the visual world as being three-dimensional. The image on the retina, called the proximal stimulus, is two-dimensional and we rely on a large number of binocular and monocular cues to interpret this essentially 'flat' display as having depth, and containing information about distal objects. Externalization is based on inferences drawn from the proximal stimulus display, though it may be that the senses have to work together to achieve this.

Some time ago it was suggested that auditory externalization relies on vision (Bartley, 1969). We now know that in a free field, in contrast to sound delivered via headphones, we externalize sounds rather than perceive them as located in our ears or inside our heads. This is possible not only because we have two ears and can use inter-aural differences to help make directional and distance judgements, but because our pinnae change the frequency-amplitude profiles of sounds before they impinge on the eardrum. It has been shown more recently that these pinna cues are extraordinarily important in providing a sense of true auditory three-dimensional space (Middlebrooks and Green, 1991).
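The inter-aural differences mentioned above can be made concrete with a worked example. Below is a minimal sketch, in Python, of Woodworth's classic spherical-head approximation of the interaural time difference; the head radius and speed of sound are assumed constants chosen for illustration, not values from the studies cited here.

    import math

    def interaural_time_difference(azimuth_deg, head_radius_m=0.0875,
                                   speed_of_sound=343.0):
        """Woodworth's spherical-head estimate of the interaural time
        difference (ITD) for a distant source at the given azimuth
        (0 = straight ahead, 90 = directly opposite one ear)."""
        theta = math.radians(azimuth_deg)
        # Extra path to the far ear: around the head (a * theta)
        # plus the straight segment (a * sin(theta)).
        return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

    # A source 45 degrees to one side arrives roughly 0.4 ms earlier
    # at the nearer ear, a difference the auditory system resolves easily.
    print(round(interaural_time_difference(45) * 1e6), "microseconds")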

What about touch? When you feel for coins in a pocket or purse, are you externalizing the tactile stimuli, or does the skin only perceive proximally? Most psychologists readily classify vision and hearing as 'far' senses, those that perceive distal stimuli, but touch is often referred to as a 'near' or proximal sense, as are the senses of taste and smell. Yet isn't that nasty smell distally attributed, rather than sensed in the nostrils as a state of olfactory receptors? And when we taste food, are we not very much aware of texture and temperature as qualities the food possesses, in contrast to, for example, the temperature and texture of the tongue? These questions remain open for debate, but with regard to the skin, it does not follow that because tactile stimuli actually contact the skin, the percept must be proximal.

Firstly, we do not pay attention to our receptor states when feeling, any more than we do when seeing or hearing. Our attention is generally directed to the object we are touching or being touched by. This is obvious when we explore an object not with bare fingers but with a probe such as the white cane used by the blind. We build up a percept of the texture or object at the exploring tip of the cane, not at the handle end, and not by noting changes in the states of receptors in the skin touching the handle, although these changes are the information upon which externalization depends. Kennedy, Richardson and Magee (1980) argued that one thing we 'feel' when riding a bicycle is the surface of the road, as a distal stimulus. This is distinct from the handlebars and saddle, through which information about the road surface is obviously transmitted, and which feel more proximal than the road but are also distal, in the sense of being external to us. The road's visual texture combines with haptic information and may assist externalization.

Secondly, it has been reported that when hearing is masked and changes in vibration at two fingertips signal movements of a sound in space, the tactile percept seems to move across the space between the fingertips in a manner analogous to visual phi phenomena (Gescheider, 1970). Such sound-to-touch transforms may also be perceived as stationary in space (Richardson, 1990). More dramatically, researchers at the Smith-Kettlewell Institute of Visual Sciences tested a blind subject who reported an object being 'out there' in front of him when its image was actually present only as a pattern of vibrations on the skin of his back (Guarniero, 1974). With this device, the subject actively scanned a target object with a TV camera whose light and dark areas of output were transduced to on-and-off states of vibration on the skin. The perception of the object as distal was apparently dependent upon the blind subject having control over the camera's movements, and therefore over the resultant changes (movements) in the tactile pattern representing the object. This would suggest that activity on the part of the observer may enhance perception, externalization, or both. However, this has not been satisfactorily established (Epstein, Hughes, Schneider and Bach-y-Rita, 1986).
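The logic of the camera-to-skin device described above lends itself to a simple sketch. The Python fragment below shows one way such a transduction could work; the array size, threshold and function name are hypothetical illustrations, not a description of the actual Smith-Kettlewell hardware.

    import numpy as np

    def frame_to_tactor_states(frame, rows=20, cols=20, threshold=128):
        """Reduce a grayscale camera frame to an on/off vibration pattern.

        frame: 2D uint8 array (0 = dark, 255 = light).
        Returns a rows x cols boolean array; True = vibrator on.
        """
        h, w = frame.shape
        # Average the pixels falling within each tactor's receptive field.
        blocks = frame[:h - h % rows, :w - w % cols].reshape(
            rows, h // rows, cols, w // cols).mean(axis=(1, 3))
        # Light regions of the image drive the corresponding skin vibrators.
        return blocks > threshold

    # A synthetic frame with one bright square yields a cluster of active
    # tactors; moving the camera makes the cluster move across the skin.
    frame = np.zeros((200, 200), dtype=np.uint8)
    frame[60:140, 60:140] = 255
    print(frame_to_tactor_states(frame).sum(), "tactors on")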

Active and passive haptic exploration in two dimensions

Gibson (1962) said that passive touch consists of being touched, while active touch consists of touching. This distinction has since been extended (e.g. Loomis and Lederman, 1986) to take account of more situations than Gibson's definition could accommodate. However defined, active touch is generally thought to be 'better' than passive touch. This has prompted several studies asking whether free activity offers anything more than the likelihood of gaining richer and more relevant tactile information than is possible when someone else controls our movements, or the movements of the stimulus. The alternative view is that active touch adds something special or important to the percept, beyond information quality or quantity.

Few researchers would have expected active exploration to be worse than passively guided exploration, yet this was reported when the task was to identify a raised line drawing using a single fingertip (Magee and Kennedy, 1981; Richardson, Symmons and Kennedy, 1998). One difficulty in interpreting such results has been the lack of assurance that the exploratory movements of passive and active subjects are really identical (matched), and that the only difference between conditions is the active versus passive status of the explorer.

This difficulty has been overcome with a device called the Tactile Display System (Richardson and Symmons, 2000), which digitally records an active subject's movements in the X-Y plane and then uses motors to guide passive subjects over a pathway identical to that followed by the active explorer. Meaningful active-passive comparisons are made possible with this system, as are systematic measures of the importance of components of haptic exploration (e.g. pressure, shear and kinesthesis).
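In outline, the matched-trajectory procedure can be sketched as follows. The sampling rate and the two hardware callbacks are assumptions made for illustration; they are not details of the actual Tactile Display System.

    import time

    def record_active_trajectory(read_finger_xy, duration_s=10.0, rate_hz=100):
        """Sample an active explorer's fingertip position in the X-Y plane.

        read_finger_xy: hypothetical callback returning the current (x, y).
        """
        samples, dt = [], 1.0 / rate_hz
        t_end = time.time() + duration_s
        while time.time() < t_end:
            samples.append(read_finger_xy())
            time.sleep(dt)
        return samples

    def replay_to_passive_subject(samples, move_motors_to, rate_hz=100):
        """Drive the guiding motors through the identical pathway, so a
        passive subject's movements match the active explorer's exactly.

        move_motors_to: hypothetical callback commanding the X-Y motors.
        """
        dt = 1.0 / rate_hz
        for x, y in samples:
            move_motors_to(x, y)
            time.sleep(dt)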

Another device, built in Sweden, matches the information received by active and passive explorers by arranging for a virtual tactile pattern to be felt as the explorer (either active or passively guided) moves an array of pins with which a fingertip is in contact (Jansson, 1998). A computer detects the location of the fingertip in the X-Y plane and provides fingertip stimulation, in the form of vibrating pins, according to what would be felt in that location if a raised line drawing were present. Preliminary results with these two systems are in agreement in some respects; for example, a surprising amount of shape information can be gleaned from a line moved under a stationary fingertip, in contrast to free movement of that fingertip over the drawing. However, passive-guided superiority has been confirmed only with the Tactile Display System. It is hoped that more can be learned about haptics by comparing results from these two kinds of display, one real and the other virtual.
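The Swedish virtual display can be sketched in the same spirit: at each sampled fingertip position, look up whether the virtual raised-line drawing contains a line there, and switch the pins accordingly. The bitmap representation and the names below are assumptions, not Jansson's implementation.

    import numpy as np

    def pins_should_vibrate(drawing, x_mm, y_mm, mm_per_cell=1.0):
        """Decide the pin-array state for the current fingertip position.

        drawing: 2D boolean array; True where the virtual raised-line
        drawing has ink. Returns True if the pins should vibrate.
        """
        row, col = int(y_mm / mm_per_cell), int(x_mm / mm_per_cell)
        if 0 <= row < drawing.shape[0] and 0 <= col < drawing.shape[1]:
            return bool(drawing[row, col])
        return False  # Fingertip is off the drawing: nothing to feel.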

Three-dimensional virtual objects in haptic space

While research continues on haptic perception of two-dimensional displays, other work uses a probe to simulate the haptic exploration of different surface textures, and three-dimensional solid objects. SensAble Technologies (http://www.sensable.com/) have produced a device called the PHANTOM which consists of a rod-like probe, linked at joints and free to move in various directions within a limited space. By manually exploring with this probe, a user can feel resistance at certain places as if a solid object is present. In fact, however, a computer is programmed to create such resistance according to the virtual object selected for haptic display. This is a compelling demonstration that a tactual object does not even have to exist in order to be perceived and externalized.
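Force-feedback devices of this kind commonly compute resistance with a penalty (spring) model: when the probe tip penetrates the virtual surface, a restoring force proportional to penetration depth pushes it back out. The sketch below illustrates that general technique for a virtual sphere; the stiffness value and the interface are assumptions, not SensAble's actual software.

    import numpy as np

    def contact_force(tip_pos, center, radius, stiffness=500.0):
        """Penalty-based haptic rendering of a rigid virtual sphere.

        Returns the force (in newtons) to command at the probe tip.
        Outside the sphere the force is zero; inside, a spring force
        (Hooke's law, F = k * depth) acts along the surface normal.
        """
        offset = np.asarray(tip_pos, float) - np.asarray(center, float)
        dist = np.linalg.norm(offset)
        depth = radius - dist
        if depth <= 0 or dist == 0:
            return np.zeros(3)      # No contact: the probe moves freely.
        normal = offset / dist      # Outward surface normal.
        return stiffness * depth * normal

    # A tip 2 mm inside a 50 mm sphere feels a 1 N outward push.
    print(contact_force([0.0, 0.0, 0.048], [0.0, 0.0, 0.0], 0.05))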

Current applications of the PHANTOM include practice at assembling and manipulating parts, telerobotics, training of skills such as surgical techniques, and human factors research. With only a single 'point of contact', this is a very rudimentary system that does not do justice to the richness of normal haptic experience. However, the fact that telecommunication companies (e.g. British Telecom) are working on similar systems with a view to commercial applications suggests that progress in haptic research is likely to proceed at a rapid pace.

Self-awareness

Externalization is surely important for a sense of realism, and it is real objects or stimulus events of which we are conscious when we perceive. We must attribute the source of stimulation to distal locations, for to do otherwise would mean being body-bound and uninformed about the world around us. This may not matter for clams, which react automatically to possible predators and gather sustenance from the surrounding water. Clams may be said to lack self-awareness for much the same reason that we are unaware of the physiological processes underlying our perceptions: when we perceive, some very clever automatic processes swing into action and do most of the complicated work without our selves knowing anything about them. The end result of these parallel and unconscious physiological processes is the conscious percept.

How these processes develop in our early years is still largely a mystery, but it does not seem impossible for there to be a link between externalization and self-awareness. Might self-awareness be nothing more than the inevitable consequence of first having externalized objects, and then having noticed that among them is a special one called 'self'? This would not be an abrupt discovery occurring on the way to work one day, but a gradual appreciation of which externalized objects belong to self, and which to others. Something along these lines might be seen when a child at first tracks its own hand movements (as if surprised and unable to anticipate them), but then comes to realize that it is in control of the hand and can correlate the haptic and visual experiences. A similar self-awareness may be demonstrated by dogs when they stop chasing their own tails.

If this reasoning is correct, it follows that self-awareness is not a peculiarly human characteristic but an attribute of any organism, or machine, capable of perceiving objects as distal and then perceiving itself as a special case among these objects. This reasoning, in turn, leads to the suggestion that robots can become not only conscious (with this being verifiable according to the same criteria we use to declare humans conscious) but self-aware.

It seems clear that externalization is a necessary condition for self-awareness, for without it we would not know of a world of distal objects at all. However, the suggestion here is that it is also a sufficient condition for self-awareness. Sufficiency can be argued because it is inevitable that, as we move through infancy, we will encounter bits of ourselves among externalized objects and will ultimately integrate these into an object (self) that is distinct from all other objects. When this integration is complete, we may be said to be self-aware.

In conclusion, it seems possible that the attempt to produce a convincing haptic virtual display will lay to rest the mistaken view that touch is not a distal sense. In addition, it is possible that the process of externalization can account for self-awareness.

References

Bartley, S. H. (1969) Principles of Perception, 2nd edn. Harper & Row, 293.
Epstein, W., Hughes, B., Schneider, S. and Bach-y-Rita, P. (1986) Is anything out there?: A study of distal attribution in response to vibrotactile stimulation. Perception, 15, 275-284.
Gescheider, G. A. (1970) Some comparisons between touch and hearing. IEEE Transactions on Man-Machine Systems, MMS-11, 28-35.
Gibson, J. J. (1962) Observations on active touch. Psychological Review, 69(6), 477-491.
Guarniero, G. (1974) Experience of tactile vision. Perception, 3, 101-104.
Jansson, G. (1998) Tactile pictures for visually impaired people: 2D pictures and virtual 3D objects. Paper presented at the second Swedish Symposium on Multimodal Communication, Lund, Sweden, 16-17 July.
Kennedy, J. M., Richardson, B. L. and Magee, L. (1980) In The perception of pictures Volume 2, ed. M. Hagen. New York: Academic Press.
Loomis, J. M. and Lederman, S. J. (1986) Tactual perception. In Handbook of Perception and Human Performance (Vol.2), eds K. Boff, L. Kaufman and J. Thomas. New York: Wiley.
Magee, L. and Kennedy, J. M. (1981) Exploring pictures tactually. Nature, 283, 287.
Middlebrooks, J. C. and Green, D. M. (1991) Sound localisation by human listeners. Annual Review of Psychology, 42, 135-159.
Richardson, B. L. (1990) Separating signal and noise in vibrotactile devices for the deaf. British Journal of Audiology, 24, 105-109.
Richardson, B. L., Symmons, M. and Kennedy, J. M. (1998) Findings with the Tactile Display System. Conference on Representation and Blindness, San Marino, 23-25 May.
Richardson, B. L. and Symmons, M. (2000) The TDS: A new device for comparing active and passive-guided touch. IEEE Transactions on Rehabilitation Engineering. Accepted for publication.

Barry Richardson
Department of Psychology
Monash University
Gippsland Campus
Churchill, VIC 3842
Australia
barry.richardson@sci.monash.edu.au

