The Haptic Radar / Extended Skin Project

[ Wearable laser-based whiskers for augmented spatial awareness ]

Alvaro Cassinelli, Carson Reynolds & Masatoshi Ishikawa

 

 

SECTIONS

  1. Abstract
  2. Introduction
  3. Headband Prototype
  4. Ongoing Research Topics
  5. Publications
  6. References
 

VIDEO DEMOS

Prototype experiment (April 2006)

Computer-controlled simulator (virtual maze)


Abstract

We are developing a wearable and modular electronic device whose goal is to allow users to perceive and respond to spatial information using haptic cues in an intuitive and inherently parallel way. The system is composed of an array of modules, each of which senses range information and transduces it into an appropriate vibro-tactile cue on the skin directly beneath the module. In the future, this modular interface may cover specific skin regions or be distributed over the entire body surface, functioning as a double skin with enhanced and tunable sensing capabilities.

Targeted applications of this interface include visual prosthetics for the blind, augmented spatial awareness in hazardous working environments, and enhanced obstacle awareness for car drivers (in this last case the extended-skin sensors may cover the surface of the car).

The first prototype (a headband configuration) provides the wearer with 360 degrees of spatial awareness and received very positive feedback in our first proof-of-principle experiment (see video here).

Introduction

This experiment extends existing research on Electronic Travel Aids (ETAs) relying on tactile-visual sensory substitution for the visually impaired. Keeping things simple, we can say that two different paths have been extensively explored in the past. One is to use the input from an imaging camera to drive a two-dimensional haptic display (placed on the skin [Kaczmarek1991] or even on the tongue [Bach-Y-Rita2003]); this approach benefits from research on reading aids based on TVSS [Stein1998]. The other approach consists in extending the capabilities of the white cane (using ultrasound or even laser rangefinders) and converting the sensed range data into a convenient vibro-tactile cue on the hand wearing the device (see for example [Yuan2004], [Benjamin1973]).

Each approach has its own advantages and disadvantages. For instance, the "image-mapped-onto-the-skin" visual-to-tactile substitution approach is very promising because it potentially provides the user with an extremely rich source of information. However, it may not be as intuitive as it seems at first glance: although research has shown that the brain has some capacity to divert tactile stimuli to high-level visual processing centers [Ptito05], people's reflex reaction to a sudden skin stimulus may remain protective, so even a trained subject (able to "see" with the skin) may not be able to shut down this hardwired, low-level spinal-cord reflex. More importantly, since stereoscopic vision may simply be impossible to achieve through this method, the approach does not provide any intuitive spatial (depth) cue.

On the other hand, the white-cane approach is extremely intuitive precisely because it is not really a sensory-substitution approach: the cane only extends the reach of the user's hand through distal attribution. Still, the traditional cane (as well as sophisticated ETAs based on this "extension of the hand" approach, such as the Laser Cane [Benjamin1973] or the MiniGuide ultrasonic aid [Phillips1998]) provides "direct spatial awareness" only in the direction pointed by the device, an experience somewhat similar to "tunnel vision". The user must therefore actively scan the surroundings. This process is inherently sequential, and as a consequence global spatial awareness relies heavily on memory and on the user's ability to mentally reconstruct the surroundings, potentially overloading cognitive functions. The user must scan the environment much as a sighted person would do with a flashlight in a dark room. The result is fragmented, poor spatial awareness with very low temporal resolution. In certain cases this approach may be sufficient, for instance when following a marked path on the floor. Some researchers have proposed an interesting way to overcome this resource-consuming scanning by using an automated scanner and converting the temporal stream of range data into a modulated audio wave [Meijer1992], [Borenstein1990]. However, this is not an ideal solution, because (1) the user must be trained (for a fair amount of time) to interpret the soundscape in order to "de-multiplex" the temporal data into spatial cues, and (2) the device is cognitively obtrusive, since it may distract people from naturally processing ambient audio input.

The Haptic Radar "spatially extended skin" paradigm

The haptic-radar project intends to fill the gap between the two approaches described above, while retaining the most interesting aspects of each. This is realized thanks to the following paradigm:

  • Modularity (or parallelism). The system is modular, exploiting the input parallelism of the skin (this is similar to the image-mapped-onto-the-skin approach, but instead of image pixels what we have here are depth pixels; using a 3D range camera to drive a 2D haptic display could itself be an interesting research topic).
  • Direct range-to-tactile transduction. Each module behaves as a "mini-cane" (or artificial hair or antenna) that translates depth-range information into a tactile cue directly behind the sensor. Each module produces a stimulus encoding the direction, proximity and speed of the obstacle/object. When properly tuned, the stimulus should be readily interpretable by both the user's low- and high-level cognitive systems (a minimal sketch of this per-module transduction is given after this list).
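
The following is a minimal sketch, in Python, of what a single module's range-to-tactile loop could look like. It is purely illustrative: read_range_cm() and set_motor_duty() are hypothetical placeholders standing in for the module's sensor and motor interfaces, not the prototype's actual firmware.

    # Minimal sketch of one "mini-cane" module: read the range, map it to a
    # vibro-tactile intensity, and drive the motor located directly behind the sensor.
    # read_range_cm() and set_motor_duty() are hypothetical hardware stubs.

    import time

    MAX_RANGE_CM = 80.0  # detection limit of the module's range-finder (illustrative)

    def read_range_cm() -> float:
        """Hypothetical stub returning the distance (in cm) measured by this module."""
        return 45.0

    def set_motor_duty(duty: float) -> None:
        """Hypothetical stub driving the vibration motor (0.0 = off, 1.0 = full power)."""
        print(f"motor duty = {duty:.2f}")

    def range_to_intensity(range_cm: float) -> float:
        """Closer obstacles give stronger vibration; nothing within range gives no cue."""
        if range_cm >= MAX_RANGE_CM:
            return 0.0
        return 1.0 - range_cm / MAX_RANGE_CM

    if __name__ == "__main__":
        for _ in range(3):  # a few iterations of the sensing/actuation loop
            set_motor_duty(range_to_intensity(read_range_cm()))
            time.sleep(0.05)  # ~20 Hz update rate (illustrative)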

Depending on the number and relative placement of the modules, this device can actually be seen either as an enhancement of the white-cane approach (when a single module is carried on the hand) or as a variation of the more classic "image on the skin" TVSS (when a large number of sensors are placed on the same skin region). It is only a variation of the latter approach because, instead of luminance pixels, what we have here are depth pixels, and because the range data does not come from a single 3D camera (or raster-scanning laser rangefinder): each measurement is performed independently at the module site. The user relies on spatial proprioception to give meaning to the tactile cues.

The driving intuition here is that tactile perception and spatial information are closely related in our cognitive system for evolutionary reasons: analogues of our artificial sensory system in the animal world are cellular cilia, insect antennae, and the specialized sensory hairs of mammalian whiskers. We therefore speculate that, at least for certain applications (such as clear-path finding and collision avoidance), the utility and efficiency of this type of sensory substitution may be far greater than what we can expect from more classical TVSS systems (i.e. the "image on the skin" formed by processing and "printing" an image onto a 2D array of tactile stimulators). Eventually, this approach to range information could be applied to spaces at different scales and could allow users to perceive other information such as the texture, roughness, or temperature of faraway objects.

Of course, the optimal placement of the modules on the skin is of crucial importance. First, since each module is supposed to work as a "micro-cane", it is important that the module's orientation remains consistent with its output. In the case of the ordinary white cane, the user relies on body proprioception to learn about the hand/arm orientation; in the case of the proposed modular system, it is important that each module is placed in a meaningful, stable way over the body surface. One obvious strategy is to locate the modules on "bands" around the body, each module covering a single angular section of the field of view (a "cone of awareness", as in our headband prototype; see the sketch below). However, other strategies are possible, for instance placing a few modules around the body at "critical" locations (places which may be fragile or delicate, such as the head or the joints, where people would normally wear helmets and protective gear in dangerous situations).
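
As a toy Python sketch (assuming an evenly spaced ring of modules; the module count and indexing convention are illustrative, not the prototype's actual layout), the angular "cone of awareness" owned by each module on such a band could be assigned as follows:

    # Toy sketch: assigning each module on a band its angular "cone of awareness".
    # NUM_MODULES and the indexing convention are illustrative assumptions.

    NUM_MODULES = 8  # evenly spaced ring of modules around the head or body

    def module_azimuth_deg(index: int) -> float:
        """Center direction of module `index`, with 0 degrees = straight ahead."""
        return (360.0 / NUM_MODULES) * index

    def module_for_direction(azimuth_deg: float) -> int:
        """Which module 'owns' an obstacle detected at a given azimuth."""
        sector = 360.0 / NUM_MODULES
        return int(((azimuth_deg + sector / 2.0) % 360.0) // sector)

    if __name__ == "__main__":
        print(module_azimuth_deg(2))        # 90.0 degrees (to the wearer's side)
        print(module_for_direction(170.0))  # module covering the region behind the wearer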

Non-obtrusiveness is also a concern. The same question arises in classic visual-to-tactile systems, but there are quite a few skin regions (with varying spatial resolution and sensitivity) that can readily be used for our purposes. The skin of the head is a good choice because it does not move much relative to the skull, so a module's stimulus gives the user a clear and consistent cue about the relative position of the object/obstacle.

In a word, what we are proposing here is to build artificial, wearable, light-based hairs (or antennae).

In the near future, we may be able to create something like on-a-chip, skin-implantable whiskers using MOEMS technology. The actual hair stem would be an invisible, unobtrusive, steerable laser beam. Results in a similar direction have already been achieved in the framework of the smart laser scanner project in our own lab [Cassinelli05].

Proof-of-Principle (Headband Prototype)

To date (14.11.2006), we have designed and run preliminary tests on two different prototypes, both in a wearable headband configuration. This particular configuration provides the wearer with 360 degrees of spatial awareness.

The first prototype is a simulator: its modules do not contain range-finders, but only a vibration motor and a monitor LED. The device is controlled by a computer and enables the wearer to explore and move within a virtual maze. This prototype was built primarily to study the ability of the user to deal with 360 degrees of spatial awareness in a controlled (virtual) environment. A short video demo can be seen here. A prototype using pneumatic actuators is being considered that would allow us to perform experiments inside an fMRI machine.

Each module of the second prototype contains an infrared proximity sensor (SHARP GP2D12) with a maximum range of 80 cm, giving the user a relatively short sphere of awareness (everything within arm's reach). Vibro-tactile stimulation is achieved using cheap off-the-shelf miniature off-axis vibration motors, and tactile cues are created by simultaneously varying the amplitude and speed of the rotation. The key question here was: do participants intuitively react so as to avoid objects approaching from behind, without prior training? A prototype using ultrasound range-finders is being built (maximum range of 6 meters). The user can interactively "tune" the maximum detection range. Various stimulation modes have been implemented: (1) vibration directly proportional to the range-finder output; (2) a single motor vibrating at any given time, indicating either the "center of gravity" of the range detections or exactly the opposite direction (thus showing the direction of a clear path). A toy sketch of these two modes is given below.
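
The following is a minimal Python sketch of these two stimulation modes, operating on one range reading per module of the headband. The module count, detection limit and mapping are illustrative assumptions, not the prototype's exact parameters.

    # Toy sketch of the two stimulation modes, given one range reading per module.
    # Module count, detection limit and mapping are illustrative assumptions.

    import math

    MAX_RANGE_CM = 80.0  # detection limit; the prototype lets the user tune this

    def proportional_mode(ranges_cm):
        """Mode (1): every motor vibrates with an intensity proportional to proximity."""
        return [max(0.0, 1.0 - r / MAX_RANGE_CM) for r in ranges_cm]

    def single_motor_mode(ranges_cm, show_clear_path=False):
        """Mode (2): only one motor vibrates, placed at the 'center of gravity' of the
        detections, or at the diametrically opposed module to show a clear path."""
        n = len(ranges_cm)
        x = y = 0.0
        for i, r in enumerate(ranges_cm):
            weight = max(0.0, 1.0 - r / MAX_RANGE_CM)  # closer objects weigh more
            angle = 2.0 * math.pi * i / n              # direction covered by module i
            x += weight * math.cos(angle)
            y += weight * math.sin(angle)
        angle = math.atan2(y, x)
        if show_clear_path:
            angle += math.pi                           # point away from the obstacles
        return round((angle % (2.0 * math.pi)) / (2.0 * math.pi) * n) % n

    if __name__ == "__main__":
        readings = [80, 80, 30, 25, 80, 80, 80, 80]    # obstacle near modules 2-3
        print(proportional_mode(readings))
        print(single_motor_mode(readings))                        # motor near the obstacle
        print(single_motor_mode(readings, show_clear_path=True))  # motor on the opposite side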

Details on the hardware and results of the preliminary experiments can be found here (check also the slide presentation).


Ongoing Research Topics

Nature of the skin stimulus

The nature of the stimulus is a crucial point to consider. It may not be enough to translate range data into the intensity or frequency of the tactile stimulus. As one subject pointed out in our proof-of-principle questionnaire: "if the stimulus evolves in a continuous way, there is no startling effect and I don't feel the need to avoid the object". Based on the concept of tactile icons, or tactons [Brewster04] (a tree-like classification of meaningful tactile stimuli for human-computer interaction), we are now thinking about creating an intuitive (i.e. readily interpretable) set of tactons to represent and classify different levels of information: first global spatial awareness (a continuous crowdedness/emptiness cue relating somehow to the high-level cognitive feelings of claustrophobia or agoraphobia), then a sub-category of more region-specific range information, and finally the obstacle's particular nature, including its speed (it would be very useful, for instance, to avoid small yet dangerous moving objects such as the rotating blades of a fan) and the speed of looming objects. A hypothetical sketch of such a hierarchy is given below.
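
Purely as an illustration (the categories, rhythms and intensities below are hypothetical and not a design we have settled on), such a tree-like tacton set might be organized along these lines in Python:

    # Hypothetical sketch of a tree-like tacton classification, in the spirit of
    # [Brewster04]; the categories and vibration parameters are illustrative only.

    from dataclasses import dataclass, field

    @dataclass
    class Tacton:
        name: str
        rhythm_hz: float   # pulse rate used to signal this category
        intensity: float   # base vibration amplitude (0.0 - 1.0)
        children: list = field(default_factory=list)

    tacton_tree = Tacton("spatial awareness", 1.0, 0.2, [
        Tacton("global crowdedness / emptiness", 0.5, 0.1),
        Tacton("region-specific range", 2.0, 0.4, [
            Tacton("static obstacle", 2.0, 0.4),
            Tacton("fast looming object", 8.0, 0.9),  # startling, high-priority cue
        ]),
    ])

    def print_tree(tacton: Tacton, depth: int = 0) -> None:
        """List the tacton hierarchy, one level of indentation per tree depth."""
        print("  " * depth + f"{tacton.name}: {tacton.rhythm_hz} Hz, amplitude {tacton.intensity}")
        for child in tacton.children:
            print_tree(child, depth + 1)

    if __name__ == "__main__":
        print_tree(tacton_tree)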

Networked array of modules

Interestingly, modules could be made to communicate with each other (even when they are situated far apart on the body) in order to create whole-body dynamic stimuli, such as "waves" on the skin indicating the direction of a looming object, or even new "through the body" sensations obtained by sequentially activating diametrically opposed actuators (to represent directions, for instance, or virtual bodies and geometrical figures in augmented-reality applications). A toy sketch of such a wave is given below.
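
As a toy Python sketch of such a traveling wave on a ring of networked modules (the module count, timing and motor interface are illustrative assumptions), each motor is pulsed with a delay proportional to its angular distance from the module facing the looming object:

    # Toy sketch: a tactile "wave" propagating over a ring of networked modules,
    # starting at the module facing the looming object. The module count, timing
    # and motor interface are illustrative assumptions.

    import time

    NUM_MODULES = 8      # ring of modules around the body
    STEP_DELAY_S = 0.05  # delay between neighbouring modules along the wave

    def pulse_motor(index: int) -> None:
        """Hypothetical stub that briefly pulses the vibration motor of one module."""
        print(f"t={time.perf_counter():.2f}s  pulse module {index}")

    def skin_wave(origin_module: int) -> None:
        """Fire motors at increasing angular distance from the origin, on both sides."""
        for step in range(NUM_MODULES // 2 + 1):
            left = (origin_module - step) % NUM_MODULES
            right = (origin_module + step) % NUM_MODULES
            for module in {left, right}:  # a set avoids pulsing the same module twice
                pulse_motor(module)
            time.sleep(STEP_DELAY_S)

    if __name__ == "__main__":
        skin_wave(origin_module=2)  # object looming from the direction of module 2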


Publications

  • Cassinelli, A., Reynolds, C. and Ishikawa, M. "Haptic Radar". The 33rd International Conference and Exhibition on Computer Graphics and Interactive Techniques (SIGGRAPH), August 1, 2006, Boston, Massachusetts, USA. [PDF-202KB, Large Quicktime Video, Small Quicktime Video, MPG-4].
  • Cassinelli, A., Reynolds, C. and Ishikawa, M. (2006) "Augmenting spatial awareness with Haptic Radar". Tenth International Symposium on Wearable Computers (ISWC), October 11 - 14, 2006, Montreux, Switzerland. Short paper (4 pages) [PDF-103KB]. Slide presentation [PPT-6.4MB]. Unpublished long version (6 pages) [PDF-268KB].

References

  • [Bach-Y-Rita2003] P. Bach-Y-Rita, M. E. Tyler, and K. A. Kaczmarek. Seeing with the brain. Int. Journal of Human Computer Interaction, 15(2):285–295, 2003.
  • [Benjamin1973] J. M. Benjamin, N. A. Ali, and A. F. Schepis. A laser cane for the blind. Proc. San Diego Biomedical Symp., 12:53–57, 1973.
  • [Borenstein1990] J. Borenstein. The navbelt - a computerized multi-sensor travel aid for active guidance of the blind. CSUN’s Fifth Annual Conference on Technology and Persons with Disabilities, Los Angeles, California, pages 107–116, 1990.
  • [Brewster04] S. Brewster and L. Brown. Tactons: Structured tactile messages for non-visual information display. Proceedings of Australasian User Interface Conference, Australian Computer Society, (Dunedin, New Zealand), pages 15–23, 2004.
  • [Cassinelli05] A. Cassinelli, S. Perrin, and M. Ishikawa. Smart laser scanner for 3d human-machine interface. ACM SIGCHI 2005 Extended Abstracts, Portland, Oregon, pages 1138– 1139, 2005.
  • [Kaczmarek1991] K. A. Kaczmarek, J. G. Webster, P. Bach-y-Rita, and W. J. Tompkins. Electrotactile and vibrotactile displays for sensory substitution systems. IEEE Trans Biomed Eng, 38(1):1–16, January 1991.
  • [Meijer1992] P. B. L. Meijer. An experimental system for auditory image representations. IEEE Transactions on Biomedical Engineering, 39(2):112–121, Feb 1992.
  • [Phillips1998] G. Phillips. The miniguide ultrasonic mobility aid. GDP Research, South Australia, 1998.
  • [Ptito05] M. Ptito, S. Moesgaard, A. Gjedde, and R. Kupers. Crossmodal plasticity revealed by electrotactile stimulation of the tongue in the congenitally blind. Brain, 128:606–614, Mar 2005.
  • [Stein1998] D. K. Stein. The optacon: Past, present, and future. The Braille Monitor, 41(5), 1998.
  • [Yuan2004] D. Yuan and R. Manduchi. A tool for range sensing and environment discovery for the blind. IEEE Conference on Computer Vision and Pattern Recognition Workshop, page 39, 2004.
