The Haptic Radar / Extended Skin Project [ Wearable laser-based whiskers for augmented spatial awareness ]
Alvaro Cassinelli, Carson Reynolds & Masatoshi Ishikawa
Abstract
We are developing a wearable, modular electronic device whose goal is to allow users to perceive and respond to spatial information using haptic cues in an intuitive and inherently parallel way. The system is composed of an array of modules, each of which senses range information and transduces it into an appropriate vibro-tactile cue on the skin directly beneath the module. In the future, this modular interface may cover specific skin regions or be distributed over the entire body surface, functioning as a double skin with enhanced and tunable sensing capabilities. Targeted applications of this interface include visual prosthetics for the blind, augmentation of spatial awareness in hazardous working environments, and enhanced obstacle awareness for car drivers (in this case the extended-skin sensors may cover the surface of the car). The first prototype (a headband configuration) provides the wearer with 360 degrees of spatial awareness and received very positive reviews in our first proof-of-principle experiment (see video here).
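As a rough illustration of the per-module behaviour described above, the sketch below maps a single range reading to a vibration intensity for the motor directly beneath the module. The 3 m sensing horizon, the 8-bit PWM output, and the linear "closer is stronger" mapping are assumptions made for illustration only, not the actual parameters of the prototype.

    /* Minimal sketch of one haptic-radar module's sense-and-transduce step.
     * Assumptions (not taken from the prototype): a 3 m sensing horizon,
     * an 8-bit PWM vibration motor, and a linear "closer is stronger" map. */
    #include <stdio.h>

    #define RANGE_MAX_CM 300.0f   /* assumed sensing horizon of one module */
    #define DUTY_MAX     255      /* assumed 8-bit PWM value for the motor */

    /* Map a range reading (cm) to a vibration intensity: silent beyond the
     * horizon, full intensity at contact. */
    static int range_to_duty(float range_cm)
    {
        if (range_cm >= RANGE_MAX_CM) return 0;
        if (range_cm <= 0.0f)         return DUTY_MAX;
        return (int)(DUTY_MAX * (1.0f - range_cm / RANGE_MAX_CM));
    }

    int main(void)
    {
        /* Simulated readings standing in for a real rangefinder. */
        const float readings_cm[] = { 350.0f, 250.0f, 120.0f, 40.0f, 5.0f };
        for (int i = 0; i < 5; i++)
            printf("range %6.1f cm -> vibration duty %3d/%d\n",
                   readings_cm[i], range_to_duty(readings_cm[i]), DUTY_MAX);
        return 0;
    }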
Introduction
This experiment extends existing research on Electronic Travel Aids (ETAs) relying on tactile-visual sensory substitution for the visually impaired. Broadly speaking, two different paths have been extensively explored in the past. One is to use the input from an imaging camera to drive a two-dimensional haptic display placed on the skin [Kaczmarek1991] or even on the tongue [Bach-Y-Rita2003]; this approach benefits from research on reading aids based on TVSS [Stein1998]. The other is to extend the capabilities of the cane for the blind (using ultrasound or even laser rangefinders), converting the sensed range data into a convenient vibro-tactile cue on the hand wearing the device (see for example [Yuan2004], [Benjamin1973]).

Each approach has its own advantages and disadvantages. The "image-mapped-into-the-skin" visual-to-tactile substitution approach is very promising because it potentially provides the user with an extremely rich source of information. However, it may not be as intuitive as it seems at first glance: although research has shown that the brain has some capacity to divert tactile stimuli to high-level visual processing centers [Ptito05], people's reflex reaction to a sudden skin stimulus may remain protective, so even a trained subject (able to "see" with the skin) may not be able to shut down this hardwired, low-level spinal-cord reflex. More importantly, since stereoscopic vision may simply be impossible to achieve through this method, the approach does not provide any intuitive spatial (depth) cue.

On the other hand, the white-cane approach is extremely intuitive precisely because it is not really a sensory-substitution approach: the cane simply extends the reach of the user's hand through distal attribution. Still, the traditional cane (as well as sophisticated ETAs based on this "extension of the hand" approach, such as the Laser Cane [Benjamin1973] or the MiniGuide ultrasonic aid [Phillips1998]) provides direct spatial awareness only in the direction pointed by the device, an experience somewhat similar to "tunnel vision". The user must therefore actively scan the surroundings. This process is inherently sequential, so global spatial awareness relies heavily on memory and on the user's ability to mentally reconstruct the environment, potentially overloading cognitive functions. The user must scan the environment much as a sighted person would explore a dark room with a flashlight. The result is fragmented, poor spatial awareness with very low temporal resolution. In certain cases this may be sufficient, for instance when following a marked path on the floor. Some researchers have proposed an interesting way to overcome this resource-consuming scanning by using an automated scanner and converting the temporal stream of range data into a modulated audio wave [Meijer1992], [Borenstein1990]. However, this is not an ideal solution, because (1) the user must be trained (for a fair amount of time) to interpret the soundscape in order to "de-multiplex" the temporal data into spatial cues, and (2) the device is cognitively obtrusive, since it may distract people from naturally processing the audio input.

The Haptic Radar "spatially extended skin" paradigm
The haptic-radar project intends to fill the gap between the two approaches described above, while retaining the most interesting aspects of each. This is realized through the paradigm described below.
Depending on the number and relative placement of the modules, this device can be seen either as an enhancement of the white-cane approach (when a single module is carried on the hand) or as a variation of the more classic "image on the skin" TVSS (when a large number of sensors are placed on the same skin region). It is only a variation of the latter approach because, instead of luminance-pixels, what we have here are depth-pixels; moreover, the range data does not come from a single 3D camera (or raster-scan laser rangefinder), but each measurement is performed independently at the module site. The user relies on spatial proprioception to give meaning to the tactile cues. The driving intuition here is that tactile perception and spatial information are closely related in our cognitive system for evolutionary reasons: analogues of our artificial sensory system in the animal world include cellular cilia, insect antennae, and the specialized sensory hairs of mammalian whiskers. We therefore speculate that, at least for certain applications (such as clear-path finding and collision avoidance), the utility and efficiency of this type of sensory substitution may be far greater than what we can expect of more classical TVSS systems (i.e., the "image on the skin" formed by processing and "printing" an image onto a 2D array of tactile stimulators). Eventually, this approach to range information could be applied to spaces at different scales and allow users to perceive other information, such as the texture, roughness, or temperature of distant objects.
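To make the "depth-pixel" idea concrete, the sketch below models an array of modules in which each entry owns its own range reading and drives only the vibrator directly beneath it; unlike an "image on the skin" TVSS, no central camera frame is ever formed or processed. The body-site labels, ranges, sensing horizon, and mapping are invented for illustration and do not describe the actual prototype.

    /* Sketch of an array of independent "depth-pixel" modules.  Each module
     * is read and transduced on its own; the stimulation under one module
     * depends only on that module's own rangefinder.  All values below are
     * illustrative assumptions, not measurements from the prototype. */
    #include <stdio.h>

    #define N_MODULES    6
    #define RANGE_MAX_CM 300.0f

    struct module {
        const char *body_site;   /* where the module sits (proprioceptive anchor) */
        float       range_cm;    /* latest reading from this module's own sensor  */
        int         duty;        /* vibration output for this module only         */
    };

    int main(void)
    {
        struct module band[N_MODULES] = {
            { "forehead",     250.0f, 0 }, { "right temple",  80.0f, 0 },
            { "right rear",   300.0f, 0 }, { "rear",          35.0f, 0 },
            { "left rear",    180.0f, 0 }, { "left temple",  300.0f, 0 },
        };

        /* One update pass: each module is transduced independently of the others. */
        for (int i = 0; i < N_MODULES; i++) {
            float r = band[i].range_cm;
            band[i].duty = (r >= RANGE_MAX_CM) ? 0
                         : (int)(255 * (1.0f - r / RANGE_MAX_CM));
            printf("%-12s range %6.1f cm -> duty %3d/255\n",
                   band[i].body_site, r, band[i].duty);
        }
        return 0;
    }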
Of course, the optimal placement of the modules on the skin is of crucial importance. First, since each module is supposed to work as a "micro-cane", it is important that the module's orientation remain consistent with its output. In the case of the ordinary white cane, the user relies on body proprioception to learn about the hand/arm orientation; in the case of the proposed modular system, it is important that each module be placed in a meaningful, stable way over the body surface. One obvious strategy is to locate the modules on "bands" around the body, each module covering a single angular sector of the field of view (a "cone of awareness", as in our headband prototype). Other strategies are possible, for instance placing a few modules at "critical" locations around the body (places which may be fragile or delicate, such as the head or the joints, i.e. places where people would normally wear helmets and protective gear in dangerous situations). Non-obtrusiveness is also a concern. The same question arises in classic visual-to-tactile systems, but there are quite a few skin regions (with more or less good spatial resolution and sensitivity) that can readily be used for our purposes. The skin of the head is a good choice because it does not move much relative to the skull, so the module's stimulus gives the user a clear and consistent cue about the relative position of the object or obstacle.
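Below is a minimal sketch of the "band" placement geometry: N modules spaced evenly around the head, each responsible for one angular sector (its "cone of awareness"). Given an obstacle's azimuth, it returns the index of the module whose sector contains it. The eight-module layout and the convention that module 0 faces straight ahead are assumptions for illustration, not the layout of the actual headband.

    /* Sketch of the "band" placement strategy: N modules spaced evenly
     * around the head, each covering one angular sector.  Module 0 is
     * assumed to face straight ahead, with azimuth measured clockwise
     * from the facing direction (both are illustrative conventions). */
    #include <stdio.h>

    #define N_MODULES 8

    /* Return the index of the module whose sector contains an obstacle at
     * the given azimuth (degrees, any value, measured from straight ahead). */
    static int module_for_azimuth(double azimuth_deg)
    {
        double sector = 360.0 / N_MODULES;
        /* Normalise to [0, 360) after shifting by half a sector so that
         * module 0's cone is centred on 0 degrees. */
        double a = azimuth_deg + sector / 2.0;
        a -= 360.0 * (long)(a / 360.0);
        if (a < 0.0) a += 360.0;
        return (int)(a / sector) % N_MODULES;
    }

    int main(void)
    {
        const double obstacles_deg[] = { 0.0, 44.0, 46.0, 180.0, -30.0, 350.0 };
        for (int i = 0; i < 6; i++)
            printf("obstacle at %7.1f deg -> module %d\n",
                   obstacles_deg[i], module_for_azimuth(obstacles_deg[i]));
        return 0;
    }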
In a word, what we are proposing here is to build artificial, wearable, light-based hairs (or antennae). In the near future, we may be able to create something like on-chip, skin-implantable whiskers using MOEMS technology. The actual hair stem would be an invisible, unobtrusive, steerable laser beam. Results in a similar direction have already been achieved in the framework of the smart laser scanner project in our own lab [Cassinelli05].
Proof-of-Principle (Head-band Prototype)
To date (14.11.2006), we have designed and run preliminary tests on two different prototypes, both configured as a wearable headband. This particular configuration provides the wearer with 360 degrees of spatial awareness.
Details on the hardware and results of the preliminary experiments can be found here (see also the slide presentation).
Publications