Bradley Rhodes, Kenji Mase, "Wearables in 2005," IEEE Pervasive Computing, vol. 5, no. 1, pp. 92-95, 2006.

Wearables in 2005

Bradley Rhodes and Kenji Mase

In July 1996, one year before the first International Symposium on Wearable Computers, DARPA sponsored a workshop entitled "Wearables in 2005" (www.darpa.mil/MTO/Displays/Wear2005). Attendees predicted how wearable computers might be used in 2005 and identified key technology gaps that needed to be filled to make their vision a reality. In October 2005, the 9th annual International Symposium on Wearable Computers was held in Osaka, Japan, the first ISWC held in Asia. Participants presented a wide range of research from both industry and academia, spanning 13 countries and weaving together such diverse fields as interface design, hardware and systems, gesture and pattern recognition, textiles, augmented reality, and clothing design.1 Many of the themes would have sounded familiar in 1996, with continuing improvements in ergonomics and power management as well as in gesture recognition and augmented reality.

As you would hope, the field has also developed in new directions in the past decade, with a much greater emphasis on large-scale recording and annotation of everyday activities, on the science and engineering of clothing design, and on performing thorough quantitative evaluations of potential input devices. We have also seen a large increase in the use of accelerometers, smart phones, and RFID readers as researchers leverage continuing drops in cost and size in the consumer electronics world.

As the largest primary conference for wearables researchers, ISWC provides a good snapshot of the state of the field. So, with the benefit of hindsight, here are some highlights of how wearables research actually looked in 2005.

RFID for context awareness

RFID readers have long been used as wearable input devices in semicontrolled environments because they let computers quickly identify tagged objects without requiring lengthy user input or complex machine-vision algorithms. This year, RFID technology was ubiquitous, with six papers presenting work that used it.

Two papers used RFID tags in the traditional way, as a fast input method. In "Wearable Technology for Crime Scene Investigation," Chris Baber and his colleagues used RFID-tagged evidence bags to quickly link evidence to speech annotations and other log entries. Baber and colleagues note that using the evidence bags to tag and retrieve audio annotations provides the benefits of digital records while retaining those of physical notes written on the bags. In "An Event-driven Navigation Platform for Wearable Computing Environments," Masakazu Miyamae and his colleagues demonstrated a platform that could use a wide variety of sensors to determine a wearer's location, including GPS outdoors and RFID and barcode scanners indoors. The wearer would explicitly scan a tag to receive information about a location.
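
For readers unfamiliar with this style of platform, the sketch below shows the basic pattern in Python: explicit tag scans and GPS fixes arrive as events, and a resolver maps each one to a location the wearer can be told about. The class, tag IDs, and directory here are hypothetical illustrations, not part of Miyamae and colleagues' system.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class LocationEvent:
    source: str   # "gps", "rfid", or "barcode"
    payload: str  # raw coordinates or a tag/barcode ID

# Hypothetical directory mapping indoor tag and barcode IDs to named places.
TAG_DIRECTORY = {
    "tag:0x4F21": "laboratory entrance, 3rd floor",
    "bc:4901234567894": "library information desk",
}

def resolve(event: LocationEvent) -> Optional[str]:
    """Turn a raw sensor event into a human-readable location."""
    if event.source == "gps":
        return f"outdoors near {event.payload}"   # payload is a lat,lon string
    return TAG_DIRECTORY.get(event.payload)       # indoor tag or barcode lookup

def on_scan(event: LocationEvent, notify: Callable[[str], None]) -> None:
    """Called when a GPS fix arrives or the wearer explicitly scans a tag."""
    place = resolve(event)
    if place is not None:
        notify(f"You are at: {place}")

# Example: the wearer scans an RFID tag at the laboratory entrance.
on_scan(LocationEvent("rfid", "tag:0x4F21"), notify=print)
```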

Figure 1. The iGlove has an embedded RFID reader that informs researchers what objects the wearer is touching at a given time.


The remaining four papers used RFID tags not as explicit input devices but to automatically determine the context of wearers as they went about their daily activities. In "Fine-Grained Activity Recognition by Aggregating Abstract Object Usage," Donald J. Patterson and his colleagues describe a joint University of Washington/Intel Research Seattle study in which subjects wore a glove with an embedded RFID reader while performing such everyday morning activities as making oatmeal, eating breakfast, and setting and clearing the table. This research's ultimate goal is to automatically detect "activities of everyday living" in areas such as elder care, where staff would like a general idea of what a resident is doing so they can tell when to move the resident to a higher level of care. In the study, researchers placed RFID tags on every object in the kitchen that subjects touched during a practice trial. The tags provided the wearable with a continuous stream of information about what objects the subject was touching and for how long. The researchers then fed this data into an automatic classifier that attempted to label the activities. With 60 objects tagged in the environment, the best classifier correctly distinguished among 11 interleaved and occasionally interrupted activities 88 percent of the time. This paper won this year's Best Paper Award, the first to be awarded at an ISWC. "Hands-On RFID: Wireless Wearables for Detecting Use of Objects," by Kenneth P. Fishkin, Matthai Philipose, and Adam Rea, described the technical details of the RFID-reader glove, dubbed the "iGlove."
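
To make the classification step concrete, here is a minimal Python sketch of labeling a window of RFID object touches with an activity. The bag-of-objects scoring, object names, and training data are illustrative assumptions; the actual study uses far more sophisticated probabilistic models and real sensor traces.

```python
from collections import Counter, defaultdict

# Hypothetical training data: each window of touched-object IDs is labeled
# with the activity being performed at the time.
TRAINING = [
    (["kettle", "oatmeal_box", "spoon", "bowl"], "making oatmeal"),
    (["bowl", "spoon", "juice_glass"], "eating breakfast"),
    (["plate", "fork", "napkin", "placemat"], "setting the table"),
]

def train(samples):
    """Count how often each object co-occurs with each activity label."""
    counts = defaultdict(Counter)
    for objects, activity in samples:
        counts[activity].update(objects)
    return counts

def classify(counts, touched):
    """Label a new window with the activity whose object profile overlaps most."""
    def score(activity):
        return sum(counts[activity][obj] for obj in touched)
    return max(counts, key=score)

model = train(TRAINING)
print(classify(model, ["spoon", "bowl", "kettle"]))   # -> "making oatmeal"
```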

Assaf Feldman and his colleagues from the MIT Media Lab used a similar technique in "ReachMedia: On-the-Move Interaction with Everyday Objects," one of this year's six best-paper nominees. This project uses a bracelet instead of a glove and focuses more on interface and application design issues and less on activity classification than the previous project. The system provides just-in-time information about objects in the environment, such as a summary of a book the wearer is holding. The wearer gives explicit input using simple gestures, which wireless accelerometers in the bracelet detect. The wearable also automatically senses what object the wearer is holding via the bracelet's wireless RFID reader. Technically, the authors call this "semi-implicit" input: the wearable can identify the object you're holding, but you might have to hold it a certain way to make sure the reader reads the tag, especially with larger objects. The system speaks information about the held object through a cellular phone earpiece.
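
The sketch below illustrates the semi-implicit pattern in Python: the tag of the held object arrives implicitly from the RFID reader, while a wrist gesture supplies the explicit command. The gesture names, tag format, and functions are hypothetical, not ReachMedia's actual interface.

```python
# Hypothetical catalog of information keyed by object tag.
OBJECT_INFO = {
    "book:0451526538": "Nineteen Eighty-Four: summary, reviews, related titles.",
}

def handle_events(rfid_tag: str, gesture: str, speak) -> None:
    """Combine the implicitly sensed object with an explicit gesture command."""
    info = OBJECT_INFO.get(rfid_tag)
    if info is None:
        return                              # tag not read or unknown object
    if gesture == "flick_right":            # e.g., "tell me more"
        speak(info)
    elif gesture == "flick_left":           # e.g., dismiss
        speak("Okay, never mind.")

# Example: the wearer picks up a tagged book and flicks the wrist to the right.
handle_events("book:0451526538", "flick_right", speak=print)
```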

The final RFID paper, "RFID Information Grid for Blind Navigation and Wayfinding" by Scooter Willis and Sumi Helal of the University of Florida, describes a system that helps blind users navigate by embedding RFID chips under rugs, along baseboards, and along the edges of sidewalks. RFID readers integrated into a shoe and the tip of a walking cane read the tags, and location information is read out to the wearer through a cell phone.

Interface evaluations

In wearable computing's early days, most interface evaluation studies were either highly task-driven (such as the pioneering VuMan studies at Carnegie Mellon University, which looked at wearables for vehicle repair) or preliminary evaluations of what were mostly demonstrations of new concepts or technology. As the wearables field has matured, evaluations of wearable interfaces have become more quantitative and robust.

One such evaluation this year was "The Impacts of Limited Visual Feedback on Mobile Text Entry for the Twiddler and Mini-QWERTY Keyboards" by James Clawson and his colleagues Kent Lyons, Thad Starner, and Edward Clarkson, another best-paper nominee. This is the latest in a series of papers from Georgia Tech on typing speeds for mobile keyboards. Previous papers studied learning rates for the Twiddler one-handed chording keyboard, expert Twiddler-user speeds, and mini-QWERTY keyboard speeds and learning rates. In this latest longitudinal study, the authors found that experts on the mini-QWERTY keyboard can type around 60 words per minute with two hands. That rate drops to 46 wpm, however, when typing blind (for example, when taking notes while looking at slides or typing under the desk during a meeting). This compares to about 47 wpm, with higher accuracy, on the Twiddler one-handed chording keyboard, which is designed for eyes-free use from the outset. (In the interest of full disclosure, Bradley Rhodes was on Kent Lyons' PhD thesis committee.)

Another input-related study was "Acceptable Operating Force for Buttons on In-Ear Type Headphones" by Vincent Buil and Gerard Hollemans, in which the authors determined how much force can be applied to a button on an in-ear headphone without causing the wearer discomfort. Buil and Hollemans were motivated by a new interface for controlling portable audio players (demonstrated at the conference) in which the user simply taps the earpiece in a given pattern to control selection and playback.

For output, Jason Wither and Tobias Höllerer of the University of California, Santa Barbara evaluated three techniques for enhancing the apparent depth of augmented-reality overlays in the third best-paper nominee, "Pictorial Depth Cues for Outdoor Augmented Reality." The first technique uses horizontal and vertical "shadow planes" to indicate the depth of the 3D cursor and any virtual objects by projecting their shadows onto two receding grids. The second displays a small top-down view of the graphical objects in the screen's bottom-left corner. The third color-codes markers and virtual objects to indicate their depth relative to the 3D cursor position: markers shift from light blue through red to deep green as they move from closer to the user than the cursor to farther away than it. The researchers were surprised to find almost no statistically significant differences among the techniques, although adding extra markers to the scene to show relative depth did reduce the error rate from around 23 percent to around 10 percent. The authors suggest that the head-mounted display's relatively low video resolution might have obscured differences among the techniques, especially since subjects viewed both the graphical objects and the real world through the video screen.
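
As a rough illustration of the third technique, the Python snippet below tints a marker by its depth relative to the 3D cursor, interpolating from light blue through red to deep green. The specific colors, the linear interpolation, and the normalization range are assumptions for illustration, not the mapping Wither and Höllerer used.

```python
def lerp(a, b, t):
    """Linearly interpolate between two RGB colors."""
    return tuple(int(a[i] + (b[i] - a[i]) * t) for i in range(3))

LIGHT_BLUE = (120, 180, 255)
RED        = (255, 0, 0)
DEEP_GREEN = (0, 100, 0)

def depth_color(marker_depth: float, cursor_depth: float, max_offset: float = 10.0):
    """Return an RGB color: bluish if closer than the cursor, greenish if farther."""
    offset = max(-1.0, min(1.0, (marker_depth - cursor_depth) / max_offset))
    if offset < 0:                             # closer to the user than the cursor
        return lerp(RED, LIGHT_BLUE, -offset)
    return lerp(RED, DEEP_GREEN, offset)       # farther away than the cursor

print(depth_color(marker_depth=4.0, cursor_depth=8.0))    # bluish marker
print(depth_color(marker_depth=15.0, cursor_depth=8.0))   # greenish marker
```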

Clothing design

Figure 2. In a presentation by Joanna Berzowska combining artistry and technology, memory wire makes brooch flowers open and close. (Figure courtesy of XS Labs, 2005)


We've always encouraged artistic papers at ISWC, as long as they have technical content, but it's often hard for presenters to bridge the culture and communications gap between the technical and artistic communities. Joanna Berzowska's presentation of her and Marcelo Coelho's paper on animated kinetic dresses, "Kukkia and Vilkas: Kinetic Electronic Garments," succeeded admirably. The project's goal was entirely aesthetic: one dress's hemline rises and falls as if betraying or thwarting the wearer's secret desires, and brooch flowers open and close of their own accord on another dress's neckline. But the presentation included all the technical details and lessons needed to accomplish these creations. In particular, the authors used Nitinol (memory wire) sewn into felt to create the motion. After trying many configurations, they determined that a tight coil was the best shape in which to "set" the Nitinol because it produced the largest motion. They also found that felt was the ideal fabric for their projects for several reasons: felt is sturdy, so when the Nitinol relaxes back to its unset shape, the felt pulls the dress or flower back to its normal position; it's thick, so circuitry and wires can be felted into the fabric itself; and it insulates heat and electricity well and is relatively fire-retardant, so the wearer is protected if the electronics short out.

Looking at clothing design from a more general perspective was best-paper nominee "A Design Process for the Development of Innovative Smart Clothing that Addresses End-User Needs from Technical, Functional, Aesthetic and Cultural View Points" by Jane McCann, Richard Hurford, and Adam Martin. Rather than presenting a specific design or piece of research, this paper lays out a framework for thinking about smart clothing design. It also surveys issues ranging from identifying the end user's body, activity, and cultural needs, to fiber selection and layering, to integrating electronics.

Recording events

In the past few years, an increasing number of projects have looked at recording everything a wearable user does and then presenting that information as an annotated diary. The main bottleneck for such applications isn't storage capacity, which continues to double every year or so, but annotating and filtering the raw data to make it browsable and searchable. In addition to the University of Washington/Intel project described earlier, several papers at this year's ISWC addressed this difficult problem.

A real-world example of this annotation bottleneck motivates "Recognizing Mimicked Autistic Self-Stimulatory Behaviors Using HMMs" by Tracy Westeyn and her colleagues. Children with the neurodevelopmental disorder autism often engage in repetitive behaviors such as flapping their arms or screaming. Therapists treating such children would like to know how frequently these events occur so they can gauge their treatment's effectiveness, but the logs parents keep are notoriously inaccurate, and it isn't practical for therapists to browse through days of video footage looking for these so-called self-stimulatory behaviors. The eventual goal of this research, which is still in its early stages, is to provide a system with which a therapist can browse a relatively small number of video clips, watching the salient behaviors and discarding the few remaining false positives.
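
A common way to realize the HMM-based recognition the paper's title refers to is to train one model per behavior and classify new sequences by likelihood; the sketch below shows that recipe using the hmmlearn library. The accelerometer-style features, model sizes, and synthetic data are assumptions for illustration, not the configuration the authors report.

```python
import numpy as np
from hmmlearn import hmm

def train_models(training_data):
    """training_data maps a behavior label to a list of feature sequences,
    each an array of shape [n_frames, n_features]."""
    models = {}
    for label, sequences in training_data.items():
        X = np.vstack(sequences)
        lengths = [len(seq) for seq in sequences]
        m = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, sequence):
    """Pick the behavior whose HMM assigns the sequence the highest likelihood."""
    return max(models, key=lambda label: models[label].score(sequence))

# Example with synthetic 3-axis "accelerometer" windows.
rng = np.random.default_rng(0)
data = {
    "hand flapping": [rng.normal(0.0, 1.0, (40, 3)) for _ in range(5)],
    "rocking":       [rng.normal(2.0, 0.5, (40, 3)) for _ in range(5)],
}
models = train_models(data)
print(classify(models, rng.normal(2.0, 0.5, (40, 3))))   # likely "rocking"
```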

Also addressing this bottleneck is "Wearable Hand Activity Recognition for Event Summarization" by Walterio Mayol and David Murray. This project uses a shoulder-mounted active camera to capture what you're doing with your hands, with the ultimate goal of automatically creating video logs of how skilled artisans perform their crafts. The system detects hands by looking for skin color within the camera's field of view and automatically pans the camera to follow your hands as you reach for and manipulate objects. Held objects are likewise recognized by their color histograms, and the video is then segmented into individual events based on which object is being held and on hand motion. The final result is a set of key-frame images, each representing a salient moment in the overall video.
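
Two of the building blocks described above, skin-color detection and color-histogram object matching, can be sketched with OpenCV as follows. The HSV thresholds, bin counts, and similarity measure are assumptions for illustration, not the values Mayol and Murray used.

```python
import cv2
import numpy as np

def skin_mask(frame_bgr):
    """Very coarse skin detector: threshold the frame in HSV space."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower, upper = np.array([0, 40, 60]), np.array([25, 180, 255])
    return cv2.inRange(hsv, lower, upper)

def hs_histogram(patch_bgr):
    """Hue-saturation histogram used as a simple object signature."""
    hsv = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist)

def match_object(patch_bgr, known_objects):
    """Return the name of the known object whose histogram correlates best
    with the image patch around the hands (known_objects: name -> histogram)."""
    h = hs_histogram(patch_bgr)
    return max(known_objects,
               key=lambda name: cv2.compareHist(h, known_objects[name],
                                                cv2.HISTCMP_CORREL))
```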

Another paper looking at the creation of video logs is the final best-paper nominee, "A Body-Mounted Camera System for Capturing User-View Images without Head-Mounted Camera" by Hirotake Yamazoe, Akira Utsumi, and Kenichi Hosaka. Unlike the other papers, this work focuses on the ergonomic and fashion problems associated with head-mounted cameras. Cameras that hang from a lanyard around the neck are far more comfortable and socially acceptable than head-mounted cameras, but head-mounted cameras have the advantage of automatically capturing whatever the wearer is looking at, which is likely the most important part of a scene. Building on the authors' previous work, the system uses one wide-angle, front-facing camera and a pair of infrared cameras that look up at the wearer's head. The system uses the image of the wearer's head to estimate the direction the wearer is looking relative to the camera's orientation and then extracts the region of the wide-angle image that corresponds to that view.
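
The last step, cropping the wide-angle frame to the estimated view, might look roughly like the Python sketch below. The simple linear angle-to-pixel mapping and the field-of-view numbers are assumptions for illustration, not the authors' calibrated camera model.

```python
def view_crop(image_w, image_h, yaw_deg, pitch_deg,
              cam_fov_deg=120.0, view_fov_deg=40.0):
    """Return (x, y, w, h) of the sub-image matching the estimated gaze direction."""
    # Map angles to pixel offsets from the image center (simple linear model).
    cx, cy = image_w / 2, image_h / 2
    px_per_deg_x = image_w / cam_fov_deg
    px_per_deg_y = image_h / cam_fov_deg
    center_x = cx + yaw_deg * px_per_deg_x
    center_y = cy - pitch_deg * px_per_deg_y
    w = int(view_fov_deg * px_per_deg_x)
    h = int(view_fov_deg * px_per_deg_y)
    x = int(max(0, min(image_w - w, center_x - w / 2)))
    y = int(max(0, min(image_h - h, center_y - h / 2)))
    return x, y, w, h

# Example: wearer looking 20 degrees to the right and slightly downward.
print(view_crop(1280, 960, yaw_deg=20.0, pitch_deg=-5.0))
```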

These highlights are just a small sample of the 28 papers and 17 posters presented at ISWC 2005. You can obtain the full list of papers, including abstracts and PDFs, from the IEEE Computer Society Digital Library at www.computer.org/publications/dlib. We also hope you'll participate in the next ISWC, to be held 11-14 October 2006 in Montreux, Switzerland. Submissions are due 21 April. Details are at the conference Web site, http://iswc.net.

Bradley Rhodes is a research scientist at Ricoh California Research Center and program co-chair of the International Symposium on Wearable Computers 2005 (ISWC 05). Contact him at rhodes@bradleyrhodes.com.

Kenji Mase is a professor at Nagoya University and program co-chair of the International Symposium on Wearable Computers 2005 (ISWC 05). Contact him at mase@nagoya-u.jp.

Reference

  1. Proc. 9th IEEE Int'l Symp. Wearable Computers (ISWC 05), IEEE CS Press, 2005.

Copyright © IEEE 2006