Use of Haptics for the Enhanced Museum Website (USC)

Our mission for the Enhanced Museum project is to explore new technologies for the exhibition of three-dimensional art objects (Goldberg, Bekey, Akatsuka, & Bressanelli, 1997; McLaughlin, 1998; McLaughlin, Goldberg, Ellison, & Lucas, 1999; McLaughlin & Osborne, 1997; Schertz, Jaskowiak, & McLaughlin, 1997). Although it is not yet commonplace, a few museums are exploring methods for 3D digitization of priceless artifacts and objects from their sculpture and decorative arts collections, making the images available via CD-ROM or in-house kiosks. For example, the Canadian Museum of Civilization has collaborated with Ontario-based Hymarc to use the latter's ColorScan 3D laser camera to create three-dimensional models of more than fifty objects from the museum's collection (Canarie, Inc., 1998; Shulman, 1998).

A similar partnership has been formed between the Smithsonian Institution and Synthonic Technologies, a Los Angeles-area company. At Florida State University, the Department of Classics is working with a team to digitize Etruscan artifacts using the RealScan 3D imaging system from Real 3D (Orlando, Florida), and art historians from Temple University are collaborating with researchers from the Watson Research Laboratory's visual and geometric computing group to create a model of Michelangelo's Pietà with the Virtuoso shape camera from Visual Interface (Shulman, 1998).

In collaboration with our colleagues at USC's accredited art museum, the Fisher Gallery, our IMSC team is developing an application for the Media Immersion Environment that will not only permit museum visitors to examine and manipulate digitized three-dimensional art objects visually, but will also allow visitors to interact remotely, in real time, with museum staff members to engage in joint tactile exploration of the works of art. Our team believes that the "hands-off" policies that museums must impose limit appreciation of three-dimensional objects, whose full comprehension relies on the sense of touch as well as vision. Haptic interfaces will allow fuller appreciation of three-dimensional objects without jeopardizing conservation standards. Our goal is to assist museums, research institutes, and other conservators of priceless objects in providing the public with a vehicle for object exploration in a modality that could not otherwise be permitted. Our initial application will be to a wing of the virtual museum focusing on examples of the decorative arts: the Fisher Gallery's collection of teapots. The collection comprises 150 teapots from all over the world.


It was a gift to USC in memory of the late Patricia Daugherty Narramore by her husband Roth Narramore. The Narramores, USC alumni, collected the pots on their many domestic and international journeys. Some items are by local artists, others by artists and makers from other countries, including China, Indonesia, Canada, Japan, Brazil, England, Portugal, Morocco, and Sweden. Materials used to make the pots range from porcelain and clay to wicker and metal. The teapots are ideal candidates for haptic exploration, not only for their varied shapes but also for their unusual textures and surface decoration.

Figure 1. Teapots from the Fisher Gallery's Narramore Collection

Haptics refers to the modality of touch and the associated sensory feedback. Haptics researchers are interested in developing, testing, and refining tactile and force feedback devices that allow users to manipulate and "feel" virtual objects with respect to such features as shape, temperature, weight, and surface texture (Basdogan, Ho, Slater, & Srinivasan, 1998; Bekey, 1996; Burdea, 1996; Brown & Colgate, 1994; Buttolo, Oboe, Hannaford, & McNally, 1996; Dinsmore, Langrana, Burdea, & Ladeji, 1997; Giess, Evers, & Meinzer, 1998; Ikei, Wakamatsu, & Fukuda, 1997; Liu, Iberall, & Bekey, 1989; Howe, 1994; Howe & Cutkosky, 1993; Mark, Randolph, Finch, van Verth, & Taylor, 1996; Massie, 1996; Millman, 1995; Mor, 1998; Nakamura & Inoue, 1998; Rao, Medioni, Liu, & Bekey, 1988; Srinivasan & Basdogan, 1997; Yamamoto, Ishiguro, & Uchikawa, 1993).

Haptic acquisition and display devices

Researchers have been interested in the potential of force feedback devices such as pen- or stylus-based masters, like SensAble's PHANToM (Massie, 1996; Salisbury, Brock, Massie, Swarup, & Zilles, 1995; Salisbury & Massie, 1994), as alternative or supplemental input devices to the mouse, keyboard, or joystick. The PHANToM is a small, desk-grounded robot that permits simulation of single-fingertip contact with virtual objects through a thimble or stylus. It tracks the x, y, and z Cartesian coordinates and the pitch, roll, and yaw of the virtual probe as it moves about a three-dimensional workspace, and its actuators communicate forces back to the user's fingertips as it detects collisions with virtual objects, simulating the sense of touch. The CyberGrasp from Virtual Technologies is an exoskeletal device that fits over a 22-DOF CyberGlove, providing force feedback and vibrotactile contact feedback; it is used in conjunction with a position tracker to measure the position and orientation of the forearm in three-dimensional space.
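
As an illustration of how such a single-point device renders contact, the following is a minimal C++ sketch of the penetration-depth spring model commonly used for haptic rendering. The sphere primitive, the stiffness gain k, and the function names are illustrative assumptions, not GHOST SDK calls.

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Spring ("penalty") force for a single-point probe against a virtual
// sphere of radius r centered at c: when the probe tip is inside the
// sphere, push it back out along the surface normal with magnitude
// proportional to penetration depth (Hooke's law, F = k * depth * n).
Vec3 contactForce(const Vec3& probe, const Vec3& c, double r, double k) {
    Vec3 d{probe[0] - c[0], probe[1] - c[1], probe[2] - c[2]};
    double dist = std::sqrt(d[0] * d[0] + d[1] * d[1] + d[2] * d[2]);
    if (dist >= r || dist == 0.0)
        return {0.0, 0.0, 0.0};              // no collision: no force
    double depth = r - dist;                 // penetration depth
    double s = k * depth / dist;             // scale unit normal by k * depth
    return {d[0] * s, d[1] * s, d[2] * s};
}
```

A loop of this kind must run at roughly 1 kHz for contact to feel rigid, which is why haptic update rates figure prominently in the systems discussed below.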

Similar to the CyberGrasp is the Rutgers Master II (Burdea, 1996; Gomez, 1998; Langrana, Burdea, Ladeji, & Dinsmore, 1997), which has an actuator platform mounted on the palm that gives force feedback to four fingers. Position tracking is done by the Polhemus Fastrak. Alternative approaches to haptic sensing and discrimination have employed the vibrotactile display, which applies multiple small force vectors to the fingertip. For example, Ikei, Wakamatsu, and Fukuda (1997) used photographs of objects and a contact pin array to transmit tactile sensations of the surface of objects.

Each pin in the array vibrates commensurate with the local intensity (brightness) of the surface area. Image intensity is roughly correlated with the height of texture protrusions. A data glove originating at Sandia (Sandia, 1995) uses rod-like plungers to tap the fingertips lightly to simulate tactile sensations, and a magnetic tracker and strain gauges to follow the movements of the user's hand and fingers. Howe (1996) notes that vibrations are particularly helpful in certain kinds of sensing tasks, such as assessing surface roughness, or in detecting system events (for example, contact and slip in manipulation control). Researchers at the Fraunhofer Institute for Computer Graphics in Darmstadt have developed a glove-like haptic device they call the ThermoPad, a haptic temperature display based on Peltier elements and simple heat transfer models; they are able to simulate not only the "environmental" temperature but also the sensation of heat or cold one experiences when grasping or colliding with a virtual object. At the University of Tsukuba, Japan, Iwata, Yano, and Hashimoto (1997) are using the HapticMaster, a 6-DOF device with a ball grip that can be replaced by various real tools for surgical simulations and other specialized applications.
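
The intensity-to-amplitude mapping described above for the pin array can be sketched in a few lines of C++; the grid dimensions, the 0-to-1 amplitude range, and all names here are assumptions for illustration, not the published design.

```cpp
#include <cstdint>
#include <vector>

// Map the average brightness of each image cell to the vibration
// amplitude (0..1) of the pin above it: brighter regions, which stand
// for taller texture protrusions, vibrate more strongly.
std::vector<double> pinAmplitudes(const std::vector<uint8_t>& gray,
                                  int imgW, int imgH,
                                  int pinsX, int pinsY) {
    std::vector<double> amp(pinsX * pinsY, 0.0);
    const int cellW = imgW / pinsX, cellH = imgH / pinsY;
    for (int py = 0; py < pinsY; ++py) {
        for (int px = 0; px < pinsX; ++px) {
            long sum = 0;                    // total brightness under pin
            for (int y = 0; y < cellH; ++y)
                for (int x = 0; x < cellW; ++x)
                    sum += gray[(py * cellH + y) * imgW + (px * cellW + x)];
            amp[py * pinsX + px] = sum / (255.0 * cellW * cellH);
        }
    }
    return amp;
}
```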

A novel type of haptic display is the Haptic Screen (Iwata, Yano, and Hashimoto, 1997), a device with a rubberized elastic surface with actuators, each with force sensors, underneath. The surface of the Haptic Screen can be deformed with the naked hand. An electromagnetic interface couples the ISU Force Reflecting Exoskeleton, developed at Iowa State University, to the operator's two fingers, eliminating the burdensome heaviness usually associated with exoskeletal devices. Finally, there is considerable interest in 2D haptic devices. For example, Pai and Reissell at the University of British Columbia have used the Pantograph 2D haptic interface, a two-DOF force-feedback planar device with a handle the user moves like a mouse, to feel the edges of shapes in images (Pai & Reissell, 1997).

At IMSC we are currently working with both the PHANToM and the CyberGrasp, using the Polhemus Fastrak for tracking the position of the CyberGrasp user's hand. The tracking problem has been widely studied in the context of mobile robots at USC (Roumeliotis, Sukhatme, & Bekey, 1999a, 1999b). In the museum application the visitor and the museum staff member will be able to manipulate haptic data jointly, regardless of display type. Thus one of our primary concerns is to ensure proper registration of the disparate devices with the 3D environment and with each other. Of potential use in this regard is work by Iwata, Yano, and Hashimoto (1997) on LHX (Library for Haptics), a modular software library that can support a variety of different haptic displays. LHX allows a variety of mechanical configurations, supports easy construction of haptic user interfaces, allows networked applications in virtual spaces, and includes a visual display interface. We are particularly eager to begin work with the CyberGrasp; to date we have been unable to identify any published work or conference papers reporting research using the device, which we attribute in part to its expense and its relative infancy as a haptic display device.
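
To make the registration concern concrete, here is a minimal sketch, assuming each device reports positions in its own frame and that calibrated device-to-world transforms are available from a prior calibration step; the transform names are hypothetical.

```cpp
#include <array>

using Vec3 = std::array<double, 3>;
using Mat4 = std::array<std::array<double, 4>, 4>;  // row-major, homogeneous

// Apply a calibrated device-to-world transform to a position reported in
// a device's own frame (rotation in the upper-left 3x3, translation in
// the last column). With phantomToWorld and fastrakToWorld both
// calibrated against the same scene, the two probes share one frame.
Vec3 toWorld(const Mat4& deviceToWorld, const Vec3& p) {
    Vec3 out{};
    for (int i = 0; i < 3; ++i)
        out[i] = deviceToWorld[i][0] * p[0] + deviceToWorld[i][1] * p[1] +
                 deviceToWorld[i][2] * p[2] + deviceToWorld[i][3];
    return out;
}
```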

Figure 2. Haptic acquisition and display devices

Representative applications in haptic acquisition and display

A primary application area for haptics has been in surgical simulation and medical training. Langrana, Burdea, Ladeji, and Dinsmore (1997) used the Rutgers Master II haptic device in a training simulation for palpation of subsurface liver tumors. They modeled tumors as comparatively harder spheres within larger and softer spheres. Realistic reaction forces were returned to the user as the virtual hand encountered the "tumors," and the graphical display showed corresponding tissue deformation produced by the palpation. Finite element analysis was used to compute reaction forces corresponding to deformation from experimentally obtained force/deflection curves. Andrew Mor of the Robotics Institute at Carnegie Mellon (Mor, 1998) has used the PHANToM in conjunction with a 2-DOF planar device that generates a moment measured about the tip of a surgical tool in an arthroscopic surgery simulation, thus providing more realistic training for the kinds of unintentional contacts with ligaments and fibrous membranes that an inexperienced resident might encounter.
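
A one-dimensional C++ sketch of the nested-sphere palpation model described above follows; the stiffness values are placeholders, not the experimentally fitted force/deflection curves used in the actual simulator.

```cpp
// Palpating an embedded harder sphere: the response follows the
// soft-tissue stiffness until the probe presses deep enough to reach
// the "tumor," after which the returned force stiffens.
double palpationForce(double depth,            // penetration into tissue (m)
                      double tumorDepth,       // depth where tumor begins (m)
                      double kTissue = 200.0,  // soft tissue stiffness (N/m), assumed
                      double kTumor = 1200.0)  // tumor stiffness (N/m), assumed
{
    if (depth <= 0.0)
        return 0.0;                            // no contact
    if (depth <= tumorDepth)
        return kTissue * depth;                // soft response only
    return kTissue * tumorDepth + kTumor * (depth - tumorDepth);
}
```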

At MIT, De and Srinivasan (1998) have developed models and algorithms for reducing the computational load required to generate visual rendering of organ motion and deformation and to communicate the forces resulting from tool-tissue contact back to the user. They model soft tissue as thin-walled membranes filled with fluid; the force-displacement response is comparable to that obtained in in vivo experiments. Giess, Evers, and Meinzer (1998) integrated haptic volume rendering with the PHANToM into the pre-surgical process of classifying liver parenchyma, vessel trees, and tumors. Surgeons at the Pennsylvania State University School of Medicine, in collaboration with Cambridge-based Boston Dynamics, used two PHANToMs in a training simulation in which residents passed simulated needles through blood vessels, allowing them to collect baseline data on the surgical skill of new trainees. Iwata, Yano, and Hashimoto (1998) report the development of a surgical simulator with a "free form tissue" which behaves like real tissue, e.g., it can be cut.

Gruener (1998), in one of the few research reports that expresses reservations about the potential of haptics in medical applications, found that subjects in a telementoring session did not profit from the addition of force feedback to remote ultrasound diagnosis.

There have been a few projects in which haptic displays are used as alternative input devices for painting, sculpting, and computer-assisted design. At CERTEC, the Center of Rehabilitation Engineering in Lund, Sweden, Sjostrom and his colleagues (Sjostrom, 1997) have created a painting application in which the PHANToM can be used by the visually impaired; line thickness varies with the user's force on the fingertip thimble, and colors are discriminated by their tactual profiles. Marcy, Temkin, Gorman, and Krummel (1998) have developed the Tactile Max, a PHANToM plug-in for 3D Studio Max. Dynasculpt, a prototype from Interval Research Corporation (Snibbe, Anderson, and Verplank, 1998), permits sculpting in three dimensions by attaching a virtual mass to the PHANToM position and constructing a ribbon along the mass's path through 3D space. Gutierrez, Barbero, Aizpitarte, Carrillo, and Eguidazu (1998) have integrated the PHANToM into DATum, a geometric modeler.

Objects can be touched, moved, or grasped (with two PHANToMs), and the assembly/disassembly of mechanical objects can be simulated. Haptics has also been incorporated into scientific visualization. Durbeck, Macias, Weinstein, Johnson, and Hollerbach (1998) have interfaced SCIRun, a computational steering software system, to the PHANToM. Both the haptic and graphic displays are directed by the movement of the PHANToM stylus through haptically rendered data volumes. Similar systems have been developed for geoscientific applications (e.g., the Haptic Workbench; Veldkamp, Turner, Gunn, & Stevenson, 1998). Green and Salisbury (1998) have produced a convincing soil simulation in which they varied parameters such as soil properties, plow blade geometry, and angle of attack. At Interactive Simulations, a San Diego-based company, researchers have succeeded in adding a haptic feedback component to Sculpt, a program for analyzing chemical and biological molecular structures, which will permit analysis of molecular conformational flexibility and interactive docking.

There are several commercial 3D digitizing cameras available for applications like the museum project, such as the ColorScan and the Virtuoso shape cameras. The latter uses six digital cameras: five black-and-white cameras that capture shape information and one color camera that acquires texture information, which is layered onto the triangle mesh. Our digitization process begins with models acquired from photographs, using a semiautomatic system developed at IMSC to infer complex 3-D shapes from photographs (Chen, 1998, 1999).

Images are used as the rendering primitives, beginning with six input images of our "teapots" at 60 degrees of separation; multiple input pictures are allowed, taken from nearby viewpoints with different positions, orientations, and camera focal lengths. Other comparable approaches to digitizing museum objects (e.g., Synthonics) use an older version of shape-from-stereo technology that requires the cameras to be recalibrated whenever the focal length or relative position of the two cameras is changed. The direct output of the IMSC program is volumetric but is converted to a surface representation for the purpose of graphic rendering. The reconstructed surfaces are quite large, on the order of 40 MB.

They are decimated with a modified version of a program for surface simplification using quadric error metrics written by Garland and Heckbert (1997).

Figure 3. Teapot digitization: one of six input views; an image of the reconstructed point set; an image of the omnidirectional solid model (reconstructed surface)
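
The quadric error metric at the heart of the Garland-Heckbert simplifier can be summarized in a few lines of C++. This sketch shows only the error computation; the edge-collapse ordering and mesh bookkeeping of the full algorithm are omitted.

```cpp
#include <array>

using Vec4 = std::array<double, 4>;
using Mat4 = std::array<std::array<double, 4>, 4>;

// Quadric for one triangle's plane p = (a, b, c, d), where
// ax + by + cz + d = 0 and (a, b, c) is the unit normal: Q = p p^T.
Mat4 planeQuadric(const Vec4& p) {
    Mat4 q{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            q[i][j] = p[i] * p[j];
    return q;
}

// Error of placing a vertex at homogeneous position v = (x, y, z, 1)
// under an accumulated quadric Q (the sum of its triangles' quadrics):
// v^T Q v, the sum of squared distances to the planes folded into Q.
// Candidate edge collapses are ordered by this cost.
double quadricError(const Mat4& q, const Vec4& v) {
    double err = 0.0;
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            err += v[i] * q[i][j] * v[j];
    return err;
}
```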

Pai and Reissell (1997) report on a technique based on wavelets for multiresolution modeling of 2D shapes. The models rely on a robust edge detector to detect boundary curves in the image. These curves are then rendered as solid objects using a haptic interface. The system also incorporates a fast contact detection algorithm based on collision trees, and the paper includes a discussion of a state machine that serves as a simple model for contact transition and hence force computation.

Volumetric data are used extensively in medical imaging and scientific visualization. Currently the GHOST SDK, the development toolkit for the PHANToM, construes the haptic environment as scenes composed of geometric primitives. Huang, Qu, and Kaufman of SUNY-Stony Brook have developed a new interface that supports volume rendering, based on volumetric objects, with haptic interaction. The APSIL library (Huang, Qu, and Kaufman, 1998) is an extension of GHOST.
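
A loose C++ sketch of the general idea behind such density-driven volume haptics follows. It is not the APSIL API; the nearest-voxel lookup, the gain k, and all names are simplifying assumptions (a real system would at least interpolate trilinearly).

```cpp
#include <algorithm>
#include <array>
#include <cstdint>
#include <vector>

// Hypothetical density volume with clamped nearest-voxel lookup.
struct Volume {
    int nx, ny, nz;
    std::vector<uint8_t> vox;                 // density samples, 0..255
    double at(int x, int y, int z) const {
        x = std::clamp(x, 0, nx - 1);
        y = std::clamp(y, 0, ny - 1);
        z = std::clamp(z, 0, nz - 1);
        return vox[(z * ny + y) * nx + x] / 255.0;
    }
};

// Central differences approximate the local density gradient; the
// resisting force points down the gradient, from denser material toward
// empty space, so denser voxels feel stiffer to the probe.
std::array<double, 3> densityForce(const Volume& v, int x, int y, int z,
                                   double k) {
    double gx = v.at(x + 1, y, z) - v.at(x - 1, y, z);
    double gy = v.at(x, y + 1, z) - v.at(x, y - 1, z);
    double gz = v.at(x, y, z + 1) - v.at(x, y, z - 1);
    return {-k * gx, -k * gy, -k * gz};
}
```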

To date the Stony Brook group has developed successful demonstrations of volume rendering with haptic interaction from CT data of a lobster, a human brain, and a human head, simulating stiffness, friction, and texture solely from the volume voxel density. The development of the new interface may facilitate working directly with the volumetric representations of the teapots obtained through the view synthesis methods. The surface texture of an object can be displacement mapped (a mesh of thousands of tiny polygons) (Srinivasan and Basdogan, 1997), although the computational demand is such that force discontinuities can occur; more commonly, a "texture field" is constructed from 2-D image data. For example, Ikei, Wakamatsu, and Fukuda (1997) created textures from images converted to greyscale, then enhanced to heighten brightness and contrast, such that the level and distribution of intensity correspond to variation in the height of texture protrusions and retractions (Ikei et al., 1997). They then employed an array of vibrating pins to communicate tactile sensations to the user's fingertip, with the amplitude of the vibration of each pin driven at the intensity level of the underlying portion of the image.

Surface texture may also be rendered haptically, through techniques such as force perturbation, where the direction and magnitude of the force vector are altered using the local gradient of the texture field to simulate effects such as coarseness (Srinivasan and Basdogan, 1997). Synthetic textures such as wood, sandpaper, cobblestone, rubber, and plastic may also be created using mathematical functions for the height field (Anderson, 1996; Basdogan, Ho, and Srinivasan, 1997). The ENCHANTER environment (Jansson, Faenger, Konig, & Billberger, 1998) has a texture mapper which can render sinusoidal, triangular, and rectangular textures, as well as textures provided by other programs, for any haptic object provided by the GHOST SDK.

Researchers working with force feedback devices for object sensing have been concerned with issues of presence, or the fidelity (realism) of the haptic experience. For instance, Brown and Colgate (1994), in their physics-based approach to haptic display, address the issue of stability guarantees in virtual environments. In particular they note the threat to presence created when the virtual environment becomes computationally unstable, as for example when a normally "passive" tool, such as a chisel, begins to move independently of the control of the user who is wielding it. Similarly, a virtual wall must unilaterally constrain the user's forward movement. Brown and Colgate develop a model for improving the passivity of the haptic display through inherent physical damping and the impedance of virtual walls through increased sampling (update rates).
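
Two of the force models discussed above, texture rendering by force perturbation and the unilaterally constraining virtual wall, can be sketched as follows. The sinusoidal height field, the gains, and the function names are illustrative assumptions, not any published implementation.

```cpp
#include <array>
#include <cmath>

const double kTwoPi = 6.283185307179586;

// Force perturbation over a synthetic sinusoidal height field
// h(x, y) = A * sin(2*pi*f*x) * sin(2*pi*f*y): the flat surface normal
// (0, 0, 1) is tilted by the local height-field gradient, so a smooth
// wall feels bumpy without any change to its geometry.
std::array<double, 3> texturedNormal(double x, double y,
                                     double A = 0.001, double f = 200.0) {
    double w = kTwoPi * f;
    double dhdx = A * w * std::cos(w * x) * std::sin(w * y);
    double dhdy = A * w * std::sin(w * x) * std::cos(w * y);
    std::array<double, 3> n{-dhdx, -dhdy, 1.0};
    double len = std::sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
    return {n[0] / len, n[1] / len, n[2] / len};
}

// Unilaterally constraining virtual wall at x = 0: force is exerted only
// while the probe penetrates the wall (x < 0); the damping term b
// dissipates energy, in the spirit of the physical damping noted above.
double wallForce(double x, double vx, double k = 1000.0, double b = 5.0) {
    if (x >= 0.0) return 0.0;
    return -k * x - b * vx;
}
```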

The many potential applications in industry, the military, and entertainment for force feedback in multi-user environments, where two or more users orient to and manipulate objects in a shared environment, have led to work such as that of Buttolo and his colleagues (Buttolo, Hewitt, Oboe, & Hannaford, 1997; Buttolo, Oboe, Hannaford, & McNally, 1996), who note that the addition of force feedback to multi-user environments demands low latency and high collision detection sampling rates. LANs, because of their low communication delay, may be conducive to applications in which users can touch each other, but for wide area networks, or any environment where the demands above cannot be met, Buttolo et al. propose their "one-user-at-a-time" architecture. Mark and his colleagues (Mark, Randolph, Finch, van Verth, and Taylor, 1996) have proposed a number of solutions to recurring problems in haptics, such as improving the update rate for forces communicated back to the user. They propose the use of an intermediate representation of force through a "plane and probe" method: a local planar approximation to the surface near the user's hand location is computed, and when the probe or haptic tool penetrates the plane, the force is updated at approximately 1 kHz by the force server, while the application recomputes the position of the plane and updates it at approximately 20 Hz. Mark et al. also propose solutions to add surface texture and friction to what otherwise would be the slick surface produced under their model, using a parameterized "snag" distribution on the object surface. They also present a method for specifying torques as well as forces, and a "recovery-time algorithm" for preventing force discontinuity artifacts, such as occur when the haptic probe's sideways movement is too fast relative to the computation of the new intermediate representation.
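
The two-rate structure of the plane-and-probe scheme can be sketched as below, assuming a slow application loop refits the local plane while a fast servo loop renders force against the most recent plane; fitLocalPlane() and the device I/O are hypothetical placeholders for the simulation and device layers.

```cpp
#include <array>
#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>

struct Plane { std::array<double, 3> n{0, 0, 1}; double d = 0; };

std::mutex planeMutex;
Plane sharedPlane;                    // latest local planar approximation
std::atomic<bool> running{true};

// Fast loop (~1 kHz): servo the device force against the most recent
// plane. Device I/O and the spring-force computation are elided.
void forceLoop() {
    while (running) {
        Plane p;
        { std::lock_guard<std::mutex> lk(planeMutex); p = sharedPlane; }
        // force = k * penetration * p.n when the probe is below the plane
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
}

// Slow loop (~20 Hz): refit the plane to the surface model near the
// probe; fitLocalPlane() stands in for the application's collision code.
void applicationLoop() {
    while (running) {
        Plane refit;                  // = fitLocalPlane(probePosition)
        { std::lock_guard<std::mutex> lk(planeMutex); sharedPlane = refit; }
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    }
}
```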

Mark et al. have developed a device-independent library of routines for haptic interfaces, Armlib, which supports multi-user and multi-hand applications. Armlib works with a number of different haptic display devices, including the PHANToM.

Psychophysical studies: perceptions of shape and texture in multimodal virtual environments

The behavior of the human haptic system has been the subject of far more systematic study than has touching with robotic masters. Texture, apprehended by most subjects through lateral, side-to-side hand movement or exploratory procedure, is only one of several haptically important dimensions of object recognition, including hardness, shape, and thermal conductivity (Klatzky, Lederman, & Reed, 1987).

Most researchers report that subjects are able to discriminate textures, and to a lesser extent shapes, using the haptic sense only. For example, Ballesteros, Manga, and Reales (1997) reported a moderate level of accuracy for single-finger haptic detection of raised-line shapes, with asymmetric shapes being more readily discriminated. Hatwell (1995) found that recall of texture information coded haptically was successful when memorization was intentional, but not when it was incidental, indicating that haptic information processing may be effortful for subjects. Hughes and Jansson (1994) lament the inadequacy of embossed maps and other devices intended to communicate information to the visually handicapped through the sense of touch, a puzzling state of affairs inasmuch as texture perception by active touch (purposeful motion of the skin surface relative to the surface of some distal object) appears to be comparatively accurate, and even more accurate than vision in apprehending certain properties, such as smoothness (Hughes & Jansson, 1994). The authors note in their critical review of the literature on active-passive equivalence that active and passive touch (as when a texture is presented to the surface of the fingers; see Hollins et al., 1993, below) have repeatedly been demonstrated by Lederman and her colleagues (Lederman, 1985; Lederman, Thorne, & Jones, 1986; Loomis & Lederman, 1986) to be functionally equivalent with respect to texture perception: touch modality does not seem to account for a significant proportion of the variation in judgments of such basic dimensions as roughness, even though the two types of touch may lead to different sorts of attributions (about the texture object and about the cutaneous sensing surface, respectively), and motor information should clearly be useful in assessing the size and distribution of surface protrusions and retractions. Active and passive touch are more likely to be equivalent in certain types of perceptual tasks; active touch should be less relevant to judgments of "hardness" than it is to assessments of "springiness."