but a crawler will exhibit considerable distress when put in the same situation [Campos et al. 1992]. In other words, non-crawling infants were not afraid of being atop a clear surface, but crawlers were. This suggests that non-crawling infants are not as affected by visual cues from surfaces as crawling infants are, and that the binding of visual and somatosensory cues through locomotion has provided the infant with the knowledge that “clear” surfaces are not accompanied by the expected somatosensory cues and, therefore, do not afford locomotion.

      In addition, the perception of depth is affected differentially by different modes of locomotion. For example, crawlers will crawl down slopes or across surfaces that are safe for crawling but not safe for walking, and early walkers will walk down slopes and across surfaces that are only safe for walking [Gibson and Walk 1960, Adolph et al. 2008, 2010, Adolph 1997]. Further, crawlers and walkers detect safe vs. unsafe drop-offs that are specific to their particular mode of locomotion [Kretch and Adolph 2013]. A safe drop-off for a crawler is quite different from one for an early walker, because a walker risks falling farther; this difference is detected and used in this form of multimodal experience. Interestingly, infants use multimodal-multisensory information when approaching and evaluating slopes and drop-offs: they reach over the edges and feel with their hands before descending.

      Why would locomotion have such a profound effect on depth perception? The assumption from these and other studies is that the motor experience of locomotion allows the individual to experience depth through their own visual-motor experience. The multimodal-multisensory inputs that result from locomotion (motor action), including proprioception, somatosensation, and vision, are all simultaneous, or time-locked, allowing the multiple sensory and motor inputs to be integrated into a representation that combines them. New perception-action correspondences are dynamically coupled in response to the environmental stimulation that accompanies new behaviors.

       2.4.2 Three-Dimensional Object Structure

      Multimodal exploration of objects emerges in the first year of life. Very young infants will bring an object to their mouths for tactile exploration, whereas older infants (7 months) will begin to bring an object in front of their eyes for visual inspection [Rochat 1989, Ruff et al. 1992]. Early multimodal object exploration leads to significant gains in object knowledge. For example, infants who explore objects through self-generated actions are better able to apprehend two objects as separate entities compared to infants who do not explore, even when the objects explored and the objects seen are completely different [Needham 2000]. That is, the exploration served to help the infants learn about object properties that generalized beyond their direct experience.

      Generally, infants are only able to learn about objects and their properties once they can reach for and hold objects in the environment (multimodal input). However, recent work has enabled pre-grasping infants to hold objects, in the hope of producing gains in learning about object structure before intentional reaching and holding emerge. This body of work employs “sticky mittens”: mittens with Velcro on the palms that are put onto infants so that objects (also fitted with Velcro) can be held at an earlier age than the infants’ manual dexterity would otherwise allow [Needham et al. 2002]. By 3 months of age, infants wearing the mittens are able to “grab” the objects by swiping at them. With the sticky-mittens experience, infants showed significantly more reaches toward objects than infants who wore non-sticky mittens [Libertus and Needham 2010]. But the more interesting aspect of these studies was how learning changed when infants manually interacted with the objects.

      Very brief experience with sticky mittens led to 3-month-old infants’ understanding that the actions of others are goal directed, a behavior that is not typically observed until much later [Sommerville et al. 2005]. The implication is that even a very brief experience (2–3 min in this case) of seeing one’s own actions as goal directed led to an understanding of that same quality in the actions of others.

      Other research with older children (18–36-month-olds) suggests that multimodal interaction with objects through visually guided action leads to an enhanced understanding of object structure. When 18-month-olds are allowed to manually rotate a novel object during initial exposure, they do so in a somewhat random manner (see Figure 2.4, top two graphs). But by 24 months, toddlers will manually rotate objects in the same way as older children and adults (see Figure 2.4, bottom two graphs) [Pereira et al. 2010]. Adults’ rotations of similar three-dimensional objects are not statistically different from those of the 2½-year-olds depicted in Figure 2.4 (see Harman et al. 1999 for adult data). Furthermore, the rotations are not determined by haptic cues to object structure, as 24-month-olds will rotate uniquely shaped objects enclosed in plexiglass cubes and spheres in the same way [James et al. 2014]. Showing oneself specific views of objects through multimodal interaction (in this case, planar views) was also correlated with object recognition scores in 24-month-old children: the more the planar views were focused upon, the higher the recognition of objects [James et al. 2013].

      Figure 2.4 Flattened viewing space of objects rotated manually at various ages. The “hot spots” of increased dwell time in older children reflect planar views of objects. The focus on these planar views in adults manually exploring novel objects is well documented [Harman et al. 1999, James et al. 2001]. (From Pereira et al. [2010])

      Another study investigated how moving an object along a trajectory influences the perception of elongation in object structure. When young children actively moved an object horizontally, they were better able to generalize the object’s structure to new objects that were similarly elongated horizontally [Smith 2005]. However, when children simply watched an experimenter perform the same movement, there was no benefit in elongation perception.

      The studies outlined above show that once children can manually interact with objects, their perception of the world changes significantly. They use their hands to manipulate objects in a way that impacts their learning and reflects their understanding of themselves and their environment.

       2.4.3 Symbol Understanding

      As discussed in Section 2.3.4, the extant literature has shown that handwriting symbols is especially effective for early symbol learning. Indeed, handwriting represents an important advance in motor skill and tool use during the preschool years. Children progress from being barely able to hold a pencil, to producing mostly random scribbles, to producing specific, meaningful forms. In the following, we consider a special case of symbol learning: learning letters of the alphabet.

      When children are first learning letters, they must map a novel, 2D shape onto the letter’s name and the letter’s sound. Eventually, they must put combinations of these symbols together to create words, which is another level of symbolic meaning. This is by no means a trivial task. The first step alone, learning to perceive a set of lines and curves as a meaningful unit, has a protracted development in the child and requires explicit teaching. One of the problems is that letters do not conform to what the child has already learned about 3D objects. Specifically, if a letter is rotated from upright, its identity can change. For instance, rotating a “p” 180° results in a different letter identity, a “d”. A change of identity after a 180° rotation does not occur with other types of objects: an upright cup and an upside-down cup are both cups. This quality of symbols alone makes things difficult for the early learner and manifests in the numerous reversal errors children make when perceiving and producing letters. Like other objects, however, symbols must be distinguished from one another by detecting similarities and dissimilarities. For letters, the similarities and differences may involve very slight changes in the visual input, for example, the difference between an uppercase C and an uppercase G. Things become even more difficult when one considers the many handwritten examples of each letter that one must decipher. However, the variability present in handwritten letters may be important in understanding why

