The Handbook of Multimodal-Multisensor Interfaces, Volume 1. Sharon Oviatt.
Keysers, M. Umilta, L. Fogassi, V. Gallese, and G. Rizzolatti. 2002. Hearing sounds, understanding actions: Action representation in mirror neurons. Science, 297:846–848. DOI: 10.1126/science.1070311. 38
M. Longcamp, C. Boucard, J-C. Gilhodes, J.-L. Anton, M. Roth, B. Nazarian, and J-L. Velay. 2008. Learning through hand- or typewriting influences visual recognition of new graphic shapes: Behavioral and functional imaging evidence. Journal of Cognitive Neuroscience, 20(5):802–815. DOI: 10.1162/jocn.2008.20504. 35
M. Longcamp, M.-T. Zerbato-Poudou, and J.-L. Velay. 2005. The influence of writing practice on letter recognition in preschool children: A comparison of handwriting and typing. Acta Psychologica, 119:67–79. DOI: 10.1016/j.actpsy.2004.10.019. 35
A. R. Luria. 1961. The Role of Speech in the Regulation of Normal and Abnormal Behavior. Liveright, Oxford. 33, 34
Y. Maehara and S. Saito. 2007. The relationship between processing and storage in working memory span: Not two sides of the same coin. Journal of Memory and Language, 56(2):212–228. DOI: 10.1016/j.jml.2006.07.009. 30
J. Markham and W. Greenough. 2004. Experience-driven brain plasticity: beyond the synapse. Neuron Glia Biology, 1(4):351–363. DOI: 10.1017/s1740925x05000219. 34
R. Mayer and R. Moreno. 1998. A split-attention effect in multimedia learning: Evidence for dual processing systems in working memory. Journal of Educational Psychology, 90:312–320. DOI: 10.1037/0022-0663.90.2.312.
M. A. Meredith, J. W. Nemitz, and B. E. Stein. 1987. Determinants of multisensory integration in superior colliculus neurons. I. Temporal factors. Journal of Neuroscience, 7:3215–3229. 24
G. Miller. 1956. The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2):81–97. 30
G. A. Miller, E. Galanter, and K. H. Pribram. 1960. Plans and the Structure of Behavior. Holt, Rinehart and Winston, New York. 30
S. Morein-Zamir, S. Soto-Faraco, and A. Kingstone. 2003. Auditory capture of vision: Examining temporal ventriloquism. Cognitive Brain Research, 17(1):154–163. 24
S. Mousavi, R. Low, and J. Sweller. 1995. Reducing cognitive load by mixing auditory and visual presentation modes. Journal of Educational Psychology, 87(2):319–334. DOI: 10.1037/0022-0663.87.2.319. 32
K. Nakamura, W.-J. Kuo, F. Pegado, L. Cohen, O. Tzeng, and S. Dehaene. 2012. Universal brain systems for recognizing word shapes and handwriting gestures during reading. Proceedings of the National Academy of Sciences, 109(50):20762–20767. DOI: 10.1073/pnas.1217749109. 24, 35, 626
D. Norman. 1988. The Design of Everyday Things. Basic Books, New York. 38, 39
S. L. Oviatt. 2000. Multimodal signal processing in naturalistic noisy environments. In B. Yuan, T. Huang, and X. Tang, editors, Proceedings of the International Conference on Spoken Language Processing [ICSLP’2000], pp. 696–699, vol. 2. Chinese Friendship Publishers, Beijing. 25
S. L. Oviatt. 2006. Human-centered design meets cognitive load theory: Designing interfaces that help people think. In Proceedings of the Conference on ACM Multimedia, pp. 871–880. ACM, New York. DOI: 10.1145/1180639.1180831. 32, 34
S. L. Oviatt. 2012. Multimodal interfaces. In J. Jacko, editor, The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications [revised 3rd edition], pp. 405–430, ch. 18. CRC Press, Boca Raton, FL. 25
S. Oviatt. 2013. The Design of Future Educational Interfaces. Routledge Press, New York. DOI: 10.4324/9780203366202. 35, 39, 41
S. Oviatt, A. Arthur, and J. Cohen. 2006. Quiet interfaces that help students think. In Proceedings of the Conference on User Interface Software Technology, pp. 191–200. ACM Press, New York. DOI: 10.1145/1166253.1166284.
S. Oviatt, A. Arthur, Y. Brock, and J. Cohen. 2007. Expressive pen-based interfaces for math education. In Proceedings of the Conference on Computer-Supported Collaborative Learning. International Society of the Learning Sciences. DOI: 10.3115/1599600.1599708. 33, 34
S. Oviatt, A. Cohen, A. Miller, K. Hodge, and A. Mann. 2012. The impact of interface affordances on human ideation, problem solving and inferential reasoning. Transactions on Computer-Human Interaction. ACM Press, New York. DOI: 10.1145/2362364.2362370. 22, 39, 40, 609
S. Oviatt and P. Cohen. 2015. The Paradigm Shift to Multimodality in Contemporary Computer Interfaces. Morgan Claypool Synthesis Series. Morgan & Claypool Publishers, San Rafael, CA. DOI: 10.2200/S00636ED1V01Y201503HCI030. 20, 27, 29
S. L. Oviatt, R. Coulston, S. Shriver, B. Xiao, R. Wesson, R. Lunsford, and L. Carmichael. 2003. Toward a theory of organized multimodal integration patterns during human-computer interaction. In Proceedings of the International Conference on Multimodal Interfaces [ICMI’03], pp. 44–51. ACM Press, New York. DOI: 10.1145/958432.958443. 21, 25, 27, 28
S. L. Oviatt, R. Coulston, and R. Lunsford. 2004a. When do we interact multimodally? Cognitive load and multimodal communication patterns. In Proceedings of the International Conference on Multimodal Interfaces [ICMI’04]. ACM Press, New York. DOI: 10.1145/1027933.1027957. 32
S. L. Oviatt, C. Darves, and R. Coulston. 2004b. Toward adaptive conversational interfaces: Modeling speech convergence with animated personas. Transactions on Computer-Human Interaction [TOCHI], 11(3):300–328. DOI: 10.1145/1017494.1017498. 37, 38, 39
S. L. Oviatt, R. Lunsford, and R. Coulston. 2005. Individual differences in multimodal integration patterns: What are they and why do they exist? In Proceedings of the Conference on Human Factors in Computing Systems [CHI’05], CHI Letters, pp. 241–249. ACM Press, New York. DOI: 10.1145/1054972.1055006. 26
S. L. Oviatt, M. MacEachern, and G. Levow. 1998. Predicting hyperarticulate speech during human-computer error resolution. Speech Communication, 24(2):1–23. DOI: 10.1016/S0167-6393(98)00005-3. 22, 618
A. Owen, K. McMillan, A. Laird, and E. Bullmore. 2005. N-back working memory paradigm: A meta-analysis of normative functional neuroimaging studies. Human Brain Mapping, 25:46–59. DOI: 10.1002/hbm.20131. 31
F. Paas, J. Tuovinen, H. Tabbers, and P. Van Gerven. 2003. Cognitive load measurement as a means to advance cognitive load theory. Educational Psychologist, 38(1):63–71. DOI: 10.1207/S15326985EP3801_8. 32
L. Reeves, J. Lai, J. Larson, S. Oviatt, T. Balaji, S. Buisine, P. Collings, P. Cohen, B. Kraal, J.-C. Martin, M. McTear, T. V. Raman, K. Stanney, H. Su, and Q. Wang. 2004. Guidelines for multimodal user interface design. Communications of the ACM, 47(1):57–59. DOI: 10.1145/962081.962106. 28
I. A. Richter and T. Wells, editors. 2008. Leonardo da Vinci Notebooks. Oxford World’s Classics (2nd edition). Oxford University Press. 20
G. Rizzolatti and L. Craighero. 2004. The mirror-neuron system. Annual Review of Neuroscience, 27:169–192. DOI: 10.1146/annurev.neuro.27.070203.144230. 38
A. Sale, N. Berardi, and L. Maffei. 2009. Enrich the environment to empower the brain. Trends in Neuroscience, 32:233–239. DOI: 10.1016/j.tins.2008.12.004. 34
E. Saund, D. Fleet, D. Larner, and J. Mahoney. 2003. Perceptually-supported image editing of text and graphics. In Proceedings of the 16th Annual ACM Symposium on User Interface Software Technology [UIST’2003], pp. 183–192. ACM Press, New York. DOI: 10.1145/964696.964717. 28
C. Schroeder and J. Foxe. 2004. Multisensory convergence in early cortical processing. In G. Calvert, C. Spence, and B. Stein, editors, The Handbook of Multisensory Processes, pp. 295–309. MIT Press, Cambridge, MA. DOI: 10.1007/s10339-004-0020-4. 20
L. Shapiro, editor. 2014. The Routledge Handbook of Embodied Cognition. Routledge Press, New York. DOI: 10.4324/9781315775845. 35
B. Smith, editor. 1988. Foundations of Gestalt Theory. Philosophia Verlag, Munich and Vienna. 20
C. Spence and S. Squire. 2003. Multisensory