Synchronizing Modalities: A Model for Synchronization of Gesture and Speech as Evidenced by American Sign Language
Pages 114-122
Multiple modalities exist in natural languages, both spoken and signed, as simultaneous channels of communication, and they are synchronized and coordinated in their syntactically structured forms so that the patterns of one predict the patterns of the other. This paper presents a basic investigation into wh-movement in American Sign Language (ASL) and the co-occurring manual and nonmanual expressions, showing that a standard generative analysis is sufficient as an account of ASL. Significantly, this yields a model that systematically parses multiple modalities as coordinated structures with synchronized phrasal domains. The model extends Merge Grammar, a minimalist grammar. It implements all features, including the abstract syntactic features associated with nonmanual markings in ASL, as discrete lexical items, producing top-down parsed derivations of linguistic tree structures. The result is a clear, natural account of multiple modalities at the phrasal level; no comparable model currently exists. Crosslinguistically, the implementation can parse and represent how phrasal domains that are not demonstrably syntactic constituents, such as prosodic domains in spoken languages, interact with syntactic structures.
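The core idea, treating a nonmanual marking as a discrete item whose expression spreads over a phrasal domain during a top-down derivation, can be illustrated with a minimal sketch. This is not the paper's implementation; the tree shape, the `wh-brow` feature label, and the example sentence are hypothetical stand-ins for illustration.

```python
# Hypothetical sketch: a nonmanual feature introduced at a phrasal node
# spreads over every terminal (manual sign) in that node's domain, so
# the two channels come out synchronized. Illustrative only.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    label: str                       # syntactic label or sign gloss
    nonmanual: Optional[str] = None  # nonmanual feature introduced here
    children: List["Node"] = field(default_factory=list)

def spread(node: Node, active: Optional[str] = None):
    """Top-down walk pairing each terminal with the nonmanual
    marking active over its phrasal domain."""
    active = node.nonmanual or active
    if not node.children:            # terminal: pair sign with marking
        return [(node.label, active)]
    out = []
    for child in node.children:
        out.extend(spread(child, active))
    return out

# Toy wh-question "WHO JOHN SEE" with a wh nonmanual on CP
tree = Node("CP", nonmanual="wh-brow", children=[
    Node("WHO"),
    Node("IP", children=[Node("JOHN"), Node("SEE")]),
])

print(spread(tree))
# every manual sign in the CP domain is paired with the wh marking
```

In this toy setup, the nonmanual channel is predictable from the syntactic structure: each sign inside the CP surfaces with the `wh-brow` marking, mirroring the coordination between channels that the paper's model derives.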
Proceedings of the 25th West Coast Conference on Formal Linguistics
edited by Donald Baumer, David Montero, and Michael Scanlon