Non-verbal communication and context: multi-modality in interaction
Other Titles: Cambridge Handbook of Language and Context
Abstract: Traditionally, the study of linguistics has focussed on verbal communication. In the sense that linguistics is the scientific study of language, this approach is perfectly justified. Those working in the sub-discipline of linguistic pragmatics, however, face something of a dilemma. The aim of a pragmatic theory is to explain how utterances are understood, and utterances, of course, have both linguistic and non-linguistic properties. Moreover, current work in pragmatics emphasizes that the affective dimension of a speaker’s meaning is at least as important as the cognitive one, and it is often the non-linguistic properties of utterances that convey information relating to this dimension. This paper highlights the major role of non-verbal ‘modes’ of communication (‘multi-modality’) in accounting for how meaning is achieved, and explores in particular how the quasi-musical contours we impose on the words we say, as well as the movements of our face and hands that accompany speech, constrain the context and guide the hearer to our intended meaning. We build on previous exploration of the relevance of prosody (Wilson and Wharton 2006) and, crucially, look at prosody in relation to other non-verbal communicative behaviours from the perspective of relevance theory. In so doing, we also hope to shed light on the role of multimodality in both context construction and utterance interpretation, and suggest that prosody needs to be analysed as one tool in a broader set of gestural ones (Bolinger 1983). Relevance theory is an inferential model, in which human communication revolves around the expression and recognition of the speaker’s intentions in the performance of an ostensive stimulus: an act accompanied by the appropriate combination of intentions.
This inferential model is proposed as a replacement for the traditional code model of communication, according to which a speaker simply encodes into a signal the thought they wish to communicate and the hearer retrieves that meaning by decoding the signal. We will argue that much existing work on multimodality remains rooted in a code model and show how adopting an inferential model enables us to integrate multimodal behaviours more completely within a theory of utterance interpretation. As ostensive stimuli, utterances are composites of a range of different behaviours, each working together to form a range of contextual cues.
Citation: Madella P, Wharton T (2023) ‘Non-verbal communication and context: multi-modality in interaction’, in Romero-Trillo J (ed.), Cambridge Handbook of Language and Context, Cambridge, UK: Cambridge University Press, pp. 419–435.
Publisher: Cambridge University Press
The following license files are associated with this item:
- Creative Commons
Except where otherwise noted, this item’s license is described as Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International.