In English, iconic modulations can arguably be at-issue and thus interpreted in the scope of grammatical operators. An example is the following sentence: If the talk is loooong, I'll leave before the end. This means that if the talk is very long, I'll leave before the end (but if it's only moderately long, maybe not); here, the iconic contribution is interpreted in the scope of the if-clause, just like normal at-issue contributions. The iconic modulation of GROW has similarly been argued to be at-issue (Schlenker 2018b). (See Section 5.2 for further discussion of at-issue vs. non-at-issue semantic contributions.)
While conceptually similar to iconic modulations in English, the sign language versions are arguably richer and more pervasive than their spoken language counterparts.
Iconic modulation interacts with the marking of telicity noticed by Wilbur ( Section 2.1 ). GROW , discussed in the preceding sub-section, is an (atelic) degree achievement; the iconic modifications above indicate the final degree reached and the time it took to reach that degree. Similarly, for telic verbs, the speed and manner in which the phonological movement reaches its endpoint can indicate the speed and manner in which the result state is reached. For example, if LSF UNDERSTAND is realized slowly and then quickly, the resulting meaning is that there was a difficult beginning, and then an easier conclusion. Atelic verbs that don’t involve degrees can also be iconically modulated; for instance, if LSF REFLECT is signed slowly and then quickly, the resulting meaning is that the person’s reflection intensified. Here too, the iconic contribution has been argued to be at-issue (Schlenker 2018b).
There are also cases in which the event structure is not just specified but radically altered by a modulation, as in the case of incompletive forms (also called unrealized inceptive, Liddell 1984; Wilbur 2008). ASL DIE, a telic verb, is expressed by turning the dominant hand from palm-down to palm-up as shown below (the non-dominant hand turns from palm-up to palm-down). If the hands only turn partially, the sign is roughly interpreted as 'almost die'.
Normal vs. incompletive form of DIE in ASL ( Credits: J. Kuhn)
a. DIE in ASL
b. ALMOST-DIE in ASL
Just as multiple levels of speed and size can be indicated by the verb GROW in (18), the incompletive form of verbs can be modulated to indicate arbitrarily many degrees of completion, depending on how far the hand travels; these examples thus seem to necessitate an iconic rule (Kuhn 2015). On the other hand, while the examples with GROW can be analyzed by simple predicate modification ('The group grew and it happened like this: slowly'), examples of incompletive modification require a deeper integration into the semantics, similar to the semantic analysis of the adverb almost or the progressive aspect in English. (Notably, it is nonsense to say: 'My grandmother died and it happened like this: incompletely.')
The key theoretical question lies in the integration between iconic and conventional elements in such cases. If one posits a decompositional analysis involving a morpheme representing the endstate ( EndState or Res , see Section 2.1 ), one must certainly add to it an iconic component (with a non-trivial challenge for incompletive forms, where the iconic component does not just specify but radically alters the lexical meaning). Alternatively, one may posit that a structural form of iconicity is all one needs, without morphemic decomposition. An iconic analysis along these lines has been proposed (Kuhn 2015: Section 6.5), although a full account has yet to be developed.
The logical notion of plurality is expressed overtly in some way in many of the world’s languages: pluralizing operations may apply to nouns or verbs to indicate a plurality of objects or events (for nouns: ‘plurals’; for verbs: ‘pluractionals’). Historically, arguments of Logical Visibility have not been made for plurals in sign languages, since—while overt plural marking certainly exists in sign language—plural morphemes also appear overtly in spoken languages (e.g., English singular horse vs. plural horses ).
Nevertheless, mirroring areas of language in which arguments of Logical Visibility do apply, plural formation in sign language shows a number of unique and revealing properties. First, the morphological expression of this logical concept is similar for both nouns and verbs across a large number of unrelated sign languages: for both plural nouns (Pfau & Steinbach 2006) and pluractional verbs (Kuhn & Aristodemo 2017), plurality is expressed by repetition. We note that repetition-based plurals and pluractionals also exist in speech (Sapir 1921: 79).
Second, in sign language, these repeated plural forms have been shown to feed iconic processes. Modifications of the way in which the sign is repeated may indicate the number of objects or events, or may indicate the arrangement of these pluralities in space or time. Relatedly, so-called ‘punctuated’ repetitions (with clear breaks between the iterations) refer to precise plural quantities (e.g., three objects or events for three iterations), while ‘unpunctuated’ repetitions (without clear breaks between the iterations) refer to plural quantities with vague thresholds, and often ‘at least’ readings (Pfau & Steinbach 2006; Schlenker & Lamberton 2022).
In the nominal domain, the number of repetitions may provide an indication of the number of objects, and the arrangement of the repetitions in signing space can provide a pictorial representation of the arrangement of the denotations in real space (Schlenker & Lamberton 2022). For instance, the word TROPHY can be iterated three times on a straight line to refer to a group of trophies that are horizontally arranged; or the three iterations can be arranged as a triangle to refer to trophies arranged in a triangular fashion. A larger number of iterations serves to refer to larger groups. Here too, the iconic contribution can be at-issue and thus be interpreted in the scope of logical operators such as if -clauses.
TROPHY in ASL, repetition on a line:
TROPHY in ASL, repetition as a triangle:
Credits: M. Bonnet
Punctuated (= easy to count) repetitions yield meanings with precise thresholds (often with an ‘exactly’ reading, e.g., ‘exactly three trophies’ for three punctuated iterations); unpunctuated repetitions yield vague thresholds and often ‘at least’ readings (e.g., ‘several trophies’ for three unpunctuated iterations). While one may take the distinction to be conventional, it might have an iconic source. In essence, unpunctuated iterations result in a kind of pictorial vagueness on which the threshold is hard to discern; deriving the full range of ‘exactly’ and ‘at least’ readings is non-trivial, however (Schlenker & Lamberton 2022).
In the verbal domain, pluractionals (referring to pluralities of events) can be created by repeating a verb, for instance in LSF and ASL. A complete analysis seems to require both conventionalized grammatical components and iconic components. The form of reduplication—as identical reduplication or as alternating two-handed reduplication—appears to conventionally communicate the distribution of events with respect to either time or to participants. But a productive iconic rule also appears to be involved, as the number and speed of the repetitions gives an idea of the number and speed of the denoted events (Kuhn & Aristodemo 2017); again, the iconic contribution can be at-issue.
Iconic plurals and pluractionals alike are now treated by way of mixed lexical entries that include a grammatical/logical component and an iconic component. For instance, if N is a (singular) noun denoting a set of entities S , then the iconic plural N -rep denotes the set of entities x such that:
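Schematically, the entry can be rendered as follows (a simplified reconstruction of the kind of mixed entry proposed, not the exact published formulation):

```latex
% Sketch (simplified): mixed lexical entry for the iconic plural N-rep,
% where the singular noun N denotes a set of entities S
[\![N\text{-rep}]\!]^{t,w} = \{\, x :
  \text{(i) } x \text{ is a plurality each of whose atomic parts is in } S,
  \text{ and (ii) the iterations of } N\text{-rep iconically represent } x \,\}
```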
Condition (i) is the standard definition of a plural; condition (ii) is the iconic part, which is itself in need of an elucidation using general tools of pictorial semantics (see Section 4 ).
Loci, which have been hypothesized to be (sometimes) the overt realization of variables, can lead a dual life as iconic representations. Singular loci may (but need not) be simplified pictures of their denotations: if so, a person-denoting locus is a structured area I , and pronouns are directed towards a point i that corresponds to the upper part of the body. In ASL, when the person is tall, one can thus point upwards (there are also metaphorical cases in which one points upwards because the person is powerful or important). When a person is understood to be in a rotated position, the direction of the pronoun correspondingly changes, as seen in (21) for a person standing upright or hanging upside down (Schlenker 2018a; see also Liddell 2003).
Iconic mappings involving loci may also preserve abstract structural relations that have been posited to exist for various kinds of ontological objects, including mereological relations, total orderings, and domains of quantification.
First, two plural loci—indexed over areas of space—may (but need not) express mereological relations diagrammatically, with a locus a embedded in a locus b if the denotation of a is a mereological part of the denotation of b (Schlenker 2018a). For example, in (22) , the ASL expression POSS-1 STUDENT (‘my students’) introduces a large locus (glossed as ab to make it clear that it contains subloci a and b —but initially just a large locus). MOST introduces a sublocus a within this large locus because the plurality denoted by a is a proper part of that denoted by ab . And critically, diagrammatic reasoning also makes available a third discourse referent: when a plural pronoun points towards b —the complement of the sublocus a within the large locus ab —the sentence is acceptable, and b is understood to refer to the students who did not come to class.
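The diagrammatic condition can be stated schematically as follows (a simplified sketch of one direction of the iconic mapping, not the exact published formulation):

```latex
% Schematic: under an iconic construal, spatial inclusion among plural loci
% mirrors mereological parthood among their denotations
% (a, b are loci; [[.]] is denotation; \sqsubseteq is mereological parthood)
a \subseteq b \;\Rightarrow\; [\![a]\!] \sqsubseteq [\![b]\!]
```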
In English, the plural pronoun they clearly lacks such a reading when one says, Most of my students came to class. They stayed home , which sounds contradictory. (One can communicate the target interpretation by saying, The others stayed home , but the others is not a pronoun.) Likewise, in ASL, if the same discourse is uttered using default, non-localized plural pronouns, the pattern of inferences is exactly identical to the English translation.
A second case of preservation of abstract orders pertains to degree-denoting and sometimes time-denoting loci. In LIS, degree-denoting loci are represented iconically, with the total ordering mapped to an axis in space, as described in Section 1.3 . Time-denoting loci may but need not give rise to preservation of ordering on an axis, depending on whether normal signing space is used (as in the ASL examples (9) above), or a specific timeline, as mentioned in Section 1.2 in relation to Chinese Sign Language. As in the case of diagrammatic plural pronouns, the spatial ordering of degree- and time-denoting loci generates an iconic inference—beyond the meaning of the words themselves—about the relative degree of a property or temporal order of events.
A third case involves the partial ordering of domain restrictions of nominal quantifiers: greater height in signing space may be mapped to a larger domain of quantification, as is the case in ASL (Davidson 2015) and at least indefinite pronouns in Catalan Sign Language (Barberà 2015).
A special construction type, classifier predicates (‘classifiers’ for short), has raised difficult conceptual questions because they involve a combination of conventional and iconic meaning. Classifier predicates are lexical expressions that refer to classes of animate or inanimate entities that share some physical characteristics—e.g., objects with a flat surface, cylindrical objects, upright individuals, sitting individuals, etc. Their form is conventional; for instance, the ASL ‘three’ handshape, depicted below, represents a vehicle. But their position, orientation and movement in signing space is interpreted iconically and gradiently (Emmorey & Herzig 2003), as illustrated in the translation of the example below.
'A car drove by' [with a movement resembling that of the hand]
These constructions have on several occasions been compared to gestures in spoken language, especially to gestures that fully replace some words, as in: This airplane is about to FLY-take-off, with the verb replaced with a hand gesture representing an airplane taking off. But there is an essential difference: classifier predicates are stable parts of the lexicon, whereas gestures are not.
Early semantic analyses, notably by Zucchi 2011 and Davidson 2015, took classifier predicates to have a self-referential demonstrative component, with the result that the moving vehicle classifier in (24) means in essence ‘move like this’, where ‘this’ makes reference to the very form of the classifier movement. As mentioned in Section 2.2 , this analysis has been extended to Role Shift by Davidson (Davidson 2015), who took the classifier to be in this case the signer’s rotated body.
The demonstrative analysis of classifier predicates as stated has two general drawbacks. First, it establishes a natural class containing classifiers and demonstratives (like English this ), but the two phenomena possibly display different behaviors. Notably, while demonstratives behave roughly like free variables that can pick up their referent from any of a number of contextual sources, the iconic component of classifiers can only refer to the position/movement and configuration of the hand (any demonstrative variable is thus immediately saturated). Second, the demonstrative analysis currently relegates the iconic component to a black box. Without any interpretive principles on what it means for an event to be ‘like’ the demonstrated representation, one cannot provide any truth conditions for the sentence as a whole.
Any complete analysis must thus develop an explicit semantics for the iconic component. This is more generally necessary to derive explicit truth conditions from other iconic constructions in sign language, such as the repetition-based plurals discussed in Section 3.3 above: in the metalanguage, the condition '[a certain expression] iconically represents [a certain object]' is in need of explication.
A recent model has been offered by formal pictorial semantics, developed by Greenberg and Abusch (e.g., Greenberg 2013, 2021; Abusch 2020). The basic idea is that a picture obtained by a given projection rule (for instance, perspective projection) is true of precisely those situations that can project onto the picture. Greenberg has further extended this analysis with the notion of an object projecting onto a picture part (in addition to a situation projecting onto a whole picture). This notion proves useful for sign language applications because they usually involve partial iconic representations, with one iconic element representing a single object or event in a larger situation. To illustrate, below, the picture in (25a) is true of the situation in (25b) , and the left-most shape in the picture in (25a) denotes the top cube in the situation in (25b) .
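The basic idea can be rendered schematically as follows (a simplified version; Greenberg's official system adds further parameters):

```latex
% A picture P, interpreted relative to a projection rule proj and a viewpoint v,
% is true of exactly those situations s that project onto it
[\![P]\!]^{\mathrm{proj}} = \{\, \langle s, v \rangle : \mathrm{proj}(s, v) = P \,\}
```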
Illustration of a projection rule relating (parts of) a picture to (objects in) a world. ( Credits: Gabriel Greenberg)
(a) Picture
(b) Situation
The full account makes reference to a notion of viewpoint relative to which perspective projection is assessed, and a picture plane, both represented in (25b) . This makes it possible to say that the top cube (top-cube) projects onto the left-hand shape (left-shape) relative to the viewpoint (call it π), at the time t and in the world w in which the projection is assessed. In brief:
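Schematically (writing proj for the projection relation; a simplified rendering):

```latex
% top-cube projects onto left-shape relative to viewpoint pi,
% at the time t and in the world w of assessment
\mathrm{proj}_{\pi,\, t,\, w}(\text{top-cube}, \text{left-shape})
```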
Classifier predicates (as well as other iconic constructions, such as repetition-based plurals) may be analyzed with a version of pictorial semantics to explicate the truth-conditional contribution of iconic elements.
To illustrate, consider a pair of minimally different words in ASL that can be translated as ‘airplane’: one is a normal noun, glossed as PLANE , and the other is a classifier predicate, glossed below as PLANE-cl . Both forms involve the handshape in (26) , but the normal noun includes a tiny repetition (different from that of plurals) which is characteristic of some nominals in ASL. As we will see, the position of the classifier version is interpreted iconically (‘an airplane in position such and such’), whereas the nominal version need not be.
Handshape for both (i) ASL PLANE (= nominal version) and (ii) ASL PLANE-cl (= classifier predicate version). ( Credits: J. Kuhn)
Semantically, the difference between the two forms is that only the classifier generates obligatory iconic inferences about the plane’s configuration and movement. This has clear semantic consequences when several classifier predicates occur in the same sentence. In (27b) , two tokens of PLANE-cl appear in positions a and b , and as the video makes clear, the two classifiers are signed close to each other and in parallel. As a result, the sentence only makes an assertion about cases in which two airplanes take off next to each other or side by side. In contrast, with a normal noun in (27a) , the assertion is that there is danger whenever two airplanes take off at the same time, irrespective of how close the two airplanes are, or how they are oriented relative to each other.
(ASL, 35, 1916; 4 judgments; short video clip of the sentences, no audio)
To capture these differences, one can posit the following lexical entries for the normal noun and for its classifier predicate version. Importantly, the interpretation of PLANE-cl in (28b) is defined for a particular token of the sign (not a type), produced with a phonetic realization Φ.
Evaluation is relative to a context c that provides the viewpoint, \(\pi_{c}\). In the lexical entry for the normal noun in (28a) , plane' t,w is a (metalanguage) predicate of individuals that applies to anything that is an airplane at t in w . The classifier predicate has the lexical entry in (28b) . It has the same conventional component as the normal noun, but adds to it an (iconic) projective condition: for a token of the predicate PLANE-cl to be true of an object x , x should project onto this very token.
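The two entries can be rendered schematically as follows (simplified from the description above; Φ stands for the phonetic realization of the particular token of the classifier):

```latex
% (28a) Normal noun: purely conventional condition
[\![\text{PLANE}]\!]^{c,t,w} = \lambda x.\; \mathrm{plane}'_{t,w}(x)

% (28b) Classifier predicate token with phonetic realization Phi:
% same conventional condition, plus an iconic projective condition
% (x projects onto this very token, relative to the contextual viewpoint pi_c)
[\![\text{PLANE-cl}_{\Phi}]\!]^{c,t,w} = \lambda x.\; \mathrm{plane}'_{t,w}(x)
  \;\wedge\; \mathrm{proj}_{\pi_c,\, t,\, w}(x, \Phi)
```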
With this pictorial semantics in hand, we can make a more explicit comparison to the demonstrative analysis of classifiers. As described above, a demonstrative analysis takes classifiers to include a component of meaning akin to 'move like this'. For Zucchi, this is spelled out via a lexical entry very close to the one in (28), but in which the second clause (stated in terms of projection above) is instead a similarity function, asserting that the position of the denoted object x is 'similar' to that of the airplane classifier; the proposal, however, leaves it entirely open what it means to be 'similar'. Of course, one may supplement the analysis with a separate explication in which similarity is defined in terms of projection, but this move presupposes rather than replaces an explicit pictorial account. In other words, the demonstrative analysis relegates the iconic component to a black box, whose content can be specified by the pictorial analysis. But once a pictorial analysis is posited, it becomes unclear why one should make a detour through the demonstrative component, rather than posit pictorial lexical entries in the first place.
A number of further refinements need to be made to any analysis of classifiers. First, to have a fully explicit iconic semantics, one must contend with several differences between classifiers and pictures.
The interaction between iconic representations and the sentences they appear in also requires further refinements. A first refinement pertains to the semantics. For simplicity, we assumed above that the viewpoint relative to which the iconic component of classifier predicates is evaluated is fixed by the context. Notably, though, in some cases, viewpoint choice can be dependent on a quantifier. In the example below, the meaning obtained is that in all classes, during the break, for some salient viewpoint π associated with the class , there is a student who leaves with the movement depicted relative to π; a recent proposal (Schlenker and Lamberton forthcoming) has viewpoint variables in the object language, and they may be left free or bound by default existential quantifiers, as illustrated in (30) . (While there is a strong intuition that Role Shift manipulates viewpoints as well, a formal account has yet to be developed.)
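The reading described can be sketched with an explicit viewpoint variable as follows (a simplified logical form, not the exact formulation of the proposal):

```latex
% 'In every class, during the break, some student leaves like this',
% with the viewpoint variable pi existentially bound below the universal quantifier
\forall y\, \big[\mathrm{class}(y) \rightarrow
  \exists \pi\, \exists x\, [\mathrm{student}(x, y) \wedge \mathrm{leave}^{\pi}_{\Phi}(x)]\big]
```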
A second refinement pertains to the syntax. Across sign languages, classifier constructions have been shown to sometimes override the basic word order of the language; for instance, ASL normally has the default word order SVO (Subject Verb Object), but classifier predicates usually prefer preverbal objects instead. One possible explanation is that the non-standard syntax of classifiers arises at least in part from their iconic semantics; we revisit this point in Section 5.3 .
The iconic contributions discussed above are to some extent different from those found in speech. Iconic modulations exist in speech (e.g., looong means ‘very long’) but are probably less diverse than those found in sign. Repetition-based plurals and pluractionals exist in speech (Rubino 2013), and it has been argued for pluractional ideophones in some languages, that the number of repetitions can reflect the number of denoted events (Henderson 2016). But sign language repetitions can iconically convey a particularly rich amount of information, including through their punctuated or unpunctuated nature, and sometimes their arrangement in space (Schlenker & Lamberton 2022). As for iconic pronouns and classifier predicates, they simply have no clear counterparts in speech. From this perspective, speech appears to be ‘iconically deficient’ relative to sign.
But Goldin-Meadow and Brentari (2017) have argued that a typological comparison between sign language and spoken language makes little sense if it does not take gestures into account: sign with iconicity should be compared to speech with gestures rather than to speech alone, since gestures are the main exponent of iconic enrichments in spoken language. This raises a question: From a semantic perspective, does speech with gesture have the same expressive effect and the same grammatical integration as sign with iconicity?
This question has motivated a systematic study of iconic enrichments across sign and speech, and has led to the discovery of fine-grained differences (Schlenker 2018b). The key issue pertains to the place of different iconic contributions in the typology of inferences, which includes at-issue contributions and non-at-issue ones, notably presuppositions and supplements (the latter are the semantic contributions of appositive relative clauses).
While detailed work is still limited, several iconic constructions in sign language have been argued to make at-issue contributions (sometimes alongside non-at-issue ones). This is the case of iconic modulations of verbs, as for GROW in (18) , of repetition-based plurals and pluractionals, and of classifier predicates.
By contrast, gestures that accompany spoken words have been argued in several studies (starting with the pioneering one by Ebert & Ebert 2014 – see Other Internet Resources) to make primarily non-at-issue contributions. Recent typologies (e.g., Schlenker 2018b; Barnes & Ebert 2023) distinguish between co-speech gestures, which co-occur with the spoken words they modify (a slapping gesture co-occurs with punish in (31a)); post-speech gestures, which follow the words they modify (the gesture follows punish in (31b)); and pro-speech gestures, which fully replace some words (the slapping gesture has the function of a verb in (31c)).
When different tests are applied, such as embedding under negation, these three types display different semantic behaviors. Co-speech gestures have been argued to trigger conditionalized presuppositions, as in (32a) . Post-speech gestures have been argued to display the behavior of appositive relative clauses, and in particular to be deviant in some negative environments, as illustrated in (32b)–(32b′); in addition, post-speech gestures, just like appositive relative clauses, usually make non-at-issue contributions.
( Picture credits : M. Bonnet)
Only pro-speech gestures, as in (32c), make at-issue contributions by default (possibly in addition to other contributions). In this respect, they 'match' the behavior of iconic modulations, iconic plurals and pluractionals, and classifier predicates. But unlike these, pro-speech gestures are not words and are correspondingly expressively limited. For instance, abstract psychological verbs UNDERSTAND (= (15a)) and especially REFLECT (= (15b)) can be modulated in rich iconic ways in LSF—e.g., if the hand movement of REFLECT starts slow and ends fast, this conveys that the reflection intensified (Schlenker 2018a). But there are no clear pro-speech gestures with the same abstract meanings, and thus one cannot hope to emulate with pro-speech gestures the contributions of UNDERSTAND and REFLECT, including when they are enriched by iconic modulations.
In sum, while the reintegration of gestures into the study of speech opens new avenues of comparison between sign with iconicity and speech with gestures, one shouldn’t jump to the conclusion that these enriched objects display precisely the same semantic behavior.
Unlike gestures in general and pro-speech gestures in particular, classifier predicates have a conventional form (only the position, orientation, and movement are iconically interpreted, accompanied in limited cases by aspects of the handshape). But there are still striking similarities between pro-speech gestures and classifier predicates.
First, on a semantic level, the iconic semantics sketched for classifier predicates in Section 4.2 seems useful for pro-speech gestures as well, sometimes down to the details—for instance, it has been argued that the dependency between viewpoints and quantifiers illustrated in (30) has a counterpart with pro-speech gestures (Schlenker & Lamberton forthcoming).
Second, on a syntactic level, classifier predicates often display a different word order from other constructions, something that has been found across languages (Pavlič 2016). In ASL, the basic word order is SVO, but preverbal objects are usually preferred if the verb is a classifier predicate, for instance one that represents a crocodile moving and eating up a ball (as is standard in syntax, the ‘basic’ or underlying word order may be modified on independent grounds by further operations, for instance ones that involve topics and focus; we are not talking about such modifications of the word order here).
It has been proposed that the non-standard word order is directly related to the iconic properties of classifier predicates. The idea is that these create a visual animation of an action, and preferably take their argument in the order in which their denotations are visible (Schlenker, Bonnet et al. 2024; see also Napoli, Spence, and Müller 2017). One would typically see a ball and a crocodile before seeing the eating, hence the preference for preverbal objects (note that the subject is preverbal anyway in ASL). A key argument for this idea is that when one considers a minimally different sentence involving a crocodile spitting out a ball it had previously ingested, SVO order is regained, in accordance with the fact that an observer would see the object after the action in this case.
Strikingly, these findings carry over to pro-speech gestures. Goldin-Meadow et al. (2008) famously noted that when speakers of languages with diverse word orders are asked to use pantomime to describe an event with an agent and a patient, they tend to go with SOV order, including if this goes against the basic word order of their language (as is the case in English). Similarly, pre-verbal objects are preferred in sequences of pro-speech gestures in French (despite the fact that the basic word order of the language is SVO); this is for instance the case for a sequence of pro-speech gestures that means that a crocodile ate up a ball. Remarkably, with spit-out-type gestural predicates, an SVO order is regained, just as is the case with ASL classifier predicates (Schlenker, Bonnet, et al. 2024, following in part Christensen, Fusaroli, & Tylén 2016; Napoli, Mellon, et al. 2017; Schouwstra & de Swart 2014). This suggests that iconicity, an obvious commonality between the two constructions, might indeed be responsible for the non-standard word order.
Properties discussed above include: (i) the use of loci to realize anaphora, (ii) the overt marking of telicity and (possibly) context shift, (iii) the presence of rich iconic modulations interacting with event structure, plurals and pluractionals, and anaphora, (iv) the existence of classifier predicates, which have both a conventional and an iconic dimension. Although the examples above involve a relatively small number of languages, it turns out that these properties exist in several, and probably many, sign languages. Historically unrelated sign languages are thus routinely treated as a 'language family' because they share numerous properties that are not shared by spoken languages (Sandler & Lillo-Martin 2006). Of course, this still allows for considerable variation across sign languages, for instance with respect to word order (e.g., ASL is SVO, LIS is SOV).
Cases of convergence also exist in language emergence. Homesigners are deaf individuals who are not in contact with an established sign language and thus develop their own gesture systems to communicate with their families. While homesigners do not invent a sign language, they sometimes discover on their own certain properties of mature sign languages. Loci and repetition-based plurals are cases in point (Coppola & So 2006; Coppola et al. 2013). Strikingly, Coppola and colleagues (2013) showed in a production experiment that a group of homesigners from Nicaragua used both punctuated and unpunctuated repetitions, with the kinds of semantic distinctions found in mature sign language. Coppola et al. further
examined a child homesigner and his hearing mother, and found that the child’s number gestures displayed all of the properties found in the adult homesigners’ gestures, but his mother’s gestures did not. (Coppola, Spaepen, & Goldin-Meadow 2013: abstract)
This provided clear evidence that this homesigner had invented this strategy of plural-marking.
In sum, there is striking typological convergence among historically unrelated sign languages, and homesigners can in some cases discover grammatical devices found in mature sign languages.
It is arguably possible to have non-signers discover on the fly certain non-trivial properties of sign languages (Strickland et al. 2015; Schlenker 2020). One procedure involves hybrids of words and gestures. We saw a version of this in Section 5.3 , when we discussed similarities between pro-speech gestures and classifier predicates. The result was that along several dimensions, notably word order preferences and associated meanings, pro-speech gestures resemble ASL classifier predicates (they also differ from them in not having lexical forms).
More generally, hybrid sequences of words and gestures suggest that non-signers sometimes have access to a gestural grammar somewhat reminiscent of sign languages. (It goes without saying that there is no claim whatsoever that non-signers know the sophisticated grammars of sign languages, any more than a naive monolingual English speaker knows the grammar of Mandarin or Hebrew.) In one experimental study (summarized in Schlenker 2020), gestures with a verbal meaning, such as ‘send kisses’, targeted different positions, corresponding to the addressee or some third person, as illustrated below.
a. send kisses to you
b. send kisses to him/her
[Images of the two gestural forms omitted. Credits: J. Kuhn]
The conditions in which these two forms can be used turn out to be reminiscent of the behavior of the agreement verb TELL in ASL: in (5) , the verb could target the addressee position to mean I tell you , or some position to the side to mean I tell him/her . The study showed that non-signers immediately perceived a distinction between the second person object form and the third person object form of the gestural verb, despite the fact that English, unlike ASL, has no object agreement markers. In other words, non-signers seemed to treat the directionality of the gestural verb as a kind of agreement marker. More fine-grained properties of the ASL object agreement construction were tested with gestures, again with positive results.
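The agreement pattern can be sketched with the locus-as-variable idea familiar from the sign language literature (loci treated as overt indices, an analysis often attributed to Lillo-Martin & Klima 1990); the sketch below is illustrative only, and its event-semantic decomposition is an assumption rather than the analysis presupposed in the text.

```latex
% Illustrative sketch; the event-semantic decomposition is an assumption.
% A pronoun realized by pointing to locus a denotes the value of the
% assignment function g at a:
[\![ \text{IX-}a ]\!]^{g} \;=\; g(a)

% A directional verb moving from the signer toward locus a:
[\![ {}_{1}\text{TELL-}a ]\!]^{g} \;=\;
\lambda e.\; \mathit{tell}(e) \,\wedge\, \mathrm{agent}(e)=\mathit{speaker}
\,\wedge\, \mathrm{goal}(e)=g(a)
```

Targeting the addressee's locus versus a third-person locus simply changes which individual the assignment g supplies, yielding 'I tell you' versus 'I tell him/her'; on this view, the gestural 'send kisses' verb behaves analogously.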
More broadly, it has been argued that aspects of gestural grammar resemble the grammar of ASL in designated cases involving loci, repetition-based plurals and pluractionals, Role Shift, and telicity marking (e.g., Schlenker 2020 and references therein). These findings have yet to be confirmed with experimental means, but if they are correct, the question is why.
We have seen three cases of convergence in the visual modality: typological convergence among unrelated sign languages, homesigners' ability to discover designated aspects of sign language grammar, and possibly the existence of a gestural grammar somewhat reminiscent of sign language in designated cases. None of these cases should be exaggerated. While they display striking typological similarities, sign languages are very diverse, varying at all levels of linguistic structure. As for homesigners, the gestural systems they develop only partly compensate for the lack of access to a sign language; indeed, homesigners suffer the consequences of lacking access to a native language (see for instance Morford & Hänel-Faulhaber 2011; Gagne & Coppola 2017). Finally, non-signers cannot guess anything about sign languages apart from a few designated properties.
Still, these cases of convergence should be explained. There are at least three conceivable directions, which might have different areas of applicability. Chomsky famously argued that there exists an innate Universal Grammar (UG) that underlies all human languages (see for instance Chomsky 1965, Pinker 1994). One possibility is that UG doesn't just specify abstract features and rules (as is usually assumed), but also certain form-to-meaning mappings in the visual modality, for instance the fact that pronouns are realized by way of pointing. A second possibility is that the iconic component of sign language (possibly in more abstract forms than is usually assumed) is responsible for some of the convergence. An example was discussed in Section 5.3 in relation to the word order differences between classifier predicates and normal signs, and between gesture sequences and normal words. A third possibility is that, for reasons that have yet to be determined, the visual modality sometimes makes it possible to realize in a more uniform fashion some deeper cognitive properties of linguistic expressions.
On a practical level, future research will have to find the optimal balance between fine-grained studies and robust methods of data collection (e.g., what are the best methods to collect fine-grained data from a small number of consultants? how can large-scale experiments be set up for sign language semantics?). A second issue pertains to the involvement of native signers and Deaf researchers, who should obviously play a central role in this entire research enterprise.
On a theoretical level, the traditional view of human language as a discrete system with iconicity at the margins is hard to maintain in view of the analysis of sign with iconicity (and possibly also of speech with gestures). Rather, human language is a hybrid system with a discrete/logical component and an iconic component. But there are multiple open issues. First, cases of Logical Visibility will no doubt give rise to further debates. Second, a formal iconic semantics appropriate for sign language has yet to be fully developed. Third, the interaction between the discrete/logical component and the iconic component must be investigated in greater detail. Fourth, the formal semantics of sign language should be extended with an equally formal pragmatics to investigate, among other things, information structure and the rich typology of inferences that has been unearthed for spoken languages (including implicatures, presuppositions, supplements, expressives, etc.). Importantly, this formal pragmatics will have to explore both the discrete/logical and the iconic components of sign language. Fifth, the consequences of the iconic component for syntax will have to be further explored, especially in view of the hypothesis that classifier predicates display a non-standard syntax because they have an iconic semantics. Last but not least, the philosophy of language should take sign languages into account; for the moment, it almost never does.
anaphora | innateness: and language | logical form | ontology, natural language | plural quantification | presupposition | quotation | semantics: dynamic | tense and aspect
Author contributions: Schlenker and Kuhn wrote the article. Lamberton commented and provided some of the ASL data.
Acknowledgments: We are very grateful to Editor Ed Zalta and to two anonymous reviewers for very constructive comments and suggestions. Many thanks to Lucie Ravaux for help with the formatting and with the bibliography.
Funding: Schlenker, Lamberton, Kuhn: This research received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 788077, Orisem, PI: Schlenker).
Schlenker, Kuhn: Research was conducted at the DEC, Ecole Normale Supérieure - PSL Research University. The DEC is supported by grant FrontCog ANR-17-EURE-0017.
Copyright © 2024 by Philippe Schlenker < philippe . schlenker @ gmail . com > Jeremy Kuhn < jeremy . d . kuhn @ gmail . com > Jonathan Lamberton < jonlamberton @ gmail . com >