Abstract:
The semantic map model is relatively new in linguistic research, but it has been used intensively over the past three decades to study a variety of cross-linguistic and language-specific questions. The number of linguistic domains to which the model has been applied highlights its efficiency in capturing regular patterns of semantic structure and cross-linguistic similarities of form-meaning correspondence (for a complete list of domains, see Georgakopoulos & Polis, 2018). One of the advantages of the model is that any type of meaning can be integrated into a semantic map, such as the functions of grammatical morphemes, the meanings of entire constructions, or the senses of lexical items, resulting in grammatical, constructional, and lexical semantic maps, respectively. However, the different types of maps have not received equal attention in the literature. Rather, there is a strong bias towards studies describing the cross-linguistic polyfunctionality of grammatical morphemes and constructions. Additionally, the bulk of research using the semantic map method has adopted a synchronic perspective, and the limited research that has added the diachronic dimension has focused almost exclusively on the grammatical domain (e.g., van der Auwera & Plungian, 1998; Narrog, 2010). A notable common denominator of most of these studies is that the semantic maps have been plotted manually (cf., however, the studies using the Multidimensional Scaling procedure).

The aim of this talk is threefold. First, it shows that, using synchronic polysemy data from large language samples such as CLICS (List et al., 2014) or the Open Multilingual Wordnet (http://compling.hss.ntu.edu.sg/omw/), one can infer large-scale weighted lexical semantic maps. These maps, which are constructed with the help of an adapted version of the algorithm introduced by Regier, Khetarpal, and Majid (2013), respect the connectivity hypothesis (Croft, 2001) and what we call the ‘economy principle’. As such, they generate more interesting implicational universals than regular colexification networks. Additionally, the automatically plotted semantic maps can be examined with standard network exploration software. Such tools reveal much information otherwise ‘hidden’ in the graph (e.g., the modularity of the network or the centrality of individual meanings) and are essential when it comes to interpreting large-scale cross-linguistic datasets (a toy sketch of this inference step is given below).

Second, this talk demonstrates how information on the paths of semantic extension undergone by content words can be incorporated into lexical semantic maps. We illustrate the method with the semantic extension of time-related lexemes (e.g., TIME, HOUR, SEASON, DAY) in Ancient Greek (8th–1st c. BC) and Ancient Egyptian – Coptic (26th c. BC – 10th c. AD). Both languages give access to substantial diachronic material, allowing us to trace long-term processes of semantic change.
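As a purely illustrative sketch of this first aim, the Python fragment below infers a small semantic map from invented colexification data using a greedy edge-addition strategy in the spirit of Regier, Khetarpal, and Majid (2013), then inspects the result with standard network measures. The toy data, the edge-weighting choice, and the use of networkx are our assumptions here, not the adapted algorithm or pipeline actually used in the talk.

```python
# Minimal sketch (not the authors' code): starting from an edgeless graph
# over senses, greedily add the edge that renders the most word-sense sets
# connected, until every word's senses form a connected subgraph, as
# required by the connectivity hypothesis (Croft, 2001).
import itertools
import networkx as nx
from networkx.algorithms import community

# Toy colexification data (invented): each entry is the set of senses
# covered by one word in one language.
terms = [
    {"TIME", "HOUR"},
    {"TIME", "SEASON"},
    {"TIME", "HOUR", "DAY"},
    {"DAY", "SUN"},
    {"SEASON", "YEAR"},
]

senses = sorted(set().union(*terms))
G = nx.Graph()
G.add_nodes_from(senses)

def n_connected(graph):
    """Count the words whose sense set forms a connected subgraph."""
    return sum(1 for t in terms if nx.is_connected(graph.subgraph(t)))

# Greedy edge addition until every word respects connectivity.
while n_connected(G) < len(terms):
    best_edge, best_score = None, -1
    for u, v in itertools.combinations(senses, 2):
        if G.has_edge(u, v):
            continue
        G.add_edge(u, v)        # try the edge ...
        score = n_connected(G)
        G.remove_edge(u, v)     # ... and undo the trial
        if score > best_score:
            best_edge, best_score = (u, v), score
    G.add_edge(*best_edge)

# Weight each edge by the number of words colexifying both endpoint
# senses (one simple weighting choice among several possible ones).
for u, v in G.edges:
    G[u][v]["weight"] = sum(1 for t in terms if {u, v} <= t)

# Standard network exploration: community structure and centrality,
# of the kind usually done in tools such as Gephi.
print(list(community.greedy_modularity_communities(G)))
print(nx.degree_centrality(G))
```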
Third, in an effort to address some of the shortcomings of classical semantic maps, we suggest that they can be used conjointly with a new approach, namely Formal Concept Analysis (FCA; see Ryzhova & Obiedkov, 2017). The complementarity of the two approaches proves effective in revealing both language universals and areal patterns within the lexicon. A case study on verbs of perception and cognition based on different datasets allows us to illustrate both the potential and the limitations of such an approach.
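For readers unfamiliar with FCA, the following brute-force sketch shows the core machinery on an invented context; it is not Ryzhova & Obiedkov's implementation, and the verb labels and meaning inventory are hypothetical. A formal context relates objects (here, language-specific verbs) to attributes (the meanings they express), and a formal concept is a pair (extent, intent) closed under the two derivation operators.

```python
# Brute-force enumeration of the formal concepts of a tiny context
# (illustration only; real FCA tools use efficient algorithms such as
# NextClosure rather than subset enumeration).
from itertools import combinations

# Hypothetical context: verb -> set of attested meanings.
context = {
    "verb_a": {"SEE", "KNOW"},
    "verb_b": {"SEE", "UNDERSTAND"},
    "verb_c": {"HEAR", "UNDERSTAND"},
    "verb_d": {"SEE", "KNOW", "UNDERSTAND"},
}
attributes = sorted(set().union(*context.values()))

def extent(attrs):
    """Objects possessing every attribute in `attrs`."""
    return {o for o, a in context.items() if attrs <= a}

def intent(objs):
    """Attributes shared by every object in `objs`."""
    if not objs:
        return set(attributes)
    return set.intersection(*(context[o] for o in objs))

# Close every attribute subset; each closure yields one formal concept.
concepts = set()
for r in range(len(attributes) + 1):
    for combo in combinations(attributes, r):
        e = extent(set(combo))
        concepts.add((frozenset(e), frozenset(intent(e))))

for e, i in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(e), "<->", sorted(i))
```

Ordered by extent inclusion, these concepts form the lattice that FCA contributes alongside the semantic map, which is what makes the two representations complementary.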