Abstract:
The aim of this talk is threefold. First, it shows that, using synchronic polysemy data from large language samples such as CLICS (List et al., 2014), the Open Multilingual Wordnet (http://compling.hss.ntu.edu.sg/omw/), or BabelNet (https://babelnet.org/about), one can infer large-scale weighted lexical semantic maps. These maps, constructed with the help of an adapted version of the algorithm introduced by Regier, Khetarpal, and Majid (2013), respect the connectivity hypothesis (Croft, 2001) and the 'economy principle' (Georgakopoulos & Polis, 2018). As such, they generate more interesting implicational universals than regular colexification networks. Additionally, the automatically plotted semantic maps can be examined with standard network exploration software. Such tools reveal much information otherwise 'hidden' in the graph, such as the modularity of the network and the centrality of meanings, and are essential when it comes to interpreting large-scale crosslinguistic datasets.

Second, this talk seeks to demonstrate how information on the paths of semantic extension undergone by content words may be incorporated into synchronic lexical semantic maps. We illustrate the principle with the semantic extension of time-related lexemes (e.g. TIME, HOUR, SEASON, DAY) in Ancient Greek (8th c. BC – 1st c. AD) and Ancient Egyptian – Coptic (26th c. BC – 10th c. AD). Both languages give access to significant diachronic material, allowing us to trace long-term processes of semantic change within the lexicon. From a methodological point of view, we argue for the use of various types of graphs, including mixed multi-edge graphs, which can capture bidirectionality in semantic change as well as cases in which information about pathways of change is not available (see already van der Auwera & Plungian, 1998 for the use of directed graphs).

Third, in an effort to address some critiques voiced against the classical semantic maps approach, we suggest that this type of map can be used conjointly with (1) statistical techniques for dimensionality reduction (such as MDS and t-SNE; see already Croft & Poole, 2008) and (2) Formal Concept Analysis (FCA; see Ryzhova & Obiedkov, 2017). Based on a case study of verbs of perception and cognition, we illustrate the complementarity of the three approaches for revealing universal, areal, and language-specific patterns within the lexicon.
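As an illustration of the first point, here is a minimal Python sketch, not the talk's actual implementation, of a Regier-style greedy heuristic: starting from an edgeless graph over meanings, it repeatedly adds the edge that renders the largest number of attested colexification patterns connected (the connectivity hypothesis), stopping once every pattern is connected, which keeps the edge set small (the economy principle). The function name, the toy data, and the edge-weighting scheme are illustrative assumptions.

```python
from itertools import combinations
import networkx as nx

def infer_semantic_map(meanings, patterns):
    """meanings: list of senses; patterns: list of sets of senses,
    each set = the senses colexified by one word in one language."""
    G = nx.Graph()
    G.add_nodes_from(meanings)

    def uncovered():
        # patterns whose senses do not yet form a connected subgraph
        return [p for p in patterns
                if len(p) > 1 and not nx.is_connected(G.subgraph(p))]

    remaining = uncovered()
    while remaining:
        best_edge, best_gain = None, -1
        for u, v in combinations(meanings, 2):
            if G.has_edge(u, v):
                continue
            # tentatively add the edge and count newly covered patterns
            G.add_edge(u, v)
            gain = len(remaining) - len(uncovered())
            G.remove_edge(u, v)
            if gain > best_gain:
                best_gain, best_edge = gain, (u, v)
        G.add_edge(*best_edge)
        remaining = uncovered()

    # weight each edge by the number of patterns it serves (a simple proxy)
    for u, v in G.edges:
        G[u][v]["weight"] = sum(1 for p in patterns if u in p and v in p)
    return G

if __name__ == "__main__":
    senses = ["TIME", "HOUR", "SEASON", "DAY"]
    # toy colexification patterns (hypothetical, not real CLICS data)
    patterns = [{"TIME", "HOUR"}, {"TIME", "SEASON"},
                {"SEASON", "DAY"}, {"TIME", "HOUR", "SEASON"}]
    semantic_map = infer_semantic_map(senses, patterns)
    print(sorted(semantic_map.edges(data=True)))
```

On the toy data this yields three weighted edges (TIME–HOUR, TIME–SEASON, SEASON–DAY) rather than the six of a complete graph; the brute-force candidate scan would of course need optimizing for CLICS-scale inputs.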
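For the network-exploration step, measures such as modularity-based community structure and node centrality can be computed directly in networkx (dedicated tools such as Gephi or Cytoscape expose the same measures interactively). A minimal sketch, assuming the `semantic_map` graph built above:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# community structure: clusters of meanings that are densely colexified
communities = greedy_modularity_communities(semantic_map, weight="weight")
q = modularity(semantic_map, communities, weight="weight")

# centrality: which meanings act as hubs of the lexical network?
# (unweighted here; weighted variants would first need colexification
# counts converted into distances)
betweenness = nx.betweenness_centrality(semantic_map)

print(f"modularity = {q:.3f}")
print(sorted(betweenness.items(), key=lambda kv: -kv[1]))
```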
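Regarding the second point, networkx has no native mixed-graph type, so one possible encoding of the mixed multi-edge graphs argued for here, again an assumption rather than the talk's implementation, is a MultiDiGraph whose edges carry an attribute recording whether the direction of change is attested. All example values are hypothetical.

```python
import networkx as nx

D = nx.MultiDiGraph()
# attested diachronic pathway (illustrative values, not real data):
D.add_edge("TIME", "HOUR", directed=True, source="Ancient Greek")
D.add_edge("TIME", "SEASON", directed=True, source="Coptic")
# bidirectional change is simply two attested edges in opposite directions:
D.add_edge("HOUR", "TIME", directed=True, source="Coptic")
# synchronic colexification with no diachronic information:
D.add_edge("SEASON", "DAY", directed=False)

# edges whose directionality is unknown can then be filtered out
undirected_links = [(u, v) for u, v, d in D.edges(data=True)
                    if not d["directed"]]
```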
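Finally, the dimensionality-reduction comparison can be prototyped with scikit-learn by turning a sense-by-sense colexification-count matrix into dissimilarities and embedding the senses in two dimensions. The toy matrix, the count-to-distance conversion, and all parameter values below are illustrative only; the FCA component (Ryzhova & Obiedkov, 2017) is left aside here.

```python
import numpy as np
from sklearn.manifold import MDS, TSNE

# toy sense-by-sense colexification counts for TIME, HOUR, SEASON, DAY
counts = np.array([[0, 5, 2, 0],
                   [5, 0, 1, 0],
                   [2, 1, 0, 4],
                   [0, 0, 4, 0]], dtype=float)

# crude conversion: more frequent colexification -> smaller distance
dissim = (counts.max() + 1) - counts
np.fill_diagonal(dissim, 0.0)

mds_xy = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissim)
tsne_xy = TSNE(n_components=2, metric="precomputed", init="random",
               perplexity=2, random_state=0).fit_transform(dissim)
```

Whereas the semantic map makes the implicational structure explicit, the MDS and t-SNE embeddings trade that for a continuous spatial view, which is precisely why the three approaches are complementary rather than interchangeable.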