In this paper, we propose a unified methodology for assessing the effectiveness of text-to-image generation models. Existing evaluation methods rely on criteria such as fidelity and alignment between verbal prompts and generated images, but both human evaluation and CLIPScore conflate criteria of position, action, and photorealism. We instead adapt the analytical models elaborated in visual semiotics to identify a set of discrete visual composition criteria on which to ground more accurate assessments. We distinguish three fundamental dimensions (plastic categories, multimodal translation, and enunciation), each articulated into multiple specific sub-criteria. We then test these criteria on Midjourney and DALL·E, providing the abstract structure of the prompts so that they can be reused in future quantitative analyses.
Research Center/Unit: Traverses - ULiège
Disciplines: Art & art history; Languages & linguistics; Computer science
Author, co-author: D'Armenio, Enzo; Université de Liège - ULiège > Département de langues et littératures romanes