Abstract:
Despite recent advances in multi-modal AI, a significant gap remains in its ability to be incorporated into complex design and engineering work. One such challenge concerns contexts where sketch-based inputs are desirable, owing to the difficulty of recognizing freehand sketches and interpreting the underlying human intent. To elucidate requirements for emerging sketch-based AI systems in complex design contexts, we consider an architectural design case study. Using a Wizard of Oz experimental paradigm, we substitute the “tool” with human agents and conduct a lab-based study in which professional architectural designers complete a design brief using this “tool”. Here, the human agents execute functions such as recognizing freely produced design plans and perspective drawings for downstream applications (e.g., generating inspirational images or high-quality renders). Observing the human agents performing the sketch recognition task, we find that agents rely not only on visible sketch elements (i.e., lines) and architectural drawing codes, but also on their memory of previously drawn lines and their knowledge of the design brief to comprehend the perceived lines. Agents gradually develop an understanding not only of the designed artifact but also of the designer's intentions. These activities are crucial for the agent to build a functional model of the designed object, beyond a purely topological and geometric perception model. Insights from this human workflow point to new sketch recognition techniques for design and engineering tasks, informing the inclusion of new resources within AI tools.