We present a connectionist model of visuospatial working memory (WM). The core WM architecture encodes new information by binding it to contexts through Hebbian learning. The representations encoded by the model are two-dimensional spatial locations, derived from the internal activation patterns of an auto-encoder trained to reproduce its input. We simulated an experiment in which the model encodes locations of varying proximity presented sequentially, followed by order-reconstruction and recall tests. The model generates two important predictions. First, spatial proximity impairs memory for order: in an order-reconstruction test, the WM representations of spatially close locations are more difficult to discriminate, leading to more confusion errors. Second, spatial proximity improves memory for items: in a recall task, recall error (the Euclidean distance between the presented and the recalled location) is smaller for sequences composed of spatially close locations. We tested the model's predictions against data from 30 subjects who performed the same task as the model; both predictions were confirmed. We propose that similarity effects in WM are governed by domain-general principles, as equivalent effects have been established for other dimensions of similarity, such as auditory, visual, and phonological similarity.
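The order-confusion prediction can be illustrated with a minimal sketch of Hebbian item-context binding. This is not the paper's implementation: the item vectors below are random stand-ins for the auto-encoder's internal location codes (with one pair made deliberately similar, mimicking two spatially close locations), the positional contexts are assumed orthonormal, and outer-product binding is one standard formalization of Hebbian association.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # dimensionality of the hypothetical location codes
N = 3  # sequence length

# Stand-ins for the auto-encoder's internal codes: item 1 is a slightly
# perturbed copy of item 0, as if the two locations were spatially close.
items = rng.standard_normal((N, D))
items[1] = items[0] + 0.1 * rng.standard_normal(D)
items /= np.linalg.norm(items, axis=1, keepdims=True)

# Orthonormal positional contexts, one per serial position.
contexts = np.eye(N)

# Hebbian encoding: accumulate the outer product of each item with its
# context into a single association matrix of shape (D, N).
W = sum(np.outer(it, c) for it, c in zip(items, contexts))

# Order reconstruction: cue memory with an item to retrieve its position
# code. Because items 0 and 1 overlap, both positions are activated,
# making their serial positions hard to discriminate.
pos_from_item0 = items[0] @ W
print(np.round(pos_from_item0, 2))
```

The first two entries of the retrieved position code are nearly equal, while the third (an unrelated item) stays small: the similarity of the item codes, not the contexts, is what drives the confusion errors between neighboring positions.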