dc.description.abstract | Visual search and working memory (WM) are tightly linked cognitive processes. Theories of
attentional selection assume that WM plays an important role in top-down guided visual search.
However, computational models of visual search do not model WM. Here we show that an
existing model of WM can utilize its mechanisms of rapid plasticity and pattern completion to
perform visual search. In this model, a search template, like a memory item, is encoded into the
network’s synaptic weights, forming a momentarily stable attractor. During search, recurrent
activation between the template and visual inputs amplifies the target and suppresses non-matching features via mutual inhibition. While the model cannot outperform models designed
specifically for search, it can, “off-the-shelf”, account for important characteristics of search behaviour. Notably, it
produces search display set-size costs, repetition effects, and multiple-template search effects,
qualitatively in line with empirical data. It is also informative that the model fails to reproduce
some important aspects of visual search behaviour, such as suppression of repeated distractors.
Moreover, without additional control structures for top-down guidance, the model cannot
differentiate between encoding a target and searching for it. The shared architecture bridges
theories of visual search and visual WM, highlighting their common structure and their differences. | en |
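The abstract describes, but does not implement, the mechanism of one-shot encoding and recurrent competition. Below is a minimal sketch of that idea, not the authors' model: all names and parameter values (feature dimensionality `N`, learning rate `ETA`, inhibition strength `BETA`) are assumptions. A template is stored by a single Hebbian (outer-product) weight update, and during search, pattern completion through those weights drives the matching display item while mutual inhibition suppresses the rest.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative assumptions, not parameters from the published model.
N = 128          # feature dimensionality per item
ETA = 1.0 / N    # one-shot Hebbian learning rate (rapid plasticity)
BETA = 0.3       # strength of mutual inhibition between display items
STEPS = 40       # recurrent settling steps

def make_item():
    """A random binary feature vector standing in for one display item."""
    return rng.choice([-1.0, 1.0], size=N)

# Encoding: rapid plasticity stores the search template as a momentarily
# stable attractor via an outer-product (Hebbian) weight update.
template = make_item()
W = ETA * np.outer(template, template)
np.fill_diagonal(W, 0.0)

# Search display: the target (matching the template) among distractors.
set_size = 6
display = [template.copy()] + [make_item() for _ in range(set_size - 1)]

# Recurrent settling: each item's drive is how strongly pattern completion
# through W reconstructs it; mutual inhibition makes the items compete.
drive = np.array([item @ (W @ item) for item in display]) / N
act = np.full(set_size, 0.1)
for _ in range(STEPS):
    inhibition = BETA * (act.sum() - act)   # inhibition from rival items
    act = np.clip(act + 0.1 * (drive - inhibition - act), 0.0, 1.0)

print("selected item:", int(np.argmax(act)))   # expected: 0 (the target)
print("activations:", np.round(act, 3))
```

In this sketch, adding distractors increases the total inhibition the target must overcome, slowing its rise to a winning activation, which is one way such an architecture could yield the set-size costs mentioned in the abstract.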