

Research Work

Building object memories in a real-world environment

By Dejan Draschkow (27.01.2015)

Searching for objects in our immediate environment is a task that engages us every day. From the moment we look for the teabags in the kitchen each morning until we hunt for the box of picture hooks to hang the newest family photo in the evening, we spend a considerable amount of time searching for what we currently need. Usually, we search for specific objects that are relevant to a task we want to carry out, and we rarely take the time to deliberately memorize information about them. Yet representations of relevant objects would certainly be useful in the future, and whenever we are prompted, we find that we can recall a great deal about them. Often we can even remember objects that did not matter to us but were part of the same environment as the objects we were looking for. The surroundings that contain these objects clearly play a role in our ability to recall object details. To understand the characteristics and composition of the representations formed during search in real-world environments, we investigated whether task-relevant and task-irrelevant information is affected differently by interference from similar environmental contexts (i.e., high semantic similarity).

Draschkow: Fig. 1

To examine whether memory interference through semantic similarity interacts with object relevance, we had participants search for objects in a real-world environment and assessed their memory of it with a surprise free-recall task. Participants wore a portable eye tracker for the entire duration of the experiment, which allowed us to investigate the prioritization of task-relevant objects. The environment consisted of nine spatially separated compartments mimicking rooms of different categories, each containing an array of objects typically found in that room category (e.g., a dining room). Participants were free to move around the environment, which posed the same spatial reference challenges present in active behavior in natural environments; however, they were not allowed to interact with the objects in the rooms. Participants then entered five rooms in which they had to find as many objects of a given category as possible. To investigate how semantic similarity interferes with room memory, participants either entered five rooms of a similar category (e.g., five dining rooms) or five rooms of different categories (e.g., dining room, office, children’s room, waiting room, living room). We thereby defined a target room, which was searched either first or last. Unaware of the subsequent memory test, participants searched for task-relevant objects as quickly as they could for 30 seconds and reported every time they believed they had found one. After the searches were completed, participants had to recall everything they remembered from the rooms.

Critically, memory for task-relevant and task-irrelevant objects was affected differently by semantic interference. A trend towards higher memory performance for task-irrelevant objects in dissimilar contexts suggests that these representations can suffer from semantic similarity interference. Task-relevant information, in contrast, was generally unaffected by the similarity of the contextual information and thus proved robust against interference. If anything, memory for task-relevant objects showed the reverse pattern, with performance elevated in similar rooms. These results suggest that semantic interference interacts with the relevance of information: it disrupts memory for irrelevant items while sparing, or even enhancing, memory for relevant ones.

In sum, we demonstrated that task-relevant object representations generated incidentally during search in real-world environments are robust to interference from semantically similar contexts. Semantic similarity does, however, decrease memory performance for task-irrelevant objects. This implies that the task relevance of environmental details is decisive both for how well they can be recalled later and for how susceptible they are to memory interference.

Further Information: http://www.draschkow.com/


Scientific career
  • since 10/2014: PhD student, Scene Grammar Lab, Goethe University Frankfurt
  • since 02/2014: Center for Digital Technology and Management (CDTM), LMU & TUM Elite Honors Program
  • 2012-2014: Elite Graduate Program Neuro-Cognitive Psychology, Ludwig-Maximilians-Universität München
  • 2008-2011: Bachelor of Science in Psychology, Ludwig-Maximilians-Universität München

Scholarships and Awards
  • Online scholarship of e-fellows.net (2014)
  • Best Poster Award, sponsored by NCP (2013)
  • Vision Sciences Society Student Award, sponsored by Elsevier/Vision Research (2013)
  • Grant of the Bavarian Ministry of Labour (2013)
  • 1st place in the National Plurilingual Competition of the Bulgarian Ministry of Science and Education (2008)

Publications and Conference Presentations
  • Draschkow, D., Wolfe, J. M., & Võ, M. L.-H. (2014). Seek and you shall remember: Scene semantics interact with visual search to build better memories. Journal of Vision, 14(8):10, 1–18.
  • Josephs, E. L., Draschkow, D., Wolfe, J. M., & Võ, M. L.-H. (2014, May). Active visual search boosts memory for objects, but only when making a scene. VSS, St. Pete, USA.
  • Võ, M. L.-H., Draschkow, D., Josephs, E. L., & Wolfe, J. M. (2014, March). Seek and remember: Scene semantics interact with visual search to build better memories. TeaP, Giessen, Germany.
  • Võ, M. L.-H., Draschkow, D., Farmer, R., & Wolfe, J. M. (2013, November). Memory representations are not created equal: Incidental encoding during object search outperforms intentional object memorization. Psychonomic Society, Toronto, ON.
  • Draschkow, D., Võ, M. L.-H., Farmer, R., & Wolfe, J. M. (2013). Task dependent memory recall performance of naturalistic scenes: Incidental memorization during search outperforms intentional scene memorization. Journal of Vision, 13(9), 329–329.