This document plots the results of a 30-minute version of Real Objects Attentional Blink. In this task, subjects performed an attentional blink paradigm in which they reported the identity of two colorful real-world object targets among grayscale real-world object distractors. T1 appeared as either the 4th or 6th image in the stream, and T2 appeared at a lag of 1, 2, or 8 images after T1. At the end of each trial, subjects were shown a probe that was equally likely to be the same exemplar/state as the target or different. Subjects reported with the keyboard whether the probe was the same exemplar/same state (a), same exemplar/different state (s), different exemplar/same state (d), or different exemplar/different state (f). This report was made separately for T1 and T2 on each trial. The experiment tests whether subjects lose all information about T2 when that target is “blinked” or whether the loss of information is graded.
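The four keypresses encode two binary judgments (exemplar match and state match). Below is a minimal sketch of how these reports might be recoded for analysis; the mapping comes from the task description above, while the function name and the assumption that each report is stored as a single keypress character are illustrative.

```python
# A minimal sketch, assuming each T1/T2 report is stored as the single
# keypress character described above. The key-to-judgment mapping comes
# from the task description; the function name is illustrative.
REPORT_KEYS = {
    "a": (True, True),    # same exemplar, same state
    "s": (True, False),   # same exemplar, different state
    "d": (False, True),   # different exemplar, same state
    "f": (False, False),  # different exemplar, different state
}

def decode_report(key: str) -> tuple[bool, bool]:
    """Return (reported_same_exemplar, reported_same_state) for one keypress."""
    return REPORT_KEYS[key.lower()]
```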
This data set was collected in a hurry at the end of the quarter, but it could serve as experimental data if subjects can do the task. Data collection began on 3/8/2021. The goal is to model these data using some flavor of General Recognition Theory, which should now be fairly straightforward given the change in response format.
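For reference, a GRT-style identification analysis of this design could start from a confusion matrix of probe type (same/different exemplar crossed with same/different state) against the four responses. A minimal sketch, assuming a trial-level DataFrame with hypothetical columns "probe_type" and "response":

```python
import pandas as pd

# Sketch of the identification confusion matrix a GRT analysis could start
# from, assuming hypothetical columns "probe_type" (the true exemplar/state
# relation of the probe) and "response" (the reported relation), both coded
# with the same four labels.
def grt_confusion_matrix(trials: pd.DataFrame) -> pd.DataFrame:
    """Rows: true probe relation; columns: reported relation; values: row proportions."""
    return pd.crosstab(trials["probe_type"], trials["response"], normalize="index")
```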
Note: No error bars yet! Subjects whose overall accuracy on T1 reports was below 30% were excluded.
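A sketch of this exclusion rule, assuming trial-level data with hypothetical columns "subject" and "t1_correct":

```python
import pandas as pd

# Sketch of the subject-exclusion rule described above, assuming trial-level
# data with hypothetical columns "subject" and "t1_correct" (1 if the T1
# report was fully correct, 0 otherwise).
T1_ACCURACY_CUTOFF = 0.30

def exclude_low_t1_subjects(trials: pd.DataFrame) -> pd.DataFrame:
    """Drop all trials from subjects whose overall T1 accuracy is below the cutoff."""
    t1_accuracy = trials.groupby("subject")["t1_correct"].mean()
    keep = t1_accuracy[t1_accuracy >= T1_ACCURACY_CUTOFF].index
    return trials[trials["subject"].isin(keep)]
```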
Data cleaning note: The file titled “./Mon%20Mar%2008%202021%2001:30:05%20GMT-0800%20(PST).txt” is incomplete for unknown reasons, containing only about a quarter of the typical file size. This subject was excluded due to incomplete data.
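One way to catch such truncated files automatically is to compare each raw file's size against the typical size. The sketch below is illustrative only; the directory path and the one-half threshold are assumptions, not taken from the actual pipeline.

```python
from pathlib import Path

# Illustrative check for truncated raw files: keep only files at least half
# the approximate median file size. The directory path and 0.5 threshold are
# assumptions.
def complete_data_files(data_dir: str = "./", min_fraction: float = 0.5) -> list[Path]:
    files = sorted(Path(data_dir).glob("*.txt"))
    sizes = [f.stat().st_size for f in files]
    typical = sorted(sizes)[len(sizes) // 2] if sizes else 0  # approximate median
    return [f for f, size in zip(files, sizes) if size >= min_fraction * typical]
```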