
DEPTH PERCEPTION

Stereopsis and Artistic Talent

Livingstone, M.S., Lafer-Sousa, R., & Conway, B.R. (2011). Stereopsis and artistic talent: Poor stereopsis among art students and established artists. Psychological Science, 22, 336–338.

Pictorial depictions of three-dimensional scenes have been of interest to vision scientists because they provide a compelling example of how the perception of depth can arise solely from two-dimensional surfaces. The perception of depth in such pictures is conveyed by a variety of monocular depth cues, including linear perspective, occlusion, and shading. By definition, monocular depth cues require the use of only a single eye, and the perception of depth in pictures may in fact be enhanced when only a single eye is open. Recently, Livingstone, Lafer-Sousa, and Conway turned this issue on its head by investigating whether expert artists—i.e., individuals who have extensive training in using monocular depth cues—may have difficulty utilizing binocular depth cues such as stereopsis, which involves fusing the disparity that arises from the two eyes’ separate views of the world. To address this issue, Livingstone et al. created a disparity discrimination task using dynamic random-dot stereograms in which observers had to judge whether a central square stimulus appeared at the same depth plane as the display monitor, or whether it appeared behind or in front of that plane. A total of 10 non-zero disparities were shown; in addition, the central square could also appear at zero disparity. Two groups were compared: one consisted of 403 art students, and the other consisted of 190 college students who were not majoring in art. The results clearly showed that the artists had significantly lower accuracy than the controls in discriminating the relative depth of the central square in all 10 non-zero disparity conditions; however, the artists and controls were equally accurate in the zero-disparity condition. Having shown that artists exhibited worse disparity discrimination than the non-artists, Livingstone et al. further investigated why this performance difference arose. One possibility is that artists rely on monocular depth cues more than on binocular depth cues. Another possibility is that artists may be more likely to have physical anomalies that disable their use of stereopsis. To investigate this latter possibility, Livingstone et al. analyzed photographs of 123 well-known artists to examine whether they showed a greater likelihood of interocular misalignment (i.e., strabismus) than photographs of 129 congressmen. Although the two eyes were most likely to be aligned in both groups, there was a significantly higher incidence of misalignment in the artist group than in the control group. Based on these findings, the authors concluded that poor stereopsis may contribute to artistic talent. –B.S.G.
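A dynamic random-dot stereogram carries depth purely in a horizontal offset between the dot patterns shown to the two eyes; the central square is invisible to either eye alone. As a rough single-frame sketch of this construction (not the authors' stimulus code; the image size, square size, and pixel disparity are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_dot_stereogram(size=100, square=40, disparity_px=3):
    """Left/right random-dot images whose central square is shifted
    horizontally in one eye's image; when the pair is fused, the square
    appears in front of or behind the surround, depending on the sign
    of the shift."""
    left = rng.integers(0, 2, (size, size))
    right = left.copy()
    top = (size - square) // 2
    region = left[top:top + square, top:top + square]
    # Shift the central square by `disparity_px` in the right eye's image...
    right[top:top + square, top + disparity_px:top + square + disparity_px] = region
    # ...and fill the strip uncovered by the shift with fresh random dots.
    right[top:top + square, top:top + disparity_px] = rng.integers(0, 2, (square, disparity_px))
    return left, right

left, right = random_dot_stereogram()
print(left.shape, right.shape)  # (100, 100) (100, 100)
```

In a dynamic version, a fresh dot pattern is generated on every frame while the square's disparity is held constant, so the square remains defined only by binocular disparity and monocular cues cannot help.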

VISUAL PERCEPTION

What are they saying about me?

Anderson, E., Siegel, E. H., Bliss-Moreau, E., & Barrett, L. F. (2011). The visual impact of gossip. Science, 332(6036), 1446–1448.

When we hear gossip about our neighbors, acquaintances, or co-workers, we glean social and affective information indirectly, without necessarily having to experience the good, bad, or ugly behavior of others. This kind of affective learning lets us know whom we might want to avoid and whom we might want as a friend. But does this kind of gossip change the way we actually perceive our social world? Anderson et al. examined whether gossip, or affective information about a stranger, influences the visual perception and detection of faces. A series of structurally neutral faces were presented to participants paired with “gossip” consisting of positive (e.g., “helped an elderly woman with her groceries”), neutral (e.g., “passed a man on the street”), or negative (e.g., “threw a chair at his classmate”) statements. Participants then engaged in a binocular rivalry task. In one eye, they saw either images of the neutral faces that had previously been paired with the affective statements or images of novel control faces. In the other eye, they saw images of houses. Binocular rivalry occurs when different images are presented to the two eyes: initially one image is consciously seen, then the other, and awareness alternates between the two percepts. Anderson et al. asked participants to press one key when they experienced the face percept and another when they experienced the house percept. They found that the nature of the previously presented gossip affected how long participants reported awareness of the face images relative to the house images. Faces that had been paired with negative affective information dominated in visual awareness relative to faces that had been paired with positive or neutral statements and relative to the control faces. The findings show that negative affective information about others influences vision in a top-down manner. Gossip that suggests a stranger is mean or dishonest may change what enters our visual awareness, perhaps allowing us to more easily avoid contact with the villains in our midst. –L.C.N.
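The dependent measure in such a rivalry task is how long each percept dominates awareness. A minimal sketch of how dominance time might be tallied from a key-press log (the event format and numbers below are hypothetical illustrations, not the authors' analysis code):

```python
from collections import defaultdict

# Hypothetical key-press log for one trial: "face_down"/"face_up" bracket
# periods during which the observer reported seeing the face percept;
# "house_down"/"house_up" do the same for the house percept. Times in seconds.
events = [
    (0.4, "house_down"), (2.1, "house_up"),
    (2.3, "face_down"),  (5.0, "face_up"),
    (5.4, "house_down"), (6.2, "house_up"),
]

def dominance_durations(events):
    """Total reported dominance time (in seconds) for each percept."""
    totals = defaultdict(float)
    pressed_at = {}
    for t, event in events:
        percept, action = event.rsplit("_", 1)
        if action == "down":
            pressed_at[percept] = t
        else:
            totals[percept] += t - pressed_at.pop(percept)
    return dict(totals)

print(dominance_durations(events))  # roughly {'house': 2.5, 'face': 2.7}
```

Comparing the face totals across faces previously paired with negative, positive, and neutral statements (and the novel control faces) is what reveals the dominance advantage for negatively paired faces.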

ATTENTIONAL BLINK

The two sides of noise addition

Martin, E. W., Enns, J. T., & Shapiro, K. L. (2011). Turning the attentional blink on and off: Opposing effects of spatial and temporal noise. Psychonomic Bulletin & Review, 18, 295–301.

The attentional blink refers to the finding that observers often fail to detect the second of two targets when the targets are embedded within a rapid stream of stimuli. It is typically assumed that the attentional blink reflects the depletion of capacity-limited attentional resources by the processing of the first target, but many different accounts have been offered to explain this phenomenon. Recently, several studies have demonstrated that performing another task besides target detection can alleviate the attentional blink (e.g., Olivers & Nieuwenhuis, 2006). This includes performing an additional memory task, listening to a repetitive tune, or simply thinking about one’s holiday or the shopping requirements for a meal with friends. Such attentional blink reductions have been explained within the context of the ‘overinvestment hypothesis’. According to this hypothesis, the attentional blink is due to an overinvestment of attentional resources in the stream of stimuli, resulting in the processing of non-relevant distracting items; the requirement to perform an additional task prevents this overinvestment. In this paper, Martin et al. offer a different procedure for modulating the attentional blink—the addition of noise to the stream of stimuli. Most intriguingly, they demonstrate that the addition of noise can either increase or decrease the attentional blink, depending on the relevance of the noise dimension to target processing. Specifically, they added to the RSVP stream either temporal noise or spatial noise. The temporal noise involved introducing variability into the temporal rhythm of the RSVP stream; such temporal variability was assumed not to be relevant to the task at hand—detection of two white letters embedded in a stream of black letters. The spatial noise involved introducing variability in the size of the letters; this spatial variability was assumed to be relevant for target processing. They found that the attentional blink was diminished by the temporal noise but increased by the spatial noise. Prior accounts of the attentional blink cannot explain these opposite effects of noise addition, but Martin et al. offer two neurally inspired explanations for their findings. One of these accounts invokes stochastic resonance theory, which suggests that, given the right circumstances, neural noise can actually increase the detection of a weak signal. On this view, the temporal noise meets the required conditions for signal enhancement, but the spatial noise does not. –Y.Y.
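The stochastic resonance principle itself is easy to demonstrate in a toy simulation (a sketch of the general idea, not of the model in the paper; the signal, threshold, and noise levels are arbitrary assumptions): a subthreshold signal is transmitted best by a simple threshold detector at an intermediate noise level, while too little or too much noise degrades transmission.

```python
import numpy as np

rng = np.random.default_rng(0)

def signal_transmission(noise_sd, signal_amp=0.8, threshold=1.0, n_samples=20000):
    """Correlation between a subthreshold sine wave and the binary output of a
    threshold detector after Gaussian noise is added to the input."""
    t = np.linspace(0, 40 * np.pi, n_samples)
    signal = signal_amp * np.sin(t)          # on its own, never reaches the threshold
    output = (signal + rng.normal(0.0, noise_sd, n_samples)) > threshold
    if output.all() or not output.any():     # constant output carries no signal
        return 0.0
    return float(np.corrcoef(signal, output.astype(float))[0, 1])

# Transmission peaks at a moderate noise level: the signature of stochastic resonance.
for sd in (0.05, 0.2, 0.5, 1.0, 3.0):
    print(f"noise sd = {sd:4.2f} -> signal/output correlation = {signal_transmission(sd):.3f}")
```

By analogy, the temporal noise in Martin et al.'s streams is argued to meet the conditions for such enhancement, whereas the spatial noise, which is relevant to target processing, does not.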

ATTENTIONAL CAPTURE

Reward Influences Attentional Capture

Anderson, B. A., Laurent, P. A., & Yantis, S. (2011). Value-driven attentional capture. Proceedings of the National Academy of Sciences, 108(25), 10367–10371.

Many recent studies have examined how reinforcement history or reward influences different aspects of attention, including conflict resolution in the Stroop task (Krebs et al., 2010, Cognition, 117, 341) and the magnitude of the attentional blink (Raymond & O’Brien, 2009, Psychological Science, 20, 981). Against this backdrop, Anderson and colleagues report important results demonstrating that perceptual-level attention is also affected by monetary reward (also see Hickey et al., 2011, Visual Cognition, 19, 117). Specifically, attentional capture by an irrelevant distractor object is affected by the distractor’s reward history.

In an initial training phase, participants performed a visual search for a red or green target circle that appeared among heterogeneously colored distractor circles. The target circle contained a horizontal or vertical line, and participants made a speeded discrimination response to the line. One of the target circles was a high-reward target and the other was a low-reward target. Correct responses to the high-reward target (e.g., the red circle) were followed by a higher monetary reward ($0.05) on 80% of trials and a lower monetary reward ($0.01) on 20% of trials. Correct responses to the low-reward target (e.g., the green circle) were followed by the reverse, that is, a lower monetary reward on 80% of trials and a higher monetary reward on 20% of trials. In the initial experiment, participants performed 1008 training trials with these reward schedules.
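As a rough sketch of these training-phase contingencies (the colors, amounts, and probabilities come from the summary above; the trial loop and the red = high-reward assignment are illustrative assumptions):

```python
import random

random.seed(1)

# Reward schedules for correct responses, as described above: the high-reward
# color usually pays $0.05 (80% of trials) and occasionally $0.01; the
# low-reward color has the reverse schedule. Which color is high-reward is
# assumed here for illustration.
REWARD_SCHEDULE = {
    "red":   {0.05: 0.8, 0.01: 0.2},   # high-reward target color (assumed)
    "green": {0.05: 0.2, 0.01: 0.8},   # low-reward target color
}

def reward_for_correct_response(target_color):
    """Draw the monetary reward that follows a correct response."""
    amounts, probs = zip(*REWARD_SCHEDULE[target_color].items())
    return random.choices(amounts, weights=probs, k=1)[0]

# Simulate 1008 training trials (assuming every response is correct),
# with the target color chosen at random on each trial.
earnings = {"red": 0.0, "green": 0.0}
for _ in range(1008):
    color = random.choice(["red", "green"])
    earnings[color] += reward_for_correct_response(color)

print(earnings)  # the high-reward color accumulates noticeably more money
```

Over the course of training, the two colors thus come to differ only in the reward they have signaled, which is what allows the later test phase to isolate the effect of reward history on attentional capture.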

Following the training trials, participants performed a series of test trials that involved visual search for a new target: a shape singleton that was either a colored diamond appearing among heterogeneously colored circle distractors or a colored circle appearing among heterogeneously colored diamond distractors. On half of the trials, one of the distractors was colored red or green—one of the previously rewarded colors. None of the test trials were rewarded.

Anderson et al.’s central question was whether a previously rewarded color would capture attention when it was no longer a target. If so, then response times would be longer when a rewarded color appeared on a test trial than when neither rewarded color was present. Anderson et al. found exactly that. But they also found that capture depended on the reward schedule: high-reward colors captured attention more potently than low-reward colors; responses were slower when a high-reward color appeared as a distractor than when a low-reward color appeared as a distractor.

Additional experiments demonstrated that training on as few as 240 trials could produce the same capture effect, provided that the rewards were larger overall ($0.10 and $0.02 for high- and low-reward targets, respectively). This learned capture effect also persisted over a delay of several days: One group of participants performed a set of training and test trials and then performed another set of test trials 4–21 days later, without additional training. These participants continued to exhibit learned capture after the delay.

The emerging findings on reward and attention are important in suggesting that the outcomes of attending to objects in the environment shape future deployments of attention. –S.P.V.

ATTENTIONAL SELECTION

Resolving How with When

de Vries, J. P., Hooge, I. T. C., Wiering, M. A., & Verstraten, F. A. J. (2011). How longer saccade latencies lead to a competition for salience. Psychological Science. Published online before print May 31, 2011. doi: 10.1177/095679761141057

Models of attentional selection assume that two components contribute to the computation of salience: a bottom-up, or stimulus-driven, component and a top-down, or goal-driven, component. Recent evidence has called this type of model into question by showing that a salient stimulus, defined by both top-down and bottom-up components, can be selected for action more accurately when it is responded to quickly than when it is responded to slowly. Specifically, eye movements towards a salient object will only deviate towards a less-salient object in the display when saccade latencies are longer (Donk & van Zoest, 2008). Accumulator accounts of salience would predict the opposite: information accrued about the two objects should initially be noisy, and top-down and bottom-up influences should then increase salience, driving action more reliably over time towards the high-salience item. How could a low-salience item be selected later if evidence only ever mounts in favor of the high-salience item? De Vries and colleagues recently offered a resolution of this issue by considering the time course of visual processing of objects with different salience. They suggest that low-salience items may not be available as input to the eye-movement system early in processing, because low-salience items are processed more slowly. Short-latency saccades may be too fast to allow for the effects of the low-salience objects, because only the high-salience object may have been sufficiently processed by the time the saccade is executed. In two experiments, de Vries and colleagues showed that visual attributes that do not differ in salience but do differ in processing speed affect saccades at different latencies. Participants were asked to look at either one of two tilted rectangles in a field of vertical rectangles. Short-latency saccades were almost always directed towards the target defined by the faster feature, low spatial frequency (Exp. 1) or high contrast (Exp. 2), while longer-latency saccades were less biased. In their discussion, the authors argue that previous work manipulating the salience of objects also inadvertently manipulated the speed of processing, and that reconsidering this evidence in terms of processing time resolves the conflict over models of selection. Top-down and bottom-up components contribute to the salience of objects in step with their processing speed, given the structure of the visual system: evidence for low-salience objects accrues more slowly and so affects only the long-latency saccades. This paper represents another fine example of how attention and perception must be integrated for a better understanding of behavior. –A.E.S.

CONSCIOUSNESS

Understanding invisible scenes

Mudrik, L., Breska, A., Lamy, D., & Deouell, L. Y. (2011). Integration without awareness. Psychological Science, 22(6), 764–770.

What is consciousness for? This question has some of the same quality as the question “What makes us uniquely human?” In the latter case, someone will propose that humans are unique in their use of tools, their mastery of language, or some other capability, and then someone else will come along with a clever experiment showing that a chimp or a dolphin or a pigeon can master aspects of that ability. In the case of consciousness, one writer will propose that This Task cannot be accomplished without conscious awareness, and then someone else will show evidence that the task was executed without conscious awareness of the relevant stimulus. A currently popular methodology makes use of continuous flash suppression (CFS). If one eye is presented with a patchwork (or “Mondrian”) of colored rectangles that is continuously changing at about 10 Hz, then a stimulus presented to the other eye will tend to be suppressed from conscious awareness for an extended period of time. Eventually, the suppressed stimulus will break through. If you have two stimuli and one breaks suppression more quickly than the other, it follows that some processing that occurred while the images were unseen differentiated between the two images. In a new paper in Psychological Science, Mudrik et al. compared the suppression-breaking time for scenes that were semantically congruent (an athlete shooting a basketball) or incongruent (replace that basketball with a watermelon). They were careful to equate low-level stimulus salience, since it would be boring to find that a high-contrast watermelon breaks through before a low-contrast basketball. They found that the incongruent scenes broke through about 160 msec faster than the congruent scenes. This suggests that some aspects of the semantics of the scene were available before the scene was visible, which would seem to falsify the hypothesis that we need conscious awareness to figure out how objects in scenes relate to each other. Rather like the human-uniqueness experiments, it is important not to over-interpret this result. Sure, some animals can perform some linguistic tasks, but no chimp or dolphin will ever read the paragraph that you are reading now. Similarly, it seems that, without conscious awareness, your visual system can detect the oddity of someone trying to put a watermelon through a hoop. Still, you probably wouldn’t want to do art criticism on paintings rendered invisible by flash suppression—but maybe I have just set up the next clever experiment. –J.W.

ANATOMY

Size matters

Schwarzkopf, D. S., Song, C., & Rees, G. (2011). The surface area of human V1 predicts the subjective experience of object size. Nature Neuroscience, 14, 28–30.

Possibly even more impressive than its confirmation is the intuition that must have been necessary for Schwarzkopf, Song, and Rees to formulate their hypothesis: The larger the striate cortex, the smaller the Ebbinghaus illusion. The dot from which their hypothesis emerged was the commonly held assumption that lateral inhibition is ultimately responsible for all sensory exaggerations of feature contrast. The Ebbinghaus illusion is merely one example of this, in which the contrast being exaggerated is one of size. (Circles look bigger when surrounded by small circles than they do when surrounded by big circles.) The next dot was a report by Murray et al. (2006, Nature Neuroscience, 9, 429) that perceived size was correlated with activity in V1. To formulate their hypothesis, Schwarzkopf and colleagues had to connect these dots with the knowledge that lateral connections in V1 are relatively invariant in length, even across species.

Compared with the length of V1 connections, the sizes of both the Ebbinghaus illusion and striate cortex (i.e., V1) vary massively across individuals. (Hemispheric differences in the size of V1 can even exceed a factor of 2 within individuals, according to Duncan & Boynton, 2003, Neuron, 38, 659.) Schwarzkopf et al. confirmed their hypothesis using fMRI and psychophysics to estimate the sizes of V1 and the illusion, respectively, in 30 subjects. The correlation was significant, as was the correlation between V1 size and another size illusion: the Ponzo. (Interestingly, there was no significant correlation between the Ebbinghaus and Ponzo illusions, suggesting different mechanisms for these within V1.)

Schwarzkopf et al. now need to see whether the exaggerations of any other visual features correlate with V1 size. One or two may, but Bosten and Mollon’s (2010, Vision Research, 50, 1656) failure to find correlations between the amounts by which various features are exaggerated in an individual suggests that many cannot. Also hard to reconcile with the findings of Schwarzkopf et al. are the arguments of Milner and Dyde (2003, Trends in Cognitive Sciences, 7, 10) against V1 mediating the Ebbinghaus illusion. Certainly these arguments and experiments deserve close scrutiny and replication, but other imaginative researchers need not wait. All they need to do is wonder…for what else does size matter? –J.A.S.

FACE PERCEPTION

The eyes have it

Peshek, D., Semmaknejad, N., Hoffman, D., & Foley, P. (2011). Preliminary evidence that the limbal ring influences facial attractiveness. Evolutionary Psychology, 9(2), 137–146.

What makes some faces more beautiful than others? A plausible hypothesis proposes that we are genetically wired to find beautiful those faces whose features signal youth, health, and other qualities desirable in a mate. A subtle and interesting candidate facial feature in this connection is the limbal ring of the eye (which is very pronounced in the famous portrait of an Afghan girl that appeared in the June 1985 issue of National Geographic—easy to pull up on the web). The limbal ring is a dark annulus where the iris meets the sclera. The limbal ring (1) tends to grow thinner and paler with age, (2) can be weakened or expunged by various diseases that reduce the transparency of the peripheral cornea, and (3) becomes less visible due to glaucoma, arcus senilis, and other conditions related to aging. Thus a strongly articulated limbal ring is a sign of youth and health. The question is: does this rather subtle feature (if you don’t know what you’re looking for, it is very hard to detect a difference between two faces that differ only in the darkness of their limbal rings) control people’s judgments of facial attractiveness? To investigate this question, Peshek, Semmaknejad, Hoffman, and Foley (2011) had observers compare images of faces that were identical except for their limbal rings. (They also included a control condition in which the facial images being compared were identical except that the irises of one of the test faces were larger than those of the other face.) Participants moved a slider in the direction of the more attractive of the two faces on a given trial. They found that, for both male and female observers, strongly articulated limbal rings significantly increased the attractiveness of both male and female faces. This effect also held for upside-down faces (although not as strongly as for upright faces). The conclusion: natural selection has sculpted us to be sensitive to the limbal ring as a sign of reproductive fitness. –C.C.

TEMPORAL WINDOW

Losing sense of continuity

Lalanne, L., van Assche, M., & Giersch, A. (in press). When predictive mechanisms go wrong: Disordered visual synchrony thresholds in schizophrenia. Schizophrenia Bulletin.

It is known that patients with schizophrenia have more difficulty than control participants in deciding whether two stimuli are presented simultaneously or in succession. In other words, patients with schizophrenia need larger onset asynchronies between stimuli to detect their succession. The question addressed by Lalanne and collaborators is whether this difficulty (an impaired sense of continuity) is due to a fusion of events in time, or to a segregation of events together with a deficit in coding their temporal structure. To address this issue, they used the Simon effect. Two squares were displayed simultaneously or asynchronously on a screen, either on the same side or on different sides (intra- vs. inter-hemispheric presentation). Participants responded by pressing a left or right response key. According to the authors, if patients’ impairment is due to fusion, events should be perceived as identical on both sides and there should be no Simon effect (i.e., no bias of the manual response towards the side of the stimulus). If signals are segregated below threshold, a Simon effect should occur, and whether the bias is towards the side of the first or the second stimulus will provide information about the way information is processed (with or without expectations regarding the second stimulus).

The results revealed that, irrespective of the squares’ position, the temporal window was larger in patients (thresholds were higher by about 10 ms, or 20%, compared with controls). Most important is the fact that, while controls were biased in all cases towards the side of the second square, patients were biased towards the side of the first square on asynchronous trials that were judged as synchronous. According to Lalanne et al., this group difference does not reflect a fusion of the stimuli occurring within a temporal window. Rather, it appears to be caused by a fundamental difference in the way patients and controls detect synchronies. It seems that, for patients, events remain isolated from each other, while controls can use predictive mechanisms to anticipate upcoming events. In brief, it is the patients’ impaired ability to anticipate immediately upcoming events that causes the threshold difference between groups, not an early fusion of signals within the patients’ temporal window. –S.G.