
  


Talks: abstracts

"Prism-altered distance: A neglected tool to investigate adaptation"
A-E Priot, P Neveu, B Priot, M Philippe, J Plantier, C Prablanc, C Roumes
While the mechanisms of short-term adaptation to prism-altered apparent visual direction have been widely investigated, the processes underlying adaptation to prism-altered perceived distance are less well known. This study used base-out prisms to alter perceived distance by modifying vergence demand. By manipulating visual feedback of the exposed limb, we were able to demonstrate three adaptation levels: 1) changes in perceived distance related to increased tonic vergence induced by sustained convergence effort (eye muscle potentiation); 2) a recalibration of the mapping between the vergence signal and perceived distance; 3) a reorganization of limb motor commands. No limb proprioceptive component was identified. The nature of the adaptive components to apparent distance alteration differs from that classically described for visual direction alteration, which involves essentially proprioceptive and motor components. This difference can be attributed to differences in accuracy between proprioception and vision for localization in depth or in lateral directions. Such findings are in agreement with recent fMRI and clinical studies (optic ataxia) suggesting that the control of movement in the sagittal plane differs from that of movement in the fronto-parallel plane. In addition, these findings indicate that "visual" adaptation actually involves a multiplicity of processes.

"The induced effect revisited: discounting the effects of head orientation"
B Rogers, B Gillam
Mayhew and Longuet-Higgins (1982) showed that vertical disparities provide information about the angle of eccentric gaze, thereby providing an explanation of Ogle's induced effect. Alternatively, tolerance of binocular vertical size differences can be seen as evidence that the visual system is able to discount the effect of eccentric head position with respect to gaze direction. We investigated how good we are at discounting vertical size differences resulting from eccentric head position under static and dynamic conditions. In the static case, the observer's head was oriented at eccentric angles up to ±40° with respect to gaze direction, which was always straight ahead and normal to the projection screen at 57 cm, thereby creating natural vertical size differences. The observer's task was to adjust the horizontal disparity field until the surface appeared to be gaze-normal. In the dynamic case, the observer rotated his/her head to-and-fro around a vertical axis, coupled to an adjustable horizontal disparity field. Observers judged when the surface did not appear to rotate during head rotation. Precision was < ±1° and constant errors were <10% of head eccentricity under both static and dynamic conditions, showing that we are very good at discounting the effects of head position when judging surface orientation.

"Empirical horopter explained by the statistics of disparity patterns in natural space"
A Canessa, M Chessa, A Gibaldi, F Solari, S P Sabatini
The horopter, the locus of points in 3D space whose projections fall on geometrically corresponding points in the two retinas, usually consists of two parts: the Vieth-Müller circle and the vertical horopter. Psychophysical experiments have shown that the patterns of empirical corresponding points differ systematically from the geometrical ones. The Hering-Hillebrand deviation makes the horizontal horopter less concave than the Vieth-Müller circle at near distances and more convex at far distances. The shear deviation causes the vertical horopter to incline top-away in the median plane (Schreiber et al., 2008, JOV, 8(3):1-20). Exploiting a high-precision 3D laser scanner, we constructed VRML models of the peripersonal workspace by combining a large number of scanned natural objects. Binocular active fixations were simulated to derive the statistics of the disparity patterns of natural scenes in the peripersonal space. Using the mean disparity patterns as a pattern of empirical corresponding points, it is possible to derive a "minimum-disparity" horopter as the 3D surface whose projections have the minimum disparity angle with respect to the pairs of empirical corresponding points. The resulting optimal surface is tilted top-away at a distance of 104 cm, is less concave than the Vieth-Müller circle, and its concavity decreases with fixation distance, in agreement with the experimental observations.
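The defining property of the geometric horopter can be checked numerically. The sketch below is a minimal illustration, not the authors' simulation pipeline; the interocular distance, symmetric straight-ahead fixation, and the sign convention for disparity are illustrative assumptions. Horizontal disparity is computed as the difference between the vergence angle a point subtends and the vergence angle of the fixation point; by the inscribed-angle theorem this difference vanishes for points on the circle through the fixation point and the two eyes' nodal points.

```python
import math

IPD = 0.065  # assumed interocular distance, metres

def vergence_angle(x, z, ipd=IPD):
    """Angle (rad) subtended at point (x, z) by the eye nodal points at (+/- ipd/2, 0)."""
    return math.atan2(x + ipd / 2, z) - math.atan2(x - ipd / 2, z)

def horizontal_disparity(x, z, fixation_z, ipd=IPD):
    """Disparity of (x, z) relative to a fixation point straight ahead at fixation_z."""
    return vergence_angle(x, z, ipd) - vergence_angle(0.0, fixation_z, ipd)

# Vieth-Mueller circle through both eyes and the fixation point (0, d):
d = 0.5                                   # fixation distance, metres
c = (d ** 2 - (IPD / 2) ** 2) / (2 * d)   # circle centre lies at (0, c)
r = d - c                                 # radius (centre-to-fixation distance)

# Points on the forward arc of this circle have (numerically) zero disparity.
for t in (-0.5, -0.2, 0.2, 0.5):
    x, z = r * math.sin(t), c + r * math.cos(t)
    assert abs(horizontal_disparity(x, z, d)) < 1e-9

# Points off the circle do not: nearer points give crossed (here, positive) disparity.
assert horizontal_disparity(0.0, 0.4, d) > 0
assert horizontal_disparity(0.0, 0.7, d) < 0
```

The empirical horopter described in the abstract deviates from this zero-disparity surface; the point of the sketch is only the geometric baseline against which those deviations are measured.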

"Practicing to perceive causes cue recruitment"
B T Backus, A Jain, S Fuller
The visual system can learn, or recruit, new visual cues so that they control appearance. For example, training can cause a Necker cube's apparent rotation direction, about a vertical axis, to be controlled by its translational motion up or down. However, cue recruitment was not observed for three "extrinsic" cues: auditory and visual signals that were conveyed by sound or image elements not belonging to the object itself [Jain, Fuller, and Backus, PLoS ONE, 5(10): e13295, 1-7]. We asked whether learning would occur for such cues when training stimuli were difficult rather than easy for the visual system to disambiguate, because "practicing to perceive" under ambiguous conditions has been shown in other recent studies to be effective in increasing the rate of cue recruitment. Cubes rotating in 3D were surrounded by an annular random-dot field that rotated either clockwise or counter-clockwise in the 2D plane of the display. At training trial onset, binocular disparity was present for 150 ms to control perceived rotation for the entire trial (duration 1.5 s). Test trials contained the rotating field of dots but not disparity. Under these conditions, the random-dot annulus was recruited, which supports the hypothesis that "difficult" stimuli cause greater cue recruitment.

"Pictorial depth perception due to the alignment of (a minimum of) three visual cues: Linear perspective and cast shadows"
N Cook
The rules for producing the illusion of 3D depth in 2D paintings were formulated by Renaissance artists in the early 15th Century. Psychologists have since identified the relevant visual cues (relative size, elevation, occlusion, linear perspective, shadows, shading, texture gradients, etc.), but have not yet been able to clarify why only Homo sapiens has a robust sense of pictorial depth perception. We have employed the "reverse perspective" illusion of Patrick Hughes to quantify the effects of pictorial depth cues in producing the illusion and have found a remarkable difference in the effects of the alignment of two versus three visual cues (objects). When two-body occlusion is not present, the depth relationship between two cues (objects) is inherently ambiguous, leading to a weak illusion of 3D depth in both flat and "reverse perspective" paintings, whereas the illusion of depth with the alignment of three (or more) non-occluding objects is unambiguous. We conclude that the Renaissance inventions of linear perspective and cast shadows were, in essence, the clarification of the geometry of the three visual cues needed to produce the illusion of 3D depth in a 2D picture (Cook, 2011, Harmony, Perspective and Triadic Cognition, New York, Cambridge University Press).

"Facilitation for distributed detection of 3D patches in the embedding context of a slanted surface"
C Tyler, S Nicholas
Perception of 3D surfaces requires integrating local cues into the perceptual construct of a surface in depth. We studied the integration of surfaces with dual target regions defined by binocular disparity alone in 3D dynamic noise. Target patches consisting of two disk regions were each presented with 50% probability (independently). The stereoscopic targets were embedded either in a stereoscopic plane of the same 3D slant or in a 'snowstorm' of binocularly uncorrelated noise dots. The task was performed for a range of disk presentation durations from 0.05 to 1 sec to determine the perceptual decision times for 75% correct performance. Decision times were ~3 times longer with uncorrelated than with correlated surrounds. 3D slant around the axis through the patches had no significant effect on perceptual decision time (relative to that for the flat planes), but orthogonal slants approximately doubled the decision times relative to the other slant conditions. The different forms of psychometric function observed for each target combination (none, one, or both patches) were well predicted by a novel form of neural decision model, with the response probability limited by asymptotic leaky diffusion noise as defined by the level of binocular correlation within a stereospecific integration field.

"Disparity domain interactions: Evidence from EEG source-imaging"
B Cottereau, S McKee, A Norcia
Using high-density EEG imaging, we measured in 14 subjects the population response to disparity modulation of dynamic random dots in five visual ROIs: V1, V3A, hMT+, V4 and LOC. The stimulus was a central modulating disk surrounded either by uncorrelated disparity noise or by a correlated 'reference' surround presented at a constant disparity. The responses in area V1 were not consistently affected by the characteristics of the surround, which indicated that this ROI is primarily responsive to absolute disparity. The correlated surround significantly increased the response amplitudes in all four extra-striate areas over those measured with the uncorrelated surround. Thus, these extra-striate areas contain some neurons that are sensitive to disparity differences. However, the response in area V3A to a fixed disparity difference (a constant relative disparity) was affected by the absolute disparity of the surround, meaning that the distance from the fixation plane altered the response to relative disparity. We modelled the V3A response as due to an increased gain in its absolute disparity response promoted by the presence of the surround; this model predicted the change in response as the disparity of the surround was systematically varied.

"Transparency in stereopsis: processing multiple planes in parallel"
A Reeves, D Lynch
In classic Julesz random-dot stereograms (RDS), depth planes are laterally segregated, but previous work [Tsirlin et al., 2008, Journal of Vision, 8, 1-10] intermingled different disparities to create transparent, overlaid depth planes. With unlimited time to scan, their observers distinguished up to 6 such depth planes. With 400 ms presentations, too brief for scanning, and after much training, our observers can also identify 2 to 6 planes (errors occur above 6 planes) and can distinguish 5 from 6 depth planes in 2AFC. (Disparities for planes 2-8 were 12, 37, 65, 99, 136, 176, and 216 arcmin. Displays contained 8 rows, 12 elements/row; elements were 11 x 15 arcmin crosses.) Opposite contrasts in the two eyes were processed as well as same contrasts. Six planes could still be resolved in 400 ms with 3-10 deg annular displays. These data imply parallel processing of multiple independently-labelled, narrowly-tuned, contrast-independent, spatially-overlapping disparity channels.

"Testing the biological significance of colour constancy in an agent based model with bees foraging from flowers under varied illumination"
S Faruq, L Chittka, P McOwan
The perceived colour of an object depends on its spectral reflectance and on the spectral composition of the illuminant. When the illumination changes, the light reflected from the object also changes, producing a different colour sensation unless a colour constancy mechanism is in place, that is, the ability to form consistent representations of colours across various illuminants. We explore various colour constancy mechanisms in an agent-based model of foraging bees selecting flower colour based on reward. The simulation is based on empirically determined spatial distributions of several flower species, their rewards and spectral reflectance properties. Simulated foraging bees memorise the colours of flowers experienced as most rewarding, and their task is to discriminate against other flower colours with lower rewards, even in the face of changing illumination conditions. We compared the performance of von Kries, White Patch and Gray World constancy models with (hypothetical) bees with perfect colour constancy, and with colour-blind bees. While each individual model generated moderate improvements over a colour-blind bee, the most powerful recovery of reflectance in the face of changing illumination was generated by a combination of von Kries photoreceptor adaptation and a White Patch calibration. None of the mechanisms generated perfect colour constancy.
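The two best-performing mechanisms in the study are simple enough to sketch. The code below is a hedged illustration, not the authors' bee model: receptor responses are represented as plain arrays, and the scene and illuminant values are made up. Von Kries adaptation rescales each receptor channel by that channel's response to the illuminant; White Patch calibration rescales each channel by the largest response found in the scene.

```python
import numpy as np

def von_kries(responses, illuminant_response):
    # Rescale each receptor channel by the channel's response to the illuminant,
    # so a surface's corrected response approximates its reflectance.
    return responses / illuminant_response

def white_patch(scene_responses):
    # Rescale each channel by its maximum across the scene, treating the
    # brightest patch in each channel as a white reference.
    return scene_responses / scene_responses.max(axis=0)

# Toy scene: rows are surfaces, columns are receptor channels
# (e.g. UV, blue, green for a bee); all numbers are invented.
reflectance = np.array([[0.2, 0.5, 0.9],
                        [0.8, 0.3, 0.4]])
illuminant = np.array([1.2, 0.8, 1.5])   # per-channel illuminant strength
responses = reflectance * illuminant     # receptor catches under this illuminant

# von Kries with the true illuminant recovers reflectance exactly; White Patch
# yields illuminant-independent values up to a per-channel scale factor.
assert np.allclose(von_kries(responses, illuminant), reflectance)
assert np.allclose(white_patch(responses), white_patch(reflectance))
```

In practice neither mechanism has access to the true illuminant, which is why, as the abstract reports, each alone gives only partial constancy and the combination performs best.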

"The interaction between luminance and chromatic signals can improve reading performance"
C Ripamonti, A Aujla
Reading accuracy, speed and comprehension depend crucially on luminance and chromatic contrast, with no sign of additive interaction between the two types of contrast [Legge et al, 1990, JOSA, 7(10), 2002-2010; Knoblauch et al, 1991, JOSA, 8(2), 428-439; Chase et al, 2003, VR, 43(10), 1211-1222]. In this study, we re-examined such interactions using near-threshold luminance and chromatic signals. Observers were instructed to recognise one of three target words presented at a random location in a paragraph. Reading performance was measured as accuracy (proportion of correct responses) and rate (normalised reaction times to respond). Text (T) and Background (B) colours were selected either along the same cardinal axes (isochromatic or isoluminant) or along orthogonal axes of the DKL colour space, whose centre corresponded to the mid-grey surround. In Experiment 1, we measured luminance and chromatic contrast thresholds of a series of isochromatic or isoluminant T/B pairs, and found the lowest threshold and fastest rate when both T and B were along the chromatic L/M axis. In Experiment 2, we tested whether performance improved when luminance and chromatic contrast were added together. We found significantly lower thresholds and faster rates, consistent with a positive interaction between luminance and chromatic signals near threshold.

"Correlated variation in temporal and spatial measures that index the relative sensitivities of long- and middle-wavelength cones"
M Danilova, J D Mollon
Within the normal population, observers vary in the relative numbers of long- (L) and middle-wave (M) cones. Does this variation reveal itself in temporal and spatial tasks? Our temporal measure was counterphase modulation photometry, in which the observer finds the balance of red and green counterphased modulations that minimizes flicker (Estévez et al, 1983, Amer. J. Optom. Physiol. Opt.). We relate this measure to spatial measures of sensitivity for targets that isolate L- and M-cones. These targets were Landolt Cs, appearing briefly (100 ms) left or right of fixation at an eccentricity of 5 deg. In Experiment 1, we varied cone contrast for targets of different size, so as to derive a contrast-sensitivity function for correct report of orientation. In Experiment 2, we varied target size at a fixed cone contrast, so as to establish the minimum size at which the observer could correctly report orientation. For ten observers in Experiment 1 we found Spearman's r=0.588 (p=0.04) between settings on the flicker test and the ratio of acuity for L and M cones, estimated from the contrast sensitivity. For twenty observers in Experiment 2, we found Spearman's r=0.816 (p<0.001) between the temporal and spatial measures.

"S-cone signals contribute to colour appearance nonmonotonically"
S Oh, K Sakata
How do the signals from the three cone classes encode hue? The prevailing view has been that each cone type contributes monotonically to hue sensation on each spectrally opponent dimension. However, psychophysical evidence indicates that the output of a given cone class should not be strictly equated with a fixed hue sensation. Yellow/blue equilibrium judgments revealed that the S-cone signal can change polarity nonmonotonically with respect to the excitation levels of the L- and M-cones to maintain the non-yellow/non-blue percept (Knoblauch & Shevell, 2001, Visual Neuroscience, 18, 901-906). The dependence of the S-cone luminance input on L- and M-cone excitation has also been reported (Ripamonti et al., 2009, Journal of Vision, 9(3):10, 1-16). In this study, we examined the contribution of the S-cone signal to colour appearance. Observers specified the perceived blueness of tritanopic colour stimuli in which only the S-cone responses varied. To evaluate blueness, a 2AFC method, instead of the hue cancellation paradigm, was adopted to determine the psychological scale value for each stimulus. The results showed nonmonotonic behaviour of the S-cone signal: increasing S-cone excitation could either increase or decrease the blue sensation.

"Colour perception across the visual field: no mastery of sensorimotor contingencies"
A Bompas, G Powell, P Sumner
With the growing emphasis on the roles of learning, plasticity, calibration, Bayesian inference and sensorimotor contingencies in vision, many of us take for granted that distortions introduced by the eye should not be expected to affect the world we perceive. For example, light reaching the centre of our eyes differs in spectrum from light falling 2 degrees away, but in everyday life uniformly coloured surfaces do not seem to vary gradually in colour with eccentricity, nor do objects seem to change colour whether we look directly at them or not. Indeed, this very example explicitly inspired O'Regan and Noë's influential "sensorimotor account of vision and visual consciousness". Alas, this prediction is compromised by (overlooked) evidence that such spectral variation does affect colour perception when measured under controlled laboratory conditions. We replicated this finding in settings closer to natural experience, 1) with eye movements and 2) with real surfaces, and we modelled cone responses to the spectra of our stimuli. We show that, despite centre-periphery variations being very systematic for everyday surfaces, perceptual judgements reflect plain receptoral adaptation but no learning. We conclude that there is no mastery of sensorimotor contingencies here, despite years of consistent exposure.

"Colour categorisation in Mandarin-English speakers: Evidence for a left visual field (LVF) advantage"
S Wuerger, K Xiao, Q Huang, D Mylonas, G Paramei
Colour categorisation is faster when colour stimuli straddle a colour boundary. Recently, a temporal advantage has been shown for stimuli presented in the right visual field (RVF), consistent with a left-hemisphere dominance for language processing [Gilbert et al., 2006, PNAS, 103]. In the present study, 40 late Mandarin-English bilinguals residing in the UK performed a colour categorisation task in L1 and L2. Colour stimuli near the blue-green boundary were presented in isolation for 160 ms, randomly in the LVF or the RVF. After stimulus presentation, the colour names 'Blue' and 'Green' were presented (in L1 or L2) at the top or bottom of the screen and observers indicated their response via a button press. We report two results: while the location of the blue-green boundary is independent of visual field and language, reaction times at the colour boundary were about 70 ms shorter in the LVF, but only in Mandarin. This LVF advantage for Mandarin bilinguals in their native language is at odds with the reported RVF advantage for English monolinguals. We speculate that Mandarin logographic characters may be analysed visuo-orthographically, in the right fusiform gyrus, as opposed to the phonological analysis of English alphabetic characters in the left fusiform gyrus.

"Neural locus of colour after-images"
Q Zaidi, R Ennis, B Lee
To measure colour afterimages, the colours of the two halves of a disk were modulated sinusoidally to different ends of a colour axis (e.g. grey>red>grey and grey>green>grey). The two halves appeared identical initially, increased in difference, decreased to no difference, then increased in opposite phase to negative afterimages: when the physical modulation returned to grey, the half modulated through red appeared green, and vice versa. The physical contrast between the two halves when they appeared identical provided a measurement of the adaptation underlying the afterimages. To locate an early neural substrate, macaque parafoveal retinal ganglion cell (RGC) responses were measured during similar modulation of uniform circular patches. The responses of parvocellular and koniocellular RGCs tracked modulation in their preferred direction, but decreased to the pre-stimulus rate 1-2 sec before the physical modulation returned to mid-grey, dipped below this level, and then recovered. Cell responses to the opposite direction showed the reverse pattern. Responses were well described by a cone-opponent subtractive adaptation with a time constant of 4-7 seconds, much slower than photoreceptor adaptation. Slow neural adaptation of the RGC population thus accounted for the afterimages: cells responding at mid-grey after the cessation of the modulation propagate an afterimage signal to subsequent stages.
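The adaptation scheme described above, a subtractive signal that tracks the stimulus with a slow time constant, can be sketched as a leaky integrator. This is an illustrative reimplementation under assumed parameters (Euler integration, arbitrary units, a single opponent channel), not the authors' fitted model:

```python
def subtractive_adaptation(stimulus, dt=0.01, tau=5.0):
    """Response = stimulus minus an adaptation signal that tracks the
    stimulus with time constant tau (seconds); dt is the step size."""
    adaptation = 0.0
    response = []
    for s in stimulus:
        response.append(s - adaptation)
        adaptation += dt * (s - adaptation) / tau  # leaky integration
    return response

# A 30 s colour step followed by a return to grey (0.0), sampled at 100 Hz.
stimulus = [1.0] * 3000 + [0.0] * 1000
out = subtractive_adaptation(stimulus)

assert out[0] == 1.0          # full response at stimulus onset
assert out[2999] < 0.05       # response fades as adaptation builds up
assert out[3000] < -0.9       # rebound below baseline: the afterimage signal
assert out[3999] > out[3000]  # ...which then decays back toward baseline
```

The rebound below the pre-stimulus rate when the input returns to grey is the signature the abstract attributes to the afterimage; with tau in the reported 4-7 s range it far outlasts photoreceptor adaptation.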

"Selective Human BOLD responses to chromatic spatial features"
E Castaldi, F Frijia, D Montanaro, M Tosetti, M C Morrone
Physiological and psychophysical evidence suggests that segmentation of an equiluminant visual scene into distinct objects could be achieved by a local energy computation, which relies on combining even- and odd-symmetric receptive field mechanisms. We measured BOLD responses to Fourier amplitude-matched edge and line stimuli, and to an amplitude-matched random noise control: all stimuli could be modulated either in luminance or in red-green (equiluminant) colour contrast. For equiluminant stimuli, alternation between congruent versus random-phase stimuli, as well as between edge and line stimuli, produced no activation of primary visual cortex, indicating the existence of phase-selective colour neurons as early as V1. Only higher hierarchical areas, both along the dorsal pathway (caudal part of the intraparietal sulcus (CIP) and V7) and along the ventral pathway (V4), responded with a preference for edge over line stimuli. The activity elicited by stimuli modulated in luminance confirmed previous results [Perna et al, 2008, Journal of Vision, 8(10):15, 1-15]. Overall the results suggest the existence of equal numbers of neurones with even and odd receptive fields in V1, for both luminance and colour stimuli, as well as a specialization along both the dorsal and the ventral pathways for the detection of edges, in both colour and luminance.

"Dead zone of visual attention revealed by change blindness"
I S Utochkin
Three experiments investigated the spatial allocation of attention during search for visual changes under flicker conditions. It was hypothesized that objects of central interest produce a 'dead zone' for the closest marginal objects, a region where attention is less likely to go and stay to create a coherent representation. This should produce pronounced change blindness in that region. In Experiment 1, observers had to detect a single change in images. Changes could occur in the central object, or in marginal objects near to or far from the centre. Observers demonstrated slower responses and higher miss and misidentification rates for the near as compared to the far condition. In Experiment 2, observers had to detect the central change first. This manipulation called additional attention to the centre. After detecting the central change, observers continued to search for a marginal change that could be either near or far. This yielded further temporal inhibition of search for near objects and had little effect on far objects. In Experiment 3, observers performed a similar double change detection, but both near and far changes were presented concurrently after the central one. Observers detected far changes more frequently than near changes. The results support the notion of a dead zone surrounding central objects and provide strong evidence that this dead zone is due to attention.

"Distractor familiarity improves tracking of unfamiliar targets"
L Oksama, Y Pinto, P Howe, T Horowitz
Multiple object tracking is easier when all of the stimuli are familiar, suggesting that visual short term memory (VSTM) plays an important role in tracking [Pinto et al., 2010, Journal of Vision, 10(10): 4; Oksama & Hyönä, 2008, Cognitive Psychology, 56(4), 237-283]. What are the respective roles of target and distractor familiarity? It seems likely that familiar targets are easier to track. However, the Theory of Visual Attention [TVA; Bundesen, 1990, Psychological Review, 97(4), 523-547] predicts that familiar distractors should impair tracking of unfamiliar targets, because familiar items compete more strongly for VSTM access. We varied target and distractor familiarity independently in three experiments. In Experiments 1 and 2, stimuli were pictures of objects, chosen randomly on each trial (unfamiliar) or repeated on every trial (familiar). In Experiment 3, familiar objects were letters, and unfamiliar objects were pseudo-graphemes. Observers tracked either 3 of 8 objects (Experiment 1) or 4 of 8 objects (Experiments 2 and 3). Familiar targets were easier to track across all experiments. Experiments 1 and 2 yielded only weak distractor familiarity effects. However, in Experiment 3, familiar distractors actually improved tracking of unfamiliar targets, disconfirming the TVA prediction. We discuss the implications for dynamic visual attention.

"Divided attention to grasp-relevant locations"
R Gilster, H Deubel
In two studies we investigated the spatial deployment of attention during manual grasping movements. In the first study, subjects grasped a vertically oriented cylindrical object with thumb and index finger. A perceptual discrimination task was used to assess visual attention either prior to the execution of the grasping movement or when no movement was initiated. Eight mask items located at the border of the cylinder changed into one discrimination target and seven distractors for 100 ms. Results showed improved discrimination at the index finger location but no change at the thumb location, while perceptual performance decreased at other, non-grasped locations. In a second study, a same-different task was used to confirm that attention is deployed in parallel to both grasp-relevant locations. Taken together, our results confirm the important role of attention in grasp preparation and provide direct evidence for divided attention in grasping with two fingers.

"Is the word superiority effect the same for attended and unattended words?"
E Gorbunova, M Falikman
The word superiority effect (WSE) refers to the increase in efficiency of letter identification within words as compared to random letter strings [Cattell, 1886, Mind, 11, 63-65]. There is still controversy over whether attention can modulate this effect. To address this issue, we directly compared the WSE under full attention and diverted attention using a hybrid of the Reicher-Wheeler paradigm [Reicher, 1969, Journal of Experimental Psychology, 81(2), 275-280] and Posner's endogenous cuing paradigm [Posner, 1980, Quarterly Journal of Experimental Psychology, 32, 3-25]. Words, pseudowords and nonwords were presented 7 deg left or right of the central fixation cross, followed by backward masks. Participants were instructed to follow the central cue (which was valid on 75% of the trials) and to report the middle letter of each string using a 2AFC procedure. A significant WSE was obtained for stimuli cued both validly (full attention) and invalidly (inattention). Attention was shown to modulate the accuracy of letter identification for all three types of letter strings. However, no significant interaction between the two factors was revealed. Interestingly, significant pseudoword superiority was observed under full attention but not under inattention, thus revealing attentional modulation in the processing of orthographically regular but meaningless strings.

"The telltale heartbeat (outlier)"
T Horowitz, E Kreindel, J M Wolfe
Consider a radiologist examining CT images of a beating heart for rhythmic abnormalities. The heart is a complex spatiotemporal object, with different points moving in different phases, potentially at different frequencies. Does organization into a single object change the way observers search for motion outliers? We designed a visual search experiment in which the "heart" was a circle of 4, 6, 8, or 16 dots oscillating in alternating phase at 1 Hz, connected by splines to form a single, pulsating object. The radial condition omitted the splines, while the grid condition randomly positioned the dots on a 4 x 4 grid. On 50% of trials a single target dot oscillated more slowly (0.25 or 0.50 Hz) or faster (1.50 or 2.00 Hz) than the distractors. In the radial and grid conditions, we obtained the usual pronounced asymmetry: slow targets yielded inefficient search (> 40 ms/item) while fast targets yielded moderately efficient search (< 20 ms/item). However, when all points were linked into a single object, the slope was approximately 20 ms/item for all frequencies. Search for a frequency outlier on an organized spatiotemporal object appears to be strikingly different from search among individual items.

"Searching for invisible objects in visible scenes"
M Vo, J Wolfe
Object search is usually guided by a combination of visual object features, the scene's spatial layout, and general knowledge about probable object locations (e.g., pots on stoves). Normally, a scene contains both object-feature and scene-structure information. We isolated the scene-structure contribution by making the objects invisible. We tracked observers' eye movements while they repeatedly searched scenes for objects that were visible in some scenes and invisible in others. The designated targets were always present. To locate an object, visible or invisible, observers fixated a candidate location and pressed a button. This continued until the target was found (max. 30 sec). We observed that for some objects (e.g. faucets) scene guidance allowed the target to be found in one or two tries. Others (e.g. a corkscrew) benefited extensively from feature guidance. In an unannounced second block, the same scenes were searched again, but now all objects were invisible to observers. Compared to the initial searches, we found search benefits for previously invisible objects, indicating that observers incidentally stored the locations of invisible search targets. Search for previously visible objects was impeded by their "disappearance", but still reflected the search performance of Block 1: easily found objects remained easily found despite their "disappearance".

"Increased pupil size distinguishes familiar from novel images"
M Naber, U Rutishauser, W Einhäuser
Pupil size has been associated with cognitive processes such as decision making, arousal, attention, emotional valence, and target selection. Here we examined whether pupil size relates to recognition memory. Observers' pupil size was recorded while sequences of images of different difficulty (man-made objects or natural scenes) were presented. First, observers were asked to remember 100 target images shown in sequence (memorization phase). After a delay and a distractor task, observers' recognition memory was tested by presenting the target images intermixed with 100 novel images (rehearsal phase). For each image, observers indicated whether the image was old (i.e., a target) or new, together with a confidence judgment (sure, less sure, guessing). We found that, independent of task difficulty, observers had larger pupils after the presentation of old images than of novel images. There was also a clear differentiation in pupillary responses between high- and low-performing observers. Furthermore, pupil size was larger for intermediate levels of confidence (less sure) than for high and low levels of confidence (sure and guessing). In conclusion, we found that pupil size reflects observers' image familiarity and task engagement.

"Pupil responses depend on task requirements irrespective of (some) stimulus attributes"
T Stemmler, M Fahle
It is well known that pupils react to luminance changes and emotional stress, while it is less well known that pupil diameter also changes at isoluminant stimulus changes. We investigated whether pupils also react to the task of pressing a button as such, and whether this reaction - if it exists - is modulated by the amount of luminance change. Grating stimuli changed both luminance and orientation at random intervals. Luminance changed by 40, 70, or 130 cd/m2, from bright to dark and back, after an interval. Subjects either observed these changes passively, without any reaction, or else pushed a button at the transition time. By subtracting the pupil response for the passive case from that for the button presses, we were able to isolate the part of the pupil response elicited by the button press. Fortunately, this difference was constant, i.e. independent of the luminance change, and pupils started to dilate even before the button presses. This result indicates a constant contribution of the task requirement to pupil responses, irrespective of stimulus luminance. Since this contribution increases the variance of pupil responses, for example during binocular rivalry, eliminating it should allow pupillary responses to be better correlated with subjective rivalry.

"Perceptual integration of luminance increments and decrements across space and time"
M Rudd
The appearance of luminance decrements depends on spatial contrast, or the luminance ratio with respect to the surround (Wallach). But for increments, such contrast effects are either weak (Bressan and Actis-Grosso, 2001, Perception, 30, 889-897; Rudd and Zemach, 2005, Journal of Vision, 5(11):5, 983-1003) or absent (Gilchrist et al, 1999, Psychological Review, 106, 795-834). This asymmetry has been interpreted in terms of lightness anchoring: the absence of contrast effects for increments implies a single highest luminance anchor (Gilchrist); weak contrast effects imply multiple anchors (Bressan). Here I argue for a different, neural, interpretation in which visual cortex computes lightness by integrating luminance steps across space and time in a strategic manner that is under the control of attention and influenced by the observer's perceptual goals (Rudd, Journal of Vision 2010, 10(14):40,1-37). Luminance 'steps' are computed at different spatial scales and orientations by V1 neural receptive fields. The increment-decrement asymmetry results from a difference in the inherent strengths of neural lightness and darkness induction signals, which in turn determines how far lightness and darkness effects 'radiate' perceptually across time and space. The theory provides a single overarching account of such diverse perceptual phenomena as simultaneous contrast and temporal light adaptation.

"fMRI of the rod scotoma: Filling-in, rod pathway projections, and how it informs plasticity"
B Barton, A Brewer
Introduction: The phenomenon of perceptual filling-in, in which we perceive visual content at a retinal region that transmits no visual information, based on the information available from nearby retinal locations, has been studied extensively at the blind spot (e.g., Ramachandran, 1992) and after inducing "artificial scotomas" (retina-stabilized adaptation) in the periphery (e.g., Ramachandran, 1991, 1993). Hubel (1997) reported that a line passing through the rod scotoma does not complete (as in other scotomas), and Hadjikhani and Tootell (2000) reported no perceptual or functional magnetic resonance imaging (fMRI) evidence of filling-in. However, filling-in and scotopic afterimages of an extended surface over the rod scotoma have recently been reported (Hubel et al, 2009). Methods: We measured angular and eccentric retinotopic organization across human visual cortex using fMRI under scotopic conditions. Retinotopic stimuli consisted of black and white, drifting radial checkerboards comprising (1) wedges and rings (3°, 7.4°, and 11° in radius) or (2) drifting bars (11° in radius). The data were analyzed using population receptive field (pRF) modeling (Dumoulin & Wandell, 2007). Results/Discussion: Here, we report new perceptual and fMRI evidence for perceptual filling-in at the rod scotoma under scotopic conditions using drifting bar, rotating wedge, and expanding ring stimuli.

"Dynamics of cross-modal memory plasticity in the human brain"
L Likova
Recently we demonstrated that human primary visual cortex (V1) plays a role in implementing Baddeley's concept of a visuo-spatial sketchpad for working memory (WM), and also showed that it can operate as an amodal spatial sketchpad (Likova, 2010). To test the amodal sketchpad idea, we eliminated not only visual input, but all levels of visual processing, by training a congenitally blind novice to draw from tactile memory. A task of tactile exploration and memorization of the image to be drawn and a control scribbling task were also included. The three experimental conditions, each of 20 sec duration, were separated by 20 sec rest intervals. fMRI (Siemens 3T scanner) was run before training, after a week of drawing training, and after a prolonged consolidation period. A fiber-optic motion-capture system recorded the drawing movements. Remarkably, the primary visual cortex (V1) in this totally blind individual, which exhibited no specificity for tactile-memory drawing before training, was massively recruited after training. The findings support the WM amodal sketchpad concept and its implementation in V1, revealing a rapid learning-based plasticity in recruiting unused visual cortex resources. Furthermore, response pattern and time-course analysis across the learning sequence provides the first specific insights into the dynamics of plastic re-assignment in V1.

"Visual sensitivity explained"
D Pelli
Since 1800, perhaps the main goal of visual science has been to explain visual sensitivity, i.e. the reciprocal of threshold contrast for object recognition (Fechner 1860; Rose 1948; Campbell & Robson 1968; Barlow 1977; Walraven, Enroth-Cugell, Hood, MacLeod, & Schnapf 1990). However, despite Fechner's excellent start, successive attempts, incorporating diverse new findings, grew ever more complicated. With no sign of convergence, reviews expressed despair, and the topic dropped out of the conversation. I am happy to report the unexpected convergence of two lines of research that together explain visual sensitivity. Sensitivity can be factored into two smaller questions: equivalent noise and efficiency (Pelli & Farell 1999). Raghavan & Pelli (to-be-submitted) have made systematic measurements of the observer's equivalent input noise, showing that it's the sum of photon noise and cortical noise. Efficiency has been a mystery. Why do we need so much contrast (relative to the ideal observer) to identify a complex object? Dubois, Poeppel, & Pelli (to-be-submitted) measure the cost of binding features and discover that most of the cost (in sensitivity) for identifying a complex object (e.g. a four-letter word) lies in binding its features. Thus, human visual sensitivity is explained by two factors: 1. the noisiness of cortical neurons and photon absorptions; 2. inefficient binding of features in object recognition.
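[Editor's sketch, not part of the abstract.] The equivalent-noise/efficiency factorization invoked here is conventionally written, following the linear-amplifier model of Pelli & Farell (1999), as:

```latex
% Threshold energy E rises linearly with the power spectral density N
% of added display noise; N_eq is the observer's equivalent input noise
E = E_0\left(1 + \frac{N}{N_{\mathrm{eq}}}\right),
\qquad
\eta^{+} = \frac{E_{\mathrm{ideal}}}{E - E_0}
```

where \(E_0\) is the threshold energy in zero noise and \(\eta^{+}\) is the high-noise efficiency relative to the ideal observer; the symbol names follow the cited paper's conventions.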

"Flanker's paradigm reveals attention gender differences in action's judgments"
C Bidet-Ildei, C Bouquet
Several studies have revealed gender differences during the observation of human movements, but the underlying basis of these differences remains unclear. Using the flanker paradigm developed by Eriksen and Eriksen [1974, Perception & Psychophysics, 16, 143-149], the present study aimed to analyze the attentional responses of males and females during judgments of running movements. Three running actions appeared simultaneously on a computer screen: the target in the centre and the flankers in the periphery. Target and flankers could be compatible (or not) with respect to both the direction of the movement and the gender of the runner. The results indicate that flanker processing differs between male and female participants. Whereas the female participants were systematically perturbed by an incompatible direction of the flankers, the male participants were only perturbed when the flankers were male. This finding provides a novel basis for explaining the gender differences observed during judgments of human action.

"Configural vs. Motion processing in audiovisual enhancement of speech"
P Jaekl, A Alsius, A Pesquita, K Munhall, S Soto-Faraco
In noisy environments, seeing a speaker's facial gestures significantly improves speech comprehension. Previous investigations are mixed regarding the type of visual processes that may underlie such multisensory improvement; in particular, the data are mixed regarding the role of configural processing versus the analysis of local motion cues about the talker's face. Using a novel approach, we compared the contribution of configural and motion cues in an audiovisual word identification task by employing a point-light talker along with auditory speech in noise. Trials with dynamic configural information, in the absence of local motion cues, were achieved with an isoluminant display. In other trials, motion processing was enabled by adding contrast information to the point-light animations. Word identification performance in dynamic audiovisual conditions was compared to a baseline in which the visual point-light display remained static. Interestingly, improvement was found in both the configural (isoluminant displays) and configural + motion (contrast-based displays) conditions. Motion-based point-light displays, however, afforded a stronger multisensory benefit than configurally based stimuli. In conclusion, although motion analysis appears to make the greatest contribution, we show that configural cues alone can enable significant improvement in understanding speech.

"Initial impressions: manipulating dynamic body stimuli changes perceived personality"
J C Thoresen, A Atkinson, Q Vuong
Personality trait attribution can underpin important social decisions and yet requires little effort from an observer; even brief exposure to static information from the face or body can generate lasting first impressions. Less is known, however, about how body movement cues personality judgments, although this channel is readily available to observers, even at greater distances. We present three experiments designed to assess the degree to which observers make reliable judgments on a number of traits based on exposure to point-light walkers, and show how kinematic parameters can be manipulated to influence observer judgments. Our findings indicate that observers do make reliable, albeit inaccurate, trait judgments. We show how point-light walkers can be successfully reduced to a small number of components using Principal Component Analysis (PCA). Motion parameters associated with certain trait impressions (e.g., Extraversion, Trustworthiness) were identified using PCA. To strengthen these findings, trait ratings were obtained on new point-light walkers created by exaggerating the identified motion parameters. As predicted, manipulation of the main component had a significant effect on perceived Openness, Extraversion, Trustworthiness and Agreeableness. These findings may be of importance for computer modelling of avatars, and for explaining and manipulating gait characteristics potentially important in real-life human interactions.

"Motion implied by static line-drawing image of visual art activates the human motion-sensitive cortex: an fMRI study"
N Osaka, D Matsuyoshi, T Ikeda, M Osaka
Visual artists have developed various visual cues for representing implied motion (IM) in two-dimensional art. Blur, action lines, affine shear, instability, and superimposition are possible technical solutions for representing IM. In realistic painting, artists have tried to represent motion using superimposed or blurred images. Recent developments in cognitive neuroscience have invited inference about the neurosensory events underlying the experience of visual arts involving IM. We report a functional magnetic resonance imaging study demonstrating activation of the human motion-sensitive cortex by static images conveying IM through the instability of cartoon images. We used static line-drawing cartoons of humans by Hokusai Katsushika, an outstanding Japanese cartoonist as well as a famous Ukiyo-e artist. We found that these cartoons, which imply motion by depicting human bodies engaged in challenging tonic postures, significantly activated motion-sensitive areas, including MT+, in the human brain, while an illustration that does not imply motion, for either humans or objects, did not activate these areas.

"Dynamics of cortical MEG response to human body motion"
M Pavlova, C Bidet-Ildei, C Braun, A N Sokolov, I Krägeloh-Mann
Brain imaging points to brain regions engaged in visual processing of body motion (BM). However, the temporal dynamics of brain activity remain largely unknown (but see Pavlova et al, 2004, Cerebral Cortex, 14(2), 181-188; Pavlova et al, 2007, NeuroImage, 35, 1256-1263). Here we focus on the relationship between behavioural measures of performance and the cortical magnetoencephalographic (MEG) response to BM. Adolescents aged 14-16 years judged the presence of a point-light walker embedded in a scrambled-walker mask. At early latencies of 180-244 ms, sensitivity to BM negatively correlates with root-mean-square (RMS) amplitude over the right occipital, temporal and frontal cortices and the left temporal cortex. At latencies of 276-340 ms, it negatively correlates with the RMS amplitude over the right occipital and bilateral temporal cortices. At later latencies of 356-420 and 420-500 ms, there is still a strong negative link between sensitivity and temporal activation in both hemispheres. No association occurs between sensitivity and RMS amplitude in adolescents with periventricular lesions affecting brain connectivity. For the first time, MEG activation unfolding over time and linked to performance reveals the temporal dynamics and connectivity of the entire cortical network underpinning BM processing. The outcome represents a framework for studying the social brain in neurodevelopmental disorders.

"Giving life to circles and rectangles: animacy, intention and fMRI"
P Mcaleer, M Becirspahic, F E Pollick, H M Paterson, M Latinus, P Belin, S Love
The perception of animacy occurs when moving geometric shapes are perceived as alive, having goals and intentions, with neuroimaging studies highlighting a network of brain areas involved in this percept. Two issues prevent linking this percept in shapes to that in human movement: the use of synthetically-generated stimuli with a limited relationship to human movement, and the lack of control over low-level visual cues. The present study addresses these issues by combining two stimulus sets: synthetically-generated stimuli with controlled motion properties, and displays derived from human motion (McAleer and Pollick, 2008, Behavior Research Methods, 40(3), 830-839). The synthetically-generated stimuli allowed for manipulation of animacy via orientation changes whilst controlling motion properties, as confirmed via a rating-based behavioural task. The stimuli were then used in an fMRI study locating regions activated by stimuli perceived as "alive". Within these regions, analysis was performed on brain activation when viewing displays derived from human actors. Stable activation in hMT+ confirmed the controlled motion properties of the synthetic stimuli, while pSTS showed increased activation for stimuli perceived as "alive". Regions located via the synthetic stimuli were consistent with previous findings, and analysis within these regions showed increased activation for displays derived from human movements with an ambiguous intent.

"Binocular fusion, suppression and diplopia: scale invariance and the influence of relative contrast between the eyes"
S Wallis, M Georgeson
With different images in the two eyes, one may experience fusion, suppression of one eye's view, or diplopia. To better understand the underlying spatial mechanisms and interocular interactions, we studied the influence of binocular disparity, spatial scale and the relative contrast of edges shown to each eye. Single, Gaussian-blurred, horizontal edges (blur B=2 to 32 min arc) were presented at various vertical disparities and contrast ratios. Observers indicated '1 central edge', '1 edge above', '1 edge below' or '2 edges'. We defined the subjectively 'balanced' contrast ratio as that yielding the greatest proportion of '2 edge' responses. Next, we used disparities of 0 to 8B, and several contrast ratios (relative to 'balanced'). At balanced contrasts, there was little or no interocular suppression at any disparity. As disparity increased, the proportion of fusion (or diplopia) responses fell (or rose) monotonically, and the fusional disparity range was nearly proportional to edge blur (about 2.5B, implying scale invariance). However, with unbalanced contrasts, the (relatively) lower contrast edge tended to be suppressed at larger disparities (>=5B). Fusion responses were little affected by contrast imbalance. Thus, a contrast imbalance between the eyes increases interocular competition (suppression), and so reduces diplopia, but leaves Panum's fusional range largely unaltered.

"Remote interactions in contour detection"
L O'Kane, R Watt, T Ledgeway
We have explored the visual process of linking remote contour fragments. Previous studies using the Gabor path paradigm examined performance with contour elements that are adjacent. We measured detection performance when the contour is interrupted by gaps filled with random background elements. To do this, a contour fragment was created by aligning two adjacent Gabors so that they were fixed in spatial position and orientation alignment. Correct performance was determined by clicking on a region around a pair, and the pair(s) were embedded in a background of 900 Gabors. In one experiment the number of target Gabor pairs varied, each having the same orientation alignment relative to the image but positioned randomly. In the second experiment target Gabor pairs were placed at remotely spaced locations along a contour. The results indicate that performance is massively enhanced by the presence of a remote contour configuration in the image. As few as 3 pairs are required for good performance, far fewer than the 20 required when the pairs do not form part of a contour. Further experiments examined adding curvature to the contour, and the results reveal that performance is tightly tuned to straight global contours. We discuss the implications of our results in terms of non-neighbourhood neural operations.

"Functional architecture for binocular summation of luminance- and contrast-modulated gratings"
A J Schofield, M A Georgeson
Combining signals from the two eyes is fundamental to stereo vision. We studied binocular summation for horizontal luminance gratings (L), luminance gratings in noise (LM) and second-order gratings (contrast-modulated noise, CM). Noise centre frequency was 6 c/deg, rms contrast 0.2 or 0.1. We measured 2AFC detection thresholds for gratings (0.75c/deg, 0.2s duration) shown to one eye, to both eyes in-phase, or both eyes in anti-phase. Mean binocular thresholds were 5-6dB better than monocular thresholds for L, LM and CM: close to linear summation across the eyes. We found full binocular summation for CM even when the 2-D noise carriers were uncorrelated or anti-correlated between the eyes, implying binocular combination of envelope, not carrier, responses. Similarly, binocular summation for CM occurred with uncorrelated, oblique 1-D noise carriers at both parallel and orthogonal orientations, suggesting that CM summation is insensitive to carrier similarity or composition. In all cases anti-phase signals showed no summation and very little cancellation, consistent with half-wave rectification before binocular summation. These results suggest an extended filter-rectify-filter model with two filter-rectify stages preceding binocular summation (FRFRB), similar to the architecture proposed for cat area 18 (Tanaka & Ohzawa, 2006, Journal of Neuroscience, 26, 4370-4382).
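[Editor's sketch, not part of the abstract.] A minimal numerical illustration of why half-wave rectification before binocular summation predicts roughly 6 dB of summation for in-phase signals but none for anti-phase signals. The toy grating, the rectifier, and the peak-response read-out are illustrative assumptions, not the authors' model:

```python
import numpy as np

def halfwave(x):
    """Half-wave rectification, assumed to precede binocular summation."""
    return np.maximum(x, 0.0)

x = np.linspace(0.0, 1.0, 1000)
left = np.sin(2 * np.pi * 3 * x)  # toy grating signal in the left eye

# Binocular combination after half-wave rectification in each eye
mono      = halfwave(left)
in_phase  = halfwave(left) + halfwave(left)   # right eye in phase
antiphase = halfwave(left) + halfwave(-left)  # right eye in anti-phase

peak = lambda r: r.max()  # peak response as a crude detectability proxy

gain_in   = 20 * np.log10(peak(in_phase) / peak(mono))   # ~6 dB summation
gain_anti = 20 * np.log10(peak(antiphase) / peak(mono))  # ~0 dB: neither
                                                         # summation nor
                                                         # cancellation
```

The rectified anti-phase signals occupy disjoint halves of the cycle, so their sum never exceeds the monocular peak: no summation, but also very little cancellation, matching the pattern described above.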

"The time course of perceptual grouping: a high density ERP study"
V A Chicherov, G Plomp, M H Herzog
Performance on a target can be strongly modified by context. For example, vernier offset discrimination is strongly deteriorated by neighboring flankers. Performance is worst when the flankers have the same length as the vernier. Surprisingly, performance improves for longer and shorter flankers [Malania et al., 2007, Journal of Vision, 7(2):1, 1-7]. It was proposed that interference is strongest when the vernier and the flankers group, and weaker when they ungroup. Here, we used high density EEG to investigate the time course of this contextual modulation. A vernier was flanked on both sides by ten lines which were either shorter, longer, or of the same length as the vernier. Performance was worst for equal length flankers, and best for longer flankers. The P1 amplitude monotonically increased with flanker size, reflecting the stimulus layout. The N1 amplitude was highly correlated with performance and, hence, with the strength of grouping: longer flankers elicited the highest amplitude of the N1 wave, shorter flankers medium amplitude, and the equal length flankers elicited the lowest one. Hence, perceptual grouping occurs before the N1 onset, i.e. before 150-160 ms.

"Summary statistics of edge information predict categorization of naturalistic images"
I I Groen, S Ghebreab, V A Lamme, H S Scholte
Within a few hundred milliseconds, we transform patterns of light falling on our retina into percepts of objects and scenes. The computational efficiency of this process is striking because the visual world is complex and highly variable. How do we extract content information from natural images so quickly? One possibility is that low-level properties of natural images carry content information directly, thereby facilitating rapid perceptual categorization. Here, we show that simple statistics of the output of low-level edge detectors - modeled after LGN neurons - predict human categorization accuracy for naturalistic scene images. The same statistics also explain a large proportion of the variance of early visual EEG responses (ERPs) to these images, indicating a plausible neural mechanism. Moreover, we show that categorical clustering of ERPs emerges from corresponding differences in the statistics of low-level responses between individual images. In comparison, second-order statistics derived from the Fourier transform are superior to our LGN model in terms of image classification, but do not correlate with human behavior or with clustering-by-category in ERPs. These converging neural and behavioral results suggest that statistics of low-level edge responses convey relevant information for perceptual categorization.

"Sensitivity to local higher-order correlations in natural images"
H Gerhard, F Wichmann, M Bethge
We measured perceptual sensitivity to the higher-order correlational structure of natural images using a new paradigm, with which we also evaluated the efficacy of several successful natural image models that reproduce neural response properties of the visual system. To measure sensitivity to local correlations in natural images, stimuli were square textures of tightly tiled small image patches originating either from (1) natural scene photographs or (2) a model. On a trial, observers viewed both texture types and had to select the one made of natural image patches. In a series of experiments with 22 subjects, we tested 7 models, varying patch size from 3x3 to 8x8 pixels. Results indicated high sensitivity to local higher-order correlations in natural images: no current model fools the human eye for patches 5x5 pixels or larger, and only the model with the highest likelihood brings performance near chance when patches are 4x4 pixels or smaller. Remarkably, the ordering of the psychophysical functions matched the models' ordering in likelihood of capturing natural image regularities. Subjects' performance on binarized textures approached ideal observer efficiency, where the ideal observer has perfect knowledge of the natural image distribution. In four control experiments, we explicated the knowledge observers use to detect higher-order correlations.
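[Editor's sketch, not part of the abstract.] Ideal-observer efficiency, as invoked in the penultimate result, is conventionally defined as the squared ratio of human to ideal sensitivity; the numbers below are illustrative, not the study's data:

```python
def efficiency(d_human, d_ideal):
    """Ideal-observer efficiency: squared ratio of human d' to the d'
    of an ideal observer with perfect knowledge of the stimulus
    distribution. Efficiency of 1.0 means performance at the ideal."""
    return (d_human / d_ideal) ** 2

# Illustrative example: a human at half the ideal observer's d'
# corresponds to 25% efficiency
eff = efficiency(1.0, 2.0)
```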

"The functional organization of hand and tool responses in visual cortex reflects the organization of downstream action networks"
S Bracci, C Cavina-Pratesi, M Ietswaart, A Caramazza, M Peelen
Most accounts of the functional organization of human high-order visual cortex refer to objects' visual properties, such as visual field, form and motion properties. Here, using fMRI, we provide evidence that the distribution of object category responses in high-order visual cortex may additionally reflect the (non-visual) organization of downstream functional networks. Multivoxel pattern analysis in left lateral occipitotemporal cortex (OTC) showed a high degree of similarity between the representations of hands and tools. This similarity was specific to a region in the left lateral OTC and could not be explained by the objects' visual form or motion properties. Importantly, functional connectivity analysis showed that left OTC was selectively connected, relative to neighboring regions, with regions in left intraparietal sulcus and left premotor cortex previously implicated in hand-tool interactions. Altogether these results indicate that the functional organization of OTC partly follows non-visual object dimensions. We propose that this is due to the constraint to connect both hand and tool representations in OTC to a common left lateralized fronto-parietal action-network. More generally, these results suggest that the functional organization of OTC partly reflects the organization of downstream functional networks due to regional differences within OTC in the connectivity to these networks.

"Matching judgments for high probability orientations are more precise, but only on the side of the responding hand"
B Anderson, M Druker
Can perceptual judgments be improved through learning the probability of stimulus features? We designed a task in which a Gabor patch is briefly (70 ms) displayed 6 degrees to the left or right of fixation. Participants then adjust a line to reproduce the stimulus orientation. We have recently used this paradigm to show that non-informative luminance cues improve the precision of orientation judgments. For these experiments, stimuli appeared equally often at both locations. Stimulus orientation was uniformly sampled, but orientations of 0-90 degrees (right tilt) were displayed four times as often as orientations of 90-180 degrees (left tilt) on one side, and vice versa on the other. Participants were not explicitly aware of this manipulation. Participants were significantly more accurate in reproducing orientations in the high probability range. Interestingly, this benefit was restricted to stimuli that appeared on the same side as the responding hand. This effect cannot be explained by a general spatial improvement in orientation judgments, nor by an overall response bias. We conclude that we are sometimes able to selectively improve our perceptual judgements in ways that match environmental probabilities.

"Dissociable prior influences of signal probability and relevance on visual contrast sensitivity"
V Wyart, A C Nobre, C Summerfield
Signal detection theoretical analyses have shown that while visual signals occurring at behaviourally-relevant locations are detected with increased sensitivity, frequently-occurring ones are reported more often but are not better distinguished from noise. However, conventional approaches that estimate sensitivity by comparing true- and false-positive rates discard signal-like fluctuations in noise that might contribute to perceptual decisions. Here we reassessed the respective prior influences of signal probability and relevance on visual contrast sensitivity using a new reverse-correlation technique that quantifies how trial-to-trial fluctuations in signal energy predict behaviour. We found that cues promoting signal relevance increased sensitivity most at high energy levels, consistent with a change in the precision of visual processing, whereas cues predicting greater signal probability enhanced sensitivity only at low energy levels, consistent with an increased baseline activity in signal-selective cells. Together, these results support 'predictive-coding' theories of perception which propose distinct top-down influences of expectation and attention on visual processing.
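[Editor's sketch, not part of the abstract.] The reverse-correlation logic of relating trial-to-trial energy fluctuations to behaviour can be schematized with a toy simulation. The Gaussian energy fluctuations, the unit internal noise, and the criterion value are all illustrative assumptions, not the authors' analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 20000

# Trial-to-trial fluctuations in signal-like energy contained in the noise
energy = rng.normal(0.0, 1.0, n_trials)

# Toy observer: reports "signal present" when stimulus energy plus
# internal noise exceeds a decision criterion (a lower criterion mimics
# a liberal bias, e.g. when signals are frequent)
internal_noise = rng.normal(0.0, 1.0, n_trials)
criterion = -0.2
said_yes = (energy + internal_noise) > criterion

# Reverse correlation: how much more energy did "yes" trials carry?
kernel = energy[said_yes].mean() - energy[~said_yes].mean()
```

A positive kernel shows that signal-like fluctuations in the noise predict detection reports, which is exactly the information that conventional true-/false-positive analyses discard.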

"Knowing the error of our ways yet being unable to correct for it"
L Van Dam, M Ernst
When performing a sensorimotor task, e.g. pointing to visual targets, we constantly make errors. Those errors can be random, as a result of noise, or systematic, due to sensorimotor miscalibrations. It is generally assumed that in the absence of visual feedback we are unaware of the noise-related random pointing errors. Here we investigated this assumption. Participants performed a rapid pointing task to visual targets presented on a touch screen, earning points when hitting close to the target. Visual feedback was prevented at movement onset (open-loop). After the movement was completed, participants indicated whether they believed they had landed left or right of the target. The results show that participants' left/right discriminability was well above chance. It was still above chance when participants were instructed to point slowly, enabling them to correct for any unintended movement error. Yet when participants were allowed to make corrections after the initial movement toward the touch screen was completed, this knowledge about the error was lost. Surprisingly, in this condition participants often did not correct at all, and the corrections they did make were relatively small. Together, this indicates that we have knowledge about the random error yet are unable to use it.

"Informational affordances: Evidence of acquired perception-action sequences for information extraction"
I Reppa, W Schmidt, R Ward
It is now quite common to speak of "perception for action", emphasising that many of our perceptual systems serve an ultimate role in guiding action. However, we might also legitimately speak of "action for perception", in which action serves the more proximate goal of allowing us to better perceive an object's properties. The current study examined whether object perception can automatically prime actions leading to efficient information extraction. Participants in Experiment 1 learned to rotate a cube in a specific way with the end goal of efficiently revealing object-identifying information. In Experiments 2 and 3 the end goal of producing object-identifying information was removed but the stimulus-response associations were preserved. In a subsequent test phase, in which the object was irrelevant, only object views associated with actions learned in the context of obtaining object-identifying information caused response interference. These results demonstrate the existence of informational affordances: perception-action sequences acquired with the goal of information extraction that are automatically primed during later exposure to the object. Our results show one way that perception and action are linked in recursive fashion: by way of perception utilizing action in order to facilitate the goal of perceiving.

"Dissociating visual confidence from performance"
P Mamassian, S Barthelmé
High stimulus uncertainty not only reduces performance in a visual task but also one's confidence about being correct in the task. As a result, performance and confidence are usually tightly related. However, it is still not clear how confidence is estimated, and under which circumstances performance and confidence can be dissociated. To address the latter question, we trained observers in a motion direction discrimination task under two conditions that differed in predictability. In the more predictable condition, motion directions were sampled from a narrow distribution, thereby reducing the uncertainty of the next stimulus (according to Bayes' rule). However, another consequence of using a narrow distribution is to bias the perception of the next stimulus towards the mean of the distribution, and thus reduce the discriminability of the stimuli. Therefore, if observers base their confidence on the uncertainty of the stimulus in the context of the previous ones, their confidence will be high when their performance is low. For those observers whose learning was characterized by decreased performance in the more predictable condition, we indeed found that their confidence increased. This result suggests that visual confidence is better related to the observer's internal uncertainty than to their anticipated performance.

"Brief adaptation-induced plasticity of perception of number"
D Aagten-Murphy, D Burr
Exposure to dot patches of high or low numerosity produces a shift in the perceived numerosity of subsequently viewed patches, biased away from the numerosity of the adapter patch. Here we show that substantial numerosity adaptation can occur with very short periods of adaptation, as short as one second (compared with the typical 30-60 seconds of adaptation). This brief adaptation produces large effects in perceived numerosity that last over extended delays (10-20 s) between adaptation and test, with measurable effects persisting for hours after initial adaptation. The adaptation effect was found to be both retinotopic, remaining strictly in retinotopic space after a saccade, and spatially specific, with numerosity judgements at multiple locations within the visual field able to be differentially modulated simultaneously. Similar experiments with brief adaptation to spatial frequency (texture density) revealed much smaller effects, providing further evidence for the independent analysis of numerosity and texture density, and suggesting that numerosity activates processes that are particularly susceptible to short adaptation. These results are of interest both in that they implicate a highly plastic mechanism for numerosity perception, adaptable within 1 second, and that they suggest a quick and efficient paradigm for use in clinical testing of numerosity.

"Computational basis of the visual number sense"
I Stoianov, M Zorzi
Visual numerosity estimation is a fundamental capacity of survival significance in animals and humans (Nieder, 2005, Nature Rev. Neurosci. 6, 177). Susceptibility to adaptation suggests that numerosity is a primary visual property (Burr and Ross, 2008, Curr. Biol. 18, 425). The computational basis of this "Visual Number Sense" is still controversial (Durgin, 2008, Curr. Biol. 18, R855). One critical factor is object-size variability. A prominent theory requires object individuation and size normalization (Dehaene and Changeux, 1993, J. Cogn. Neurosci. 5, 390). Occupancy-based theories (Allik and Tuulmets, 1991, Percept. Psychophys. 49, 303) circumvent the size-variability problem. Here we propose a simple, biologically plausible neural mechanism that statistically extracts numerosity without individual object-size normalization. The model noisily encodes numerosity on a logarithmic scale, featuring response profiles consistent with neurophysiological recordings in monkey (Roitman, Brannon, and Platt, 2007, PLoS biology 5, e208). It also supports numerosity comparison with behavioral profile close to that of animals and humans (Piazza et al., 2010, Cognition 116, 33), whereby number discriminability obeys Weber's law. Thus, the model provides a computational account of the Visual Number Sense.

"Perception of social inappropriateness: deactivations in the prefrontal cortex"
E Lepron, J F Démonet
Could one share the misery of someone else if it was not justified? Empathic response and prosocial behavior may not occur if the emotional cues provided by someone else are unexpected. In other words, we have no idea of how to react to inappropriate behaviors from others. Based on previous observations in the fields of moral dilemmas and social unfairness, we hypothesized that the orbito-frontal cortex (OFC), the anterior cingulate cortex (ACC) and the medial prefrontal cortex (mPFC) should be deactivated when perceiving an inappropriate emotional reaction. Twenty-five participants underwent [15O]H2O PET scanning. They were presented with visual scenarios that described a social interaction and ended with a virtual character video-clip whose facial emotional response toward the participant was either congruent or incongruent in relation to the situation. We observed a decreased activation of the OFC, ACC and mPFC in the incongruent condition compared to the congruent one, associated with a deactivation of the OFC and the ACC in the incongruent condition compared to rest. The lesser activation of these structures can be related to the absence of plausible behavioral response, as observed for unacceptable tradeoffs in moral dilemmas. Effective connectivity analyses give further details about the network involved in this inhibition.

"What determines presence when viewing a movie?"
T Troscianko, S Hinde
Previous work has established that subjective measures of presence while watching a movie correlate with the degree of arousal experienced by the viewer, and that screen size has a significant effect on the depth of presence. We now ask how presence is affected by viewing parameters, such as "3D" and colour. We also ask how presence depends on shot length in movies. The results suggest that stereoscopic ("3D") viewing leads to a relatively small but significant improvement in presence which is maintained for the half hour of testing; in contrast, viewing a movie in black-and-white increases presence for a short period only, pointing to an effect of novelty. Results for shot length suggest that presence and, by implication, arousal, are greater when mean shot length is short. This suggests that short shots may be arousing in their own right, and skilful editors co-vary this with increased excitement in the narrative, causing a heightened degree of modulation of emotion. We discuss implications for models of visual and semantic processes of these effects, which are experienced by every viewer but have rarely been studied scientifically.

"Are attentional blink and creative reasoning related through general cognitive flexibility?"
S Jaarsveld, P Srivastava, M Welter, T Lachmann
We recently showed that performance on creative thinking tasks depends on the ability to shift between divergent and convergent thinking. We developed the Creative Reasoning Task (CRT) that scores this ability with a divergent and a convergent sub score. In this task, participants create a puzzle in a 3x3 matrix, similar to Raven's matrices. In the present study we investigated whether performance on the CRT is related to that in the Attentional Blink (AB) task. We assume that performance on both tasks, although evolving on different time scales, might be based on a general cognitive flexibility. Accordingly, we observed a negative correlation between AB magnitude and creative thinking abilities. Participants with high CRT scores showed an attenuated AB, indicating that individuals with more effective temporal shifting also shift more flexibly between divergent and convergent thinking in creative reasoning. This suggests that both performances require a heightened general cognitive flexibility.

"Interactions between luminance contrast and emotionality in visual pleasure and contrast appearance"
M Chammat, R Jouvent, G Dumas, K Knoblauch, S Dubal
Recent studies indicate that modulating low-level visual characteristics of emotional images influences their emotional processing (Rey et al., 2010, Psy. Res., 176, 155-160) but that emotion can also influence contrast sensitivity (Phelps et al., 2006, Psychol. Sci., 17, 292-299). We used conjoint measurement to explore how pleasure intensity and image contrast affect visual pleasure and contrast perception (Ho et al., 2008, Psychol. Sci., 19, 196-204). A set of 80 grey-scale images for which the level of pleasurable response (4-point ordinal scale) had been pre-determined on a normative sample was used. In a session, image pairs were presented (1.5 s each, separated by a 2 s blank screen), each with randomly chosen pleasure and contrast levels. In separate sessions, observers judged either which image was of higher contrast or of higher pleasure intensity. Two types of images were used: social or outdoor scenes. Judgments were analyzed using Maximum Likelihood Conjoint Measurement. The results display an effect of contrast on emotional perception for both scene types and, interestingly, an effect of emotion on contrast perception for outdoor scenes. The latter result shows conditions under which the emotional level affects appearance per se.

"Nature of the mechanisms for the motion processing of second-order stimuli in colour vision"
L Garcia-Suarez, K T Mullen
There is good evidence for the perception of motion of higher order stimuli in colour vision, but it is unknown whether this is based on second-order processing or on higher order cues such as feature-tracking. Here, we aim to distinguish between genuine second-order motion of chromatic stimuli and feature-tracking. Stimuli were contrast modulations of red-green or achromatic noise (spatially bandpass). The contrast envelope was a drifting Gabor (0.25 cpd) of different temporal frequencies (0.5-4 c/s). We used the pedestal paradigm of Lu and Sperling (1995, Vision Research, 35, 2697-2722) to distinguish between second-order and feature tracking: performance on motion direction is unaffected by the addition of a stationary pedestal of the same spatial frequency as the drifting envelope if analysed by a motion-based system, but is affected if analysed by feature-tracking. We measured motion-direction thresholds with and without the pedestal. Motion-direction performance was poor in the colour condition for all four observers. With the added pedestal, motion performance was unimpaired in the achromatic condition but decreased in the colour condition. This confirms that motion is analysed by a second-order system in achromatic vision, but suggests that motion of chromatic second-order stimuli is not second-order but based on third-order feature tracking.

"Psychophysical evidence for interactions between visual form and motion signals during motion integration in cortical area MT"
A Pavan, G Mather, R Bellacosa, C Casco
Motion streaks created by rapid movement improve psychophysical performance (Geisler, 1999, Nature, 400, 65-69). Cells in V1 show tuning for orientations parallel to their preferred direction. However, some cells in area MT exhibit the same selectivity (Albright, 1984, J. Neurophysiol., 52, 1106-1130). To determine whether motion and form interact at the level of motion integration (MT), motion after-effects (MAEs) following adaptation to moving dot fields were measured in the presence of flickering gratings. In Experiment 1 a vertical or horizontal grating was superimposed on a horizontally moving dot field. MAE durations were significantly longer when using parallel gratings during adaptation, consistent with motion-streak facilitation. In Experiment 2 horizontal and vertical gratings were used, but the adapting stimulus contained two fields of dots moving orthogonally (transparent motion). The resulting MAE following this adaptation is horizontal. Crucially, if motion-form interactions occur at the level of V1 (signalling each motion component), then there should be no difference between conditions involving vertical and horizontal gratings, whereas if interactions occur at the level of MT (integration), then MAEs should be longer for horizontal (parallel to integrated direction) gratings. Results favoured horizontal gratings, indicating the presence of motion-form interactions at the level of MT.

"Advantage for spatiotopic coordinates in motion perception during smooth pursuit"
T Seidel Malkinson, A Mckyton, E Zohary
Accurately perceiving the speed and direction of a moving object, while tracking it with the eyes, is a complex challenge. Under precise smooth pursuit (SP) conditions, although the tracked object is moving in the world, it is practically still on the retina. How can we perceive motion of a visual stimulus that doesn't move on the retina? We used motion aftereffect to study the coordinate frame of motion computation during SP. In the "retinotopic" configuration, the adapting stimulus was moving on the retina, but was stationary in the world. Thus the adapting motion could only be computed in retinal coordinates. In the "spatiotopic" configuration, the adapting stimulus was moving on the screen, but was stationary on the retina, due to an identical SP vector. Hence, the adapting motion could only be computed in non-retinal coordinates. We find that non-retinal motion leads to significant adaptation. Moreover, this adaptation is greater than that induced by a stimulus moving only on the retina. We suggest that motion computation occurs in parallel in two distinct representations: a low-level, strictly retinal-motion dependent mechanism, and a high-level representation, in which the veridical motion is computed, through integration of information from other sources.

"fMRI correlates of visual-vestibular interactions in self motion perception"
M W Greenlee, S M Frank, O Baumann, J B Mattingley
We explored brain activations underlying the integration of visual and vestibular information in self-motion perception. Participants viewed 200 white limited-lifetime randomly moving dots in the dark (with 10% left or right coherent direction). Using a custom-built micro-pump system, hot (50°C), cold (0°C) or warm (36°C) water flowed through left and right ear pods, producing differential caloric vestibular stimulation simultaneously with the visual stimulation. The BOLD response (3T scanner) was contrasted between 2 conditions of visual motion (coherent leftwards or rightwards) and 3 conditions of caloric stimulation (both sides neutral, left hot - right cold, left cold - right hot). A localiser run with 100% coherent motion vs. static dots was conducted to define MT+. Participants responded on each 15 s trial in a 4-AFC fashion: no sense of self motion with coherent visual motion to the left or right, or self motion in the same or opposite direction as the coherent visual motion. Motion direction was reported reliably (>90% correct) for visual and vestibular stimuli. Increased BOLD activity was observed in insular cortex (including PIVC) and the anterior part of MT+ when participants reported self-motion. This response depended on the congruency between the directions of visual and vestibular signals.

"Motion strength influences perceived depth order in transparent motion"
A C Schütz
When viewing two superimposed random-dot kinematograms (RDKs), moving in different directions, observers perceive two transparent surfaces in two depth planes. As the display is two dimensional, the depth order of the surfaces is ambiguous. Here we show that motion strength is used as a cue for depth order. We presented two overlapping RDKs, which were moving in directions offset by 45 deg. The overall dot density was fixed at one dot/deg² and both RDKs were fully coherent. We manipulated the relative strength of motion by changing the assignment of dots to the RDKs from 0.5 to 0.8. The results show that observers more often reported the stronger RDK when asked for the motion direction seen in the back, and more often the weaker RDK when asked for the motion direction seen in the front. Early pursuit followed the stronger RDK, while late pursuit mimicked the perceptual preferences. In a separate experiment, one motion direction was adapted before each trial. This direction was more often seen in front and the opposite direction was seen more often in the back, both relative to an orthogonal direction. These results emphasize the close interaction of motion and depth in the brain.

"Taking the energy out of motion"
L Bowns
A simulation of the Component Level Feature Model (CLFM) of motion processing (Bowns, 2002, Vision Research, 42, 1671-1681) has been tested. Although similar to standard spatio-temporal energy models, it does not compute motion energy. A sequence of images is convolved with a bank of filters tuned for orientation and spatial frequency. A two-max rule is applied to the response outputs at fixed time intervals. The outputs are required to be from two different oriented Gabor filters and have a similar spatial frequency. Thresholded zero-crossings are then extracted from these outputs. The zero-crossings correspond to the velocity constraint lines used to compute the intersection of constraints (IOC). Thus tracking any intersecting zero-crossing over time corresponds to the velocity predicted by the IOC. These zero-crossings create motion streaks whose length corresponds to the IOC speed and whose orientation corresponds to the IOC direction. The model computed the direction of 200 randomly generated plaids. The output linearly matched that predicted by the IOC. The model was also able to predict when plaids are not perceived in the IOC direction, and why multiple directions are occasionally perceived. The model is invariant to contrast and phase and is consistent with physiological observations of V1 and MT neurons.
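The intersection-of-constraints computation that the model's zero-crossings implement can be sketched directly (an illustrative sketch of the standard IOC calculation, not the CLFM simulation itself): each component grating constrains the pattern velocity v to satisfy v · n = s along its unit normal n, and two non-parallel constraints give a unique solution.

```python
import math

def ioc_velocity(theta1_deg, s1, theta2_deg, s2):
    """Intersection of constraints for two plaid components.

    theta: direction (deg) of each component's normal motion; s: its speed.
    Each component constrains the pattern velocity v via v . n = s, where n
    is the unit normal; solving the two linear equations gives v = (vx, vy).
    """
    t1, t2 = math.radians(theta1_deg), math.radians(theta2_deg)
    n1 = (math.cos(t1), math.sin(t1))
    n2 = (math.cos(t2), math.sin(t2))
    det = n1[0] * n2[1] - n1[1] * n2[0]
    if abs(det) < 1e-12:
        raise ValueError("parallel components: IOC undefined")
    vx = (s1 * n2[1] - s2 * n1[1]) / det
    vy = (n1[0] * s2 - n2[0] * s1) / det
    return vx, vy

# Symmetric plaid: components drifting at +45 and -45 deg with equal speed 1.
# The IOC solution is rightward motion at speed sqrt(2).
vx, vy = ioc_velocity(45, 1.0, -45, 1.0)
print(round(vx, 3), round(vy, 3))  # 1.414 0.0
```

In the CLFM, tracking the intersection of two zero-crossing constraint lines over time amounts to this same 2x2 linear solve.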

"Motion correspondence shows feature bias in spatiotopic coordinates"
E Hein, P Cavanagh
How is the visual system able to maintain object identity as the objects or the eyes move? Recent studies have shown that feature information can influence correspondence (e.g., Hein & Moore, 2009, Journal of Vision, 9(8), 658). Here we investigated whether this feature influence operates in a retinotopic or spatiotopic frame of reference. We used a variation of the Ternus display, an ambiguous apparent motion display, in which three discs, aligned vertically, were presented in alternation with a second set displaced vertically so that the bottom two discs of the first set lined up with the top two of the other set. The discs could be perceived as moving all together (group motion) or as one disc "jumping" up and down while the other discs remained stationary (element motion). We biased the percept toward element motion by anchoring a disc's surface features to a fixed spatial location. In addition, participants had to make horizontal saccades across the Ternus display such that the two Ternus frames were horizontally offset at the retina, but not in spatiotopic coordinates. We found that the element bias persisted independently of the retinal position change, suggesting that the feature influence on motion correspondence is spatiotopic.

"Exploring motion-induced illusory displacement using interactive games"
I M Thornton, F Canaird, P Mamassian, H H Bülthoff
Motion-induced illusory displacement occurs when local motion within an object causes its perceived global position to appear shifted. Using two different paradigms, we explored whether active control of the physical position of the object can overcome this illusion. In Experiment 1, we created a simple joystick game in which participants guided a Gabor patch along a randomly curving path. In Experiment 2, participants used the accelerometer-based tilt control of the iPad to guide a Gabor patch through a series of discrete gates, as might be found on a slalom course. In both experiments, participants responded to local motion with overcompensating movements in the opposite direction, leading to systematic errors. These errors scaled with speed but did not vary in magnitude either within or across trials. In conclusion, we found no evidence that participants could adapt or compensate for illusory displacement given active control of the target.

"iHybrids reveal global and local information use for face identification"
S Miellet, R Caldara, P G Schyns
We introduced a new methodology combining gaze-contingent and hybrid (based on spatial-frequency band decomposition) techniques to examine whether local or global information supports face identification. The iHybrid technique simultaneously provides, at each fixation, local information from one face and global information from a second face, ensuring full-spectrum stimuli (see the movie for an illustration of the fixation dynamics over one trial; the dot represents the fixation location: http://www.psy.gla.ac.uk/~miellet/iHybrid_example.mov). iHybrids revealed the existence of two distinct, equally frequent and equally effective information sampling strategies. The local strategy involves fixations on the eyes and the mouth; the global strategy relies on central fixations of the face. All observers used both strategies, depending on the location of the first fixation in a trial. A validation in natural viewing conditions confirmed our results. Supplementary figures can be seen at http://www.psy.gla.ac.uk/~miellet/supplementary_ECVP2011.pdf for a sketch of the methodology and an illustration of local and global information use. These results clearly demonstrate that the face system is not rooted in a single, or even preferred, information gathering strategy for face identification.

"Conjoint and independent neural coding of bimodal face/voice identity investigated with fMRI"
M Latinus, F Joassin, R Watson, I Charest, S Love, P Mcaleer, P Belin
Multimodal areas can be made up of mixed unisensory neural populations or of a multisensory population. We used an fMR-adaptation design with face/voice stimuli in order to differentiate between areas containing independent unisensory populations and multisensory areas per se. The audio and visual signals from the recordings of two actors were morphed independently: 25 congruent and incongruent audiovisual movies were created by combining each voice morph with each face morph. 14 participants, familiar with the actors, were scanned using an event-related design whilst performing an identification task. Brain activity time-locked to stimulus onset was modelled against the 1st and 2nd order polynomial expansion of several regressors: 2 modelled changes along each single dimension, and the Euclidean contraction modelled changes along the two dimensions (Drucker et al., 2009, Journal of Neurophysiology, 101, 3310-3324). Positive loading on the Euclidean contraction, indicative of conjoint coding, was observed in left hippocampus, whereas negative loading was found in right posterior superior temporal cortex (pSTC). Consequently, right pSTC was involved in independent representations of face and voice identity. However, in the left hippocampus, both dimensions of the identity space were represented by a unique neuronal population.

"Face colour, health, lifestyle and attractiveness"
D Perrett, D Re, R Whitehead, I Stephen, V Coetzee, C Lefèvre, F Moore, D Xiao, G Ozakinci
Skin colour has a marked influence on facial appearance: enhanced skin redness and yellowness increase perceived health and attractiveness. We have investigated the associations of skin colour (measured spectrophotometrically) with health and lifestyle in 5 studies of Caucasians. In studies 1-2 (n=89, 50) we found raised baseline cortisol levels were associated with decreased skin redness and self-reports of current illness (colds and flu). Thus low skin redness indicates physiological stress or illness. In studies 3-4 (n=93, 37) we found participants reporting modest exercise levels - one bout of vigorous exercise per week - differed in skin colour from those reporting less exercise, in a manner consistent with increased resting skin blood flow. In study 5 (n=38) we found that skin yellowness was associated with dietary intake of fruit and vegetables: increased consumption enhanced skin yellowness within 6 weeks. We used a successive presentation paradigm to measure perceptual thresholds for change in facial attractiveness due to change in skin colour. Colour thresholds were equivalent to a diet change of 1.2 portions of fruit and vegetables per day and possibly 1 hour of vigorous exercise per week. We conclude that even small improvements to diet or exercise may induce perceivable benefits to skin colour and attractiveness.

"Eyes like it, brain likes it: Tracking the neural tuning of cultural diversity in eye movements for faces"
R Caldara, J Lao, L Vizioli, S Miellet
Eye movement strategies deployed by humans to identify conspecifics are not universal. Westerners preferentially fixate the eyes and the mouth during face recognition, whereas Easterners, strikingly, focus more on the central region of the face. However, when, where and how Preferred Viewing Locations (PVLs) for high-level visual stimuli are coded in the human brain has never been directly investigated. Here, we simultaneously recorded eye movements and electroencephalographic (EEG) signals of Westerners and Easterners during face identification of learnt identities. After defining 9 equidistant Viewing Positions (VPs) covering all facial internal features, we presented the learned faces centered on a random VP for 100 ms. We then extracted from prior free-viewing fixation maps the average Z-scored fixation intensity for the non-overlapping facial VP regions (VPZs). Finally, we computed a component-free data-driven spatio-temporal regression between the VPZs and EEG amplitudes. This analysis revealed a universal direct relationship between VPZ and EEG amplitudes over the face-sensitive N170 network at around 350 ms, an effect unrelated to a burst of microsaccades occurring in this time-window. Our data show that the distinct cultural fixation preferences for faces are related to a universal post-perceptual tuning in the occipito-temporal cortex. Culture shapes visual information sampling, but does not regulate neural information coding.

"Holistic perception of faces: direct evidence from EEG frequency-tagging stimulation"
A Boremanse, B Rossion
EEG frequency-tagging (Regan & Heron, 1969) was used to investigate how the human brain integrates object parts into a global visual representation. Twelve participants saw a face whose left and right halves flickered at 4 and 5 Hz (counterbalanced). In other conditions, the face halves were separated by gaps of 0.29 or 1.74 degrees of visual angle. The electroencephalogram (EEG) was recorded at 128 channels. In all conditions, there were large posterior responses exactly at the fundamental frequencies of stimulation (4 and 5 Hz), particularly at right occipito-parieto-temporal sites. EEG amplitude at the first order interaction term (4 + 5 = 9 Hz) was also increased when the two face parts formed a whole face, but decreased significantly as soon as the half faces were moved apart from each other. This effect was not accounted for by considering the sum of EEG amplitude at the fundamental frequencies. This observation was replicated with fifteen more participants using different frequencies, showing a decrease of amplitude in the integration term when inverted and spatially misaligned half faces were presented. Overall, the presence and behaviour of the interaction terms in the EEG frequency-tagging method provides direct evidence for integration of facial parts into a holistic representation.
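Why an interaction term appears at 4 + 5 = 9 Hz can be shown with a short simulation (an illustrative signal-processing sketch, not the authors' analysis pipeline): a purely linear combination of two tagged signals contains power only at the fundamentals, whereas any nonlinear interaction, here modelled as multiplication, creates intermodulation components at the sum and difference frequencies.

```python
import math

def amplitude_at(signal, freq_hz, fs):
    """Magnitude of the DFT component of `signal` at freq_hz (fs samples/s)."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * freq_hz * i / fs) for i, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * freq_hz * i / fs) for i, x in enumerate(signal))
    return 2 * math.hypot(re, im) / n

fs, dur = 200, 2.0  # 2 s of simulated signal sampled at 200 Hz
t = [i / fs for i in range(int(fs * dur))]
f1, f2 = 4.0, 5.0
s1 = [math.sin(2 * math.pi * f1 * x) for x in t]
s2 = [math.sin(2 * math.pi * f2 * x) for x in t]

linear = [a + b for a, b in zip(s1, s2)]  # independent (summed) responses
nonlin = [a * b for a, b in zip(s1, s2)]  # multiplicative interaction

print(amplitude_at(linear, 9.0, fs))  # ~0: summation creates no 9 Hz term
print(amplitude_at(nonlin, 9.0, fs))  # ~0.5: intermodulation at f1 + f2
```

This is the logic behind using the 9 Hz term as a signature of integration: it cannot arise from the two half-face responses being processed independently.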

"Sensitivity to individual face perception is optimal at 6 Hz: evidence from steady-state face potentials"
E Alonso, B Rossion
Human EEG amplitude at a constant frequency of stimulation (3.5 Hz) is much larger over right occipito-temporal cortex when different individual faces are presented compared to when the same face is presented repeatedly (Rossion & Boremanse, 2011). Here we defined the optimal stimulation frequency ranges for this habituation of the steady-state face potential (SSFP). Five observers viewed 81 s sequences of faces presented at different rates (1 Hz to 16 Hz) while EEG (128 channels) was recorded. There was a significant differential EEG amplitude between the two conditions at right occipito-temporal sites for stimulation frequencies ranging between 3 Hz and 9 Hz, with a peak at about 6 Hz. This observation indicates that individual faces are best discriminated perceptually when the face stimulation oscillates at about 6 Hz. This fundamental frequency value for face individualization corresponds to cycles of about 165 ms, a time value corresponding to the latency of the early face identity adaptation effect found on the peak of the face-sensitive N170. Hence, while the relation between the N170 and the phenomenon reported here is still unclear, these observations support the view that the human brain requires about 165 ms to process individual facial information efficiently.

"Neuropsychological evidence for a functional dissociation between holistic perception in the Navon hierarchical letter task and holistic perception of individual faces"
T Busigny, J Barton, R Pancaroglu, R Laguesse, G Van Belle, B Rossion
Many studies have shown that acquired prosopagnosia is characterized by impairments in holistic/configural perception. However, it is not clear whether this holistic perception impairment is general or can be specific to faces. Here we tested four cases of acquired prosopagnosia (PS, LR, GG and RG) who have preserved object recognition, and one patient (RM) whose prosopagnosia is part of a more general object recognition impairment (visual agnosia). Interestingly, the four pure cases of prosopagnosia had normal global-to-local interference as measured in the Navon task, their perception of small letters being influenced by the (in)congruent larger letters. However, they did not have a face inversion effect, a whole-part face advantage, or a normal composite face effect. In contrast, RM did not show an effect in any of the tests, including the Navon task. These observations indicate that, although holistic perception as measured in the Navon paradigm can be correlated with holistic face perception in normal viewers (Hills & Lewis, 2009; Gao et al., 2011), the two can be functionally distinguished. As a result, pure prosopagnosia may be associated with a selective impairment of holistic individual face perception, in contrast with prosopagnosia in the context of visual agnosia.

"Parallel streams for recognition of face versus non-face objects in human temporal lobe"
S Nasr, L Ungerleider, R Tootell
Previous fMRI studies have revealed two face-selective patches within the temporal lobe: the Fusiform Face Area (FFA) and the Anterior Temporal Face Patch (ATFP). However, their distinctive roles in face recognition remain unclear. Therefore we measured fMRI activity during 1-back recognition of computer-generated faces and houses (n=14), relative to a 1-back dot-location task, using identical stimuli. As controls, we tested the effects of: 1) face and house inversion and 2) face contrast-reversal; both factors are known to decrease recognition. The facial recognition task produced higher activity confined to FFA and ATFP, relative to the dot-location task. Based on multiple criteria, fMRI activity in ATFP was more specifically related to recognition, compared to FFA. The otherwise-equivalent house recognition task activated a distinctly different set of areas, including those selective for scenes (e.g. the Parahippocampal Place Area, PPA) and also a previously undescribed patch within the anterior temporal lobe (distinct from ATFP). Again, anterior patches showed more recognition-related activity than posterior ones (e.g. PPA). These results suggest that recognition of faces and houses is mediated by two parallel processing streams, extending from the posterior through the anterior temporal lobe. In both streams, more anterior areas are more specifically related to recognition.

"Natural vision is geotopic"
M Dorr, P Bex
Our models of visual processing are based on experimental paradigms that are fundamentally unlike natural conditions. Therefore, we studied visual perception while subjects freely watched a Blu-ray nature documentary as their gaze was tracked (1000 Hz). In a 4AFC task, they were required to locate occasional gaze-contingent (120 Hz) contrast increments (Gaussian, 2x2 deg, 600 ms), centred 2 deg from the current gaze position. Contrast was incremented in one of six spatial frequency bands of a Laplacian pyramid in real time (overall system latency 14-22 ms). Contrast sensitivity was much lower under natural than standard conditions, and target localizations were tightly linked to the direction and timing of eye movements. The direction of small saccades was correlated with correct and incorrect responses; large saccades, however, led to target localization errors in the opposite direction. These results suggest that responses were based on the location of the target in the scene at the end of the trial, rather than its position relative to the fovea during the trial. Thus, targets are not perceived in retinotopic coordinates, but assigned to objects that are encoded in geotopic coordinates; this distinction could not be made with traditional methods and highlights the importance of naturalistic stimuli.
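The band-limited contrast increment can be sketched in one dimension (an illustrative sketch of the Laplacian-pyramid idea under simplified assumptions, not the authors' real-time 2-D implementation; `blur` here is a crude stand-in for a proper Gaussian pyramid level): split the signal into a low-pass part and a band (the Laplacian residual), rescale the band, and recombine.

```python
def blur(signal, passes=4):
    """Crude Gaussian-like low-pass: repeated [1, 2, 1]/4 smoothing."""
    out = list(signal)
    for _ in range(passes):
        out = [(out[max(i - 1, 0)] + 2 * out[i] + out[min(i + 1, len(out) - 1)]) / 4
               for i in range(len(out))]
    return out

def increment_band_contrast(signal, gain):
    """Split into low-pass + band (Laplacian-style), rescale the band, recombine."""
    low = blur(signal)
    band = [x - l for x, l in zip(signal, low)]
    return [l + gain * b for l, b in zip(low, band)]

# gain = 1 reconstructs the signal exactly (low + band = signal);
# gain > 1 boosts contrast in that frequency band only.
sig = [0, 0, 1, 0, 0, 1, 1, 1, 0, 0]
same = increment_band_contrast(sig, 1.0)
boosted = increment_band_contrast(sig, 1.3)
```

A full Laplacian pyramid repeats this split at successively coarser scales, which is what allows the increment to target one of six spatial frequency bands.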

"Training the eye: perceptual and motor learning of motion direction"
S Szpiro-Grinberg, M Spering, M Carrasco
GOAL. Perceptual learning studies have shown that motion direction discrimination can be trained and that improvement is limited to the trained direction. In early visual pathways, there is a close link between the neuronal control of smooth pursuit eye movements and motion perception. Can training modulate both pursuit eye movements and perception? METHOD. We used a perceptual learning paradigm to examine training effects on pursuit and on a direction estimation task. Stimuli were random-dot kinematograms (75% motion coherence drifting at 10°/s). Observers tracked the stimulus with their eyes and then estimated the motion direction by adjusting the direction of an arrow. During the experiment's first and last days, observers were tested on directions deviating from the horizontal axis by 0, ±1, ±3 or ±6°. On the intermediate three days, observers trained on a subset of these angles. RESULTS. Training improved pursuit quality: pursuit latency decreased and initial velocity, initial acceleration, and steady-state gain increased. Simultaneously, training caused a shift in the perceived motion direction: for all angles it increased perceived deviation from horizontal. These results provide the first demonstration that training can substantially improve pursuit eye movements and alter perception, and suggest that training pursuit may affect direction perception.

"Simultaneous adaptation to pursuit eye movement and retinal motion: Recalibration of pursuit estimates is based on speed and may be spatiotopic"
T C Freeman, J R Davies
Smooth pursuit is normally made across a stationary scene and so produces predictable retinal motion. Repeated exposure to a new relationship is known to alter perceived stability during pursuit. Two mechanisms have been proposed: (1) low-level adaptation of pursuit and retinal-motion signals feeding the perceived-stability stage; (2) recalibration of eye-velocity estimates, determined by the new relationship. To prevent low-level retinal adaptation, we presented test stimuli in a different retinal location from the adapting stimulus. Adaptation comprised pursuit (+P) to a moving target, paired with retinal motion of -2P, -1P, -0.5P, 0P, +0.5P or +1P. Perceived stability was measured using a nulling technique that determined the point of subjective stationarity (PSS) for a moving test stimulus viewed during pursuit. We found that PSSs described an inverted U centred on R=0P. This can be accounted for by a recalibration mechanism, so long as the encoded relationship between pursuit and retinal motion is based on speed not velocity. Low-level adaptation of the pursuit signal cannot explain the inverted U; moreover, we found no evidence of phantom retinal velocity aftereffects. Intriguingly, preliminary results suggest greater adaptation when R is presented below the test stimulus compared to above. Adaptation mediated by the recalibration mechanism may therefore be spatiotopic.

"Visual perception during double-step saccades"
E Zimmermann, D Burr, M C Morrone
The perception of space changes drastically at the onset of saccadic eye movements, as visual objects are misperceived, appearing compressed towards the saccade target. We investigated perisaccadic mislocalization in a classical double-step paradigm with a 15 deg vertical saccade, followed immediately by a 15 deg horizontal saccade. The saccade targets were presented briefly (60 ms each) and turned off again before onset of the first saccade, so the second saccade in the sequence was driven by the memorized position of the second saccade target. We flashed a 2-deg dot for one frame at various times around execution of the saccade sequence. When the dot was flashed around the onset of the first saccade in the double-step sequence, stimuli were mislocalized towards the first saccade target (as with single saccades). However, stimuli flashed at the time of the second saccade were not mislocalized. When we introduced a temporal gap between the presentation of the first and the second saccade target, perisaccadic compression was observed for both first and second saccades. Overall, the data indicate that compression occurs at the onset of pre-planning of a sequence of saccades, and is not related to the actual saccadic movement.

"Suppressing saccadic suppression"
M Wexler, T Collins
The displacement of a target during saccades is very difficult to detect compared to the same displacement during fixation (Bridgeman et al., 1975, Vision Research, 15, 719-722). This phenomenon, called saccadic suppression of displacement (SSD), is often supposed to be the basis of spatial constancy during saccades. However, two conditions have been found that alleviate SSD: a post-saccadic temporal blank (Deubel et al., 1996, Vision Research, 36, 985-996), and a target shape change (Germeys et al., 2010, Journal of Vision, 10(10):9). These conditions are remarkable because they decrease thresholds without adding any geometrical information. We have found two more conditions that alleviate SSD without adding geometrical information: a task-irrelevant target step perpendicular to the saccade, while the subject's task is to report the direction of a simultaneous saccade-parallel step; and the addition of a background periodic pattern, with the target step a multiple of the spatial period. These manipulations show that saccadic suppression of displacement is the exception rather than the rule.

"Object structure determines saccade planning but not space representation"
L Lavergne, K Doré, M Lappe, C Lemoine, D Vergilino-Perez
Research on saccades using spatially extended objects has shown that the displayed object structure (one long or two short objects) induces differential two-saccade sequences, in which the second saccade either explores the currently fixated object or targets a new object, leading to different planning. Indeed, exploring saccades depend on object size, independently of its position, which is the main parameter for saccades targeting an object. Moreover, exploring saccades are preplanned before the eyes reach the to-be-explored object, whereas targeting saccades are updated as a function of object position just before execution. Recently, we provided new evidence for this dissociation using the saccadic adaptation paradigm. A systematic modification of one of the two parameters induced selective saccadic adaptations, with no transfer between the two saccade types. However, using a paradigm in which a to-be-localized bar was flashed around the onset of either saccade of the sequence, we showed that peri-saccadic compression, i.e. systematic localization errors towards the saccade endpoint, did not depend on object structure but was affected by future saccade planning. This suggests that, whatever the saccade type, different signals such as the upcoming saccade motor feedback and parallel motor planning affect space representation in visual maps.

"Do the doors of perception open periodically? Evidence from pathology and drug-altered states"
J Dubois, R Vanrullen
Does perception rely on discrete samples from the sensory stream? Visual trailing is an intriguing pathological disturbance of motion perception, scarcely noticed by vision scientists (unlike akinetopsia), which could yield insights into this debate. Most LSD users have experienced it: a series of discrete stationary images is perceived in the wake of otherwise normally moving objects. The disturbance may also follow the ingestion of some antidepressant or antiepileptic prescription drugs, and affect certain patients with brain damage or neurological disorders. The discrete and repetitive nature of visual trails could represent the perceptual manifestation of an underlying periodic neural process: this periodicity could be of motor origin, or it could be the consequence of a failure of motion computation mechanisms coupled with inhibitory processes; another possibility is that it points to a general oscillatory process in perception. The lack of quantitative data for this phenomenon prevents discriminating between these speculative accounts. We report an attempt at collecting both qualitative and quantitative data by means of an online survey of more than 300 past LSD users; participants' responses allow us to conclude that the neuronal periodicity responsible for this phenomenon lies in the beta (15-20 Hz) range.

"Nonselective properties are immune to the standing wave of invisibility"
K Evans, J Wolfe
Nonselective properties of scenes (e.g. scene "gist") are thought to be processed in the feed-forward sweep of information processing (Evans, Horowitz & Wolfe, 2011). If so, nonselective properties should be immune to the standing wave of invisibility illusion (SWI). SWI occurs when a central stimulus, in counterphase with flanking stimuli, is persistently masked by those flankers. It is thought to reflect blocking of reentrant, feedback processing. It can result in the invisibility of the central stimulus for the majority of the time over many seconds. We used SWI to mask real natural and urban scenes. We find that invisible scenes can be categorized as either natural or urban at levels significantly above chance. On 33% of the 4.2-second trials, the central complex scene was rated as completely invisible, but when asked to guess the nonselective property of the unseen scene observers were 65% correct (p = 0.000024). This result dissociates the awareness of a scene from the analysis of aspects of its 'gist'. It is consistent with claims that gist can be processed in a feed-forward manner while awareness involves reentrant processes that can be blocked by SWI.

"The Attentional Blink reveals the quantal nature of conscious perception"
R Marois, C Asplund, D Fougnie, J Martin
A fundamental debate in the field of perception is whether consciousness is an all-or-none or gradual phenomenon. Much of the controversy stems from the use of subjective methods of measuring awareness that directly probe the participants' conscious state (e.g. Sergent & Dehaene, 2004; Nieuwenhuis & de Kleijn, 2011). Here we used instead a mixture modeling approach (Zhang & Luck, 2008) to independently estimate the probability of consciously perceiving a target, and the precision at which the target is perceived, in a paradigm well known to reveal limits of conscious perception, the attentional blink (AB, Raymond et al., 1992). The AB refers to the profound impairment in detecting the second of two targets presented serially among distractors when that second target (T2) occurs within about half a second of the first (T1). We found that the probability of consciously perceiving T2 was strongly affected by the SOA between T1 and T2 but, remarkably, the precision at which that target was perceived remained unaffected. This was the case for both simple (colors) and complex (faces) stimuli, and across AB paradigms (RSVP and skeletal ABs). These results clearly suggest that conscious perception, as probed in the AB paradigm, is all-or-none.
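The mixture-modeling approach of Zhang & Luck can be illustrated with a small sketch (not the authors' code): response errors are modeled as a mixture of a von Mises distribution centred on the target (its concentration giving the precision of perception) and a uniform guessing component (its weight giving the probability of not consciously perceiving the target), fit here by a coarse grid-search maximum likelihood. The simulated parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_errors(n, p_perceive=0.7, kappa=8.0):
    # Response errors (radians): von Mises around the target on perceived
    # trials, uniform random guesses on the remaining trials.
    perceived = rng.random(n) < p_perceive
    err = rng.uniform(-np.pi, np.pi, n)
    err[perceived] = rng.vonmises(0.0, kappa, perceived.sum())
    return err

def fit_mixture(err):
    # Coarse grid-search maximum likelihood for (p_perceive, kappa).
    best, best_ll = (None, None), -np.inf
    for p in np.linspace(0.05, 0.95, 91):
        for k in np.linspace(0.5, 20.0, 40):
            # von Mises density; np.i0 is the modified Bessel function I0.
            vm = np.exp(k * np.cos(err)) / (2 * np.pi * np.i0(k))
            ll = np.log(p * vm + (1 - p) / (2 * np.pi)).sum()
            if ll > best_ll:
                best, best_ll = (p, k), ll
    return best

p_hat, kappa_hat = fit_mixture(simulate_errors(2000))
```

The key property exploited in the abstract is that the two parameters are estimated independently: an attentional-blink manipulation could change `p_hat` (probability of conscious perception) while leaving `kappa_hat` (precision) untouched.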

"Retro-attention: influencing conscious vision after the stimulus is gone"
C Sergent, V Wyart, F J M Ruby, C Tallon-Baudry
Several theories of consciousness postulate that secondary top-down influence from parieto-frontal areas to visual areas plays a critical role in conscious vision. To test this hypothesis, we used exogenous attention to experimentally trigger top-down modulations of visual areas at various times before or, critically, after visual presentation. In a series of experiments we asked participants to report the presence or orientation of briefly flashed targets consisting of Gabor patches at low contrast, producing 60% detection on average. On each trial, a single target was presented either to the left or to the right of fixation. Involuntary shifts of attention towards or away from the target location were induced by brief visual transients at various times around target presentation. In accordance with classical results, when attention was attracted towards the target location before target onset, detection and orientation discrimination of the target were improved. Critically, we showed here that attracting attention towards the target location within 200 ms after target onset could still improve participants' sensitivity in target detection and categorization. These results suggest that conscious perception builds up over several hundred milliseconds after visual presentation and that secondary influence by the attentional system could play a decisive role in how we perceive a stimulus.

"Access vs. phenomenal consciousness: an empirical approach"
M Zehetleitner, M Rausch
Post-decisional wagering, confidence ratings and ratings of the clarity of visual experience have been discussed as measures of visual awareness. According to Block (2002), the first two ratings can be related to access consciousness (AC), and the latter rating to phenomenal consciousness (PC). Here, we investigated whether there is empirical support for this conceptual distinction between AC and PC. In three experiments, observers performed a masked single-item visual discrimination task followed by two ratings, one related to AC and one to PC (Experiment 1), both to AC (Experiment 2), or both to PC (Experiment 3). At medium presentation times (25-106 ms), AC-ratings for correct responses were significantly higher than PC-ratings, i.e., observers became confident about the correctness of their responses at shorter presentation times than those at which they reported a clear-cut experience of the stimulus. Between two subsequent AC ratings, and between two subsequent PC ratings, there were no significant differences. A logistic mixed-model regression revealed that PC-ratings explained significant variance in task performance over and above AC-ratings. These findings empirically support a systematic difference between ratings related to PC and ratings related to AC.

"From perception to conception: how the meaning of visual objects emerges over time"
A Clarke, K I Taylor, B Deveurex, B Randall, L K Tyler
Recognising visual objects involves translating the visual input into a meaningful representation. Little is known about the timing or nature of the transition from perceptual to semantic knowledge. We address these issues in an object naming task, using linear regression between single-trial MEG data and perceptual and semantic variables. Semantic variables were based on feature statistics derived from a distributed, feature-based model, with shared features (e.g. has eyes) informative about object category and distinctive features (e.g. has a hump) necessary for object identification. We predicted (a) a processing transition from perceptual to semantic over time, with (b) earlier processing of shared than distinctive information which (c) increasingly involved more anterior temporal regions. Initial perceptual effects after 70 ms were localised to visual and posterior ventral temporal cortex, before rapid semantic effects related to shared features which were localised throughout ventral temporal cortex. Later effects (post-200ms) related to distinctive semantic-features, necessary for object identification, engaged the anterior temporal lobes. In conclusion, meaningful object information is rapidly extracted from the visual input and progresses from reflecting the general, shared features of an object, relevant for categorisation, to its more specific attributes necessary for unique identification.

"Mastering abstract visual concepts with a miniature brain"
A Avargues-Weber, A G Dyer, M Giurfa
Sorting objects and events into categories and concepts is a fundamental cognitive capacity. Abstract concepts, such as 'same' or 'different', amongst others, bind objects irrespective of their perceptual similarity, and are essential in human cognition. We found that the miniature brain of honeybees rapidly learns to master simultaneously two abstract concepts, one based on spatial relationships (above/below and right/left), and another based on the notion of difference. Bees that learned to classify visual targets using this dual concept transferred their choice to unknown stimuli if these complied with both concepts: their components presented the appropriate spatial relationship and differed from one another. This finding provides excellent opportunities for understanding how cognitive processing is achieved by simple neural architectures, and which aspects of concepts are uniquely human achievements.

"Shared visual representations in humans and monkeys"
M Cauchoix, M Fabre-Thorpe, D Fize
Although animal conceptual abilities have been demonstrated at several levels of abstraction, their underlying brain mechanisms are largely unknown. Understanding concept formation mechanisms is necessary to evaluate how homologous such abilities are in animals and humans. Macaque monkeys are able to form 'non similarity-based concepts' (Lazareva & Wasserman, 2008), for example categorizing a large variety of animal pictures among other objects and scenes (Fabre-Thorpe, Richard, & Thorpe, 1998). However, the performance achieved in such superordinate categorization tasks could rely on low-level visual processes that use learned statistical regularities of the photographs; natural scene statistics have been shown to support accurate object and scene categorisations by artificial systems (Oliva & Torralba, 2001). In the present study, monkeys were able to categorize flashed novel scenes as containing an animal regardless of the (natural or man-made) scene background in which animal or man-made objects were alternately embedded. In such a task, artificial systems using scene statistics failed. Importantly, humans and monkeys exhibited highly similar behavioural biases regarding object/context congruency: overall, the results suggest a large overlap of visual superordinate category representations in humans and monkeys.

"Animal or Dog? Vehicle or Car? The answer lies in more information not attention"
M Poncet, L Reddy, M Fabre-Thorpe
Studies have shown that complex visual scenes can be categorized at the superordinate-level (e.g., animal/non-animal or vehicle/non-vehicle) with minimal attention. Reaction times are longer for performing basic-level rapid-categorization of natural scenes, such as dog/non-dog or car/non-car, and might thus involve attention. This hypothesis was tested in the current study with a dual-task paradigm in which subjects performed a basic-level categorization task either alone (single task condition) or concurrently with an attentionally demanding letter discrimination task (dual task condition). If basic-level categorization does not require attention then performance in the single (attention available) and dual task (attention engaged in the letter task) conditions should be comparable. Our results indicate that basic-level categorization of biological (dog/non-dog) and man-made (car/non-car) stimuli can be performed remarkably well even when attention is not fully available. However, categorization at the basic-level entailed longer stimulus presentation times than at the superordinate-level, reflecting an increase in the amount of visual information required to access object representations at the basic-level. Thus, accessing basic- vs. superordinate-level object representation requires more information uptake but can still be performed in the near-absence of attention.

"Learning the long way round: Action learning based on visual signals unavailable to the superior colliculus is impaired"
M Thirkettle, T Walton, K Gurney, P Redgrave, T Stafford
Learning a novel action requires associating volitional movements with consequent sensory events signalling a surprising outcome. We have suggested that the neural substrate of this action-outcome learning exploits phasic signals from dopaminergic (DA) neurons within the basal ganglia [Redgrave & Gurney, (2006). Nature Reviews Neuroscience, 7(12), 967-75]. Neuroanatomical evidence suggests that phasic DA signals are triggered by direct projections from the superior colliculus (SC). Therefore we hypothesised that stimuli available to visual cortex but unavailable to the SC may be less effective in supporting action acquisition. We tested this using a novel motor learning task and reinforcing signals defined by either collicularly available luminance information, or a cortically available signal solely detectable by the short-wavelength cone photoreceptors and therefore invisible to the collicular pathway. Surprisingly, participants were able to learn using reinforcing signals unavailable to the SC. Although equivalent numbers of reinforcing signals were triggered in the collicular and cortical conditions, action acquisition was slower with the short-wavelength stimuli, demonstrating that learning of actions was less efficient. We conclude that luminance signals via the SC are the most efficient, but not exclusive, reinforcer of novel actions.

"How fast is rapid visual recognition memory?"
G Besson, M Ceccaldi, M Didic, E J Barbeau
Investigations into the neural correlates of recognition memory have shown that there are two major sources of "recognition signals": the anterior visual ventral stream (aVVS; i.e., perirhinal cortex) and the hippocampus. Evidence from intracerebral electrophysiological recordings suggests that the aVVS supports familiarity, a rapid context-free recognition signal (difference between familiar and unfamiliar stimuli starting ~250 ms), while the hippocampus supports recollection, a slow context-rich recognition signal requiring longer processing time (difference starting ~320 ms). Nonetheless, it remains controversial whether the aVVS can support recognition memory alone. Using a novel paradigm, the SAB (Speed and Accuracy Boosting procedure), a Go/No-Go task with a response deadline 600 ms after stimulus onset (boosting speed) and audio feedback for each response (boosting accuracy), we found that recognition of famous among unknown faces occurs behaviourally at 390 ms. Similar results were obtained for both abstract pictures and objects. Considering the minimum ~130 ms required to generate a motor response (Kalaska & Cramond, 1992, in the macaque), it is unlikely that the processing speed for recognition of this context-free material is supported by the hippocampus. These results support the idea that rapid recognition can be based on the aVVS.

"Visual recognition memory: A double anatomo-functional dissociation"
E Barbeau, J Pariente, O Felician, M Puel
There is an ongoing debate regarding the respective roles of anterior subhippocampal structures and the hippocampus in recognition memory. Here, we report a double anatomo-functional dissociation observed in two brain-damaged patients, FRG and JMG. Both suffered from complete destruction of left MTL structures. In the right hemisphere, however, FRG sustained extensive lesions to the hippocampus sparing anterior subhippocampal structures, while JMG suffered from the reverse pattern of lesion, i.e., extensive damage to anterior subhippocampal structures but a preserved hippocampus. FRG was severely amnesic and failed all recall tasks involving visual material, but exhibited normal performance on a large battery of visual recognition memory tasks. JMG was not amnesic and showed the opposite pattern of performance. These results strongly support the view that right anterior subhippocampal structures are a critical relay for visual recognition memory in humans.

"Effects of invisible flankers on invisible adaptor"
C Ho, S-H Cheung
The strength of early adaptation has been shown to be reduced by binocular suppression and crowding [Blake et al, 2006, PNAS, 103(12), 4783-4788]. Such reduction was explained as an effect of visual awareness on adaptation. Here we investigated whether flankers could further weaken adaptation when stimuli were rendered perceptually invisible by continuous flash suppression. Four normally sighted observers viewed an adapting grating (4× contrast threshold; 2.5° diameter; 2 cyc/deg) presented to the upper visual field of their non-dominant eye at 10° eccentricity for 5 s. The adaptor was surrounded by four high-contrast flankers (8× contrast threshold; 2.7° center-to-center distance) in the crowded conditions. Perceptual visibility of the adaptor (and flankers) was modulated by presenting dynamic noise to their dominant eye. Contrast thresholds for two-interval forced-choice detection were measured with test gratings in the same or orthogonal orientation. The strength of the orientation-specific threshold-elevation aftereffect was reduced when the adaptor was flanked. More importantly, this attenuating effect of flankers was observed even when both the adaptor and flankers were suppressed from awareness. Flanker interference thus reduced the strength of early adaptation regardless of visual awareness. Such interference with neural activity early in visual processing, at the site of adaptation, is likely a bottom-up phenomenon.

"Transient target signals reduce crowding, transient flanker signals do not"
J Greenwood, P Cavanagh
Crowding is the breakdown in object recognition that occurs in cluttered visual scenes. Given the predominant two-stage model of crowding, where object features are detected veridically prior to being pooled, can we restore access to the veridical features and abolish crowding? We suggest that transient visual signals can achieve this. Observers (n=4) judged the orientation of a target Gabor, flanked by four Gabors (2 deg. centre-to-centre separation) at 10 deg. eccentricity. Midway through each 400 ms trial, stimulus elements could 'blink' off for 10-80 ms before returning. These 'blinks' were applied either to the whole array, the target only, or the flankers only. In the whole-array and flankers-only conditions, orientation thresholds were 3-6 times higher than uncrowded thresholds. However, when the target alone blinked, thresholds were restored to near-uncrowded levels. The strong crowding in the flankers-only condition demonstrates that target-flanker similarity (e.g. in temporal frequency) does not mediate this effect. Nor does position cueing suffice, as flashing a ring around the target gave little benefit. Rather, varying the blink duration suggests it is the separable onset of the target that matters. We propose that onset transients allow the correct features to be 'tagged' to their correct locations, thereby avoiding crowding.

"Dichoptic suppression of flanking stimuli breaks crowding"
J Cass, S Dakin, P Bex
Visual crowding refers to the profound impairment in target identification when a peripheral target is presented in the context of nearby flanking elements. It is particularly strong at low spatial frequencies. We sought to determine whether the deleterious effects of crowding may be reduced by dichoptically suppressing flanking stimuli from awareness. Target and flanking stimuli were narrowband-filtered Sloan letters whose spatial frequency (SF) content was independently manipulated, peaking at 1.8 or 7.4 c.p.d. Using LCD shutter glasses, the target letter was presented to a single eye 8.7 degrees above or below fixation. Flanking stimuli with identical phase but distinct spatial frequency content were presented on alternate frames to opposite eyes at visuotopically overlapping locations. Subjects identified the target letter and then reported how many high-SF flanking elements they observed (0-4). Consistent with previous findings, under rivalry high-SF flankers perceptually dominated low-SF flankers in the opposite eye. Critically, low-SF target identification performance strongly correlated with the number of high-SF flankers perceived on any given trial, regardless of ocular condition. That high-SF dichoptic suppression is capable of breaking crowding implies that: (i) crowding is contingent upon conscious awareness of flanking elements; and (ii) crowding mechanisms are likely to succeed those of dichoptic suppression.

"Crowding is immune to the pre-saccadic shift of attention"
C Morvan, P Cavanagh
Previous studies have shown that an attentional cue reduces crowding (e.g. Yeshrun & Rashal, JVis, 2010). In a series of experiments we examined whether the attentional shift that precedes a saccade (Deubel & Schneider, Vision Research, 1996) also affects crowding. We presented a crowded display in the periphery and had subjects saccade to its center by flashing a saccade cue prior to presenting the display. The display was removed just before the eyes landed so the visual information was only collected from the visual periphery. We found no improvement in the saccade condition compared to the control fixation condition, leaving performance and the critical spacing unchanged. These results indicate that the pre-saccadic attentional shift does not add to the effect of ordinary cueing in relieving crowding.

"Long range grouping affects crowding"
B Sayim, P Cavanagh
Target stimuli in the periphery are harder to discriminate when flanked by similar items, an interference called crowding. For example, identifying a target letter is compromised when it is flanked by close-by letters, but not when the letters are outside a certain region around the target (the critical spacing). Here, we show that items far outside the critical spacing still modulate performance. Observers were presented with an array of three horizontally arranged letters with the target letter at the center. Stimuli were presented randomly to the left or right of fixation. On the contralateral side, a single letter was presented at the same distance from fixation as the target. The primary task was to indicate the presence of the target letter at the central position in the array. The secondary task was to indicate whether the single letter was the same as the central letter (to ensure attention to both positions). Sensitivity in the primary task was higher when the single letter was the same as the target letter than when it was different. Hence, target repetitions far outside the critical spacing reduced crowding. We suggest that this result is due to grouping processes that precede crowding.

"When text forms a texture, grouping induces crowding and slows reading"
S Rosen, D Pelli
In order to recognize objects we combine visual features. Sometimes we combine features from multiple objects. This allows us to see texture but prevents us from recognizing the individual objects. Unwanted feature combination across objects is called "crowding." Using dissimilar target and flanker objects eliminates crowding. Since reading speed is limited by crowding, using text that alternates in color from letter to letter (black-white-black) should increase reading speed. However, recent findings dash this hope. Text is a texture, a spatial pattern. Does adding black-white alternation to this pattern strengthen feature combination across objects? Here, observers identify a peripheral target flanked by two letters along each of four radii. Crowding is strong when target and flankers have the same color, and weak when target and flankers have opposite colors. When the target and all eight flankers alternate in color, crowding is again strong. Using alternating-color text strengthens across-object feature binding enough to outweigh the unbinding effect of dissimilarity. Alternating-color text groups as strongly as single-color text. We conclude that alternating color and single color are equally good patterns and are equally effective in causing grouping and crowding. As long as text remains a texture, we cannot improve reading speed.

"Optimal encoding of interval timing in expert percussionists"
R Arrighi, M Cicchini, D Burr
Jazayeri and Shadlen (Nature Neuroscience, 2010) recently reported that when human observers reproduce time intervals drawn from different distributions, production times are strongly affected by the statistics of past trials, exhibiting a systematic regression towards the mean. They explained and modelled their data with a performance-optimizing Bayesian model. To investigate how expertise in temporal tasks modulates the way context is integrated into current temporal estimates, we repeated their study with expert musicians (drummers or string musicians) and non-musical controls. While non-musicians and string musicians showed a strong regression, drummers maintained near-veridical performance with virtually no regression to the mean. To assess how much this difference was due to group differences in temporal sensitivity, we measured temporal precision in the three groups by means of a bisection task. We find a strong correlation between sensitivity in the bisection task and regression towards the mean across groups. We find that the model that best fits the data is a revised version of the Jazayeri and Shadlen model in which the prior considers only the first two statistical moments of the past observations. We also successfully simulate the data with a simple, biologically plausible, gain-control model.
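The regression-to-the-mean prediction of such a Bayesian model can be sketched as follows. This is an illustrative toy, not the authors' fit: the prior is a Gaussian built from the first two moments of past intervals, the estimate is the posterior mean given a Gaussian-noise measurement, and all numerical values are hypothetical.

```python
import numpy as np

def bayes_reproduction(measured, sigma_m, past_intervals):
    # Prior from the first two moments (mean, variance) of past intervals.
    mu, var = np.mean(past_intervals), np.var(past_intervals)
    # Posterior mean of Gaussian prior x Gaussian likelihood: a weighted
    # average that pulls noisy measurements toward the prior mean.
    w = var / (var + sigma_m ** 2)  # weight on the sensory measurement
    return mu + w * (measured - mu)

intervals = np.array([0.6, 0.7, 0.8, 0.9, 1.0])  # sample intervals, seconds
# Low sensory noise (cf. drummers) vs. high sensory noise (cf. controls):
precise = bayes_reproduction(intervals, sigma_m=0.02, past_intervals=intervals)
noisy = bayes_reproduction(intervals, sigma_m=0.20, past_intervals=intervals)
```

The sketch reproduces the qualitative group difference reported above: with low measurement noise the estimates stay near-veridical, while with high noise they regress strongly toward the mean of the distribution.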

"Kinetic-depth reversals: some paths are easier than others"
A Pastukhov, V Vonau, J Braun
Visual perception builds on prior experience of the visual world. For example, a planar flow is often perceived as a moving volume ("kinetic depth effect"). Studying spontaneous and forced reversals of illusory kinetic depth in human observers, we found the timing of reversals to be highly selective and to reflect the physical plausibility of each transformation. Spontaneous transitions, in which illusory depth and rotation reverse together, occur almost exclusively when the shape is depth-symmetric. When reversals of illusory rotation and illusory depth are dissociated, they reveal very different dependencies on stimulus phase: reversals of illusory motion are inversely proportional to motion speed, while reversals of illusory depth depend on depth symmetry in an all-or-nothing fashion. A simple formula describes individual and joint probabilities of these reversals. Why should depth and rotation differ so dramatically? We suggest the most plausible explanation lies in the disparate physical plausibility of transforming depth and rotation. Taken together, these results show an even more pervasive role of prior experience than previously thought: it is not just the illusion of volume itself, but also the transition from one illusory volume to another that is conditioned by prior experience of physical transformations in the visual world.

"A new method for measuring binocular rivalry: objective measurement of rivalry suppression without subjective reporting of perceptual state"
D Alais, M Keetels, A Freeman
Incompatible monocular images trigger binocular rivalry, a series of perceptual alternations in which one image is visible at a time and the other suppressed. Rivalry studies rely on subjective reports, with observers monitoring their own changing perceptual state. Apart from being subjective, the criterion for dominance is an uncontrolled variable. To objectively quantify rivalry suppression, we measured monocular contrast sensitivity to brief probes at random times during extended rivalry viewing. The probed eye's state is unknown, but we assume probes occur variously during dominance and suppression. The psychometric function (PMF) linking performance to probe contrast rose steeply, then shallowly, then steeply again. This was well described by a weighted average of two PMFs, allowing separate dominance and suppression PMFs to be recovered. We used signal detection theory to show that weights for each function were defined by the relative dominance of the monocular stimuli. To verify the recovered dominance PMF, we compared it with non-rivalrous probe detection and found near-identical PMFs. Suppression strength, the ratio between the recovered dominance and suppression PMF means, closely matched suppression measured conventionally using self-triggered probes during subjectively reported dominance and suppression. We therefore reproduce conventional rivalry results using robust objective methods.
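The decomposition into dominance and suppression components can be illustrated with a short sketch (the Weibull form, thresholds, and mixing weight are assumed values for illustration, not the authors' fits): the observed psychometric function is a weighted average of two underlying functions, which reproduces the steep-shallow-steep shape described above.

```python
import numpy as np

def weibull(c, alpha, beta, gamma=0.5):
    """Weibull psychometric function for a 2AFC task (guess rate gamma)."""
    return gamma + (1.0 - gamma) * (1.0 - np.exp(-(c / alpha) ** beta))

def rivalry_pmf(c, w, alpha_dom, alpha_sup, beta):
    """Observed performance: probes land during dominance with probability w,
    so the measured PMF is a weighted average of the two underlying PMFs."""
    return w * weibull(c, alpha_dom, beta) + (1.0 - w) * weibull(c, alpha_sup, beta)

contrast = np.logspace(-2, 0, 50)
# Suppression raises the threshold; the ratio of the two alphas (here 5x)
# corresponds to the suppression strength discussed in the abstract
p = rivalry_pmf(contrast, w=0.5, alpha_dom=0.05, alpha_sup=0.25, beta=2.0)
```

Fitting such a mixture to probe data allows the separate dominance and suppression functions to be recovered without any subjective report of perceptual state.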

"Perceptual grouping during rivalry depends on eye-based rivalry alternations"
S Stuit, C Paffen, M Van Der Smagt, F Verstraten
During binocular rivalry, perception alternates between dissimilar images presented dichoptically. Here we ask whether rivalry occurs independently across the visual field; more specifically, whether rivalry instigated in different hemifields is independent. We use grouping cues known to affect binocular rivalry: joint predominance increases for identically oriented targets (Alais and Blake, 1999, Vision Research, 39, 4341-4353). Targets with identical orientations were presented (1) to the same or a different eye, or (2) to the same or a different hemifield (left or right of fixation). The results show that joint predominance depends heavily on whether the rival targets are presented to the same eye, but not on whether targets are presented to the same or different hemifield(s). Therefore, rivalry instigated in each of the two hemifields appears to be independent: grouping does not affect rivalry when rival targets are presented to different hemifields. Interestingly, this finding concurs with the notion that attentional resources for visual information are independent for the two hemifields (Alvarez and Cavanagh, 2005, Psychological Science, 16, 637-643). Our findings imply that higher-level grouping cues, argued to reflect higher-level modulation of binocular rivalry, in fact act at a relatively low processing stage: the stage of eye-specific processing.

"Hand transport violates Weber's law - but only with visual feedback and hand relative coding"
N Bruno, V Ferrari, M Bertamini
According to a recent report, the visual coding of size for grasping does not obey Weber's law (Ganel et al., 2008, Current Biology, 18, R599-601). This surprising result has been interpreted as evidence for a fundamental difference between vision-for-perception, which needs to compress a wide range of physical objects to a restricted range of percepts, and vision-for-action as applied to the much narrower range of graspable and reachable objects. To further test this interpretation we studied the precision of hand transport using aiming tasks that varied in the degree of hand-relative vs object-relative visual coding, in the availability of visual feedback during the action, and in the involvement of memory. Speed-precision tradeoffs (Schmidt et al., 1979, Psychological Review, 86, 415-451) were controlled. Results indicate that Weber's law holds in all conditions except when the task enforces hand-relative coding and includes visual feedback during the action. These findings suggest caution before we can conclude that actions violate fundamental psychophysical principles.
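The contrast between the two regimes can be stated schematically (made-up numbers for illustration only, not data from the study): under Weber's law the just-noticeable difference (JND) grows in proportion to stimulus magnitude, so the Weber fraction JND/size is constant, whereas the reported violation corresponds to a JND that stays constant across object sizes.

```python
import numpy as np

sizes = np.array([20.0, 40.0, 60.0, 80.0])  # hypothetical object sizes, mm

jnd_weber = 0.05 * sizes                 # Weber's law: JND = k * size
jnd_constant = np.full_like(sizes, 2.0)  # reported violation: JND fixed at 2 mm

weber_fraction = jnd_weber / sizes         # constant across sizes (k = 0.05)
violating_fraction = jnd_constant / sizes  # falls as object size grows

print(weber_fraction, violating_fraction)
```

Plotting JND against size thus discriminates the two accounts directly: a line through the origin for Weber's law, a flat line for the reported violation.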

"Proprioception makes an invisible hand visible"
K C Dieter, B Hu, R Blake, D C Knill, D Tadin
Wave your own hand in front of your eyes - what do you perceive? Certainly you both feel and see your hand moving. This multisensory experience is characterized by consistent pairings of component sensations: vision and proprioception. How might our brains adapt to these reliable co-occurrences of sensory inputs? Specifically, we asked what happens when only one of the normally paired inputs is present. Participants waved their own hand in front of their eyes in total darkness and were asked to assess their visual experience. Subjective ratings indicated that participants frequently experienced visual sensations of motion when waving their own hand, but did not experience these sensations when the experimenter waved his hand. We objectively assessed these visual percepts by measuring eye movements. Our hypothesis was that participants who experienced a visual percept could smoothly pursue it, while those without a visual percept could not. Results show smoother eye movements (smaller distance between points, and smaller standard deviation of eye movements) for participants with visual percepts than for those without. Synaesthetes, who are hypothesized to have increased multi-modal connectivity, experienced the strongest subjective visual percepts, and in some cases were able to track as smoothly in total darkness as in light.

"Genetic correlates of anticipatory smooth pursuit"
J Billino, J Hennig, K Gegenfurtner
Anticipatory pursuit offers an excellent model to study expectations of upcoming events and the capability to prepare appropriate motor actions. Here we were interested in anticipatory pursuit in healthy adults with supposed differences in prefrontal dopaminergic activation. The COMT val158met polymorphism modulates enzyme activity in that met alleles lead to less active dopamine degradation in prefrontal cortex and accordingly to higher dopamine levels. We investigated anticipatory pursuit in 99 subjects and determined the individual genotypes. Subjects were trained to pursue a moving target (17.6 deg/s) and were given the opportunity to build up a stable movement expectation. Subsequently, in 50% of the trials the target was invisible during the first 500 ms. We analyzed the anticipatory response during a time window of 100 ms immediately before target onset. We found a significant effect of genotype on anticipatory eye velocity (F(2, 99)=4.29, p=.016, η²=.08). Met158 homozygotes (n=28) showed an eye velocity more than twice as high as Val158 homozygotes (n=21), i.e. 1.8 deg/s versus 0.8 deg/s. Eye velocity in heterozygotes (n=50) lay in between at 1.3 deg/s. Our results provide evidence that interindividual differences in anticipatory smooth pursuit in healthy adults are associated with genetic parameters which modulate prefrontal dopaminergic activity.

"Selective visual motion blindness in developmental dyslexics with DCDC2 gene alteration"
M C Morrone, M G Cicchini, M Consonni, F Bocca, S Mascheretti, P Scifo, C Marino, D Perani
Deficits of visual motion perception have often been reported in developmental dyslexia (DD), but the effects are often small and present only in a sub-population of subjects. We studied motion perception in a selected population of 10 dyslexic subjects with an alteration of the gene DCDC2 (crucial for neural migration during embryogenesis) and correlated the visual perceptual deficit with morphological changes of white matter tracts. All subjects showed a severe deficit in contrast thresholds for discriminating drift direction: 9 out of 10 patients failed to discriminate motion direction for gratings of spatial frequency higher than 2 c/deg (8 Hz, 200 ms exposure); no deficit was observed for static stimuli. Diffusion tensor imaging revealed significant alteration of fractional anisotropy (FA) in several white matter tracts representative of the dyslexia phenotype. Crucially, we show significant correlations between FA and the magnitude of the motion deficit. Of particular interest was the correlation with FA in the inferior longitudinal fasciculus, at two positions: one just anterior to MT+ (R +40 20 0; L -36 -26 2) and the other slightly more ventral (R +58 -40 -22). These results indicate that the observed motion deficits may reflect abnormalities of the magnocellular pathway and of the innervation of MT+ in DD.

"Individual differences in the construction of meaning from noise and their relation to the development of visual hallucinations"
S J Cropper, T R Partos
This study investigated unusual visual experiences in a non-clinical sample in relation to the personality dimension of schizotypy, to understand how visual hallucinations might develop. In a novel approach to this issue, observers were presented with calibrated images of faces of varying visibility embedded in pink noise and were required to respond whether they saw a face. The faces were upright or inverted, phase-reversed, and famous or unknown in a series of experiments. The results in all conditions indicated that high positive schizotypy was associated with an increased tendency to perceive complex meaning in images composed purely of random visual noise. Individuals high in positive schizotypy seemed to be employing a looser criterion (response bias) to determine what constituted a 'meaningful' image, while also being significantly less sensitive at the task than those low in positive schizotypy. Findings also indicated that differences in perceptual performance for individuals high in positive schizotypy were not related to increased suggestibility or susceptibility to instruction. Instead, the observed reductions in sensitivity indirectly implicated subtle neurophysiological differences (along with differences in cognitive style, indicated by response biases) associated with the personality dimension of schizotypy that are theoretically pertinent to the continuum of schizophrenia and hallucination-proneness.

"Perceptual learning in amblyopes: the good, the bad and the ugly"
L Kiorpes, P Mangal
Perceptual learning is gaining recognition as a potentially beneficial treatment for adults with amblyopia. However, it is unclear how general the training benefit is. We investigated this question in amblyopic non-human primates (Macaca nemestrina). We used a random dot motion training stimulus that required integration of information over space and time, and also required threshold-level direction discrimination at a range of directions and dot speeds. We then tested for improvement in performance on a (different) motion task, a Glass pattern discrimination, Vernier acuity, and contrast sensitivity, post-training. We also assessed any changes in performance of the untrained, fellow eye. Four amblyopic monkeys and one visually-normal control were tested. The results showed that 1) at least 20,000 learning trials were needed for substantive improvements to occur; 2) learning in most cases transferred to the non-practiced motion task; 3) contrast sensitivity improved in one-half of the cases but was poorer after training in the other cases; 4) form discrimination performance was typically poorer after training; 5) Vernier acuity and fellow eye performance were mostly unaffected. These results suggest that further evaluation is needed before it can be assumed that perceptual learning is a broadly effective and beneficial treatment for amblyopia.

"Effects of unilateral brain damage on visual detection and discrimination: Evidence from perceptual thresholds for luminance contrast, texture, motion and colour"
C Grimsen, M Prass, F Brunner, S Kehrer, A Kraft, S Brandt, M Fahle
Unilateral lesions in human primary visual cortex result in contralesional hemianopia, but it remains unclear whether early visual perception is impaired if primary visual cortex is intact and only adjacent visual cortices are damaged. We hypothesized that perceptual thresholds for certain visual sub-modalities (such as texture, motion and colour differences) are impaired in patients with unilateral damage to mid-level visual areas. Perceptual thresholds were obtained for each visual field quadrant individually from 30 patients with intact visual fields and 60 controls using a spatial four-alternative-forced-choice method. Visual detection (task I) and discrimination (task II) were performed in four visual sub-modalities (luminance contrast, texture, motion and colour perception). Despite normal thresholds for detection of luminance contrast in the visual quadrant retinotopically corresponding to the site of lesion, some patients could not detect and/or discriminate the target in other sub-modalities or had significantly increased thresholds. These patients are described in detail and the behavioural deficit is related to the lesion location. A large proportion of patients achieved normal thresholds. This finding is discussed in terms of plasticity and compensation after damage to mid-level visual areas in the human brain.

"Poor visual discrimination of size but not orientation in children with dyskinetic cerebral palsy: failure to cross-calibrate between senses?"
M Gori, F Tinelli, G Sandini, G Cioni, D Burr
Multisensory integration of spatial information occurs late in childhood, around 8 years (Gori et al, Curr. Biol., 2008). Before that age, touch dominates size discrimination and vision dominates orientation discrimination: we suggest that this dominance reflects sensory calibration. If so, the lack of an opportunity for calibration should have direct consequences for children born without the sense that should calibrate. Following this prediction, we showed that visually impaired children have reduced acuity for haptic orientation, but not size, discriminations (Gori et al, Curr. Biol., 2010). The complementary prediction is that children with severe motor disabilities should show reduced precision in visual size, but not orientation, judgments. We measured visual orientation and size discrimination thresholds in children with dyskinetic cerebral palsy and normal intelligence quotients. Visual orientation discrimination was very similar to that of age-matched typical children, but visual size discrimination thresholds were far worse. This strongly supports our cross-sensory calibration hypothesis: when the more robust haptic sense of size is unavailable to calibrate vision, visual size discrimination is impaired, in the same way as the more robust visual sense of orientation is necessary to calibrate the haptic one. When either of these is compromised, the sensory system that they calibrate is also compromised.

"Studying visual motion illusions in dyslexics supports Magnocellular deficit hypothesis"
S Gori, E Giora, L Ronconi, R Milena, S Franceschini, M Molteni, A Facoetti
The visual magnocellular-dorsal (M-D) deficit hypothesis is gaining increasing support in the study of developmental dyslexia. However, several experimental findings supporting the M-D deficit hypothesis can also be interpreted as a consequence of a perceptual noise exclusion deficit. In our experiments, we measured sensitivity to two visual motion illusions shown to specifically involve the M-D pathway. The results show that dyslexics need more luminance contrast to perceive the motion illusions, although contrast sensitivity for these specific stimuli (measured by simple stimulus detection) was equal in the two groups. The individual data also confirmed that these two motion illusions are very sensitive in distinguishing dyslexics from controls, suggesting that our tasks could become an important tool for early identification of children at risk for dyslexia. Our result is the first to support the M-D deficit hypothesis in dyslexia by measuring sensitivity to visual motion illusions, without involving any signal-from-noise extraction mechanism.

"Visual plasticity in hemianopic children with perinatal brain lesions"
F Tinelli, R Arrighi, G M Cicchini, M Tosetti, G Cioni, M C Morrone
It has been shown that unconscious visual functions can survive lesions of the optic radiations and/or primary visual cortex (V1), a phenomenon termed "blindsight" (Weiskrantz et al., 1974). Studies on animal models (cats and monkeys) show that the age at which the lesion occurs determines the extent of residual visual capacities. Unfortunately, much less is known about the functional and underlying neuronal repercussions of early cortical damage in humans. We measured sensitivities in several visual tasks in children with congenital or acquired unilateral brain lesions that completely destroyed the optic radiations. Results show that residual unconscious visual processing occurs in the blind field of children with early brain lesions. These children could compare the position of stimuli presented in the blind and spared fields with near-normal thresholds, and correctly performed orientation and motion-direction discrimination in the blind field, albeit with ten-fold higher contrast thresholds. Children with late brain lesions performed at chance in all visual tasks. Functional imaging data suggest that residual vision in congenital subjects is mediated by a massive reorganisation of the visual system within the spared hemisphere, which develops maps of the ipsilateral as well as the contralateral visual hemifield.

"Eye movements show the Poggendorff illusion"
M Morgan, D Melmoth
Saccadic eye movements made to the intersection of a pointer with a distant landing line were examined to see whether they showed the same biases as perceptual judgments (Melmoth et al., 2009, "The Poggendorff illusion affects manual pointing as well as perceptual judgements", Neuropsychologia, 47, 3217-3224). Adding an abutting vertical inducing line making an angle of 45 deg with the pointer led to a large bias in the same direction as the perceptual Poggendorff illusion. Latency and other dynamics of the eye movements were closely similar to those recorded for a control task in which observers made a saccade from the pointer to an explicit target on the landing line. Further experiments with inducing lines flashed briefly at various times during the saccade latency period showed that the magnitude of the final saccade bias was affected by inducer presentation at any time during the movement planning phase, right up to the moment of movement initiation. We conclude that the neural mechanisms for extrapolation can feed into the control of eye movements, without obvious penalties in timing and accuracy.

"Rapid motor activation by illusory contours"
T Schmidt, A Greenwald
Whereas neurophysiological studies have shown that illusory contours are signaled in early visual areas at very short latencies, it has been concluded from behavioral studies using backward masking procedures that illusory-contour stimuli have to be presented unmasked for at least 100 ms to be perceived and responded to. Here, we employ a response priming paradigm to demonstrate that illusory contours masked at much shorter intervals can impact behavior. In four experiments, participants responded to the shape or orientation of illusory-contour (IC) and real-contour (RC) targets preceded by IC and RC primes at stimulus-onset asynchronies (SOAs) between 35 and 129 ms. Priming effects in response times and error rates were similar for IC and RC primes and remained strong under visual masking. The effect was fully present in the fastest responses. Participants did not respond to the inducing elements instead of the illusory contours: priming effects were unaffected when the inducing elements changed between prime and target. We conclude that illusory contours can rapidly trigger associated motor responses, consistent with the notion that ICs are extracted during the first wave of processing traversing the visuomotor system.

"Illusory rotation in the haptic perception of a moving bar"
A M Kappers, Z M Kluit
Haptic matching of the orientation of bars separated by a horizontal distance leads to large systematic deviations (e.g., Kappers & Koenderink, Perception, 28, 781-795, 1999). A bar on the right side has to be rotated clockwise in order to be perceived as parallel to a bar at the left side. This finding leads to the following intriguing question which we investigated in this study: Will a bar moving from left to right in a fixed orientation be perceived as rotating counterclockwise? Blindfolded subjects had to touch a bar that moved from left to right or from right to left while it was rotating clockwise or counterclockwise with different speeds or did not rotate. For each trial they had to decide whether the rotation was clockwise or counterclockwise. From psychometric curves fitted to the data, we could determine that the results were consistent with the findings in the static case: A bar moving from left to right has to rotate clockwise in order to be perceived as non-rotating (and vice versa). In other words, a translating bar causes the illusory perception of a rotation.

"Orientation profiles of the trapezium and the square-diamond geometrical illusions"
J Ninio
In previous work, "orientation profiles" (describing how the strength of an illusion varies with its orientation in the plane) were determined for several variants of the Zöllner and the Poggendorff illusions (e.g., Ninio and O'Regan, 1999, Perception, 28(8), 949-964). The study is extended here to two other classical illusions. Illusion strengths were determined for 10 subjects at 16 orientations on 4 variants of the trapezium illusion and 8 variants of the square-diamond illusion. The trapezium illusion was maximal when the bases of the trapeziums were horizontal, and minimal when they were vertical. The oblique sides, but not the bases, were essential to the illusion, suggesting the existence of a common component between the trapezium and the Zöllner illusion. The square-diamond illusion is usually presented with one apex of the diamond pointing towards the square. I found that when the figures were displayed more symmetrically, the illusion was reduced by one half. Furthermore, it is surpassed, for all subjects, by an illusion that goes in the opposite direction, in which the diagonal of a small diamond is underestimated with respect to the side of a larger square.

"The material-size illusion - The influence of material properties on haptic perception of volume"
M Kahrimanovic, W Bergmann Tiest, A Kappers
Numerous studies, in different modalities, have shown that perception of a particular object property can be influenced by other properties of that object, resulting in perceptual illusions. For example, subjects perceive, both visually and haptically, an extent filled with lines as longer than the same but unfilled extent, suggesting that surface texture has an effect on size perception. The present study investigated the influence of surface texture, thermal conductivity and temperature on haptic perception of the volume of objects that could fit in one hand. Blindfolded subjects were asked to explore pairs of cubes differing in their material properties and to select the one with the larger volume. The results showed that, counterintuitively, a smooth cube was perceived as being significantly larger than a rough cube of the same volume. Moreover, cubes with a higher thermal conductivity were perceived as larger than cubes with a lower thermal conductivity, and colder or warmer cubes were perceived as larger than cubes neutral in temperature. These results revealed that haptic volume perception is not veridical and that it is influenced by different material properties. The observed biases could be explained by cognitive processes involved in the integration of information from different peripheral receptors.

"The Ambiguous Corner Cube and Related Figures"
K Brecher
There are many remarkable two-dimensional ambiguous figures. Several striking instances of simple geometrical solids or more complex three-dimensional ambiguous objects also exist. Most of these examples can be perceived in only two different ways: e.g., as up or down (2-D Necker cube); as convex or concave (ambiguous tri-wall, intaglio, hollow mask); or as figure or (back)ground (Rubin's Vase). We have studied examples of 2-D images and 3-D objects that have three or more interpretations. A solid cube with one corner missing is one example that elicits (at least) three different interpretations. Most observers (viewing the object monocularly) report that the missing corner cube is most readily perceived as a small cube protruding from a larger cube; the next strongest impression is the veridical one; and the weakest, a cube sitting within three walls. Question: Why is the protruding cube interpretation preferred? A possible answer is that the natural world contains far more convex than concave objects, so it is the most "likely" interpretation for the human visual system. In this presentation we will show several new examples of ambiguous figures that can elicit even more than three interpretations.

"Welcome to wonderland: The apparent size of the self-avatar hands and arms influences perceived size and shape in virtual environments"
S Linkenauger, B Mohler, H Bülthoff
According to the functional approach to the perception of spatial layout, angular optic variables that indicate extents are scaled to the body and its action capabilities (see Proffitt, 2006, POPS). For example, reachable extents are perceived as a proportion of the maximum extent to which one can reach, and the apparent sizes of graspable objects are perceived as a proportion of the maximum extent that one can grasp (Linkenauger, et al., 2009, JEP:HPP; Linkenauger, Ramenzoni, & Proffitt, 2010, Psychol Sci). Therefore, apparent sizes and distances should be influenced by changing scaling aspects of the body. To test this notion, we immersed participants into a full cue virtual environment. Participants' head, arm and hand movements were tracked and mapped onto a first-person, self-representing avatar in real time. We manipulated the participants' visual information about their body by changing aspects of the self-avatar (hand size and arm length). Perceptual verbal and action judgments of the sizes and shapes of virtual objects' (spheres and cubes) varied as a function of the hand/arm scaling factor. These findings provide support for a body-based approach to perception and highlight the impact of self-avatars' bodily dimensions for users' perceptions of space in virtual environments.

"The visual double-flash illusion: temporal, spatial and crossmodal constraints"
D Apthorp, L Boenke, D Alais
When two objects are flashed at one location in close temporal proximity in the visual periphery, an intriguing illusion occurs whereby a single flash presented concurrently at another location appears to flash twice (the visual double-flash illusion: Chatterjee et al., 2011; Wilson & Singer, 1981). Here we use, for the first time, a two-interval forced-choice method to investigate objectively the temporal limits of the effect, which have not previously been explored. We found the effect was maximal at the shortest separation of 20 ms between the two inducing flashes, and decreased approximately linearly after durations of around 100 ms, controlling for task performance without the inducing flashes. The illusion persisted with more complex objects such as Gabors or even faces. If the second inducing flash differed from the first in orientation or size, the effect was abolished. Playing concurrent brief tones with the flashes also abolished the illusion. We discuss the results in terms of rapid early connections and feedback in visual cortex.

"Attention and parallel perceptual decision making: what small set-sizes can tell"
T U Otto, P Mamassian
Targets in visual search are often defined by a conjunction of two visual features. For distinct sensory signals presented within different modalities like vision and audition, we have recently shown that conjunctions are detected by parallel decision processes that are coupled by a logical AND (Otto & Mamassian, 2011, VSS). A characteristic of this AND-coupling is that response latencies are slowed down compared to the corresponding single signals. Specifically, the latency distribution follows the maximum function of the distributions with single signals. Here, we tested feature conjunctions using rectangles of different color and orientation. Based on distributions with 3,000 responses per condition, we show that the same AND-coupling as with multi-sensory signals occurs when the features defining the conjunction are presented as rectangles at separate locations. Interestingly, when both features are co-localized at one location, the resulting latency distribution is virtually identical to the slower of the single-feature distributions, which is orientation in our experiment. Thus, the visual system has powerful means to detect feature conjunctions faster than expected from AND-coupling. We discuss the possibility that feature-based attention to the target color allows the conjunction task to be solved by a single decision process based on orientation.
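The AND-coupling rule, in which the conjunction latency distribution follows the trialwise maximum of the two single-signal latency distributions, can be sketched with a toy race simulation (the latency parameters are assumptions for illustration, not the experiment's data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical single-signal response latencies in ms (e.g. colour and orientation)
rt_colour = rng.normal(400.0, 40.0, n)
rt_orient = rng.normal(430.0, 45.0, n)

# Logical AND coupling: a response requires both decisions to finish,
# so the conjunction latency is the slower of the two on every trial
rt_conjunction = np.maximum(rt_colour, rt_orient)

# The conjunction distribution is slower than either single-signal distribution
print(rt_colour.mean(), rt_orient.mean(), rt_conjunction.mean())
```

By contrast, the co-localized result described above corresponds to the conjunction latencies matching the slower single-signal distribution itself (here, orientation) rather than the trialwise maximum, which is strictly slower still.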

"Transient attentional enhancement of central stimuli: A role for competition in temporal attention"
A Wilschut, J Theeuwes, C N L Olivers
Transient attention refers to the rapid rise and subsequent decline of performance found in spatial selection, as typically studied by means of peripheral cuing tasks. The present study addresses the question of whether the transient pattern is restricted to tasks requiring spatial reorienting, or is a more generic index of temporal attention, by using a paradigm in which only one location was to be attended. In four experiments targets were always presented at fixation while a central cue could appear at different cue-target intervals (SOAs). In addition, we varied the presence of target-surrounding distractors. A rapid performance enhancement was found that was maximal at around 100 ms post cue onset. This enhancement was followed by a decline, but only when there was competition from distractors. Performance was more sustained when the target was the only item in the display. The results support the idea that the time course of the transient attention response is more generic, as long as multiple items are competing for representation.

"Measuring the spatial resolution and cognitive capacity of visual selective attention"
G Sperling, I Scofield, A Hsu
The spatial resolution of visual functions typically is measured by presenting sinewave stimuli of different spatial frequencies and determining the ability of observers to perform a visual function (such as discriminating the presence or the orientation of a grating) as a function of its spatial frequency. The spatial resolution of attention is similarly measured by requiring observers to attend to alternate strips of a stimulus (squarewave) in which a target to be detected is presented, for example, in one of the even strips while false targets (foils) which must be ignored are presented in the odd strips (Gobell, Tseng, and Sperling, Vision Research, 2004). As the spatial frequency of the requested squarewave distribution of attention increases, performance declines. Based on the spatial resolution of attention so measured, predictions can be made for observers' abilities to distribute attention in arbitrarily complex patterns. These predictions are quite accurate until the requested pattern of attentional distribution becomes too complex, at which point a cognitive limit becomes apparent. The spatial resolution limits and cognitive limits of visual distributions of attention are measured for different observers and are demonstrated for complex required distributions of attention.

"Modulation of the orientation-induced gamma response by visuospatial attention"
L Koelewijn, A Rich, S Muthukumaraswamy, K Singh
The benefits of spatial attention on visual processing are widely known, and are associated with increases in neural activity in many visual areas. Stimulus-induced gamma oscillations are thought to play a functional role in visual perception, but have only been found to increase with attention in higher visual areas. Using MEG, we investigated how induced oscillatory responses to a stimulus optimal for inducing gamma in visual cortex change with spatial attention. In separate blocks, subjects traced the orientation of either a parafoveal grating patch or a small line at fixation. Both were always present, but rotated independently and unpredictably up to 40 degrees around one of four angles (0, 45, 90, and 135 degrees from vertical). As expected, we found an attention-related decrease in alpha power (5-15 Hz) during the task, but we additionally observed a sustained attention-related increase in early visual cortex gamma power (30-70 Hz), with minimal differences amongst orientations. Source localisation of this gamma oscillation reflected the spatial distribution of attention, shifting more superior and lateral with attention to the grating compared to the central line. These results demonstrate for the first time that early visual cortex gamma oscillations are modulated by visuospatial attention in humans.

"How do we ignore the irrelevant? - Electrophysiological correlates of the attentional set"
A Wykowska, H Müller, A Schubö
Visuo-spatial attention can be guided to task-relevant stimulus attributes in the visual field, owing to top-down control mechanisms that allow for ignoring what is irrelevant in a given situation (e.g., Wykowska and Schubö, 2010, Journal of Cognitive Neuroscience, 22(4), 640-654; Wykowska and Schubö, 2011, Journal of Cognitive Neuroscience, 23(3), 645-660). The present study examined the electrophysiological correlates of such a top-down set for particular feature dimensions. Using the ERP method and a visual search paradigm with centrally presented post-display probes, we investigated the effects of relevant search target (shape) and irrelevant (color) features on the subsequent processing of probes that shared features with either the target, the irrelevant singleton, with both the target and the irrelevant singleton, or with the neutral distractors. A search-locked N2pc was observed for the target but not for the irrelevant singleton. Behavioral results and the probe-locked ERPs (N1) showed a benefit for probes sharing relevant target features (shape), independently of their color. This pattern confirms previous findings showing that top-down control can override attentional capture, and provides new insights into the ERP correlates of top-down dimensional-set per se.

"Looking in the brain: tracing neural correlates of priority and gaze behaviour on dynamic natural scenes"
J-B C Marsman, M Dorr, E Vig, E Barth, R Renken, F W Cornelissen
Human observers constantly shift their gaze around to obtain new information about the surrounding world. In any given scene or movie, eye movements are typically consistent between observers, yet there are also individual differences in observers' viewing behaviour. We reasoned that such expressions of individuality could be used to trace the neural correlates of the processes that are used to prioritize information and drive human gaze behaviour in natural scenes. We recorded fMRI data and eye movements of observers watching videos of natural scenes. We also derived a measure that indicates the extent to which an individual's gaze behaviour conforms to that of a reference group of observers. First, we show that this measure correlates with cortical activity in specific parietal regions (Temporal Parietal Junction and Precuneus) and V5/MT, implying that activity in these regions is dominant in prioritizing information and driving individual gaze behaviour. Next, we show that our measure has only very low correlations with common measures of image-based saliency. This implies that gaze behaviour in natural conditions is primarily driven by factors that are not easily derived from image information, such as scene semantics and contextual effects, rather than saliency.

"White matter in early visual pathway structures is reduced in patients with macular degeneration"
D Prins, A T Hernowo, H Baseler, T Plank, M W Greenlee, T Morland, F W Cornelissen
Central retinal lesions caused by macular degeneration deprive visual pathway structures of activity. In a multi-center study, we asked whether this is associated with changes in volume of early visual pathway structures. High-resolution anatomical magnetic resonance imaging data were obtained in participants with either the juvenile (JMD) or the age-related variant (AMD) of macular degeneration, as well as age-matched controls. We separated the visual pathway structures from the rest of the brain in T1 MRI volumes. Voxel-based morphometry (SPM8) was used to evaluate statistically the differences between patients and age-matched controls. Comparison of white matter between the groups revealed volumetric reductions in the optic radiations of the patients. This implies that loss of retinal sensitivity results in degeneration of the visual pathway in macular degeneration. Our findings complement recent work showing a decrease in grey matter around the lesion projection zone in visual cortex in AMD, glaucoma and hereditary retinal dystrophies. If these structural changes remain permanent they could limit the success of treatments that aim to restore retinal function.

""Clover Leaf" Clusters: A fundamental organizing principle of the human visual system"
A Brewer, B Barton
INTRODUCTION: One of the most important large-scale organizing principles of visual cortex is the visual field map: neurons whose visual receptive fields lie next to one another in visual space are located next to one another in cortex. As increasing numbers of visual field maps have been defined in human visual cortex, one question that has arisen is whether there is an organizing principle for the distribution of these maps across visual cortex. We investigate the possibility that visual field maps are organized into similar circular clusters, which are replicated across visual cortex, oriented independently of one another, and subserve similar computations within a cluster. METHODS: We measured angular and eccentric retinotopic organization and population receptive fields across visual cortex using fMRI and population receptive field (pRF) modeling (Dumoulin, Wandell, 2008). Retinotopic stimuli consisted of black and white, drifting bar apertures comprised of flickering checkerboards (11° radius). RESULTS/DISCUSSION: We identify multiple new visual field maps across occipital, parietal, and temporal cortex, organized with previously defined visual field maps into visual field map clusters. We propose that these 'clover leaf' clusters are a fundamental organizing principle of the human visual system, extending from low-level to higher-order visual processing areas.

"A TMS study of functional connectivity of early visual cortex in the processing of spatiotemporal regularity"
H Roebuck, P Bourke, K Guo
Our visual system exploits geometrical natural regularities to facilitate the interpretation of incoming visual signals. With a dynamic stimulus sequence of four collinear bars (predictors) appearing consecutively towards the fovea, followed by a target bar with varying contrasts, we have previously found that the predictable spatiotemporal stimulus structure enhances target detection performance and can be processed by V1 neurons [Hall et al, 2010, Vision Research, 50(23), 2411-2420]. However, the contribution of V1 long-range horizontal and feedback connections in processing such spatiotemporal regularity remains unclear. In this study we measured human contrast detection of a briefly presented foveal target that was embedded in the same dynamic stimulus sequence. TMS was used to disrupt V1 horizontal connections in the processing of predictors. The coil was positioned over a cortical location corresponding to the location of the last predictor prior to target onset. Single-pulse TMS at an intensity of 10% below phosphene threshold was delivered 30 or 90 msec after the predictor onset. Our analysis revealed that the delivery of TMS significantly reduced but did not abolish the facilitation effect of the predictors on target detection. This suggests that both horizontal and feedback connections contribute to the encoding of spatiotemporal regularity in V1.

"Cortico-cortical population receptive field modeling"
K V Haak, J Winawer, B M Harvey, S O Dumoulin, B A Wandell, F W Cornelissen
We present functional MRI methods for estimating the cortico-cortical population receptive field (CC-pRF). In analogy to a visual receptive field, the CC-pRF is defined as the spatial filter within one topographically organized area (e.g., V1) that best predicts the activity in another region of the brain (e.g., V2) [Heinzle et al, 2011, Neuroimage, in press; Sejnowski, 2006, in: 23 Problems in Systems Neuroscience, T Sejnowski and JL van Hemmen, New York, Oxford University Press]. The new method builds on conventional population receptive field (pRF) modeling [Dumoulin and Wandell, 2008, Neuroimage, 39(2), 647-660], and computes a parameterized model of the CC-pRF from neural activity only. The CC-pRF method is thus agnostic to the experimental paradigm. Using 3T and 7T imaging, we trace the fine-grained topographic connectivity between distant visual areas, and report quantitative estimates of the CC-pRF size in human visual cortex. The CC-pRF method is non-invasive and can be applied to a wide range of conditions when it is useful to trace the functional connectivity between topographically organized brain regions in high detail.
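The core idea of the CC-pRF, a spatial filter over one topographically organized area whose weighted output best predicts activity elsewhere, can be illustrated with a toy one-dimensional model; the Gaussian filter, sizes, and grid-search fit below are illustrative assumptions, not the authors' implementation:

```python
import math
import random

random.seed(1)

# Toy "V1": 50 units along one topographic dimension, each with a timecourse.
n_units, n_time = 50, 200
v1 = [[random.gauss(0.0, 1.0) for _ in range(n_time)] for _ in range(n_units)]

def gaussian_weights(center, sigma, n):
    """CC-pRF modelled as a Gaussian over the source area's topographic map."""
    w = [math.exp(-0.5 * ((i - center) / sigma) ** 2) for i in range(n)]
    total = sum(w)
    return [x / total for x in w]

def predict_target(center, sigma):
    """Predicted target-area timecourse: weighted sum of source-area units."""
    w = gaussian_weights(center, sigma, n_units)
    return [sum(w[i] * v1[i][t] for i in range(n_units)) for t in range(n_time)]

def corr(a, b):
    """Pearson correlation between two timecourses."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den

# Simulate a "V2" voxel whose true CC-pRF is centred on unit 20 (sigma 3),
# then recover the centre by grid search over candidate CC-pRF models.
v2 = predict_target(20, 3)
best_center = max(range(n_units), key=lambda c: corr(predict_target(c, 3), v2))
assert best_center == 20
```

Because the model is fitted from the two areas' activity alone, with no reference to the stimulus, this sketch shares the paradigm-agnostic character the abstract emphasizes.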

"Contour binding and selective attention increase coherence between neural signals in visual cortex"
A Martin, R Von Der Heydt
Information about an object can be represented by neurons separated by long distances in visual cortex. Synchrony of neuronal firing has been proposed as a way of binding the distant elements of an object together, but synchrony might also result from specific connectivity that enables feature grouping and object-based attention (Craft et al, 2008, J Neurophysiol, 97:4310-26). Here we studied coherence in the local field potential (LFP) with a same-object/different-objects paradigm. Single unit activity and LFPs were recorded from two electrodes separated by 3-10 mm cortical distance in macaque areas V1 and V2. The monkeys maintained fixation while attending to one of several figures with edges presented in the receptive fields of two simultaneously recorded neurons. The edges could either be part of the same or different figures. We found a robust increase of coherence between LFPs with binding (~20%) and an additional increase with attention (~8%). The increased coherence was in a frequency band around 20Hz and lasted as long as 1000ms. We believe that this coherence difference is due to back projections as proposed in the model of Craft et al, a specific network connectivity that is responsible for early grouping of contours and the allocation of selective attention.

"Neural "filling-in"? Increased response in deprived visual cortex when regions adjacent to a glaucomatous scotoma are stimulated"
M Williams, A Rich, A Klistorner, S Graham
Our aim was to examine how a scotoma (blind area) caused by glaucoma affects the response of the visual cortex that normally receives input from this region, specifically during stimulation of the adjacent visual field. Using fMRI, patients were tested monocularly with 16 sec blocks of radially oriented alternating checkerboards placed (1) within the scotoma; (2) at the edges of the scotoma; and (3) across the scotoma, as well as equivalent locations in the other hemifield. The 'good eye' viewing condition acted as an ideal control: the difference between activation through good and affected eyes is an index of the degree of recruitment. As expected, presenting a stimulus to the blind scotoma region resulted in less activation than stimulating the analogous location in the good eye. However, presenting stimuli either from the fovea to the edge of the scotoma, or extending across the scotoma, resulted in significantly greater activation in the cortex representing the blind region than equivalent stimulation in the 'good' eye. These results suggest the visual cortex that loses normal input due to glaucoma 'extrapolates' from information presented around the scotoma. This may be the neural correlate of the 'filling in' that prevents awareness of the scotoma.

"Perceptual learning, roving and the unsupervised bias"
A Clarke, H Sprekeler, W Gerstner, M Herzog
Perceptual learning is reward-based. A recent mathematical analysis showed that any reward-based learning system can learn two tasks only when the mean reward is identical for both tasks [Frémaux, Sprekeler and Gerstner, 2010, The Journal of Neuroscience, 30(40): 13326-13337]. This explains why perceptual learning fails when two differing stimulus types are presented randomly interleaved from trial to trial (i.e. roving), even though learning occurs efficaciously when the stimuli are presented in separate sessions. Hence, the unsupervised bias hypothesis makes the surprising prediction that no perceptual learning occurs when a very easy and a hard task are roved, because of their different rewards. To test this prediction, we presented bisection stimuli with outer-line-distances of either 20' or 30'. In both tasks, observers judged whether the central vertical line was closer to the left- or right-outer line. Task difficulty was adjusted by manipulating the center line's offset. Easy and difficult discriminations corresponded to 70 and 87 percent correct respectively. In accordance with theoretical predictions, subjects failed to learn in this roving task for both bisection-stimulus types. Hence, surprisingly, perceptual learning of a hard task can be disturbed by performing a simple, undemanding task.

"A new perceptual bias reveals suboptimal Bayesian decoding of sensory responses"
T Putzeys, M Bethge, F Wichmann, J Wagemans, R Goris
Much of our understanding of sensory decoding stems from the comparison of human to ideal observer performance in simple two-alternative discrimination tasks. The optimal Bayesian decoding strategy consists of integrating noisy neural responses into a reliable function that captures the likelihood of specific stimuli being present. As only two stimulus values are relevant in a two-alternative discrimination task, the likelihood function has to be read out at two precise locations to obtain a likelihood ratio. Here, we report a new perceptual bias suggesting that human observers make use of a less optimal likelihood read-out strategy when discriminating grating spatial frequencies. Making use of spectrally filtered noise, we induce an asymmetry in the stimulus frequency likelihood function. We find that perceived grating frequency is significantly altered by this manipulation, indicating that the likelihood function was sampled with remarkably low precision. Although observers are provided with prior knowledge of the two relevant grating frequencies on each trial, they evaluate the likelihood of a broad range of irrelevant frequencies. Overall, our results suggest that humans perform estimation of a stimulus variable of unknown quantity rather than evaluation of two known alternatives when discriminating grating spatial frequencies.

"Bayesian decoding of neural population responses explains many characteristics of contrast detection and discrimination"
K May
Contrast thresholds for detecting a signal added to a pedestal generate a "dipper function", which first falls as the pedestal contrast increases from zero, and then rises to give a "near-miss" to Weber's law. The psychometric functions are well-fitted by Weibull functions: the slope parameter, beta, is about 3 for zero pedestal (detection), and falls to around 1.3 with increasing pedestal. All of this can be explained by Bayesian decoding of a population of neurons with Naka-Rushton contrast-response functions [r=rmax*c^q/(c50^q+c^q)], and a rectangular distribution of semi-saturation contrasts, c50, along the log contrast axis. I derive equations that accurately predict the model's performance and give insights into why it behaves this way. For Poisson-spiking neurons, the model's detection psychometric function is a Weibull function with beta equal to the Naka-Rushton exponent, q, which physiologically often takes a value of about 3; for high pedestals, beta is always about 1.3, regardless of the model parameters. As contrast increases from zero towards the bottom of the c50 range, the threshold dips; within the c50 range, Weber's law holds if rmax is constant across the neural population; a shallower/decreasing slope occurs if rmax is scaled to give a fixed response to 100% contrast.
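For concreteness, the Naka-Rushton contrast-response function quoted in the abstract, together with a rectangular (evenly log-spaced) distribution of semi-saturation contrasts, can be written as a small sketch; the particular r_max, c50 and q values are illustrative, not the model's fitted parameters:

```python
def naka_rushton(c, r_max=30.0, c50=0.2, q=3.0):
    """Naka-Rushton contrast-response function r = r_max * c^q / (c50^q + c^q).
    c is contrast in [0, 1]; the parameter values here are illustrative."""
    return r_max * c ** q / (c50 ** q + c ** q)

# At the semi-saturation contrast c = c50, the response is half of r_max.
assert abs(naka_rushton(0.2, r_max=30.0, c50=0.2) - 15.0) < 1e-9

# "Rectangular distribution of semi-saturation contrasts along the log
# contrast axis": c50 values evenly spaced in log-contrast.
c50_values = [10 ** (-2 + 0.1 * i) for i in range(15)]  # ~1% to ~25% contrast

def population_response(c):
    """Summed response of the model neural population at contrast c."""
    return sum(naka_rushton(c, c50=x) for x in c50_values)
```

With q around 3, the detection psychometric function of such a population takes the steep Weibull shape the abstract describes, flattening once the pedestal enters the c50 range.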

"How much of noise masking derives from noise?"
D H Baker, T S Meese, M A Georgeson, R F Hess
In masking studies, external luminance noise is often used to estimate an observer's level of internal (neural) noise. However, the standard noise model fails three important empirical tests: noise does not fully linearise the slope of the psychometric function, masking occurs even when the noise is identical in both 2AFC intervals, and double pass consistency is too low. This implies the involvement of additional processes such as suppression from contrast gain control or increased uncertainty, either of which invalidate estimates of equivalent internal noise. We propose that jittering the target contrast (cf. Cohn, 1976, J Opt Soc Am, 66:1426-1428) provides a 'cleaner' source of noise because it excites only the detecting mechanism. We compare the jitter condition to masking from 1D and 2D (white and pink) pixel noise, pedestals and orthogonal masks, in double pass masking and (novel) contrast matching experiments. The results show that contrast jitter produced the strongest masking, greatest double pass consistency, and no suppression of perceived contrast: just as the standard model of noise masking predicts (and unlike pixel noise). We attribute the remainder of the masking from pixel noise to contrast gain control, raising concerns about its use in equivalent noise masking experiments.

"Brain-Computer Interface for the Blind: The benefits from a computer-based object recognition approach"
M Macé, V Guivarch, C Jouffrais
Brain-Computer Interfaces (BCI) for the Blind rely on the electrical stimulation of neurons to induce visual percepts (phosphenes) despite blindness. Whatever the targeted structure for the electrical stimulation (retina, optic nerve, thalamus or visual cortex), the neural interfaces all share the same limitation regarding the number of implantable electrodes and hence the number of available phosphenes (up to a hundred at most). Progress in the density of electrode matrices has been slow due to technological and physiological factors. With the exception of limited clinical trials, no visual neuroprosthetic device that is usable in daily life has been produced and implanted so far. To circumvent the issue of the phosphene density being too low to represent the visual scene in a meaningful way, we propose to use object recognition algorithms to extract high-level information from the image, and render this information to the blind person via a limited number of phosphenes. In an evaluation based on the simulation of a visual neuroprosthesis via a Virtual Reality helmet, we showed that the number of phosphenes required to successfully perform a reaching task towards different objects at unknown locations could be as low as 9, corresponding to a number of electrodes easily implantable with present-day techniques.

"Writing with the eyes"
J Lorenceau
The eyeballs are controlled by three pairs of antagonistic muscles allowing fast and accurate eye movements toward any target displayed in an observer's visual field. Eye movements are mainly composed of ballistic saccades, which quickly move the gaze from one position to another in order to bring the image of a peripheral visual target onto the fovea, where visual acuity is best, and of pursuit eye movements, which maintain a moving target on the fovea. Although highly flexible, consuming little energy and involved in a large palette of cognitive processes, the oculomotor system is hardly driven by pure volitional control in everyday life: experimental studies indicate that saccadic eye movements are mostly driven and constrained by the structure, salience and meaning of images and, for pursuit, by perceived motion, even when illusory, invisible or auditory. Here, I shall demonstrate that it is possible to drive one's eye movements so as to write and draw at will, despite the lack of a structured image or of a driving target. This new device, extending previous psychophysical studies, opens new scientific issues related to oculomotor control and can provide disabled people with a means to express themselves and communicate in an unconstrained, personal and emotionally rich way.

"Eye muscle proprioceptive manipulation confirms spatial bias in visual attention towards the perceived direction of gaze"
D Balslev, W Newman, P Knox
Extraocular muscle (EOM) proprioception is thought to help locate visual stimuli relative to the head and body to serve visuomotor control. Because 1 Hz rTMS over the EOM proprioceptive representation in somatosensory cortex not only causes visual localization errors [Balslev and Miall, 2008, J Neurosci, 28, 8968-8972], but also introduces a spatial bias in the accuracy of visual detection, it has recently been suggested that eye proprioception plays a role in visual attention [Balslev et al, 2011, JoCN, 23(3), 661-669]. However, this link between eye proprioception and visual attention is indirect; e.g., the effect of somatosensory rTMS on attention could have been caused by a spread of current to adjacent parietal areas. Here we manipulated the EOM proprioceptive signal directly, by passive rotation of one eye [Knox et al, 2000, Investigative Ophthalmology & Visual Science, 41(9), 2561-2565]. This shifts the perceived gaze direction of the viewing eye in the direction of rotation of the rotated, non-viewing eye [Gauthier et al, 1990, Science, 249(4964), 58-61]. We found that among retinally equidistant objects, those presented nearer the perceived direction of gaze were detected more accurately. Thus, eye proprioception not only participates in visual localization for movement control as previously thought, but can also dictate priorities for visual perception.

"Instruction to relax enhances visual search performance by altering eye movements"
D Yates, T Stafford
Anecdotal evidence suggests that we are better at lab-based visual search tasks when we are relaxed, or adopt a more passive strategy. In 2006, Smilek and colleagues sought to demonstrate this effect by giving subjects either passive or active instructions prior to a visual search task [Smilek et al., 2006, Visual Cognition 14(4-8), 543-564]. They found that subjects given passive instructions (i.e. letting the unique item "pop" into mind) performed significantly better compared to subjects instructed to search actively. Smilek concluded that the active instructions slowed performance by interfering with the rapid and automatic mechanisms involved in passive search. However, the results do not allow us to conclude whether passive instructions made the subjects better, active instructions made them worse, or both. In the current study we establish a baseline result by adding a third group of subjects who were given neutral instructions. The results show that subjects adopt an active strategy by default and can be made to improve by simply being instructed to search more passively. Furthermore, eye tracking reveals that the passive instructions lead to systematic differences in the way the subjects search the display. The potential implications for visual search in the real world are discussed.

"Eye movement control during a bimanual, high-speed sensorimotor task: From expertise to championship"
R M Foerster, E Carbone, W X Schneider
Expertise in sensorimotor tasks can be characterized by spatial and temporal regularities of eye movement control. Earlier work from our lab [Foerster et al., submitted] investigated the so-called speed stacking task, a bimanual, high-speed sensorimotor task requiring grasping, moving, rotating, and putting down cups. A major result was that the overall number of fixations within a stacking trial decreased with practice. We asked whether and how these regularities change when experts become champions. For the current study, we recruited the European champion, the World champion, and practiced experts of speed stacking. Experts had been trained previously until they reached a certain performance plateau. While all participants looked at task-relevant locations in a similar order, both champions performed faster and made fewer fixations on the path than the trained experts. More precisely, champions performed only one fixation in two task steps, while experts needed about one fixation for each task step. However, champions performed more fixations in key steps of the task. Fixation duration and fixation rate did not differ between participants. These results indicate that champions of speed stacking may rely to a larger degree on long-term memory and to a lesser degree on sensory information than experts.

"Eye movement control in movies: The effect of visual and narrative continuity across a cut"
Y Hirose, B W Tatler, A Kennedy
Movies play an important role in contemporary society as a medium of communication of information, yet current understanding of how moving images are processed is under-developed. Editorial cuts in movies present specific challenges in scene perception: both eye movements and object memory show that viewpoint changes in movies selectively disrupt the spatial understanding of objects in the scene. Using a reverse angle shot, one type of editorial cut involving a 180-degree camera angle change, we examined the oculomotor consequences of disrupting visual and narrative information by manipulating the camera angle and the position of actors. Reverse angle shots disrupt two sources of visual information (the left-right relationship of the scene elements and the background content), whereas changing the relative position of the actors across a cut produces narrative inconsistency. The results showed that the consistency of visual information (but not narrative structure) affected the spatial allocation of attention immediately following a cut. However, the subsequent inspection duration was influenced by both visual and narrative continuity, and these effects varied across different parts of the scene. The results suggest that the integration of visual and semantic changes into the scene representation takes place on different time courses when watching movies.

"A functional link between image segmentation, lightness perception and eye movements"
M Toscani, K Gegenfurtner
The visual system samples the local properties of objects by moving the eyes around. We have recently shown that these eye movements have an effect on lightness estimation of complex targets (Toscani, Valsecchi & Gegenfurtner, VSS 2011). Here we investigate whether eye movements also affect the analysis of layered image representations, which have been shown to be of great importance for brightness perception (Anderson and Winawer, 2005, Nature). In a first experiment we recorded eye movements during a lightness matching task with a visual stimulus developed to reveal the effect of layered representations. The perceptual effect of image segmentation on perceived lightness was highly correlated with the average luminance sampled by eye movements. In a second experiment we manipulated the observers' fixation strategy using a gaze contingent display in order to demonstrate a causal effect of eye movements on perceived lightness in this context. The luminance of the fixated regions highly influenced lightness perception. Fixating brighter parts of the stimulus led to brighter matches, and vice versa. We propose that the segmentation process drives the eye sampling strategy and that this sampling causally influences the apparent lightness of the stimulus target.

"Spatial and temporal correlations between eye and hand vary as a function of target visibility during rapid pointing"
A Ma-Wyatt, R Laura
As people reach to point or pick up objects, a saccade is typically deployed just before the hand movement is initiated. While it has been generally observed that eye and hand are coordinated during goal-directed movements, the underlying mechanisms remain unclear. We investigated the contribution of target visibility to eye-hand coordination. Participants pointed rapidly to a small, high contrast target presented on a touchscreen; eye movements were recorded throughout each trial. Using a block design, the target was displayed briefly or until touch, at locations eccentric to central fixation (range: 2-12 deg). For brief targets, pointing precision declined with increasing target eccentricity. There was no significant correlation between saccade accuracy and saccade latency at greater eccentricities, but there was a significant correlation at smaller eccentricities. Eye-hand latencies were generally bimodal at smaller eccentricities. For brief targets, final eye position was significantly correlated with finger touch position at greater eccentricities (10-12 deg), but not at smaller eccentricities. However, when the target was present until touch, correlations were comparatively weak between final eye and hand position. These results demonstrate that the coordination of eye and hand movements can be significantly altered in response to changes in target visibility.

"O brother, where art thou? Locations of 1st and 2nd order objects are represented in the same way but at different times, as revealed by single-trial decoding of EEG signals"
R Chakravarthi, T Carlson, J Chaffin, J Turret, R Vanrullen
Objects occupy space. How does the brain represent their location? To answer this question, on each trial we present one of six kinds of first- and second-order (texture) stimuli in the periphery (left/right). Using linear pattern classifiers we evaluate whether the pattern of EEG amplitudes across the scalp at various time points encodes object location. Peak classification performance (accuracy=78%, n=11) occurs at 140 ms after stimulus onset for high-contrast (first-order) stimuli, with above-chance classification as early as 70 ms. Peak performance shifts lower and later for other objects, with identity-based textures (an object made of T's among L's) having the weakest performance (accuracy=58%) at the longest latency (300 ms). Observers' mean reaction times and d-primes are highly correlated with the classifier's peak latency (r=0.99) and peak performance (r=0.93), respectively, across object types. Training the classifier on high-contrast stimuli and testing on other objects reveals that the information used to represent locations is the same for all objects, but this information is available later and in a noisier form for texture objects. These results indicate that textures, while not represented in the bottom-up sweep, are encoded by patterns resembling the bottom-up ones, suggesting a role for feedback mechanisms.
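The time-resolved decoding approach described here — train a linear classifier on the scalp pattern at each timepoint and track where accuracy peaks — can be sketched compactly. The authors' classifier and preprocessing are not specified in the abstract, so this is a minimal numpy-only stand-in (a split-half nearest-class-mean classifier on synthetic "EEG" with an invented signal window); all sizes and parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "EEG": trials x channels x timepoints. Location (left/right)
# is encoded in the channel amplitudes only inside one time window.
n_trials, n_chan, n_time = 200, 32, 50
X = rng.normal(0, 1, (n_trials, n_chan, n_time))
y = rng.integers(0, 2, n_trials)                 # 0 = left, 1 = right
pattern = rng.normal(0, 1, n_chan)               # scalp topography of the effect
signal_window = slice(12, 18)                    # pretend this is ~140 ms
X[y == 1, :, signal_window] += pattern[:, None]

def decode_timepoint(X, y, t):
    """Split-half nearest-class-mean decoding accuracy at one timepoint."""
    half = len(y) // 2
    tr, te = slice(0, half), slice(half, None)
    m0 = X[tr][y[tr] == 0, :, t].mean(axis=0)    # class centroids (train half)
    m1 = X[tr][y[tr] == 1, :, t].mean(axis=0)
    d0 = ((X[te][:, :, t] - m0) ** 2).sum(axis=1)
    d1 = ((X[te][:, :, t] - m1) ** 2).sum(axis=1)
    return float(np.mean((d1 < d0) == y[te]))    # predict nearer centroid

acc = [decode_timepoint(X, y, t) for t in range(n_time)]
peak_t = int(np.argmax(acc))
print(f"peak accuracy {max(acc):.2f} at timepoint {peak_t}")
```

Accuracy hovers at chance outside the signal window and peaks inside it, which is the logic behind reading off a peak latency per stimulus type; the cross-condition generalization in the abstract amounts to computing the centroids on one stimulus type and testing on another.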

"Spatiotemporal object formation: Contour vs. Surface interpolation"
T Ghose, G Erlikhman, P J Kellman
Visual object formation from fragmentary information depends on two complementary processes: a contour interpolation process that interpolates between visible edge fragments and a surface interpolation process that connects similar regions across gaps. It has been suggested that each process can operate in the absence of the other, but this hypothesis has received little experimental study. Here we investigate spatiotemporal object formation (completion across gaps in both space and time) when contour- and surface-based processes are congruent or incongruent. We used the shape discrimination task of Palmer, Kellman & Shipley (JEP: General, 2006, 135, 513-541) to investigate the degree of unit formation. In this paradigm, shape discrimination is enhanced when visible object fragments fulfill the geometric conditions for contour interpolation ("spatiotemporal relatability") relative to control ("non-relatable") displays. In the present study we investigated incongruent conditions: (1) non-relatable displays with coherent surface properties (same color, texture or shading pattern) and (2) relatable displays with fragments having different surface properties (e.g. three different colors; see http://www.sowi.uni-kl.de/wcms/fileadmin/wpsy/public/STI/STI_Surface/STI_Surface.htm ), along with congruent displays. Results showed that shape discrimination performance was completely predicted by unit formation due to relatability of contours and not by surface properties. These results indicate the primacy of contour interpolation in determining object shape.

"Influence of object pose on contour grouping"
J Elder
Models of contour grouping are typically based upon local Gestalt cues such as proximity and good continuation. Recently, Elder et al. [VSS 2010] reported evidence for the involvement of higher-order properties of natural contours. In the present study, we consider specifically tuning to the canonical pose of natural shapes. Observers were asked to detect briefly presented target contours represented as sequences of short line segments, embedded in randomly positioned and oriented noise segments of the same length. We used QUEST to estimate the threshold number of noise elements at 75% correct performance in a yes/no task. Targets were the closed bounding contours of animal shapes drawn from the Hemera database, presented at 4 orientations in interleaved blocks: 0, 90, 180, 270 deg. We note that the statistics of line segment orientation are the same for the 0 deg and 180 deg conditions, and the statistics of first-order and in fact all higher-order shape cues are identical for all conditions. Nevertheless, we found that thresholds were significantly higher for the 0 deg (original pose) condition than for the rotated conditions. This result suggests that contour grouping depends not only upon the natural statistics of object shape but also upon those of object pose.
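QUEST, used above to find the 75%-correct threshold, is a Bayesian adaptive procedure: maintain a posterior over candidate thresholds, place each trial at the current estimate, and update the posterior with the observer's response. The sketch below is a minimal grid-based analogue, not the Watson & Pelli implementation (which uses a Weibull psychometric function); the logistic function, its spread, the trial count and the simulated observer's true threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Candidate thresholds (arbitrary log-intensity units) and a flat prior.
grid = np.linspace(-3.0, 3.0, 241)
log_post = np.zeros_like(grid)

def p_correct(x, t, spread=0.5, guess=0.5):
    """Psychometric function: P(correct) for stimulus x given threshold t.
    At x == t this evaluates to 0.75, the targeted performance level."""
    return guess + (1.0 - guess) / (1.0 + np.exp(-(x - t) / spread))

true_threshold = 0.8                       # hypothetical observer
for _ in range(200):
    # Place the next trial at the current posterior-mean threshold estimate.
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    x = float(np.sum(grid * post))
    # Simulated observer responds stochastically.
    correct = rng.random() < p_correct(x, true_threshold)
    # Bayesian update of the posterior over the threshold grid.
    p = p_correct(x, grid)
    log_post += np.log(p if correct else 1.0 - p)

post = np.exp(log_post - log_post.max())
post /= post.sum()
estimate = float(np.sum(grid * post))
print(f"estimated threshold: {estimate:.2f} (true {true_threshold})")
```

After a couple of hundred trials the posterior mean sits close to the simulated observer's true threshold, which is why adaptive procedures like QUEST need far fewer trials than the method of constant stimuli.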

"3D Object recognition: Where do we look in depth?"
F Cristino, C Patterson, C Leek
Although object recognition has been studied for decades, very little is known about how eye movements interact with the encoding and recognition of objects in 3D. We recorded eye movements in an object recognition task; first while subjects learned a subset of anaglyph stimuli, and then while they were asked to recognise these stimuli during a test phase. Participants performed the task either monocularly or stereoscopically. We compared eye movement patterns against three theoretical models of shape analysis (Saliency, Curvature Maxima/Convexity, Curvature Minima/Concavity). The curvature models were computed directly from the 3D mesh objects. A novel technique was created to extract depth from the eye movement data, allowing us to analyse the spatial distribution of fixations in three dimensions (3D heatmaps). Behaviourally, subjects were more accurate and faster during the recognition task when viewing objects stereoscopically, making fewer saccades and longer fixations. The results showed that the distribution of eye movements in object recognition is: (i) structured and consistent across observers, (ii) similar between the Learning and Test phases, (iii) unchanged when objects were viewed stereoscopically and (iv) best accounted for by the surface concavity model. Indeed, the Saliency model performed no better than chance in predicting the spatial distribution of eye movement patterns.

"Misbinding object location information in visual working memory"
Y Pertzov, N Gorgoraptis, M-C Peich, M Husain
Misbinding visual features that belong to different objects - illusory conjunctions - is known to occur under specific experimental conditions, e.g. brief presentations (Treisman & Schmidt, 1982). A recent series of studies has shown that misbinding of features also happens frequently when visual working memory is loaded (Bays et al, 2009). Here we examine how well observers bind objects to locations under more natural viewing conditions. Several complex objects were presented on a screen, and after a blank interval of one second, they reappeared at random locations. Participants were required to drag the objects to their original locations using a touch-screen, while gaze position was recorded. We found that when 3 or more objects were presented, subjects tended to "switch" the locations of different objects with a significantly higher probability than predicted by chance, especially between objects that had not been fixated recently. Such misbinding showed little age dependence, although overall localization accuracy was significantly lower for older participants. The number of "switches" was abnormally high in patients with lesions in the medial temporal lobes, suggesting that this region might play a role in binding objects to locations even over the short time scales associated with working memory.
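The abstract compares the observed rate of location "switches" to chance. The authors' exact chance model is not stated; one natural baseline — offered here only as an illustration — is the probability that a uniformly random assignment of k objects to the k studied locations contains at least one pairwise location swap, which can be computed exactly by enumerating permutations.

```python
import itertools
from fractions import Fraction

def has_swap(perm):
    """True if the permutation contains at least one 2-cycle,
    i.e. two objects whose locations are exactly exchanged."""
    return any(perm[perm[i]] == i and perm[i] != i for i in range(len(perm)))

def chance_swap_rate(k):
    """Exact probability that a uniformly random assignment of k objects
    to k locations contains at least one pairwise location swap."""
    perms = list(itertools.permutations(range(k)))
    return Fraction(sum(has_swap(p) for p in perms), len(perms))

for k in (3, 4, 5):
    print(k, chance_swap_rate(k))   # 3 -> 1/2, 4 -> 3/8, 5 -> 3/8
```

An observed switch rate would then be tested against this kind of baseline; the finding above is that with 3 or more objects, participants swapped locations more often than such a chance model predicts.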

"ERP evidence that semantic access occurs for objects that are suggested but not perceived on the groundside of a figure"
M A Peterson, J Sanguinetti, J J Allen
A repetition paradigm was used to test whether unseen familiar objects that lose the competition for figural status are nevertheless processed at a semantic level. Observers made object decisions regarding familiar and novel silhouettes. The borders of half of the novel silhouettes (experimental silhouettes) suggested portions of familiar objects on the outside. The borders of the remaining novel silhouettes suggested novel objects on the outside (control silhouettes). Subjects perceived the outsides of all silhouettes as shapeless grounds and were unaware of any potential objects there. On the second presentation of both familiar silhouettes and experimental novel silhouettes, the FN400 ERP component was reduced, ps < .03, indicating that these stimuli accessed semantic representations on both presentations. The FN400 component was not reduced for control silhouettes, ps > .10. These results support the idea that potential objects on both sides of borders can give rise to semantic processing, even in the absence of winning the competition for figural status. The P100 component, initially larger for experimental than control silhouettes, was reduced with repetition for experimental silhouettes but not for control silhouettes, suggesting that the competition for figural status across the borders of experimental silhouettes was reduced with repetition.

© 2010 Cerco Last change 28/08/2011 19:39:08.