Current Research Projects

Visual control of walking to a target

Contributions of visual and non-visual information to control of walking

When walking, there is visual information about self-motion from optic flow, and also nonvisual information from vestibular, proprioceptive, and motor systems. This project uses virtual reality to dissociate optic flow from physical walking, enabling us to measure the relative contributions of these cues to control of walking. [Figure]
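One common way to quantify such contributions, sketched here with illustrative numbers rather than the lab's actual analysis, is to model the walking response as a reliability-weighted average of the visual and nonvisual heading estimates. When virtual reality introduces an offset between optic flow and physical walking, the visual weight can be read off from the response:

```python
# Hedged sketch (assumed linear cue-combination model, not the lab's code):
# response = w * visual_heading + (1 - w) * physical_heading,
# so the visual weight w can be recovered from the measured response.

def visual_weight(response_heading, physical_heading, visual_heading):
    """Infer the visual weight w from a heading response (degrees)."""
    return (response_heading - physical_heading) / (
        visual_heading - physical_heading
    )

# Example: flow is offset 10 deg from the physical target; the walker
# deviates 3 deg toward the flow-specified direction.
w = visual_weight(response_heading=3.0, physical_heading=0.0,
                  visual_heading=10.0)
```

Here a deviation of 3 deg under a 10 deg conflict implies that optic flow accounts for 30% of the heading response.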

Temporal changes in the contribution of optic flow to guiding walking

Some initial results from our lab suggest that the contribution of optic flow to control of walking is not constant, but rather increases over time. This project will measure the goal of the control system at different moments in time, and test whether the goal changes in a way consistent with a Kalman filter model. [Figure]
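To illustrate why a Kalman filter predicts a time-varying contribution, here is a minimal scalar sketch (illustrative parameters, not a fitted model): if the walker starts with a reliable nonvisual estimate of heading, the Kalman gain on each new optic-flow sample starts low and grows toward its steady-state value, so the influence of optic flow increases over time.

```python
# Minimal sketch (assumption, not the lab's model): a scalar Kalman
# filter estimating heading from noisy optic-flow samples. The Kalman
# gain is the weight given to each new flow sample; with a precise
# initial (nonvisual) estimate, the gain rises over time.

def kalman_gains(n_steps, process_var=0.5, flow_var=4.0, prior_var=0.1):
    """Return the Kalman gain on the optic-flow sample at each step."""
    p = prior_var                      # variance of current estimate
    gains = []
    for _ in range(n_steps):
        p += process_var               # predict: uncertainty grows
        k = p / (p + flow_var)         # gain on the optic-flow sample
        p *= (1 - k)                   # update: uncertainty shrinks
        gains.append(k)
    return gains

gains = kalman_gains(10)
```

With these parameters the gain climbs monotonically from about 0.13 toward a steady state near 0.30, qualitatively matching a contribution of optic flow that increases over time.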

Accuracy of walking to a target with limited visual feedback

Many previous studies have investigated the ability to perceive direction of self-motion from optic flow, but there is surprisingly little data on how well we can perceive our direction of self-motion while walking without vision. In this study, we tested accuracy of walking to a target with either no visual feedback at all or only a brief exposure to optic flow. The results are used to predict performance in cue conflict conditions.

Perception of 3D surface slant

Underestimation of slant from texture

A texture gradient can elicit a percept of slant in depth, but perceived slant tends to be less than actual slant. A possible explanation within a Bayesian framework is that texture information is combined with a prior function that is weighted toward frontal slants. These studies test quantitative predictions of this model of slant underestimation.
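The core prediction can be sketched with Gaussian distributions (illustrative numbers, not fitted to data): a likelihood from texture, centered on the true slant, is combined with a prior centered on frontal (0 deg), so the posterior estimate falls between the two and slant is underestimated.

```python
# Hedged sketch of the Bayesian account described above (assumed
# Gaussian likelihood and prior; parameters are illustrative).

def posterior_slant(true_slant, texture_sd, prior_sd):
    """MAP slant for a Gaussian texture likelihood x frontal prior (deg)."""
    # Precision-weighted combination; the prior mean is 0 (frontal),
    # so the estimate is the true slant scaled by the texture weight.
    w_texture = (1 / texture_sd**2) / (1 / texture_sd**2 + 1 / prior_sd**2)
    return w_texture * true_slant

est = posterior_slant(true_slant=40.0, texture_sd=10.0, prior_sd=20.0)
```

With these values the estimate is 32 deg for a 40 deg surface, and the underestimation grows as the texture information becomes less reliable (larger `texture_sd`), which is the kind of quantitative prediction these studies test.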

Structure-from-motion from small and large changes in viewpoint

Motion parallax is a strong cue to 3D slant in depth, but many studies have found that observers cannot make reliable and accurate metric judgments. We varied the amount of viewpoint change in SFM stimuli, and found that increasing the amount of change improved judgments, approaching veridical accuracy.

Role of symmetry in perception of 3D shape

Symmetry and shape constancy across rotations in depth

We have found that symmetry improves 3D shape discrimination. One explanation for this benefit is that symmetry constrains the 3D interpretation of an image, and thereby provides a 3D cue. By this hypothesis, the advantage from symmetry would depend on how a symmetric object is oriented relative to an observer. This study tests whether the benefit of symmetry is orientation-dependent in the predicted way. [Figure]

Symmetry and shape from shading

Shading provides a powerful cue to 3D shape, but shading has ambiguity when the illumination is not known, and some previous studies have found that perceived shape can be biased depending on light source direction. Recent work in computer vision has shown that an assumption of bilateral symmetry can resolve the ambiguity in shading information. This project tests whether symmetry can reduce perceptual biases in shape from shading.

Perception of 3D shape from monocular cues

Shape from highlights

When shiny curved surfaces are illuminated, highlights provide a potential cue to 3D shape. Previous studies have found that highlights can allow accurate 3D shape discrimination from monocular images. When a shiny object is viewed binocularly, highlights have binocular disparity that specifies locations behind the surface, which could complicate use of highlights as a shape cue. This project tests whether highlights remain effective in binocular viewing, and whether this depends on factors like surface texture or color.

Shape from sparse texture

Surface markings can produce a strong percept of shape-from-texture, and support accurate 3D shape discrimination. In some cases, human shape-from-texture is more robust than existing shape-from-texture models. One explanation is that texture information is redundant, and that the brain takes advantage of this redundancy to achieve robust perception of shape. This study will test perception of shape from texture when only subsets of the texture are visible. We will analyze how shape judgments degrade as texture is removed, and whether certain regions are especially important.

Visual processing of 3D shape for grasping objects

Grasp points for 2D shapes viewed at a slant

People tend to grasp objects along particular axes that allow a stable grip. The ideal grasp axes vary depending on the exact shape of an object, so this is a non-trivial task for the visual-motor system. Testing grasp points provides a way to investigate 3D shape processing for purposes of motor control. This study will test whether subjects continue to grasp along optimal axes when a shape is viewed at a slant, with varied amounts of information about the 3D orientation of the object.

Grasp points for 3D smoothly-curved random shapes

Previous studies of grasp points have used 2D planar shapes. This project will test grasp points for random volumetric 3D shapes, under conditions with varied 3D information.