In a traditional Ames room, perception of size is distorted by the observer's assumptions about parallel and perpendicular lines. For the illusion to be convincing, the room must be viewed from one vantage point. Using an immersive virtual reality system, we generated a dynamic version of the Ames room illusion. Subjects were free to move around, and yet they experienced gross failures of size constancy. Head position and orientation were tracked to generate binocular views of a virtual scene, presented in a head-mounted display. As the subject walked, the entire scene was scaled (expanded or contracted) about the cyclopean point (midway between the eyes). The instantaneous change in scale was not perceptible. Subjects compared the size of a reference object viewed in one part of the room (where the scale of the entire room was small) with the size of a test object viewed in another part of the room (where the scale was up to 3 times larger). Neither the test nor the reference object was visible as the subject walked between the two locations (2.5 m apart), but the rest of the room could be viewed freely. Using a forced-choice procedure, we found that subjects perceived the test and reference objects to be the same size when their sizes actually differed by more than a factor of two. Under normal conditions (when the scale of the room remained constant), subjects were able to make correct size matches. In the dynamic Ames room, the subject's assumption that the room remains a constant size has a powerful influence on size judgements, overcoming consistent and correct information from binocular disparities and motion parallax.
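
The scaling manipulation described above can be expressed as a simple affine transform: each scene point p maps to c + s(p − c), where c is the cyclopean point and s is the scale factor. Because the centre of scaling coincides with the observer's viewpoint, the retinal images are unchanged at the instant of scaling, which is why the change is imperceptible. The following minimal Python sketch illustrates that step; the function names, eye positions, and stand-in room geometry are illustrative assumptions, not code from the study.

    import numpy as np

    def cyclopean_point(left_eye: np.ndarray, right_eye: np.ndarray) -> np.ndarray:
        """Midpoint between the two tracked eye positions."""
        return 0.5 * (left_eye + right_eye)

    def scale_scene_about(points: np.ndarray, centre: np.ndarray, s: float) -> np.ndarray:
        """Uniformly scale scene vertices by factor s about `centre`.

        Scaling about the cyclopean point leaves the instantaneous
        binocular views unchanged, so the observer cannot detect the
        moment at which the room expands or contracts.
        """
        return centre + s * (points - centre)

    # Example: expand the room by a factor of 3 about the current head position.
    left = np.array([-0.032, 1.6, 0.0])           # tracked left-eye position (m), illustrative
    right = np.array([0.032, 1.6, 0.0])           # tracked right-eye position (m), illustrative
    room_vertices = np.random.rand(100, 3) * 5.0  # stand-in scene geometry

    c = cyclopean_point(left, right)
    scaled = scale_scene_about(room_vertices, c, s=3.0)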

Original publication

DOI: 10.1167/3.9.490
Type: Journal article
Journal: Journal of Vision
Publication Date: 01/12/2003
Volume: 3