The lifetime visual exposure of an adult observer contains statistical regularities across a diverse hierarchy of image properties. These range from canonical retinal sizes and viewing distances (e.g., the sun is always viewed at optical infinity, and human faces are most often viewed at about 1 meter) to frequently encountered stimulus traits (e.g., cardinal orientations dominate image features, and own-race faces dominate the face diet). The human visual system exploits the reduction in uncertainty that these regularities afford, using it to translate visual input into recognized labels or categories and to infer the state and nature of the distal stimulus more successfully. In this talk, I will give a short overview of recent data from my lab on statistical regularities in adult face exposure and their relationship to face-recognition performance. I will also describe how we have used machine-learning tools, such as convolutional neural networks (CNNs), to aid our understanding of the human visual system. In an excursion from this main focus, I will briefly outline our work using deep-learning CNNs to extract, from retinal fundus images, traits and disease status that are not conventionally visible to the expert human eye.
RSVP here. (No need to RSVP if you already receive IAM seminar announcements by email.)