
About this Video

Traditionally, cameras have been designed to create images for human tastes, and consequently they try to capture the world as our own eyes see it. In recent years, however, algorithms have started to rival humans as the main consumers of images, in applications like facial recognition, self-driving cars, and virtual try-ons.

These computer programs may not care about the same things that humans do, yet we still use the same types of cameras to cater to both. By customizing the camera with the specific application in mind, we can design specialized cameras for artificial intelligence that allow for improved performance and more efficient computation.

In this talk, Julie shares her recent research on this topic, specifically in a system called an optical-electronic neural network.
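To give a rough sense of the idea, here is a minimal sketch (not taken from the talk) of a hybrid optical-electronic network: the first convolutional layer stands in for an optical element placed in front of the sensor, so its kernels are constrained to be non-negative like physical intensities, while the remaining layers run as an ordinary digital classifier. All layer sizes and names here are illustrative assumptions.

```python
# Illustrative sketch of an optical-electronic neural network in PyTorch.
# The "optical" layer models a convolution performed by optics before the
# sensor; the rest of the network is electronic. Sizes are arbitrary.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridOptoElectronicNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Optical front end: kernels represent point spread functions
        # that a physical element would realize.
        self.optical_conv = nn.Conv2d(1, 8, kernel_size=9, padding=4, bias=False)
        # Electronic back end: small digital classifier on the sensor readout.
        self.fc1 = nn.Linear(8 * 14 * 14, 64)
        self.fc2 = nn.Linear(64, num_classes)

    def forward(self, x):
        # Optical intensities are non-negative, so clamp the kernel weights;
        # a physical prototype would enforce this by construction.
        w = self.optical_conv.weight.clamp(min=0)
        x = F.conv2d(x, w, padding=4)
        x = F.max_pool2d(x, 2)          # sensor-side downsampling
        x = x.flatten(1)
        x = F.relu(self.fc1(x))
        return self.fc2(x)

model = HybridOptoElectronicNet()
logits = model(torch.rand(4, 1, 28, 28))  # e.g., 28x28 grayscale inputs
print(logits.shape)  # torch.Size([4, 10])
```

The appeal of such a split is that the optical layer computes its convolution "for free" at the speed of light, leaving less work for the electronic portion.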


About the Speaker

Julie Chang

Julie Chang is a PhD student in Professor Gordon Wetzstein’s group at Stanford University, where she works on the design, validation, and physical prototyping of computational imaging systems. Julie’s general research interests include computational optics, computer vision, deep learning, and image processing. Her previous work, apart from the current talk, includes diffraction-limited light field photography and microscopy, as well as techniques for imaging through scattering media.

Julie is also working with A9, Amazon's team that develops search engine and search advertising technology, on visual search solutions that use deep learning and computer vision for the Amazon fashion shopping experience.