EngX: A mini-conference on leading ideas from Stanford Engineering

May 20, 2014 - 7:00pm
Nvidia Auditorium, Huang Engineering Center

The Digital Sensory System

Hear from Stanford researchers who are creating technologies that mimic human capabilities such as seeing, touching, and learning. Introduction by Dean Jim Plummer. Networking reception to follow lectures.

Speakers

Seeing
Fei-Fei Li, an associate professor of computer science whose research focuses on vision, particularly high-level visual recognition.

Touching
Allison Okamura, an associate professor of mechanical engineering whose research focuses on using haptics to generate a touch sensation in humans.

Learning
Christopher Manning, a professor of computer science and linguistics whose research focuses on systems that can intelligently process and produce human languages.

Parking is free in Parking Structure 2 (located near Huang Engineering Center) after 4 p.m.

If you are unable to attend the event in person, please register for a live webinar hosted by the Stanford Center for Professional Development.

Abstracts

A Quest for Visual Intelligence in Computers
Fei-Fei Li, Associate Professor of Computer Science

More than half of the human brain is involved in visual processing. The remarkable human visual system evolved over billions of years, but computer vision is one of the youngest disciplines of Artificial Intelligence (AI). The central problem of computer vision is to turn millions of pixels of a single image into interpretable and actionable concepts so that computers can understand pictures just as well as humans do. Such technology will have a fundamental impact in almost every aspect of our daily lives and on society as a whole, in spheres that range from digital health and medicine to autonomous driving to national security. In this talk, Prof. Li will provide an overview of computer vision and its history, and share some of her recent work on enabling large-scale object recognition.
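
For a concrete sense of what turning pixels into concepts can look like in practice, here is a minimal illustrative sketch of labeling a single image with a pretrained convolutional network. It is not the speaker's system; the torchvision ResNet-50 model and the local file name photo.jpg are assumptions made for the example.

# Minimal sketch: label one image with a pretrained ImageNet classifier.
# Assumes PyTorch and torchvision are installed and "photo.jpg" exists locally.
import torch
from PIL import Image
from torchvision import models, transforms

weights = models.ResNet50_Weights.IMAGENET1K_V1
model = models.resnet50(weights=weights)
model.eval()

# Standard ImageNet preprocessing: resize, center-crop, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("photo.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)          # shape: (1, 3, 224, 224)

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)

# Print the five most likely object categories.
top_probs, top_ids = probs.topk(5)
labels = weights.meta["categories"]
for p, i in zip(top_probs[0], top_ids[0]):
    print(f"{labels[int(i)]}: {p.item():.2%}")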

Haptics: Engineering Touch
Allison Okamura, Associate Professor of Mechanical Engineering

The sense of touch is essential for humans to control their bodies and interact with the surrounding world. Yet there are many scenarios in which the sense of touch is typically lost: when a surgeon teleoperates a robot to perform minimally invasive surgery, when an amputee uses a prosthetic arm, and when a student performs virtual laboratory exercises in an online class. Haptic technology combines robotics, design, psychology, and neuroscience to artificially generate touch sensations in humans. Professor Okamura will describe how haptic technology works and how it is being applied to improve human health, education, and quality of life.
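
To make the idea of artificially generated touch more tangible, the sketch below shows the classic "virtual wall" rendering loop that appears in many haptics tutorials: read the device position, compute a spring-like restoring force whenever the device penetrates a virtual surface, and command that force back to the device. The read_position and send_force functions are hypothetical placeholders rather than a real device API, and the constants are illustrative values only.

# Illustrative sketch of a haptic "virtual wall" force-rendering loop.
# read_position() and send_force() are hypothetical placeholders for a device
# driver; they do not refer to any particular haptic hardware API.
import time

WALL_POSITION = 0.0    # location of the virtual surface along one axis (m)
STIFFNESS = 800.0      # virtual spring stiffness (N/m), an illustrative value
LOOP_RATE_HZ = 1000    # haptic rendering loops typically run near 1 kHz

def virtual_wall_force(position_m: float) -> float:
    """Return the restoring force when the device penetrates the wall."""
    penetration = WALL_POSITION - position_m
    if penetration > 0:                  # device is inside the virtual wall
        return STIFFNESS * penetration   # push back proportionally (Hooke's law)
    return 0.0                           # free space: render no force

def haptic_loop(read_position, send_force, duration_s=5.0):
    """Run the force-rendering loop at a fixed rate for duration_s seconds."""
    period = 1.0 / LOOP_RATE_HZ
    end_time = time.monotonic() + duration_s
    while time.monotonic() < end_time:
        x = read_position()              # device position along the axis (m)
        send_force(virtual_wall_force(x))
        time.sleep(period)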

Texts are Knowledge
Christopher Manning, Professor of Computer Science and Linguistics

Both people and computers now have access to virtually all of the world’s knowledge. For humans, this access is marvelous. Unfortunately, computers still have trouble comprehending this gift that they have been given. How can we get computers to understand and use all this knowledge? Should the goal be to formalize this knowledge into more structured forms, or should it be to better appreciate the flexibility and power of human language as a knowledge representation? How can computers make use of context for pragmatic interpretation as humans do? Professor Manning will talk about how his lab has been building statistical models of language to extract both facts and nuances from human language communication.
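
As a toy illustration of extracting facts from text with a statistical model of language, the sketch below pulls simple (subject, verb, object) triples out of sentences using spaCy's pretrained dependency parser. It is a stand-in chosen for brevity, not the speaker's software, and it assumes the en_core_web_sm model has been downloaded.

# Toy sketch: extract (subject, verb, object) "facts" with a statistical parser.
# Assumes spaCy is installed and the small English model has been downloaded
# (python -m spacy download en_core_web_sm).
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_svo(text):
    """Yield (subject, verb, object) triples found in the text."""
    doc = nlp(text)
    for sent in doc.sents:
        for token in sent:
            if token.pos_ != "VERB":
                continue
            subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c for c in token.children if c.dep_ in ("dobj", "attr")]
            for subj in subjects:
                for obj in objects:
                    yield (subj.text, token.lemma_, obj.text)

for triple in extract_svo("Stanford researchers build systems that process human language."):
    print(triple)   # e.g. ('researchers', 'build', 'systems')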

Speaker Bios

Fei-Fei Li is an associate professor of computer science and director of the Vision Lab. Before Stanford, she was on the faculty of Princeton University and the University of Illinois at Urbana-Champaign. Professor Li’s main research interest is vision, particularly high-level visual recognition. In computer vision, her interests include image and video classification, retrieval, and understanding. In human vision, she has studied the interaction of attention with natural scene and object recognition, and the decoding of human brain fMRI activity, sometimes described as "mind reading." Professor Li is a recipient of the 2011 Alfred Sloan Faculty Award, the 2012 Yahoo Labs FREP award, the 2009 NSF CAREER award, the 2006 Microsoft Research New Faculty Fellowship, and a number of Google Research awards.

Allison Okamura is an associate professor of mechanical engineering (and of computer science, by courtesy). She develops haptic (sense of touch) technology for use in novel applications such as robot-assisted surgery, prosthetics, rehabilitation, teleoperation in space, and education. She is committed to sharing her passion for research and discovery, using robotics and haptics in innovative outreach programs to groups underrepresented in engineering. Her awards include the National Science Foundation CAREER Award, the Robotics and Automation Society Early Academic Career Award, and the Technical Committee on Haptics Early Career Award. She is an IEEE Fellow.

Christopher Manning is a professor of computer science and linguistics. His research goal involves computers that can intelligently process, understand, and generate human language material. Manning concentrates on machine learning approaches to computational linguistic problems, including syntactic parsing, computational semantics and pragmatics, textual inference, machine translation, and hierarchical deep learning for Natural Language Processing (NLP). He is an ACM Fellow, an AAAI Fellow, and an ACL Fellow, and has coauthored leading textbooks on statistical natural language processing and information retrieval. He is a member of the Stanford NLP group (@stanfordnlp).

Contact Email: ahanson6@stanford.edu