
What if we could shape ideas the way a sculptor molds clay?

An engineer designs computers that let us think with our hands.


	
		
			
Professor Sean Follmer, right, uses a new 3D display in a networked collaboration. | Photo courtesy of TMG MIT Media Lab

Computers have been great for crunching numbers. Now Stanford engineers want to make them better tools for creativity. To do this, they are liberating data from flat screens by inventing three-dimensional display technologies that would enable us to shape ideas the way a sculptor molds clay.

“We think not only in our minds and not only about what we see, but by manipulating things and by interacting with the world around us,” explains Sean Follmer, an assistant professor of mechanical engineering.

Follmer and his collaborators are developing display technologies that create three-dimensional representations of objects or data defined in software. Using such a three-dimensional display, a designer in one place might reshape a software-defined object and virtual team members anywhere in the world would be able to see and touch the results on their own, similar displays.

In the embedded video, Follmer highlights ongoing research with colleagues at Stanford Engineering and summarizes a talk he delivered last October at TEDxCERN about past collaborations with researchers at MIT’s Media Lab.

“The ways that we interact with computing need to fundamentally change to embrace the ways that we interact in the physical world,” Follmer says.

The heart of his idea is that computers have a static interface that depends primarily on visual input. Today’s computers don’t take advantage of the physical intelligence that we’ve evolved over millions of years of manipulating objects in the physical world.

To bring the sense of touch to computing, Follmer is developing dynamic shape displays: Imagine thousands of physical “pixels” arranged in a rectangular mat in front of a computer. Each pixel is powered by a small motor that can move it up or down to create a three-dimensional representation of whatever object is being defined by the computer’s software. Sensors built into the pixels detect the user’s touch.
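To make that architecture concrete, here is a minimal sketch, assuming the display can be modeled in software as a grid of motor-driven pins that follow a heightfield and report which pins are pressed. The Pin and ShapeDisplay names and methods are hypothetical illustrations, not Follmer's actual system.

# A minimal, hypothetical model of a dynamic shape display: a grid of
# motor-driven "pixels" whose heights follow a software-defined heightfield,
# with a touch flag standing in for the built-in sensors.
from dataclasses import dataclass

@dataclass
class Pin:
    height: float = 0.0    # current pin height, e.g. in millimeters
    touched: bool = False  # set when the pin's sensor detects a press

class ShapeDisplay:
    def __init__(self, rows: int, cols: int):
        self.rows, self.cols = rows, cols
        self.pins = [[Pin() for _ in range(cols)] for _ in range(rows)]

    def render(self, heightfield):
        # Drive each pin's motor toward the height of the software-defined object.
        for r in range(self.rows):
            for c in range(self.cols):
                self.pins[r][c].height = heightfield[r][c]

    def read_touches(self):
        # Return the grid coordinates where a user's touch was sensed.
        return [(r, c) for r in range(self.rows) for c in range(self.cols)
                if self.pins[r][c].touched]

# Example: render a small "hill" on a 4-by-4 display.
display = ShapeDisplay(rows=4, cols=4)
hill = [[min(r, 3 - r) + min(c, 3 - c) for c in range(4)] for r in range(4)]
display.render(hill)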

An example might help. Let’s think about a group of collaborators spread across three continents. Every member of this virtual team has a dynamic shape display attached to their computer. If the software-defined object being discussed were a hill, and one user pressed down on the crest, all the other displays in the touch conference would show the hand imprint.
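A rough sketch of how such a touch conference could work, under the assumption that each press becomes a small update message sent to every remote display, might look like the fragment below; apply_press, broadcast, and RemoteDisplay are illustrative names, and the real networking layer is omitted.

# Hypothetical sketch: one collaborator's press deforms the local heightfield
# and is broadcast so every remote display shows the same imprint.
def apply_press(heightfield, row, col, depth=1.0):
    heightfield[row][col] -= depth
    return {"row": row, "col": col, "depth": depth}

class RemoteDisplay:
    def __init__(self, heightfield):
        self.heightfield = heightfield

    def receive(self, update):
        # Mirror the remote press so the imprint appears on this display too.
        self.heightfield[update["row"]][update["col"]] -= update["depth"]

def broadcast(update, peers):
    for peer in peers:
        peer.receive(update)

# One user presses the crest of the hill; two remote displays mirror it.
local_hill = [[0, 1, 1, 0], [1, 2, 2, 1], [1, 2, 2, 1], [0, 1, 1, 0]]
peers = [RemoteDisplay([row[:] for row in local_hill]) for _ in range(2)]
broadcast(apply_press(local_hill, 1, 1), peers)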

The researchers believe that physical computing will change the way we work. Today, remote collaboration primarily occurs via videoconferencing. Touch systems would literally add a dimension to such collaborations. Doctors studying MRI data, architects designing buildings, or urban planners analyzing cities would all be able to feel as well as see what they’re dealing with.

Going beyond three-dimensional display technologies, Follmer envisions even more futuristic applications, such as wearable devices that allow you to touch and feel virtual reality and tactile displays for the visually impaired.

“We’re just at the beginning of how we interact with computers,” Follmer says. “As we move forward, I believe that we’re going to want to have computers that are not in the cloud but that are in the world around us that we can interact with, but that can also interact with us.”
