Ever had a video conference go so well you just wanted to stand up and slap a big high five on your distant colleague?
Well, now you can.
Researchers at MIT’s Tangible Media Group have created a way to render the shape and movement of objects into real-world, 3D physical space and project it via video conferencing.
It means your video collaboration of the future could begin with a handshake, center on real-time sculpting of that next breakthrough in ergonomic design, and end, hopefully, with a whole-hearted hug.
Creating Three-Dimensional Shapes
The physical telepresence technology works by using a Kinect scanner (yes, the same Kinect hardware that adorns Microsoft's game consoles) to digitize the shape and movement of objects and then render them in 3D on a high-powered inFORM "table" at the other end.
Every contour and movement of the object creates a 3D imprint on the motorized pins lining the table surface. It works in much the same way that a child’s pin-screen toy allows you to make a 3D model of your hand by pressing it into a bed of shifting pins.
Of course, no child’s toy was ever this cool.
Watching the far end on a 2D digital screen of their own, users can create an impression of any shape, human or otherwise, and use it to manipulate objects placed on the table at the other end. The current system can even transmit color.
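The core mapping behind the shape display can be sketched in a few lines: a depth frame from the sensor is averaged down to a coarse grid, one cell per pin, and inverted so that nearer surfaces raise their pins higher. This is only a conceptual sketch, not MIT's actual pipeline; the grid size, pin travel, and working depth range below are illustrative assumptions.

```python
import numpy as np

def depth_to_pin_heights(depth_mm, grid=(24, 24), pin_travel_mm=100.0,
                         near_mm=500.0, far_mm=1500.0):
    """Map a Kinect-style depth frame to a coarse grid of pin heights.

    depth_mm: 2D array of per-pixel distances from the sensor, in mm.
    Closer surfaces should push pins higher, so depth is inverted and
    normalized to the pins' physical travel range.
    """
    h, w = depth_mm.shape
    gh, gw = grid
    # Average the depth over each block of pixels that feeds one pin.
    blocks = depth_mm[:h - h % gh, :w - w % gw].reshape(
        gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    # Clip to the working range, then invert: near objects -> tall pins.
    clipped = np.clip(blocks, near_mm, far_mm)
    return (far_mm - clipped) / (far_mm - near_mm) * pin_travel_mm

# Example: a flat background at 1.5 m with a "hand" pressed in at 0.6 m.
frame = np.full((240, 240), 1500.0)
frame[80:160, 80:160] = 600.0
pins = depth_to_pin_heights(frame)
```

In this sketch the "hand" region rises 90 mm above the table while the background pins stay flush, which is essentially the pin-screen-toy effect driven by a depth camera instead of physical pressure.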
3D Collaboration Across the World
The driving force behind the physical telepresence project is the desire to collaborate.
Shared objects rendered at both ends of the video conference could let people manipulate the same model at the same time without needing to board a plane and physically share a studio.
It’s the dream of industrial designers and architects everywhere. Now you can pick up and view, in 360 degrees, remotely housed models and prototypes that previously existed only in blueprints and on 2D computer screens. With a little computer memory and a convergence with current 3D printing hardware, the model formed during a collaboration could be reproduced at will at either end of the project.
Further refinement of the pins that make possible the shape display will bring dramatically increased fidelity, allowing for the subtler representation of color, curvature, and intimate detail.
One obvious current limitation is that each individual pin can move only vertically, forming a solid column; the pins can’t bend like a cresting wave or the clenching fingers of a human hand.
A distant enhancement of the inFORM technology, with multiple hinged pins extending from each shaft in the table, first vertically and then horizontally, would let users perform a grip. It could pave the way for the first glass of water to be poured from 300 miles away.
More importantly, perhaps, it would allow the user to exercise all the evolutionary power of the human pinch and grip to operate remote tools safely, or perform finer tasks like teasing out the peaked roof of an architectural design. Much like slipping on a digital glove, users on either end of the conversation could reach into an otherwise inaccessible world and sculpt solutions.
The MIT team’s invention does not stop at a strict 1-to-1 recreation of objects, either. The technology can already modify the scale of an object once it completes its journey to the other end of the video call.
As such, human hands can be greatly magnified to take control of objects normally beyond the grasp of mere mortals. Investing a little piston power into the table could potentially allow people to remotely move large objects such as pieces of machinery or building components with the dexterity and care of a human hand.
And the technology is sensitive enough to recreate only the desired portion of any object, meaning hands could be disembodied from arms to allow for more dexterous motion and manipulation.
The idea means that rather than snaking a wrist and forearm around a potentially fragile or dangerous piece of equipment, such as you’d find on a computer motherboard or electrical switchboard, the expert’s hand could simply be rendered at the exact spot of the problem.
Imagine sliding such a device into the vicinity of a suspicious package and letting the bomb disposal front-liner’s hands move straight to the red wire without risking lives or disturbing anything potentially more explosive.
Remember, this isn’t a robot or a remotely controlled machine; it’s the physical recreation of remote objects in real time. It offers the promise that one day experts in any hands-on field will be able to mold mutually accessible objects with the dexterity of human fingers.
Going from 2D to 3D
Because the receiving end of this physical teleconference responds to digital information, the resulting sculpture need not be derived from an original 3D object at all. Instead, simple computer graphics with 3D-encoded design instructions could be used to transmit and render anything from statistical bar graphs and pie charts to linear data analyses and company logos.
Imagine the response to a boardroom video conference that features this quarter’s dramatic profit increase rising as a 3D sculpted number from the center of each executive’s desk.
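Driving the display from pure data is even simpler than driving it from a depth camera: the values just become pin heights directly. Here is a minimal, hypothetical sketch that turns a list of quarterly figures into side-by-side 3D bars on a pin grid; the grid size and pin travel are illustrative assumptions, and each bar is rendered as a full-depth strip of columns for simplicity.

```python
import numpy as np

def bar_chart_heights(values, grid=(16, 16), pin_travel_mm=100.0):
    """Render a list of values as side-by-side 3D bars on a pin grid.

    Each value gets an equal-width strip of pin columns; bar height is
    the value scaled so the largest bar uses the pins' full travel.
    """
    gh, gw = grid
    heights = np.zeros(grid)
    scale = pin_travel_mm / max(values)
    strip = gw // len(values)  # pin columns per bar
    for i, v in enumerate(values):
        heights[:, i * strip:(i + 1) * strip] = v * scale
    return heights

# Four quarters of (made-up) profit figures rendered across the table.
pins = bar_chart_heights([1.2, 1.5, 1.1, 2.4])
```

The tallest bar reaches the pins' full 100 mm travel, and the rest scale proportionally, so the strong fourth quarter would literally tower over the others on the tabletop.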
While the physical telepresence technology is not currently available to anyone outside the MIT lab, the researchers are working toward commercializing it, though they say that won’t happen in the foreseeable future.
In the meantime, we can all ponder that day when a more sophisticated scanner can reproduce an entire human form and allow us that hug from the boss at the end of a spectacular video conference presentation.