It is hard to see eye-to-eye on a video conference call. Literally.
I’ve made hundreds of video calls in my time, ranging from social calls to professional business meetings, and every one of them has suffered the same awkward fate: it’s just difficult to know where to look.
The basic problem is that a video call has two distinct visual sources, and the physical gap between them creates a parallax effect:
- The camera that sends your image to your online colleagues
- The screen that sends their likeness back to you
Because of this, direct eye contact in video conferencing is not a working reality: it’s simply not possible to focus on two things at once. In my experience, people don’t like staring into the dead heart of a camera lens; it doesn’t feel natural, and it nullifies the main benefit of video conferencing, which is seeing and responding to the other person’s face.
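To put a rough number on that mismatch: the gaze error is simply the angle between the camera lens and the on-screen face, as seen from the caller’s eyes. The short sketch below uses illustrative laptop dimensions of our own choosing, not figures from any vendor:

```python
import math

# Rough parallax estimate: the webcam sits above the screen, so when
# you look at the on-screen face rather than at the lens, your gaze
# is off by the angle between those two points. Numbers are illustrative.
camera_to_face_on_screen_cm = 12   # vertical gap, webcam to on-screen eyes
viewing_distance_cm = 55           # typical laptop viewing distance

gaze_error_deg = math.degrees(
    math.atan(camera_to_face_on_screen_cm / viewing_distance_cm))
```

With these assumed numbers the gaze lands roughly 12 degrees below the lens, which is easily large enough for the other party to notice.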
That’s why we’re pleased to report there are two new software solutions on the way that could fix the problem. We’re going to reserve final judgment on both until we see them in action, as the optics are a little concerning, but at least someone out there with the expertise to do something about the problem is having a go at it.
Intel’s Eye Contact Solution
We’re aware that it’s our brand bias doing the thinking, but we’re encouraged that the team behind the first of these purported direct eye contact video conferencing solutions operates under the Intel badge. Intel has a long history as arguably the leading manufacturer of desktop processors, which keeps the company on the cutting edge of computing. Also, in video conferencing circles its RealSense 3D hardware is widely acknowledged as the most advanced depth-sensing technology available.
Both facts provide a reason to suspect that recent reports that Intel is working on an eye-to-eye video calling solution may lead to a functional commercial result. However, we’re also a little wary of the method Intel has chosen to use. You can see for yourself in this Intel promotional video. Skip ahead to the 25-second mark to see exactly how this is designed to work:
Intel’s technology digitally repositions the eyes of a video caller to focus on the center of their webcam. This raises uncanny valley concerns that outweigh those surrounding the current Lion King remake. While Disney’s perceived failure occurred within a wholly controlled environment, Intel is trying to bring machine learning to bear on completely uncontrolled human beings. The technology behind the attempt sounds impressive, but it will have to be truly ground-breaking for this solution to work comfortably.
Direct Eye Contact Video Conferencing Technology
Intel’s eye contact correction model uses machine learning to refocus the video caller’s gaze without any input about where the screen and camera are placed. Instead, it uses a deep convolutional neural network (a type of number-crunching network used for analyzing visual information) to “warp and tune” the eyes frame by frame. The software learns from a database of synthetically generated eyes to recognize when the gaze is off-center and substitute a corrected version. It is said to be smart enough to recognize when a person is blinking or deliberately looking off-camera, and to only make corrections when the person is speaking.
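Intel hasn’t published implementation details, but the “warp and tune” step can be pictured as resampling each eye patch along a per-pixel flow field that the network predicts every frame. The toy sketch below, in plain NumPy, hand-codes a uniform flow instead of predicting one with a network; all of the names and numbers here are illustrative:

```python
import numpy as np

def apply_gaze_warp(eye_patch, flow):
    """Warp a grayscale eye patch by a per-pixel flow field, using
    nearest-neighbor resampling for simplicity. In a system like the
    one Intel describes, a CNN would predict `flow` frame by frame;
    here we supply it by hand."""
    h, w = eye_patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, w - 1)
    return eye_patch[src_y, src_x]

# Toy example: a 5x5 "eye" whose iris (the bright pixel) sits one row
# below center, as if the caller is looking down at the screen.
patch = np.zeros((5, 5))
patch[3, 2] = 1.0

# A uniform flow that samples one row lower shifts content up one row,
# moving the iris to the patch center -- i.e. toward the camera.
flow = np.zeros((5, 5, 2))
flow[..., 0] = 1.0

corrected = apply_gaze_warp(patch, flow)
```

A real implementation would predict a different flow per frame, skip blinks and deliberate off-camera glances, and blend the warped patch back into the face, but the core operation is this kind of resampling.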
Our concern, having seen the test models, is that the intimate intricacies of the human eye may be too complex to be effectively replicated when viewed at the proximity of a video call. That, and the fact that it just looks a tiny bit wonky in the actual footage.
Intel is still working on the product, so it’s possible they will improve the visuals somewhat before going public. In the meantime, Apple is going to beat them to the punch with its version of eye contact correction in FaceTime.
iOS 13’s FaceTime Fix
iOS 13’s fall release is one of the most anticipated tech events of the year…as it is every time Apple rolls out a new service. As usual, there are leaks about upcoming features, and this time there’s the promise of a FaceTime Attention Correction feature. The feature performs the same function that Intel’s solution does, rerouting errant gazes toward the center of the video calling screen.
This is not the deep-thinking solution Intel is pursuing. The FaceTime version uses augmented reality to map the face and place a digital mask over the eyes–it’s the same tech that turns you into a live-action Animoji.
We assumed that Apple’s version would be a little more cartoonish than Intel’s, but we’re impressed with the FaceTime visuals we’ve seen, and anyone who has played around with FaceTime masks knows they can be quite responsive. We’ll add one more observation: isn’t FaceTime enjoying a nice return to video calling relevance, with its recently expanded group calls and now this potential leap?
Again, we’re withholding a final verdict until we can get some screen time with both, but it looks like there are potential digital solutions on the way for digital communication’s most persistent flaw…uncanny valley be damned!