Are you the kind of person who always wears a poker face, never betraying your inner thoughts and emotions through your expressions?
If you are, you may be in for a shock. Computer vision is coming to read your thoughts.
The much-beleaguered social app Snapchat could be on the verge of using patented facial recognition technology to give customer service operators an insight into your mood, whether you like it or not.
Sounds a little scary, right? Not really the advantage you’d like an operator to have as you seek assistance or make a complaint. Well, Snapchat seems to be serious about putting the possibility into practice. It filed a patent on the necessary technology at the end of last year, and it could sure use an alternative consumer avenue now that WhatsApp, Facebook Messenger, and even Skype have so successfully cloned its core features.
Things have progressed so quickly that it’s worth considering not only whether Snapchat’s facial recognition technology works, but whether it has any moral place in a consumer, or social, transaction.
The Rise of Facial Recognition
Facial recognition technology has been with us for a while now, and it has found many uses that don’t present moral hazards. Snapchat has been using it to power its flagship animated masks, avatars, and filters, which have dominated its service pretty much from the app’s inception. It’s the same type of computer vision that underlies things like automated check reading, smart cars, and Facebook’s photo labeling.
Essentially, the camera on board your phone (in Snapchat’s case) feeds software that pinpoints dozens of landmark locations on your face and uses the distances between them to build a signature of your unique face.
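The landmark-and-distance idea can be sketched in a few lines. This is a toy illustration, not Snapchat’s actual pipeline: the landmark names and coordinates below are invented, standing in for whatever points a real detector would report.

```python
import math

# Hypothetical facial landmarks (x, y) as a detector might report them:
# eye corners, nose tip, mouth corners. All coordinates are made up.
landmarks = {
    "left_eye":  (30.0, 40.0),
    "right_eye": (70.0, 40.0),
    "nose_tip":  (50.0, 60.0),
    "mouth_l":   (38.0, 80.0),
    "mouth_r":   (62.0, 80.0),
}

def face_signature(points):
    """Pairwise distances between landmarks, normalized by the
    inter-eye distance so the signature is scale-invariant."""
    names = sorted(points)
    scale = math.dist(points["left_eye"], points["right_eye"])
    sig = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            sig.append(math.dist(points[a], points[b]) / scale)
    return sig

# Five landmarks yield ten pairwise distances; two images of the same
# face should produce nearly identical signatures.
print(len(face_signature(landmarks)))  # 10
```

Normalizing by a fixed reference distance means the signature doesn’t change when the face appears larger or smaller in frame, which is one reason distance ratios, rather than raw pixel positions, are the usual basis for matching.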
Up to now, however, the technology has been used only to recognize facial features. It has been used in airport security and is a prominent feature of Microsoft’s Windows 10 operating system, where it acts as a biometric password under the Windows Hello banner. It has also been applied to live video conferencing calls, but again only in superficial ways, such as fun filters and background removal.
We’ve even suggested it could be partnered with advanced lie-detection techniques to aid police in their interrogations. It has never been applied to a commercial exchange, however, and never to assess the emotions of potential customers.
Using Snapchat Facial Recognition Technology in Customer Service
Snapchat’s justification for its facial emotion detection tech is that it will facilitate more rewarding transactions between customers and customer service operators. In its November patent application, it wrote that “customer anger is not always easy to spot even to professional service providers or sales representatives,” and that if negative emotions were detected, the customer’s “status” could be reported to third parties such as supervisors. It argues this third party could then join the video conversation in progress and try to calm the situation.
It seems unlikely to us that a professional salesperson could miss the anger rising in a person they’re speaking to face-to-face, especially when it reaches the point where intervention is needed, and that application alone doesn’t seem to warrant the potentially invasive technology. The concern is that it would give the operator a direct insight into your reactions to specific products or questions, handing them information you may not want shared. It is also information that could be misconstrued, either in detection (if the tech were to mistake anxiety for anger, for instance) or in context (having a bad day doesn’t preclude you from liking a product).
More importantly, if Snapchat’s product proves insightful, there’s no way of getting the genie back in the bottle. Snapchat has taken a beating from its posse of imitators, and last year’s update was met with fury from users, so it could be excused for seeking new commercial uses for its technology. But where would it end? Would you want to sit in a job interview or business meeting wondering if the people on the other end of the video call could artificially read your emotions?
Facial Recognition As a Video Calling Truth Serum
The obvious way to prevent this technology from being used against a person’s will is to make it a strict “opt-in” feature, but even that has its dilemmas. Asking a person to effectively refuse a visual lie-detector test is no way to build trust or begin a conversation. It immediately puts the subject on the defensive, perhaps making them feel under suspicion before a word is spoken. Using the technique without deliberate consent is obviously a violation of privacy. Even people with “nothing to hide” should still be able to conceal their emotions if they choose.
Using the tech this way raises concerns about its potential use in more casual social settings, too. What if the app, again assuming it has powers of perception beyond human capabilities, could be used in social situations, on private video calls between partners, parents and children, or friends? What chaos could it unleash on emotionally developing teenage relationships across social media?
Our take on the issue: using facial recognition as an advanced form of photo identification seems like a positive use of the technology. Using it to step inside a person’s mind, outside the criminal and judicial process, is an entirely different matter, and one that doesn’t seem to have an ethical role.