The dreaded problem that kills AI projects

Have you ever looked at something and been creeped out because it appeared almost human, but not quite? Be honest – does the robot Sophia scare you? That feeling of creepiness can be triggered by robots, CGI characters, animatronics in theme parks, lifelike dolls, or even digital assistants. This concept is called the uncanny valley. And, believe it or not, it is actually a particularly important reason why many AI projects fail.

The uncanny valley describes the relationship between how closely something resembles a human and the emotional response humans have to it. Basically, people find things that look almost human, but aren’t actually human, unsettling. Traditionally, the concept refers to the physical appearance of something like a robot.

An industrial robot is generally not considered creepy. It has no face and doesn’t take the shape of a human body. Make something a little more human, like WALL-E, and people find it kind of cute. But as soon as you get to androids like the robot Sophia, which try to look and act human, they fall into the uncanny valley, where many people find them creepy and feel uncomfortable around them.

The uncanny valley of data

You might be thinking that you’re not building a robot, so you don’t have to worry about the uncanny valley. But the concept doesn’t only apply to humanoid, physical things. There is also a data version of the uncanny valley, and it is often overlooked. Because of the convenience/privacy trade-off, people are sometimes willing to give up a little privacy in exchange for extra convenience, but there is a line, and once it’s crossed, it is hard to regain people’s trust.

If you push that line too far and fall into the uncanny valley, you will likely cause the AI project to fail. If an app creeps people out, they won’t use it, and that leads to failure. The uncanny valley is an interesting way for an AI project to fail because we don’t generally consider psychological responses as a reason for failure. Suppose, for example, that a museum or hospital builds an AI robot to interact with visitors or patients. If visitors don’t want to use those robots, and patients don’t want robots coming into their rooms because they find them creepy, then you’ve wasted time, money, and other resources on an AI project that ultimately fails. Often that’s because you didn’t have a solid business understanding when starting the project and didn’t take these psychological responses into account.

Organizations, companies, government agencies, and institutions are collecting more data than ever before. They use this data to understand their customers better, gain additional insights, and gain a competitive advantage, but often people don’t know how their data and information are being used. Some organizations collect and use your data to improve the customer experience and make helpful recommendations, and people are often comfortable with that because it works for them.

However, if companies look at your entire buying history and start making recommendations about things you weren’t searching for but might have been considering buying, they can quickly plunge into the uncanny valley, and people start to find it creepy. People have different thresholds for what they consider uncanny, which makes finding the line a delicate balancing act. You want to provide enough personalization and convenience, but you don’t want to overreach and appear to know too much, because that erodes trust and makes people uncomfortable using the technology. Once you cross into the uncanny valley, the benefits that technology would otherwise provide are eroded.
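To make the data version concrete, here is a minimal sketch of what a “creepiness guardrail” on a recommender might look like. Everything in it – the Recommendation type, the SENSITIVE_CATEGORIES set, and the consent check – is a hypothetical illustration of one way to draw the line explicitly, not a real library or a prescribed design:

```python
from dataclasses import dataclass

# Hypothetical example: categories people commonly find invasive when
# surfaced without explicit consent (illustrative, not exhaustive).
SENSITIVE_CATEGORIES = {"health", "pregnancy", "finances", "religion"}

@dataclass
class Recommendation:
    item: str
    category: str
    score: float  # model confidence, 0.0-1.0

def filter_recommendations(recs, user_consented_categories):
    """Suppress recommendations likely to land in the data uncanny valley.

    A recommendation in a sensitive category is shown only if the user
    has explicitly opted in to personalization for that category.
    """
    safe = []
    for rec in recs:
        if rec.category in SENSITIVE_CATEGORIES and rec.category not in user_consented_categories:
            continue  # convenient, but creepy without consent -- drop it
        safe.append(rec)
    return safe

recs = [
    Recommendation("running shoes", "sports", 0.91),
    Recommendation("prenatal vitamins", "pregnancy", 0.88),
]
print([r.item for r in filter_recommendations(recs, {"sports"})])
# -> ['running shoes']
```

The specific rule matters less than the principle: where helpful ends and creepy begins should be an explicit, testable decision rather than an accident of the model.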

The uncanny valley IRL

Talking about this concept in theory is one thing, but seeing it in action is another. In Japan, a hotel chain called Henn na Hotel was created that was mostly staffed by robots handling a variety of tasks people would otherwise have done: welcoming and checking in guests, carrying bags to rooms, and making wake-up calls. It showed immediate ROI through savings on labor costs, sidestepped staffing issues, and had the gimmick of being a “robot hotel”.

However, as the months went by, issues emerged showing the hotel was slipping into the uncanny valley. Some were technical, such as in-room robots that woke guests at night because they mistook snoring for talking. Other issues surfaced around guests having difficulty entering their rooms due to faulty facial recognition. Many people complained about how slowly the robots moved when delivering bags to rooms. You could argue that the technology could be replaced and modernized, so that alone wasn’t the reason the project failed. Although the hotel was perfectly functional, these shortcomings made guests uncomfortable and created unpleasant experiences. The hotel eventually decided that people would be better at doing these tasks. In the end, the hotel hadn’t realized how uncomfortable people would be with a hotel staffed by roughly 90% robots and only 10% humans.

How to solve the uncanny valley problem

There is no hard and fast line when it comes to the uncanny valley. You may not want a robot to walk up to you and ask, “Hey, how can I help you?” But if you’re at McDonald’s, there’s a reasonable chance you’re willing to use the self-service kiosk to order your food. The difference between the two systems is that the kiosk doesn’t look like a human, and the kiosk is controlled by the human. You don’t have to hold a conversation with the kiosk or try to make it do more than its primary function. It is easily controllable and very predictable, which seems to solve 90% of these problems. The same goes for data: if you collect a lot of data and are too invasive with it, people will stop using your service because they are uncomfortable with the perceived lack of privacy.

Some people are more comfortable with, and less creeped out by, technology than others. That’s why organizations need to provide alternatives to systems that may be close to the trigger points where people get creeped out. One component of iterative project management methodologies for AI is testing different approaches in real-life pilots to see how people react; a rough sketch of what that evaluation could look like follows below. If you see people having a negative reaction to the data or to the physical system, you can either mitigate that creepiness or provide less creepy alternatives that still deliver the value of the AI system. There are many big reasons why AI projects fail, but you definitely don’t want one of them to be the psychological creepiness of your AI solution.
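As a rough illustration of that pilot testing, here is a minimal sketch that compares user comfort ratings across pilot variants and flags any variant falling below a comfort threshold. The variant names, ratings, and threshold are all made up for illustration; a real pilot would collect this feedback from actual users:

```python
from statistics import mean

# Hypothetical pilot data: 1-5 comfort ratings collected from users
# after interacting with each variant of the system.
pilot_ratings = {
    "humanoid_greeter_robot": [2, 1, 3, 2, 2],
    "self_service_kiosk": [4, 5, 4, 4, 5],
    "human_staff_plus_tablet": [5, 4, 5, 4, 4],
}

COMFORT_THRESHOLD = 3.5  # assumed cutoff below which a variant reads as "creepy"

def evaluate_pilots(ratings, threshold):
    """Return each variant's mean comfort score and whether it passes."""
    return {
        variant: (mean(scores), mean(scores) >= threshold)
        for variant, scores in ratings.items()
    }

for variant, (avg, ok) in evaluate_pilots(pilot_ratings, COMFORT_THRESHOLD).items():
    status = "keep" if ok else "rework or offer a less creepy alternative"
    print(f"{variant}: mean comfort {avg:.1f} -> {status}")
```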
