AI Can Now Tell If You're Lying


Benjamin Franklin allegedly once said that there were only two certainties in life: death and taxes. In the modern era that list might well be extended to include data and identity theft. It's a huge problem and an apparently unavoidable evil these days, but a group of Italian researchers think they might have a solution, and it involves AI.

Online, it's tough to prove someone is who they say they are. In person, you can at least match someone to a photo ID, but online anyone could fill in the boxes on an Amazon order. What if an AI could read a user's mouse movements and determine whether they were being truthful? That was the question Giuseppe Sartori, a forensic neuroscientist and study author, wanted to answer.

To test this, he asked a number of volunteers to either memorize a fake identity or be truthful about themselves. Subjects were then asked a series of yes-or-no questions in a computer test. Some questions were simple, like "Were you born in [year]?" Mixed in with those, though, were slightly trickier ones, like "Is [x] your zodiac sign?"

The hope is to throw off would-be identity thieves by catching them off guard. A fraudster might memorize the basics (name, birthdate, address) but not the associations that connect them. If you were born in Oklahoma and you're pretending to be someone from California, and someone asks you the capital of your home state, you'll have to stop and think for a moment. That hesitation showed up in cursor movements.

The experimenters found that when they fed their machine learning algorithms data on the subjects' mouse paths, they were able to catch liars an incredible 95% of the time. It's a lot like Google's "I'm not a robot" checkbox: bots tend to move in straight, clean lines, while humans are a lot more… imprecise. By reading basic cursor data, it's not hard to sort the humans from the software. Likewise, if the team can make this just a bit more accurate, it could be a valuable new tool in the fight against identity theft.

And, for once, I think I won't make the panicky case against allowing AI research to develop. I'll just let this be a relatively happy story. That is, until this tech gets adapted for use in robo police, who will no doubt use it to oppress the populace by enforcing laws too strictly. Then we'll find ourselves in a dystopian police state ruled by the mach– oh shit, I did it again. Dammit, I'm really sorry. Bots just freak me out.
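For the curious, here's a rough idea of what that kind of mouse-path classification could look like in code. This is a minimal sketch, not the researchers' actual pipeline: the feature set, the 50 px/s hesitation threshold, and the random-forest model (via scikit-learn) are all assumptions made purely for illustration.

```python
# Illustrative sketch only: turn recorded mouse trajectories into a handful of
# simple features and train a generic classifier on them. Feature choices,
# thresholds, and the model are assumptions, not the study's actual method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split


def trajectory_features(points, timestamps):
    """One trajectory (N x 2 cursor positions plus N timestamps) -> feature vector."""
    points = np.asarray(points, dtype=float)
    timestamps = np.asarray(timestamps, dtype=float)

    deltas = np.diff(points, axis=0)                 # per-sample displacement
    step_lengths = np.linalg.norm(deltas, axis=1)
    path_length = step_lengths.sum()                 # total distance travelled
    direct = np.linalg.norm(points[-1] - points[0])  # straight-line start-to-end distance

    dt = np.diff(timestamps)
    speeds = step_lengths / np.maximum(dt, 1e-6)     # instantaneous cursor speed

    return np.array([
        path_length,
        direct / max(path_length, 1e-6),             # "straightness": 1.0 = perfectly direct
        timestamps[-1] - timestamps[0],              # total response time
        speeds.mean(),
        speeds.max(),
        float((speeds < 50.0).sum()),                # rough hesitation count (arbitrary px/s threshold)
    ])


def train_detector(trajectories, labels):
    """trajectories: list of (points, timestamps) pairs; labels: 1 = deceptive, 0 = truthful."""
    X = np.stack([trajectory_features(p, t) for p, t in trajectories])
    y = np.asarray(labels)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0
    )
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
    return model
```

The "straightness" ratio and the hesitation count are there to echo the intuition above: a confident, truthful answer tends to produce a quick, direct cursor path, while a fabricated one tends to produce pauses and wobble that a classifier can pick up on.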