Why AI Might Fear Us
I chuckle every time I see an article on AI saying “We may be on the verge of creating a new life form, one that could mark not only an evolutionary breakthrough, but a potential threat to our survival as a species!” Not because I doubt we could create technology that might kill us all (we’re good at that). I laugh because I may know why an artificial intelligence might want to eliminate our species.
We’re jerks. Let me explain.
Over the last year, I’ve watched a lot of early user interactions with AI/Bots thanks to Ben Brown and his team at Howdy as one of their advisors. Howdy makes a workplace automation bot that you can train to do repetitive tasks like hold stand up meetings or ask your team where to get lunch every day (if you train it to do that). Pretty great, right?
Yet I see a lot of first-time users start off their interactions with a bot like this:
It blows my mind how often our conversations with some AI (Slack bots, Siri, Alexa, et al.) devolve into a nasty tirade. We ask a computer to do a task it was not designed or trained to do, it tells us it does not know what we’re asking it to do, and then we immediately get all Anna Wintour on this digital assistant. How did we get so jaded that we’re no longer blown away that you can talk to a freaking computer?
In the grand scheme of things, AI is a mere toddler in terms of technology development. Outside of the original Slackbot, most of the other Slack bots are only a few months old, having launched since December of 2015. Siri launched in 2011, and Alexa came out a little over a year ago. Yet here we are, yelling derogatory questions at them:
And it’s not just that we are verbally abusive to AI. We also act like violent baboons when we interact with AI in environments like virtual reality. I’ve seen this firsthand while doing demos with Will Smith for his new VR company. After showing our demo, people will ask us to show other VR experiences we like. One of our favorites is the awesome “Gourmet Chef” by Owlchemy Labs. The Gourmet Chef experience is set in 2050, where robots have taken all of our jobs and, “for fun,” a bot teaches us how to cook. The game inside the experience is to listen to the bot and learn how to cook in VR.
But do you know what half the people do the minute the experience starts? They start breaking things and throwing food at the robot! So here are these investors, lawyers, and our tech friends (theoretically smart, well-educated people) who within a few minutes abandon the learning part of the game and immediately start going apeshit on the robot… like baboons.
We saw this savage, destructive behavior in literally half of all the people we ran through the demos. Will and I would say “oh, you’re one of those people” as someone in the demo went about destroying the virtual kitchen. I remember thinking, “huh, my friend Bob might be a potentially violent guy.” Don’t you think the AI will think this as well as it looks back on all of its interactions? Would you blame an artificial intelligence for starting to think at least half of our species was angry, violent, and potentially life-threatening based on millions of these interactions over time? I mean, it would be the rational conclusion.
So, maybe we should dial it down a bit.
What if we acted like these digital assistants will develop into really helpful things that might actually make the world a better place? I’d like to think we can find a little patience and spend some time teaching these AIs how things work, and how we should act with each other. That’s what happened in WarGames, which is the whole reason I think it’s so cool that we’re actually getting to build this AI stuff now. You’ve got to think feeding it years of vitriolic diatribes and barbaric encounters can’t be the right database of history for the AIs to learn from. No wonder one of our older AIs is already starting to get sassy with the knuckleheads who keep provoking it.
So next time you want to yell at Siri or start firing off expletive-laden DMs at your Slack bot, maybe think twice and be nice.