The dark side of AI: How our uncontrolled desire for advancement could lead to our demise
By James Deakin
ONE FOR THE ROAD
What if I told you just as you were about to board a flight that 50 percent of the engineers who made the plane you’re about to get on said that there’s a 10 percent chance that everybody goes down? Would you still get on that plane?
This was a question raised by Tristan Harris and Aza Raskin, creators of the award-winning Netflix documentary The Social Dilemma, after discovering that 50 percent of AI researchers they spoke to believe there’s a 10 percent or greater chance that humans go extinct due to our inability to control AI. Yes, you read that right. Half of AI researchers believe there’s a 10 percent or greater chance that humans will go extinct due to our inability to control AI.
Welcome to the double exponential era. One where our unquenchable desire for superiority and advancement over everything else could drive us into early extinction if we don’t start pumping the brakes before it's too late.
Nobody is saying that AI doesn’t offer tremendous benefits. Far from it. This is solely about the trade-offs it creates to achieve them. We haven’t even figured out how to write into law, or at least balance out, the ills that the attention economy and the engagement economy have inflicted on us. And now we are opening a new door that has far, far greater repercussions if not handled properly.
One of Harris and Raskin’s key points during their address to AI engineers in their presentation at the Center for Humane Technology was that when you invent a new technology, you uncover a new class of responsibility.
For example, we didn’t need the right to be forgotten to be written into law until computers could remember us forever. Nor did we really need the right to privacy to be written into law until mass-produced cameras came onto the market. It’s not in the original Constitution.
Also, aside from new levels of responsibility, what the AI community worries most about is what they call takeoff. This is when AI becomes smarter than humans across a broad spectrum of tasks and gains the ability to self-improve. Because it operates on artificial intelligence alone, it may come up with bizarre solutions to commands. Say you wish to be the richest person, so the AI kills everyone else. It’s that kind of thing.
As it is now, we’ve already seen automated exploitation of code and cyber weapons, exponential blackmail and revenge porn, and automated fake religions that can target the extremists in your population and feed them automated, personalized narratives that make the extreme even more extreme. AI can also recreate your voice from a three-second sample and have you say anything it wants. We’ve already read reports of extortion or ransom via AI-generated phone calls from a “loved one” pretending to be in trouble and asking for money. Then there are the deepfake videos. And because this is all coming at us at a double exponential rate, we as humans are not equipped to deal with these threats. It’s simply coming in too fast.
So that’s why we need to hit the brakes and focus on the question: How do we upgrade our 19th-century laws and 19th-century institutions for the 21st century? And because AI is something that affects all of humanity, we must form a treaty, much like we did with nuclear weapons. Because if history has taught us anything about technology, it is this: if a technology confers power, it will start a race. And if you do not coordinate, the race will end in tragedy.