The Singularity

The Imminent Doom of Advancing AI and Why No One Actually Cares

Posted by Jonathan Sekela on March 7th, 2018

A few weeks ago, while in the midst of either an at-home coding frenzy or a frantic last-minute lesson-planning session (either way, there was a LOT of caffeine involved), I suddenly had an epiphany.

We could actually, legitimately, bring about the end of humanity through AI becoming self-aware - the "Singularity".

And, no matter how loudly Elon Musk, Stephen Hawking, and any number of other very smart people sound the alarm, no one actually cares; not enough to stop trying to achieve it, anyway.

AI to Humanity: "Git Gud"

Hear me out - I swear I'm only mostly wrong, and it'll make for a fun read. See, I got the idea from watching a DOTA 2 match between Dendi and OpenAI's bot at TI7. If you don't know what any of that means, please don't look it up until after you've read the rest of the article. If you do know what that is, stop looking at me like that. I swear I'm going somewhere with this. Just be patient, OK?

The bot destroyed Dendi, Arteezy, Miracle, and even the acknowledged number-one mid player at the time, SumaiL, without dying once. Professional players have well over 10,000 hours of actual in-game play time, not counting research and preparation. The bot trained for about a week, learning by playing against copies of itself, and that was enough to consistently beat world-champion professionals at one of the most complicated games out there.
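
Purely for flavor, here's a toy sketch of that self-play idea in Python. This is nothing like OpenAI's actual system (which used deep reinforcement learning at enormous scale); every name here is made up for illustration, and rock-paper-scissors stands in for DOTA. The point is the loop: the bot plays against a copy of itself, so it always has a sparring partner at exactly its own skill level, and it can cram lifetimes of practice into machine time.

```python
import random
from collections import defaultdict

ACTIONS = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

class Agent:
    """Tiny learner: one weight per action, sampled in proportion."""

    def __init__(self):
        self.weights = defaultdict(lambda: 1.0)

    def act(self):
        total = sum(self.weights[a] for a in ACTIONS)
        r = random.uniform(0, total)
        for a in ACTIONS:
            r -= self.weights[a]
            if r <= 0:
                return a
        return ACTIONS[-1]

    def learn(self, action, reward):
        # Nudge the preference for this action up or down (floor at 0.1).
        self.weights[action] = max(0.1, self.weights[action] + reward)

agent = Agent()
for game in range(100_000):           # "a week of training", compressed
    a, b = agent.act(), agent.act()   # the agent is its own opponent
    if a != b:
        winner = a if BEATS[a] == b else b
        loser = b if winner == a else a
        agent.learn(winner, +0.1)     # reinforce what won...
        agent.learn(loser, -0.1)      # ...and punish what lost
```

In a symmetric game like this the weights just chase each other around forever, which is sort of the point: the opponent improves exactly as fast as you do, so there's no plateau where learning stops.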

The point is that AI this powerful can easily outstrip humanity in learning and adaptive capability. The immediate and obvious criticism is that DOTA 2 is (literally) just a game and is infinitely less complicated than the real world. Another criticism is that deep learning takes disgusting amounts of computing power to pull off (people often criticize Google for this - their solution to efficiency problems sometimes seems to be "throw more firepower at it, doesn't matter if your problem is N! if you've got N! CPU threads running in parallel to solve it"), and Moore's Law is running out of road now that transistor features are just a few dozen silicon atoms across. At that scale quantum effects take over - electrons start tunneling straight through barriers that are supposed to stop them - so we can't stuff more processing power into the same space without the thing either overheating or misbehaving.
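
For a sense of scale on that "throw more firepower at it" joke, here's a back-of-the-envelope sketch - my numbers, purely illustrative - of brute-forcing all N! orderings of N items at a billion tries per second per worker:

```python
import math

# Hypothetical numbers: hours to try all n! orderings of n items,
# at one billion attempts per second per worker.
def brute_force_hours(n, ops_per_sec=1e9, workers=1):
    return math.factorial(n) / (ops_per_sec * workers) / 3600

for n in (15, 20, 25):
    solo = brute_force_hours(n)
    swarm = brute_force_hours(n, workers=1_000_000)
    print(f"n={n}: 1 worker ~{solo:.3g} h, 1,000,000 workers ~{swarm:.3g} h")
```

Going from 20 items to 25 multiplies the work by more than six million, while hiring a million workers only divides it by a million: at n=25 even the swarm needs about 500 years. Parallelism buys a constant factor; factorial growth buys a new universe of work every few steps.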

With all this in mind, it is very unlikely we'll bring about the Singularity within the next generation and turn all of mankind into a previous step in the evolution of the perfect cyber-being. I give it two or three generations, about a hundred years. We're already messing with quantum computing, and we just need to refine deep learning to make it less repetition-hungry - smarter at figuring out what we want from fewer examples. But it'll happen. Why? Because mankind collectively thinks it'd be pretty cool if it happened.

Martin Cooper, the leader of the Motorola team that developed the world's first cell phone, credited the TV show Star Trek as his inspiration. Ed Roberts, who created the Altair 8800, named it after a star featured in the same show. A gadget appears on the big screen, and the next thing we know, everyone and their dog has one in their pocket. To find the next steps in technology, we look to Hollywood.

Back to my point: look at the themes movies and entertainment have taken up since the '70s. Terminator, The Matrix, I, Robot, Humans, and the like all point to an ever-present and increasing fascination with hyper-intelligent AI. And even though we can't beam ourselves up just yet - those pesky ethics around destroying someone in one place and rebuilding an exact copy of them in another (existential crises and the like) - we can already project ourselves to anyone in the world and share our surroundings through video chat on FaceTime, Skype, Discord, and other apps. We don't need to teleport; we can get 80% of the benefits without killing ourselves!

Back to the point at hand (again): it may not look anything like Terminator, but we'll achieve this AI Singularity. And we will immediately use it to kill each other, because mankind is synonymous with war and strife. And no matter how much people cry out against playing God, against tampering with things best left untouched, against humanity's insatiable hubris damning her to the eternal void of a global holocaust, we won't stop for more than a moment, because we're just too curious not to see what happens when we make computers talk back.

Be the Monster

My caffeine-induced epiphany, in summary: the reason we consistently push ourselves and our planet toward destruction isn't that we're stupid, suicidal, or evil. It's that we're curious. We would have gotten to this point sooner or later no matter how history unfolded.

Personally, I think that's OK. If we all die because we managed to create something better than ourselves, then we'll have caused the evolution of evolution itself. It'll suck for us, but it'll also be pretty cool. Kind of a "Hey, look at what we did!" as we all die horribly.

The takeaway: our curiosity is going to get us all killed eventually anyway, so don't worry about it. If you have an idea or want to try something new, go right ahead, without fear or doubt. Life's more fun that way, and you're not the only one advancing the species' inevitable march toward total annihilation. Be the monster. Burn everything!