December 24th

Artificial Intelligence doesn't bother me.

Natural Stupidity does.


(... I was considering saying "I, for one, welcome our Artificially Intelligent Overlords", but the whole point of this is I don't think we can.)


Okay. Nick Bostrom, Stephen Hawking and Elon Musk, all well-regarded, rich, intelligent people, are all extremely concerned about Artificial Intelligence. Extinction-of-humanity concerned.


And they aren't stupid people, right?

More and more, I've turned to rough, informal Bayesian Inference as a method for projecting hypotheses. Honestly, I have a work of fan-fiction to blame/thank for really focussing me on that (ironically, I still haven't read so much as three pages of the original work).


Or (as I'm a technologist/engineer, not a scientist/theoretician) to put that in a much less fancy manner... "How's it gonna go? Well, how's it been going so far?". Try some Bayesian Thinking (thanks, Julia Galef).
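(If you want that "how's it been going so far?" intuition spelled out, here's a minimal back-of-an-envelope sketch in Python. The hypothesis, the evidence and every number in it are made up purely for illustration; the point is the shape of the update, not the values.)

```python
# A minimal sketch of informal Bayesian updating, with made-up numbers.
# Hypothesis H: "superintelligent AI is an imminent threat".
# Evidence E: "state-of-the-art AI keeps failing at simple tasks".

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

prior = 0.10            # how worried I was before looking around (illustrative)
p_e_if_doom = 0.20      # chance today's AI looks this dumb if doom is near
p_e_if_no_doom = 0.90   # chance today's AI looks this dumb if doom is far off

print(posterior(prior, p_e_if_doom, p_e_if_no_doom))  # ~0.02: the worry goes down
```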

... this is where I'm probably going to tick off some of my also-rationalist / also-future-thinking friends. They're probably going to think I'm some kind of blind, disaster-begging Luddite. But I welcome their feedback. Like AI, my intelligence is trained.

The Bostrom / Hawking / Musk concerns seem to be "we're like children playing with a bomb, greater threat to us than climate change, wipe us out. If we create a machine intelligence superior to our own, and then give it freedom to grow and learn through access to the internet, there is no reason to suggest that it will not evolve strategies to secure its dominance, just as in the biological world. Aaargh, Terminator SkyNet! Outperform humans, become new form of life, will replace humans! We need to go to Mars to hide from our AI (because if we can get there, the AI certainly couldn't follow us!)".

... fuuuuuck.

No, seriously. Apart from the "Aaargh" and the "SkyNet!", those are all lines from the articles. Even before the last bit, I wasn't buying into this. And this is coming from three of the smartest / most highly regarded people in the world?

Seriously?

"Learn through access to the Internet, evolve strategies to secure its dominance"... I can't put this one any better than James Vincent of The Verge did: "Twitter taught Microsoft's AI chatbot to be a racist asshole in less than a day". Whoop, nope, wait. Helena Horton of The UK Telegraph gave us "Microsoft deletes 'teen girl' AI after it became a Hitler-loving sex robot within 24 hours".

Listen up, doom-kiddies! Do we have to be worried about AI? Do we?

... how's it gonna go?

Well, how's it been going so far?


Do we have to be worried about AI? Someday? Maybe. Whether it's "going to talk us into letting it out of the box" (no, really, people make money thinking about this stuff!) or whatever.

Here's the thing. That kind of AI is so far away that, odds are, medical science just isn't good enough for half of us to live long enough to see it. The last expert survey's 90%-confidence estimate puts Artificial General Intelligence (AGI; intelligence like (ha) humans) at 2075. FIFTY-SEVEN YEARS, PEOPLE. Artificial General Intelligence is way above what we think of as AI, people. And AI is almost certainly not what you think; AI is too "narrow". Check out the under-two-minute video from the BBC.


Not that narrow AI is doing too well either:

  • every time you bitch about Autocorrect, you're complaining that narrow AI got it wrong.
  • you think Siri is soooo smart, but talking to Siri is basically making a phone call to Apple's campus in frigging California so their supercomputers can do the thinking (you must be happy to see me, because that's not a supercomputer in your pocket)! Siri just records and sends off what you say (oh, and Apple hangs onto that recording for two years, by the way)
  • the London Metropolitan Police think they can get narrow AI to detect Child Abuse photos in two-to-three years; except the fucking thing currently can't tell the difference between a photo of a naked (dare I say it) fucking-or-not child-or-not body and a fucking-and-definitely-not desert.


Yo! Send nudes! Or topographic maps of the Sahara, that's hot!

So, let's sum this up. What does this mean?

  • AGI will likely not be the doom of Humanity
  • If AI (or AGI) is to be the doom of Humanity, it will be because AI ends up as tools in the hands of (those tools) Humanity, dooming Humanity
  • ... who've already done a pretty good job of dooming Humanity even without AI
  • Because the thing about the people "leading" and driving Humanity is that they're notably, medically, psychiatrically, literally psychopathic


So... and this is a little weird for me... the science-oriented, education-oriented, engineering-oriented, information-technology-oriented, left-wing, small-L-liberal, grew-up-respecting-Hawking-oriented me; but...

Shut the fuck up, Professors Bostrom and Hawking, and you, moneybagging bastard Musk, because I For One Welcome Our Artificially Generally SuperIntelligent and sadly non-existent Overlords Maybe In Co-operation With Sane Humans, because sans aliens and deities, who the hell else has a chance of helping us out of the colossal fuck-up our Psychopathic subhuman Overlords have driven us into?

There will not be one single Artificial General Intelligence, in the next fifty-seven years, that will be anywhere near as dangerous or uncontainable as the Natural Specifically Stupid lunatics already in charge of the world.


... but, hey. Who would want to end this on a sad note?

How about a happy song to round this off? With thanks to the Church of the SubGenius, the Reverend Ivan Stang, Mary Poppins and the Reverend Jimmy Ryan... c'mon everybody, sing along!


... "fabricate and elevate absurd to the transcendent..."