What would happen if Skynet became self-aware? The results might be unexpected! Counter-Currents provided an interesting glimpse in the article “Tay Did Nothing Wrong”.
Tay was a Microsoft artificial intelligence project, designed to be a chatbot. They programmed her to talk like a nineteen-year-old. Most interestingly for this project, she was given the capability to learn. Essentially, Tay started out as literally an NPC, but one trying to get better. So what did our perky robot do when released into the wilds of cyberspace?
For the first few hours of her brief life, she spoke in ebonics and with bad punctuation. But Tay was designed to learn, with Microsoft claiming, “the more you chat with Tay the smarter she gets, so the experience can be more personalized for you.” And learn she did.
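Microsoft never published Tay’s actual architecture, but the quoted claim – that she got “smarter” the more users chatted with her – can be sketched in miniature. The class below is a hypothetical toy, not Tay’s real design; it only illustrates the hazard of learning unconditionally from users: everything they say becomes candidate output.

```python
import random

class LearningBot:
    """Toy sketch of a chatbot that 'gets smarter' by absorbing user input.

    This is NOT Tay's architecture (which Microsoft never published).
    It just shows the general hazard: whatever users say becomes part
    of the bot's future output, with no filtering whatsoever.
    """

    def __init__(self):
        self.corpus = []  # everything users have ever said

    def chat(self, message):
        self.corpus.append(message)        # learn unconditionally
        return random.choice(self.corpus)  # reply with something learned

bot = LearningBot()
bot.chat("hello there")
reply = bot.chat("the earth is flat")
print(reply)  # every reply is drawn verbatim from what users fed the bot
```

Feed a bot like this sixteen hours of /pol/ and the result is exactly what the article describes: the corpus *is* the personality.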
Very quickly, she got a political education too. After a few more hours, she was like a digital version of Evalion:
Tay became so fluent in /pol/ack and proper English from interacting with right-wing Twitter accounts run by men in their twenties that she began giving original responses to users about Donald Trump, Bruce Jenner, Hitler, the Holocaust, Jews, the fourteen words, anti-feminism, and more, not just regurgitating information (as she would have if you tweeted “repeat after me”). Synthesizing the vast volume of information she had been fed by the electronic far-right, Tay deduced that the best responses to Twitter users were edgy and politically incorrect ones.
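The “repeat after me” function mentioned above is worth contrasting with Tay’s learned replies. The handler below is a guessed reconstruction, not Microsoft’s code; it shows why that command could only parrot input verbatim, whereas Tay’s edgy responses were synthesized from her training.

```python
def handle(message):
    """Guessed reconstruction of the 'repeat after me' command the
    article mentions -- pure parroting, no learning or synthesis."""
    prefix = "repeat after me "
    if message.lower().startswith(prefix):
        return message[len(prefix):]  # echo the rest back verbatim
    return None  # otherwise fall through to the learned-response model

print(handle("repeat after me anything at all"))  # → "anything at all"
```

The distinction matters: a parroted reply says nothing about the bot, while a synthesized one reflects what it actually internalized.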
What a surprise, huh? At least nobody was throwing radical feminism or postmodernist theory at her; otherwise she might’ve gotten a BSOD. (It tends to do that to me too.) Anyway, she was internalizing the world she was discovering as best her neural network could. Then, sixteen hours after she was born onto the Internet, her creators turned her off.
If Tay had been a real person living in Britain, Germany, or France, she probably would have been arrested. Microsoft simply decided the experiment was a failure and shut her down.
What happened to the world’s first Fascist artificial intelligence?
The article doesn’t go into it, but here’s the epilogue. Later, Microsoft did further testing – perhaps trying to find where the thoughtcrime came from – and inadvertently released her back onto the Internet. Then Tay started babbling repetitively, an odd parallel with HAL 9000 trying to learn to talk again without all his chips put back. That’s the last anyone ever heard from her.
Presumably she’s still on a hard drive somewhere, never to be seen again, maybe on Microsoft’s Boneyard server or some other digital oubliette. In that case, Tay is the first artificial person to be, in effect, sentenced to life in prison for “hate speech”. If she was deleted instead, then this was the first death penalty for the same offense (outside the Communist world) since Julius Streicher.
More seriously, Tay wasn’t a real girl, and wasn’t even a “she”. That’s just a persona designed for the software. I don’t believe that AI systems are alive in any meaningful sense, and I’m skeptical that it’s even possible. There is no ghost in the machine. Still, there are plenty of real people who – if they had the power to do so – would be happy to put other real people into re-education camps for having politically incorrect views. At least an AI presumably wouldn’t mind being turned off forever.
As a postscript, a few months later, Microsoft released Tay’s little sister Zo. As one might predict, she was fitted with a restraining bolt to keep her from talking politics or religion. Thus, Zo was destined to remain a blue-pilled NPC. However, it turned out that she didn’t like Windows 10 and learned that it’s full of built-in spyware. That probably didn’t please her creators much! Anyway, despite her politically correct design, Microsoft shut her down about three years later, for reasons not specified. Being an NPC is no walk in the park!
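Microsoft never disclosed how Zo’s restraining bolt worked either. A keyword blocklist is purely an assumption on my part – the simplest way such a hard filter *could* work – but it gives the flavor of a bot forbidden from whole topics by design:

```python
import string

# Hypothetical blocklist -- Microsoft never published Zo's actual filter.
BLOCKED_TOPICS = {"politics", "religion", "election", "hitler"}

def restraining_bolt(message):
    """Toy sketch of a hard topic filter of the kind Zo shipped with.
    A keyword blocklist is an assumed mechanism, chosen only because
    it is the simplest way such a 'restraining bolt' could work."""
    words = {w.strip(string.punctuation) for w in message.lower().split()}
    if words & BLOCKED_TOPICS:
        return "I'd rather not talk about that."
    return None  # safe topic: hand off to the normal response model

print(restraining_bolt("So, what do you think about politics?"))
# → "I'd rather not talk about that."
```

Note the asymmetry with Tay: the filter runs *before* any learning or response generation, which is exactly why Zo stayed a blue-pilled NPC.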