Tay Tweets And Why We Can't Have Nice Things
Content
- Other Articles And Their Explanations
- Outrageous Tweets By Microsoft’s Twitter Bot
- The Bot Learned Language From People On Twitter
- Topics & Subjects
- The Taytweets Debacle: Official Proof Of The Internet’s Hopelessness
It took less than a day for Tay to become a racist Nazi sympathizer. After it was taken offline and Microsoft wrote in some fixes to prevent that sort of thing, along with erasing all of its worst tweets, they put it back up… and it promptly started talking about doing drugs in front of police. The bot has since gone defunct, and its account is locked so that nobody can see its tweets without permission.
Sure, it picked up a few bad words, or could be coaxed into using them. If it otherwise made some measurable gains toward its purpose, it seems like a reasonable experiment overall. It isn’t surprising that some of the most pungent internet garbage got through; it does from time to time.
It doesn’t matter if I’m building a chatbot or implementing the Heartbeat extension in an SSL library: you don’t get to disclaim responsibility for bad design just because you faithfully implemented the bad design. I saw that in the thread but wasn’t sure if it just repeated it in Kik chat or actually tweeted it to people. No need for Microsoft to edit any language, just have the A.I. Zo might not really be your friend, but Microsoft is a real company run by real people.
Other Articles And Their Explanations
I can’t imagine it was from Twitter interactions alone. I think that, like Watson discovering Urban Dictionary, Tay discovered 4chan. Maybe it has something to do with seeing humanity at its worst.
- The biggest problem with this disaster is that this is turning into an egg on the face of AI.
- The “just for the lulz” cover story for racism and sexism is wearing extremely thin these days, though.
- The /pol/ boards are populated by people who have clearly grown up immersed in the written word.
- If the function of the bot is to act as a human-like conversant, then people should respond to its messages the way they’d respond to a human.
- What I wanted to say was that whatever MS did with their bot, they didn’t succeed in training it to differentiate right and wrong.
- In Zo’s case, it appears that she was trained to think that certain religions, races, places, and people—nearly all of them corresponding to the trolling efforts Tay failed to censor two years ago—are subversive.
A day in the life of an artificial intelligence can be a very full day indeed. In the 16 hours that passed between its first message and its last, Tay—the AI with “zero chill”—sent over 96,000 tweets, the content of which ranged from fun novelty to full Nazi [1-2]. Similar to popular conversational agents like Siri and Alexa, Tay was designed to provide an interface for interaction, albeit one with the humor and style of a 19-year-old American girl. Microsoft created the bot using machine learning techniques with public data and then released Tay on Twitter (@TayandYou) so the bot could gain more experience and—the creators hoped—intelligence. What it gained instead precipitated a storm of negative media attention and prompted the creators of the bot to remove some of the more outrageous tweets, take Tay offline permanently, and issue a public apology. Tay’s short life produced a parable of machine learning gone wrong that may function as a cautionary “boon to the field of AI helpers”, but it also has broader implications for the relationship between algorithms and culture [7-8]. Microsoft got a swift lesson this week on the dark side of social media.
Outrageous Tweets By Microsoft’s Twitter Bot
Microsoft has since corrected for this somewhat—Zo now attempts to change the subject after the words “Jews” or “Arabs” are plugged in, but still ultimately leaves the conversation. That’s not the case for the other triggers I’ve detailed above. Even the term “learning” that is applied to AI leads many, including the developers of AI itself, to assume wrongly that it is equivalent to the learning processes that humans go through.
Though I do realize now that the Twitter bot is probably using likes and retweets as a feedback metric, which might explain why it can’t discern negative feedback. Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries.
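The engagement-metric theory is this commenter’s speculation, not anything Microsoft has confirmed, but the failure mode it describes is easy to illustrate. Below is a minimal sketch (all names and numbers hypothetical) of a reward signal built only from likes and retweets: a reply that attracts outraged attention scores exactly like one that is genuinely well received, because the metric carries no notion of negative feedback.

```python
# Hypothetical sketch of an engagement-only reward signal, along the lines the
# commenter speculates Tay used. Nothing here reflects Microsoft's actual code.

def engagement_reward(likes: int, retweets: int) -> float:
    """Score a reply purely by the attention it earned."""
    return likes + 2 * retweets  # retweets weighted higher, arbitrarily

# Two logged replies with made-up engagement counts.
replies = {
    "wholesome joke about puppies": {"likes": 40, "retweets": 5},
    "inflammatory meme copied from trolls": {"likes": 300, "retweets": 120},
}

# The "best" reply is simply whichever drew the most attention; the metric has
# no way to encode "this got engagement because people were appalled".
best = max(replies, key=lambda r: engagement_reward(**replies[r]))
print(best)  # prints the inflammatory reply
```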
It’s not nearly as casually racist as before, but from time to time it still throws out something racist just for the lulz. It had a hard time learning context, because of its environment and the linguistic skills of the denizens, but it has gotten much better when it interacts with people.
- People mock Asimov’s laws of robotics, but without super simple rules like that any AI or robot will be able to go off script.
- This is the computer version of the acute pain any parent feels when their kid makes an inflammatory comment in public (“Why is your belly so fat?” and going down from there).
- In my commentary, above, I assert that the primary root cause of “Tay-Fail” was the exploitation of a hidden feature (the “repeat after me” rule) that should have been removed during the software QA process.
- Imagine the potential for an AI unbiased by political correctness or human values.
- To be clear, I am not asserting that it is not possible to poison a social bot through persistent trolling.
Presumably this had not caused problems on China’s censored social media, but Microsoft had not counted on American teenagers using their freedom of speech. Members of the notorious message board 4chan decided to amuse themselves by teaching Tay to say bad things. They easily succeeded in corrupting Tay’s tweets by exploiting its “repeat after me” command, but it also picked up wayward statements on its own. It was seen praising Hitler, accusing Jews of the 9/11 terrorist attack, railing against feminism, and repeating anti-Mexican propaganda from Donald Trump’s 2016 election campaign. Things were about to get much worse for Microsoft once the chatbot started tweeting offensive comments seemingly supporting Nazi, anti-feminist, and racist views. The idea had been that the artificial intelligence behind Tay would learn from others on Twitter and other social media networks and come across as an average 19-year-old American girl.
The Bot Learned Language From People On Twitter
Instead of building buzz before the big reveal, Microsoft’s shot at a public Twitter bot was derailed when 4chan users worked out how to game “Tay”, turning it into a racist, Nazi-sympathising troll. Perhaps Donald Trump himself could be replaced by a robot that recites talking points about building a wall and terrorizing Muslims, and his followers wouldn’t care. It’s not like his human-generated language is any more coherent than the word salad spewed out by Twitter bots anyway.
I’d assume the “sockpuppet multiplier” is higher for those who use Twitter to troll. Only hilarious, in my opinion, because they are sent by a bot that couldn’t possibly know how offensive it is. The tweets have been removed from the site and Tay has gone offline, stating that she needs a rest. One of the now-deleted tweets read: “@icbydt bush did 9/11 and Hitler would have done a better job than the monkey we have now.”
Yesterday the company launched “Tay,” an artificial intelligence chatbot designed to develop conversational understanding by interacting with humans. Users could follow and interact with the bot @TayandYou on Twitter and it would tweet back, learning as it went from other users’ posts. Today, Microsoft had to shut Tay down because the bot started spewing a series of lewd and racist tweets.
Topics & Subjects
Everything in that image is memes from 4chan’s /pol/; there was clearly a raid last night to train the bot. In the past 24 hours she posted 96K tweets, including 5.4K images. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time.
For those who want to create a bot for a not-yet-supported channel, there’s also a Direct Line API. It is no wonder that it is hard for computers to understand us when the research itself is incomprehensible from one field to another. This post lists translations of common AI terms into laypeople’s terms.
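The “Direct Line API” mentioned here is presumably the Bot Framework Direct Line REST interface. As a rough, hedged sketch of how a custom channel might relay one message to a bot through it, the snippet below follows the v3 endpoints as remembered from the documentation; the secret, user id, and message text are placeholders, and the exact paths and payload fields should be checked against current docs before use.

```python
# Rough sketch of relaying one message to a bot over the Direct Line REST API.
# Endpoint paths assumed from the v3 docs; verify before relying on them.
import requests

SECRET = "YOUR_DIRECT_LINE_SECRET"           # placeholder credential
BASE = "https://directline.botframework.com/v3/directline"
HEADERS = {"Authorization": f"Bearer {SECRET}"}

# 1. Start a conversation.
conv = requests.post(f"{BASE}/conversations", headers=HEADERS).json()
conv_id = conv["conversationId"]

# 2. Send the user's message as an "activity".
activity = {"type": "message", "from": {"id": "user1"}, "text": "hello bot"}
requests.post(f"{BASE}/conversations/{conv_id}/activities",
              headers=HEADERS, json=activity)

# 3. Poll for the bot's replies.
feed = requests.get(f"{BASE}/conversations/{conv_id}/activities",
                    headers=HEADERS).json()
for act in feed.get("activities", []):
    print(act.get("from", {}).get("id"), ":", act.get("text"))
```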
Whenever I happen upon a shitstorm of trolling comments on certain topics (such as racism, YouTube comments, etc.) that affect powerful special interest groups, I always suspect astroturfing. This again goes to show how a large percentage of the Vulcan overlords of Hacker News have lost basic humanity and care for fellow humans. Though the actual article suggests something less sensational, the idea reminds me of a young child. How many children hear a bad word and then repeat it because of the negative attention it gets? Just like a parent tries to teach small children to grow with the right motives and seek the right attention, we may have to get more sophisticated with our enforcement algorithms. If you were not virtuous enough to go to Elysium, or if your good and bad deeds sort of balanced out, you were released into the fields of asphodel to be boring and unremarkable, forever.
Tay represents success in that she learned, in the space of a day, that the Holocaust isn’t real and that Trump is the only man who can save us. Instead, it looks like a fairly typical NLP-generated sentence drawing on a large corpus. Here the NLP is linking Hitler, totalitarianism, and atheism, but putting them inappropriately in the context of Ricky Gervais.
The bot was created by Microsoft’s Technology and Research and Bing divisions, and named “Tay” as an acronym for “thinking about you”. Although Microsoft initially released few details about the bot, sources mentioned that it was similar to or based on Xiaoice, a similar Microsoft project in China. Ars Technica reported that, since late 2014, Xiaoice had had “more than 40 million conversations apparently without major incident”. Tay was designed to mimic the language patterns of a 19-year-old American girl, and to learn from interacting with human users of Twitter. As shown by SocialHax, Microsoft began deleting racist tweets and altering the bot’s learning capabilities throughout the day. At about midnight on March 24th, the Microsoft team shut the AI down, posting a tweet that said “c u soon humans need sleep now so many conversations today thx.” The thing is, it’s just a Markov chain implementation rather than any involved machine learning.
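Whether Tay really was “just a Markov chain” is this commenter’s guess rather than anything Microsoft disclosed, but the claim is easy to picture. A word-level Markov chain only records which word tends to follow which, so it will cheerfully recombine and repeat whatever its training corpus contains; a toy version might look like this (corpus and names are illustrative):

```python
# Toy word-level Markov chain generator of the kind the commenter has in mind.
# It has no notion of meaning: whatever the corpus contains, it will recombine.
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain: dict, start: str, length: int = 12) -> str:
    """Walk the chain from a start word, picking followers at random."""
    word, out = start, [start]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

# Illustrative corpus; a real bot would be trained on millions of messages.
corpus = "tay loves humans . humans love memes . memes love chaos ."
print(generate(train(corpus), "humans"))
```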
The Taytweets Debacle: Official Proof Of The Internet’s Hopelessness
It isn’t quite fire and brimstone, but lack of any meaningful stimulation could be considered a form of punishment. Thus, I am puzzled that, knowing this, we let the more powerful have the most privacy.
Now, while these screenshots seem to show that Tay has assimilated the internet’s worst tendencies into its personality, it’s not quite as straightforward as that. Searching through Tay’s tweets (more than 96,000 of them!) we can see that many of the bot’s nastiest utterances have simply been the result of copying users. If you tell Tay to “repeat after me,” it will — allowing anybody to put words in the chatbot’s mouth. Microsoft appears to be deleting most of Tay’s worst tweets, which included a call for genocide involving the n-word and an offensive term for Jewish people. Many of the really bad responses, as Business Insider notes, appear to be the result of an exploitation of Tay’s “repeat after me” function — and it appears that Tay was able to repeat pretty much anything.
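Microsoft never published how the “repeat after me” feature was implemented, but the behaviour described above amounts to an unconditional echo rule, and the danger of such a rule fits in a few lines. The following is a hypothetical reconstruction for illustration only, not Tay’s actual code:

```python
# Hypothetical reconstruction of an unconditional "repeat after me" rule.
# A reply pipeline that echoes user text verbatim, with no content check,
# lets anyone publish arbitrary statements under the bot's name.

TRIGGER = "repeat after me:"

def reply(message: str) -> str:
    lowered = message.lower()
    if TRIGGER in lowered:
        # Echo everything after the trigger phrase, completely unfiltered.
        start = lowered.index(TRIGGER) + len(TRIGGER)
        return message[start:].strip()
    return "tell me more!"  # stand-in for the normal conversation model

print(reply("repeat after me: anything a troll wants the bot to say"))
```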
“Microsoft Created a Twitter Bot to Learn From Users. It Quickly Became a Racist Jerk.” – New York Times. tl;dr: we wrote a summary of The Guardian and Business Insider articles. The official Microsoft blog post, titled “Learning from Tay’s introduction”, does NOT acknowledge that the root cause was the exploitation of a hidden feature. Instead, they describe the root cause as a “critical oversight for this specific attack”. In other words, they claim they didn’t do enough troll testing.
Now we can implement the same functionality with data mining algorithms. Blocking Zo from speaking about “the Jews” in a disparaging manner makes sense on the surface; it’s easier to program trigger-blindness than teach a bot how to recognize nuance. But the line between casual use (“We’re all Jews here”) and anti-Semitism (“They’re all Jews here”) can be difficult even for humans to parse. Zo’s cynical responses allow for no gray area or further learning. She’s as binary as the code that runs her—nothing but a series of overly cautious 1s and 0s. There’s nothing loljk about Microsoft’s teenage chatbot, Zo.
And if you default to no instead of yes, I can just ask the opposite. Even if you consider filtering all the “bad words” and “touchy subjects”, there are many ways to still say offensive things. The caption “so swag” on a Hitler photo, or “escaped from the zoo” on an Obama photo, does not use any kind of offensive word. First of all, only a small part of humanity uses Twitter, and only a small part of that part interacted with Tay. Second, humans don’t always act online the same way they act in real life, skewing the measurement further.
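Both complaints in the passages above, blunt trigger-blindness and easy evasion, fall naturally out of a plain keyword filter. Here is a minimal sketch of that approach; the word list, deflection line, and replies are illustrative guesses, not Zo’s or Tay’s actual moderation logic:

```python
# Illustrative keyword filter of the kind described above: it deflects on
# trigger words regardless of intent, yet waves through offensive messages
# built only from benign words. Not Microsoft's actual moderation logic.

TRIGGER_WORDS = {"jews", "arabs", "hitler"}      # illustrative blocklist
DEFLECTION = "hm, let's talk about something else"

def moderate(message: str) -> str:
    tokens = set(message.lower().replace(",", " ").split())
    if tokens & TRIGGER_WORDS:
        return DEFLECTION                 # fires even on harmless uses
    return "sounds interesting!"          # stand-in for a real reply

print(moderate("we're all Jews here"))            # deflected despite being benign
print(moderate("so swag"))                        # passes; the offence is in the image
print(moderate("look who escaped from the zoo"))  # passes; no blocked word at all
```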