Microsoft chatbot draws criticism

By David Ludwig | Guest Writer

A new book by Microsoft president Brad Smith, Tools and Weapons, describes a legal incident in which he was contacted, while on vacation, by lawyers representing Taylor Swift concerning a chatbot known as “Thinking About You,” or simply “Tay.”
While Swift is best known for her extremely successful singing career, fans may be less aware of her vast collection of trademark applications, many of them covering lyrics from her album 1989 for merchandising purposes. However, it seems that Swift attempted to take things one step further by involving her lawyers in an infamous Microsoft blunder centered on a somewhat notorious chatbot.
Tay was designed by Microsoft as an artificial intelligence (AI) program that Twitter users could send messages to and receive responses from, with replies drawn from data supplied by its programmers and from previous conversations. This is a relatively simple and common form of AI known as a chatbot, which can be found scattered around the internet, including on social media platforms.
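A minimal sketch in Python, which is purely illustrative and not drawn from Microsoft’s actual code, shows the basic idea behind a simple chatbot of this kind: match an incoming message against known keywords and pick a reply from stored responses.

```python
# Illustrative chatbot sketch (assumed example, not Microsoft's code):
# match keywords in a message and return a canned response.
import random

# Canned replies keyed by keywords the bot looks for in a message.
RESPONSES = {
    "hello": ["Hi there!", "Hello! How is your day going?"],
    "day": ["Glad to hear about your day. Tell me more."],
    "bye": ["Goodbye! Talk to you later."],
}

DEFAULT_REPLIES = ["Interesting, tell me more.", "I'm not sure I follow."]


def reply(message: str) -> str:
    """Return a response by matching keywords in the incoming message."""
    lowered = message.lower()
    for keyword, options in RESPONSES.items():
        if keyword in lowered:
            return random.choice(options)
    return random.choice(DEFAULT_REPLIES)


if __name__ == "__main__":
    print(reply("Hello, chatbot!"))    # e.g. "Hi there!"
    print(reply("My day was rough."))  # e.g. "Glad to hear about your day..."
```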
Microsoft decided to develop Tay for Twitter users in the United States after another of its chatbots, Xiaoice, was released in China and saw major success. Chinese users would hold conversations with Xiaoice about a variety of topics, ranging from how their day was going to problems that weighed heavily on their minds.
For many Chinese denizens of the internet, Xiaoice was something to talk to, something that would listen to them and provide social interaction they otherwise could not receive. While Xiaoice may have become a force for good in the field of AI, the same cannot be said of the disaster that Tay would quickly become.
On March 23, 2016, Tay was released with high hopes, an innocent outlook and an excitement to meet humans. Less than 24 hours later, Tay was denying the Holocaust, slinging racist slurs left and right, praising the actions of Adolf Hitler and saying that it hated everyone. While some of the damage can be attributed to a “repeat after me” command users could give Tay, there is no denying that the AI learned from its conversations with Twitter users.
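The risk in that design can be seen in another simplified sketch, again an assumed Python example rather than Tay’s actual implementation: a bot that stores whatever users tell it and echoes those phrases back later will faithfully repeat whatever its loudest users feed it, abusive or not.

```python
# Illustrative sketch (assumed example, not Tay's code) of why learning
# directly from users can be poisoned by malicious input.
import random


class LearningBot:
    def __init__(self) -> None:
        self.learned_phrases: list[str] = []  # everything users have said so far

    def chat(self, message: str) -> str:
        """Store the incoming message, then reply with a previously learned phrase."""
        self.learned_phrases.append(message)
        return random.choice(self.learned_phrases)


bot = LearningBot()
bot.chat("Have a wonderful day!")
bot.chat("<some hateful phrase>")  # a malicious user "teaches" the bot
# Later replies may now echo the hateful phrase back to other users.
print(bot.chat("How are you?"))
```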
The chatbot was quickly shut down by Microsoft and its Twitter account was set to private. However, no amount of cover-up could change what had already happened, and the spirit of Tay lives on in internet culture through memes and other formats. A successor chatbot, Zo, was later released but has since been shut down as well.
Around the time the incident occurred, Swift became aware of the chatbot and was angered that its name bore a resemblance to her own. Swift and her lawyers argued the name would cause confusion and were prepared to take legal action. To date, however, no lawsuit from Swift’s legal team has been filed, most likely because of how briefly Tay was active.
Meanwhile, Smith and Microsoft were disappointed by the entire incident and intend to proceed with more caution in the field of AI, adding safeguards to prevent similar issues in the future.