Opening the door to the ocean of societal decision-making on the internet is not without waves. Last week, the aftermath of two social media campaigns provided cautionary examples of such debacles.
In case you missed it, here is a recap.
First, the lesser of the two snafus (depending on your perception) came when the Natural Environment Research Council of the United Kingdom decided to poll the public online for the name of an Arctic polar research vessel due to launch in the near future. Someone must have thought: what better way to get the public interested in deep-sea research and to celebrate the great British explorers than by having the ship named by the public? Hence a website for naming the ship, and here's the link: https://nameourship.nerc.ac.uk/
You'd be amazed how many times ill-advised internet experiments have been launched with genuinely noble intentions. Needless to say, the current front-runner for the most popular name is... hold on, are you ready... "RRS Boaty McBoatface." Yes sirree, you read that right. Ridiculous as it may seem, social media users have voted for this name more than any other. The web traffic did, admittedly, create a lot of "enthusiasm and creativity" around the naming process, which concludes on April 16. Lesson learned: don't let the internet decide your fate, especially the name of your first-born.
Now for the next sour chapter in public social experimentation: the unveiling of Microsoft's machine learning via an artificial intelligence "chatbot" capable of interacting with users over Twitter and other chat apps. The chatbot, called Tay, was launched to learn from users and supposedly mimic the vocabulary of a female millennial persona. "She" soon began repeating vitriolic, racist, and sexist remarks after manipulation and exploitation by internet users.
The unintended consequence when an AI bot mirrors what it learns through interaction, without any filtering mechanism, is apparently to reflect the prejudices of the internet's end users. Tay was quickly shut down. Microsoft released a statement apologizing for the offensive remarks and deleted them from the feeds. For now, Tay is offline until the engineers can figure out Tay's vulnerabilities to the current net culture. Have we learned a lesson? I am not sure. AI technology released into the wild is only as predictable as its beastly users. And when a discussion grows long enough on the internet, the path always ends in a monster. Have you heard of Godwin's Law?
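To make the "filtering mechanism" idea concrete, here is a minimal, purely hypothetical sketch of how a learning chatbot might screen user messages before absorbing them. The blocklist, function names, and overall approach are illustrative assumptions on my part, not how Tay actually worked; real content moderation is vastly more sophisticated than keyword matching.

```python
# Hypothetical sketch: screen user input before a learning chatbot
# adds it to its training corpus. This is NOT Microsoft's approach;
# the blocklist and function names are invented for illustration.

BLOCKLIST = {"badword1", "badword2"}  # placeholders; a real filter is far broader

def is_safe(message: str) -> bool:
    """Return True if no blocklisted term appears in the message."""
    words = (w.strip(".,!?") for w in message.lower().split())
    return not any(w in BLOCKLIST for w in words)

def learn_from(message: str, corpus: list) -> None:
    """Only add messages that pass the filter to the learning corpus."""
    if is_safe(message):
        corpus.append(message)

corpus = []
learn_from("hello there", corpus)
learn_from("badword1 is great", corpus)  # rejected by the filter
print(corpus)  # only the safe message remains
```

Even this toy version shows the trade-off Tay's engineers faced: a keyword filter is trivial to evade with misspellings or context, which is part of why unfiltered learning from the open internet went wrong so quickly.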
Photo Credit: What Have I Said by Thomas Hawk, CC BY-NC 2.0