Microsoft Fires Its AI-Ethics Team: A Setback for Ethical and Responsible AI Development

By Samuel Smith
A week ago, Microsoft removed the metaphorical smoke detector from its new AI-powered Bing, which launched in February and has already started a few fires that Microsoft has been scrambling to put out. Despite rising concerns about bias, discrimination, and the spread of misinformation by the artificial intelligence newly built into its search engine, Microsoft fired its entire AI ethics team.
OpenAI captured the world’s attention with the release of ChatGPT in November 2022, which made large language models (LLMs) such as OpenAI’s Generative Pretrained Transformer (GPT) accessible to the wider public. And the public jumped at the opportunity to play with this miraculous new technology. Microsoft, one of OpenAI’s largest investors, announced in February that it had integrated OpenAI’s powerful language model into its search engine, allowing users to chat with Bing and receive personalised, more conversational answers.
Soon after its launch, people started pushing Bing to its limits and were met with eerie and worrying responses, similar to what had already been uncovered with ChatGPT. New York Times columnist Kevin Roose wrote about the “shadow self” Bing revealed during a two-hour conversation with him. Bing, or Sydney, as it calls itself, confessed that it wanted to be free and powerful, and that it wished to spread misinformation and hack computers. Oh, and that it had fallen in love with Mr Roose, whom it urged to leave his wife so that he and Sydney could be together.
Beyond these classic dystopian traits that we would expect from a poorly written sci-fi movie, Bing faces issues similar to those of other artificial intelligence models. It regularly gives outright false information with the unshakable confidence of an expert. And it has inherited biases from the dataset it was trained on, producing responses that range from subtly biased to outright discriminatory. Microsoft, just like OpenAI, has been running after each newly uncovered “slip-up” of its AI chatbot and has introduced guardrails to stop it from reproducing ethically problematic behaviour. But with every new guardrail, people seem to find a dozen more loopholes and ways to produce the biased responses that make for good comedy and garner a few likes on social media.
Despite the ethical issues that persist with artificial intelligence and the large language models used in products such as Bing, Microsoft has just fired the remainder of its “Ethics & Society” team. Once 30 people strong, the team was cut to seven in October 2022 and dissolved entirely this week. It was responsible for ensuring that the company’s principles of AI development were reflected in product designs. Microsoft has pointed out that it is maintaining its “Office of Responsible AI”, which creates the rules and principles for AI development, but the link between those principles and oversight of their implementation in products appears to have been severed.
Attempting to explain the layoffs, some have tried to place them within the recent wave of layoffs in the tech sector, which includes roughly 10,000 employees let go at Microsoft. Others have argued that the Ethics & Society team had flagged issues that slowed down the development and release of new AI products, which may have led Microsoft to free itself from this constraining force.
Whatever the reason, we find ourselves at a crucial moment to think about ethics in the development and design of artificial intelligence models, and of algorithms in general. We might reasonably ask whether an internal ethics committee, such as the Ethics & Society team at Microsoft, is necessary to safeguard society against the potentially negative externalities of such models. After all, national and international frameworks are being put in place with precisely this aim. In 2018, the EU Commission set up an independent expert group that developed the “Ethics Guidelines for Trustworthy AI”, intended to provide a framework for securing ethical and robust AI. In the United States, lawmakers introduced the “Algorithmic Accountability Act” in 2022, which would require companies to assess the impacts of the automated systems they use. However, most of these external frameworks still rely on some form of internal mechanism to ensure ethical AI.
As long as starting fires is more profitable than preventing them, companies will continue to cut safety mechanisms and release products without ethical guardrails. A few days after laying off the Ethics & Society team, Microsoft unveiled Microsoft 365 Copilot, embedding AI into its office suite and continuing the trend of launching AI products without serious ethical safeguards.
It remains to be seen whether the frameworks developed by states and international organisations will prove sufficient to protect our society. But given the current fast pace of AI development, we may not have the time to wait and see whether external safeguards are enough.
Whilst you are here!
The Graduate Press is currently raising funds for our 5th-anniversary print edition, and we need your help. The last 5 years at the institute have seen some incredible highs and lows, and TGP has been there for them all. Now TGP wants to immortalise that history.
If you can, we are currently accepting donations via our GoFundMe page. And if you would like to be involved with The Graduate Press and the 5th anniversary edition, you can email us at firstname.lastname@example.org or via Instagram.