Microsoft Lays Off Team That Taught Employees How to Make AI Tools Responsibly – What Now?

Microsoft laid off its entire ethics and society team within the artificial intelligence organization as part of recent layoffs that affected 10,000 employees across the company, Platformer has learned. The move leaves Microsoft without a dedicated team to ensure its AI principles are closely tied to product design at a time when the company is pushing to make AI tools available to the public, according to current and former employees.

Microsoft still maintains an Office of Responsible AI, which is tasked with creating rules and principles to govern the company’s AI initiatives. Despite the recent job cuts, the company says its overall investment in responsibility work continues to grow.

A Microsoft spokesperson said:

“Microsoft is committed to developing AI products and experiences safely and responsibly, and does so by investing in people, processes, and partnerships that prioritize this.”

“Over the past six years we have increased the number of people across our product teams and within the Office of Responsible AI who, along with all of us at Microsoft, are accountable for ensuring we put our AI principles into practice. […] We appreciate the trailblazing work the Ethics & Society team did to help us on our ongoing responsible AI journey.”

Employees noted the importance of the ethics and society team in ensuring that Microsoft’s responsible AI principles are actually reflected in the design of the products that eventually ship.

One former employee explained:

“Our job was to … create rules in areas where there were none.”

“People would look at the principles coming out of the office of responsible AI and say, ‘I don’t know how this applies.’”

“Our job was to show them and to create rules in areas where there were none.”

In recent years, the team created Judgment Call, a role-playing game that helps designers envision and discuss the potential harms that could result from AI during product development. The game is part of the team’s “responsible innovation toolkit,” which it published online. More recently, the team had been working to identify the risks posed by Microsoft’s adoption of OpenAI’s technology across its products.

At its peak in 2020, the ethics and society team comprised about 30 people, including engineers, designers, and philosophers. Last October, however, a reorganization cut the team down to roughly seven people.

After the reorg, John Montgomery, corporate vice president of AI, told the team in a meeting that company leaders had instructed them to move quickly. “Kevin Scott and Satya Nadella are both strongly urging us to get these newest OpenAI models and any from now on into customers’ hands as soon as possible,” he said, according to audio acquired by Platformer.

Because of that pressure, Montgomery said, much of the team would be moved to other areas of the organization.

On the call, some team members pushed back against the decision.

One employee says:

“I’m going to be bold enough to ask you to please reconsider this decision.”

“While I understand there are business issues at play … what this team has always been deeply concerned about is how we impact society and the negative impacts that we’ve had. And they are significant.”

Montgomery says:

“Can I reconsider? I don’t think I will.”

“’Cause unfortunately the pressures remain the same. You don’t have the view that I have, and probably you can be thankful for that. There’s a lot of stuff being ground up into the sausage.”

When asked about it, Montgomery said the team would not be eliminated.

Montgomery went on to say:

“It’s not that it’s going away — it’s that it’s evolving,”

“It’s evolving toward putting more of the energy within the individual product teams that are building the services and the software, which does mean that the central hub that has been doing some of the work is devolving its abilities and responsibilities.”

After many of its members were moved to other parts of Microsoft, the remaining ethics and society staff said the much smaller team made it difficult to carry out their ambitious plans.

On March 6th, the remaining employees were asked to join a Zoom call at 11:30 AM PT to hear a business-critical update from Montgomery. On that call, according to one worker, they were told the team was being eliminated entirely.

That worker said the move leaves a foundational gap in the user experience and holistic design of AI products.

One employee says:

“The worst thing is we’ve exposed the business to risk and human beings to risk in doing this.”

The episode underscores how much tech giants struggle to build internal groups tasked with making their products more socially responsible. At their best, such teams help product groups anticipate potential misuses of technology and fix problems before products reach the public.

Ethical AI researchers also have the difficult job of saying “no” or “slow down” inside companies that often don’t want to hear it. That tension regularly spills into public view, as when Google fired researcher Timnit Gebru after she published a paper critical of large language models; several other leaders in the department departed in the aftermath, damaging Google’s credibility on responsible AI.

Members of the ethics and society team said they generally tried to be supportive of product development. But as Microsoft raced to ship AI tools faster than its rivals, they said, the company’s leadership grew less interested in the kind of long-term thinking the team specialized in.

This dynamic merits close attention, because there is a great deal of money at stake. Microsoft sees a rare opportunity to challenge Google in search: every 1 percent of search market share it takes from Google is estimated to translate into roughly $2 billion in annual revenue.

Microsoft has invested a reported $11 billion in OpenAI and is racing to integrate the startup’s technology throughout its products. That effort appears to be paying off: last week, Microsoft announced that Bing now has 100 million daily active users, a third of whom are new since the search engine relaunched with OpenAI’s technology.

At the same time, everyone involved in the development of AI acknowledges that the technology carries serious, possibly even existential, risks and uncertainties. Tech companies have created dedicated teams precisely to address those risks, yet Microsoft has chosen to cut its ethics and society team. Given the severity of the potential outcomes, that decision is worrying.

The elimination of the ethics and society team came just as the group’s remaining employees had trained their focus on arguably their biggest challenge yet: anticipating what would happen when Microsoft released tools powered by OpenAI to a global audience.

Last October, Microsoft launched the Bing Image Creator, which uses OpenAI’s DALL-E system to let people generate images from text prompts. It was one of the first public collaborations between Microsoft and OpenAI. Before the launch, the team wrote a memo outlining the brand risks the tool could pose.

Source: The Verge
