Meta is reportedly disbanding its Responsible Artificial Intelligence (RAI) team as it shifts resources toward generative AI. According to the report, most RAI members will move to the company's generative AI product team, while others will work on Meta's AI infrastructure.
The company regularly says it wants to develop AI responsibly, and it even has a page dedicated to that promise, where it lists its "pillars of responsible AI," including accountability, transparency, security, privacy, and more.
In The Information's report, Jon Carvill, a Meta spokesperson, is quoted as saying that the company will continue to prioritize and invest in developing safe and responsible artificial intelligence. Carvill added that although the team is being split up, its members will continue to support Meta's responsible AI efforts.
RAI was created to identify problems with the company's AI training approaches, including whether its models were trained on sufficiently diverse data, and to head off issues such as moderation failures on its platforms.
Automated systems on Meta's social platforms have already caused real-world problems, including a Facebook mistranslation that led to a wrongful arrest, WhatsApp's AI sticker generator producing biased images for certain prompts, and Instagram algorithms that helped people find child sexual abuse material. Regulators, meanwhile, are moving in: the US government has signed safety agreements with AI companies, and President Joe Biden has since directed government agencies to develop AI safety rules. The European Union has likewise published its own rules and principles on artificial intelligence and is still working to pass its AI legislation.