Rebel AI group raises record cash after machine learning schism

A breakaway group of artificial intelligence researchers has raised a record first round of financing for a new start-up involved in general-purpose AI, marking the latest attempt to create an organisation to guarantee the safety of the era’s most powerful technology.

The group has split from OpenAI, an organisation founded with the backing of Elon Musk in 2015 to make sure that superintelligent AI systems do not one day run amok and harm their makers. The schism followed differences over the group’s direction after it took a landmark $1bn investment from Microsoft in 2019, according to two people familiar with the split.

The new company, Anthropic, is led by Dario Amodei, one of OpenAI’s founders and a former head of AI safety at the organisation. It has raised $124m in its first funding round. That is the most raised for an AI group trying to build generally applicable AI technology, rather than one formed to apply the technology to a specific industry, according to the research firm PitchBook. Based on figures revealed in a company filing, the round values Anthropic at $845m.

The investment was led by Jaan Tallinn, the Estonian computer scientist behind Skype. Eric Schmidt, the former chief executive of Google, and Dustin Moskovitz, co-founder of Facebook, have also backed the venture.

The break from OpenAI started with Amodei’s departure in December, and has since grown to include close to 14 researchers, according to one estimate. They include Amodei’s sister, Daniela Amodei, Anthropic’s president, as well as a group of researchers who worked on GPT-3, OpenAI’s breakthrough automatic language system, including Jared Kaplan, Amanda Askell, Tom Henighan, Jack Clark and Sam McCandlish.

OpenAI changed course two years ago when it sought Microsoft’s backing to feed its growing hunger for computing resources to power its deep-learning systems. In return, it promised the software company first rights to commercialise its discoveries.

“They started out as a non-profit, meant to democratise AI,” said Oren Etzioni, head of the AI institute founded by the late Microsoft co-founder Paul Allen. “Obviously when you get $1bn you have to generate a return. I think their trajectory has become more corporate.”

OpenAI has sought to insulate its research into AI safety from its newer commercial operations by limiting Microsoft’s presence on its board. However, that still led to internal tensions over the organisation’s direction and priorities, according to one person familiar with the breakaway group.

OpenAI would not comment on whether disagreement over research direction had led to the split, but said it had made internal changes to integrate its work on research and safety more closely when Amodei left.

Microsoft won exclusive rights to tap OpenAI’s research findings after committing $1bn to back the group, much of it in the form of technology to support its computing-intensive deep learning systems, including GPT-3. Earlier this week Microsoft said it had embedded the language system in some of its software-creation tools so that people without coding skills could create their own applications. 

The rush to rapidly commercialise GPT-3 comes in contrast to OpenAI’s handling of an earlier version of the technology, developed in 2019. The group initially said it would not release technical details about the breakthrough out of concern over potential misuse of the powerful language system, though it later reversed course.

To insulate itself against commercial interference, Anthropic has registered as a public benefit corporation, or “B corp”, with special governance arrangements to protect its mission to “responsibly develop and maintain advanced AI for the benefit of humanity”. These include creating a long-term benefit committee made up of people who have no connection to the company or its backers, and who will have the final say on matters including the composition of its board.

Anthropic said its work would be focused on “large-scale AI models”, including making the systems easier to interpret and “building ways to more tightly integrate human feedback into the development and deployment of these systems”.

This article has been amended since initial publication to correct the number of researchers leaving OpenAI for Anthropic.


© Financial Times

