OpenAI Makes Big Announcement 🗣️

Read time: 4.5 mins

Welcome Futurist,

OpenAI released its framework for addressing safety in its models, which would allow the Board of Directors to overrule CEO Sam Altman on releasing future products if they are deemed unsafe. It's a significant move for how new products are released and governed. Also, Hugging Face CEO Clément Delangue has big goals for the open-source AI community in 2024. You're going to want to check them out. We've got all the most important details for you.

Let’s dive in!

TODAY’S BREAKDOWN

  • OpenAI says its Board has the power to overrule its CEO on safety of future products

  • Hugging Face CEO reveals goals for 2024

  • EU's AI Act isn't actually a done deal yet?

  • Game-changing productivity tools

  • AI job opportunities

  • Mesmerizing AI art 

🤖 AI x TECH

OpenAI: Board Has Final Say on Safety of Future Products

(credit: OpenAI)

OpenAI published its framework for addressing safety in its models, which would allow the Board of Directors to overrule CEO Sam Altman on releasing future products if they are deemed unsafe. 

“OpenAI’s recently announced ‘preparedness’ team said it will continuously evaluate its AI systems to figure out how they fare across four different categories — including potential cybersecurity issues as well as chemical, nuclear and biological threats — and work to lessen any hazards the technology appears to pose,” Bloomberg News reports. “Specifically, the company is monitoring for what it calls ‘catastrophic’ risks, which it defines in the guidelines as ‘any risk which could result in hundreds of billions of dollars in economic damage or lead to the severe harm or death of many individuals.’”

OpenAI said it will also create an advisory group to evaluate safety reports for the board. Aleksander Madry, who heads the preparedness team, said the new group will review his team's work and make recommendations to Altman and the board, which has undergone changes since the CEO's brief ouster.

“Madry said his team will repeatedly evaluate OpenAI’s most advanced, unreleased AI models, rating them ‘low,’ ‘medium,’ ‘high,’ or ‘critical’ for different types of perceived risks,” Bloomberg News also reports. “The team will also make changes in hopes of reducing potential dangers they spot in AI and measure their effectiveness. OpenAI will only roll out models that are rated ‘medium’ or ‘low,’ according to the new guidelines.”

Madry said the safety framework is the product of months of work.

“AI is not something that just happens to us that might be good or bad,” he told the outlet. “It’s something we’re shaping.”

More top stories

  • AI can predict events in people's lives, according to researchers in the U.S. and Denmark.

  • Google rolls out its Gemini-powered upgrade to Bard across the U.K.

  • Fashion mogul Tommy Hilfiger has created what is being touted as the first mobile-based fashion styling game to incorporate generative AI.

  • AI screening of retinal photographs can diagnose childhood autism with 100% accuracy, according to researchers in South Korea.

  • International law firm Hogan Lovells has launched a legal AI chatbot called ELTEMATE CRAIG.

📈 AI X BUSINESS

Hugging Face CEO Reveals Goals for 2024

(credit: 20VC with Harry Stebbings)

Hugging Face CEO Clément Delangue told Quartz in a newly published interview that he wants 10 million people to use the AI startup’s services by the end of 2024 and will boost efforts to build more community and collaboration into the platform.

Delangue said these objectives are part of the company’s plan to democratize AI. Hugging Face, a key voice for open-source AI development, counts over 50,000 organizations using its services. Developers have access to over 300,000 models, 100,000 applications, and 50,000 datasets. 

“We need more companies and organizations to share their models and datasets publicly and in open-source so that everyone can understand and build AI themselves,” Delangue told Quartz.

The company, which has been called the GitHub for AI, reached a $4.5 billion valuation in August after scoring $235 million in funding. 

The round included funds from Salesforce, Google, Amazon, Nvidia, Intel, AMD, and IBM. 

More top stories

  • Taiwan Semiconductor Manufacturing Company (TSMC) Chairman Mark Liu will retire in 2024. C.C. Wei, the chip giant’s vice chairman and CEO, will succeed him, subject to approval from shareholders.

  • Expedia plans to encourage customers to use its website-based, AI-powered travel search option instead of Google.

  • Meltwater, the AI-powered media monitoring startup, scores a $65 million investment from Norwegian private equity firm Verdane, valuing the company at $592 million.

  • Amazon Web Services Vice President Mai-Lan Tomsen Bukovec discusses what’s ahead for AI in 2024 with Bloomberg News.

  • Toronto is poised to establish itself as a leading AI hub as the city develops cutting-edge infrastructure with assistance from both the private and public sectors, according to McKinsey.

🛠️ TOOLS

MagicForm: AI-powered sales assistant builder

Eloise: AI-powered Facebook description writer

Author's Voice: Converts books into audiobooks using AI

Taskade: AI-powered task manager tool

Decktopus: AI-powered presentation generator

🌎 AI x POLICY

EU's AI Act Isn't Actually a Done Deal Yet?

(credit: Dušan Cvetanović From Pexels)

Negotiations surrounding the EU's AI Act are growing increasingly complicated as a handful of countries hold off on greenlighting the historic regulations until more details emerge, according to a report.

“At a debriefing on the deal held among EU national governments on Friday, four of the largest countries — Germany, France, Italy and Poland — insisted they would not sign off on the deal until a final text is ready,” Axios reports. “France's digital minister, Jean-Noël Barrot, has taken to referring to the Dec. 8 deal as merely a ‘step’ in the negotiation process, warning that France will continue to negotiate in favor of innovators and national security interests on the final details.”

However, European Parliament member Eva Maydell believes the EU’s AI Act will ultimately get ratified. 

"The entry into force will vary gradually based on the type of use case," she told the news outlet. 

Some tech organizations have warned that the sweeping rules could be "potentially disastrous" for innovation. "Regrettably, speed seems to have prevailed over quality, with potentially disastrous consequences for the European economy. The negative impact could be felt far beyond the AI sector alone," said Daniel Friedlaender, senior vice president of the Computer & Communications Industry Association.

More top stories

  • Israel's Minister of Innovation, Science and Technology announces the country's first-ever comprehensive policy on AI regulation and ethics.

  • New Jersey Gov. Phil Murphy (D) and Princeton University President Christopher Eisgruber announced plans to establish a hub for AI in the Garden State in partnership with the New Jersey Economic Development Authority. 

  • Rep. Gary Palmer (R-AL) warns the U.S. must stay ahead of China on AI, saying, "Whoever controls artificial intelligence and quantum computing will control the battlefield."

🎨 AI X ART

“Entry to Valhalla” by Fabio Comparelli

"Day 167" by Prompt Soru

“JUST EAT IT” by Riccardo Ponticelli

 💼 JOBS BOARD

Mountain View, CA │ Full-time │ Associate level

Sunnyvale, CA │ Full-time │ Mid-Senior level

Mountain View, CA │ Full-time │ Entry level

👋 SAY HELLO!

That’s it for today’s news in the world of AI!

We always want to hear from you: [email protected] 

Did you enjoy today’s newsletter?

Your feedback makes our newsletter better!
