Safety Concerns, Pushback Against OpenAI’s For-Profit Plan

Opponents Say Restructuring Will Undermine OpenAI's Security Commitments

[Rashmi Ramesh](https://www.govinfosecurity.com/authors/rashmi-ramesh-i-4224) ([rashmiramesh_](https://www.twitter.com/rashmiramesh_)) • December 31, 2024

Image: Shutterstock

OpenAI's attempt to convert to a for-profit company is facing opposition from competitors and artificial intelligence safety activists, who argue that the transition would "undermine" the tech giant's commitment to secure AI development and deployment.

Nonprofit organization Encode on Friday [requested](https://dd80b675424c132b90b3-e48385e382d2e5d17821a5e1d8e4c86b.ssl.cf1.rackcdn.com/external/ndoc-424-cv-04722-musk-v-altman-dec-272024.pdf) that the U.S. District Court for the Northern District of California allow it to file an amicus brief supporting Elon Musk's [motion for an injunction](https://dd80b675424c132b90b3-e48385e382d2e5d17821a5e1d8e4c86b.ssl.cf1.rackcdn.com/external/ndoc-424-cv-04722-ygr-musk-v-altman-notion-for-preliminary-injunction-nov-29-2024.pdf) to prevent OpenAI's planned [transition](https://openai.com/index/why-our-structure-must-evolve-to-advance-our-mission/) (see: [*OpenAI Exits, Appointments and New Corporate Model*](/openai-exits-appointments-new-corporate-model-a-26384)).

Encode, a volunteer network, supported the AI safety bill [vetoed](/california-gov-newsom-vetoes-hotly-debated-ai-safety-bill-a-26407) by California Gov. Gavin Newsom and has contributed to the White House's [AI Bill of Rights](https://www.whitehouse.gov/ostp/news-updates/2022/10/04/blueprint-for-an-ai-bill-of-rightsa-vision-for-protecting-our-civil-rights-in-the-algorithmic-age/) and President Joe Biden's [AI executive order](/white-house-issues-sweeping-ai-executive-order-a-23428).

An early backer of OpenAI in its nonprofit days, Musk filed a lawsuit in November accusing the company of anti-competitive conduct and of abandoning its philanthropic mission, and asked that the transition be scaled back.
OpenAI [labeled](https://openai.com/index/elon-musk-wanted-an-openai-for-profit/) Musk's contention a case of sour grapes.

Encode's proposed brief, filed late last week, said it sought to support Musk's injunction petition because OpenAI's conversion to a for-profit company would "undermine" its mission to develop and deploy "transformative technology in a way that is safe and beneficial to the public."

"OpenAI and its CEO Sam Altman claim to be developing society-transforming technology, and those claims should be taken seriously," it said. "If the world truly is at the cusp of a new age of artificial general intelligence, then the public has a profound interest in having that technology controlled by a public charity legally bound to prioritize safety and the public benefit rather than an organization focused on generating financial returns for a few privileged investors."

OpenAI was set up in 2015 as a nonprofit research lab but shifted to a hybrid structure to fund projects that demanded significant capital. It [adopted](https://openai.com/index/openai-lp/) a "capped profit" model, permitting investments from corporations [including Microsoft](https://news.microsoft.com/2019/07/22/openai-forms-exclusive-computing-partnership-with-microsoft-to-build-new-azure-ai-supercomputing-technologies/) while maintaining nonprofit oversight. The organization now intends to convert its for-profit segment into a Delaware public benefit corporation, which would issue ordinary shares of stock. Although the nonprofit branch would continue to exist, OpenAI would trade its control for ownership stakes in the PBC.

Encode's lawyers argued that OpenAI's nonprofit division, which has pledged not to compete with "value-aligned, safety-conscious projects" nearing AGI, could lose the incentive to honor such commitments under the for-profit structure. Board members' authority to revoke investor equity for safety reasons would also be eliminated after the restructuring, Encode's brief said.

"The public interest would be harmed by a safety-focused, mission-constrained nonprofit relinquishing control over something so transformative at any price to a for-profit enterprise with no enforceable commitment to safety," it said.

Other corporations have also attempted to prevent the transition. Meta earlier this month [reportedly](https://www.wsj.com/tech/ai/elon-musk-open-ai-lawsuit-response-c1f415f8) wrote to California Attorney General Rob Bonta, saying the conversion would have "seismic implications for Silicon Valley."

Several of OpenAI's top executives have quit the company, citing concerns that it is prioritizing profits over safety. In May, the company set up a committee to make "critical" safety and security decisions for all of its projects, after disbanding its ["superalignment"](https://openai.com/index/introducing-superalignment/) security team dedicated to preventing AI systems from going rogue (see: [*OpenAI Formulates Framework to Mitigate 'Catastrophic Risks'*](/openai-formulates-framework-to-mitigate-catastrophic-risks-a-23930)).

OpenAI co-founder [Ilya Sutskever](https://twitter.com/ilyasut/status/1790517455628198322) and researcher [Jan Leike](https://x.com/janleike/status/1791498174659715494) quit the company over disagreements about its approach to security, as did policy researcher [Gretchen Krueger](https://x.com/GretchenMarina/status/1793403475260551517).
Both Sutskever and Leike were part of OpenAI's now-disbanded superalignment safety team, which worked on addressing the long-term safety risks facing the company and the technology. Krueger said she decided to resign a few hours before her two colleagues did, as she shared their security concerns.

Leike criticized OpenAI's lack of support for the superalignment security team in a social media post. "Over the past years, safety culture and processes have taken a back seat to shiny products," he said.

Policy researcher Miles Brundage, who quit the company in October, [said](https://x.com/Miles_Brundage/status/1872676410046976096) on social media that he was concerned about OpenAI's nonprofit entity becoming a "side thing."

If OpenAI is allowed to operate as a for-profit company, Encode said, the Sam Altman-led firm's "touted fiduciary duty to humanity would evaporate, as Delaware law is clear that the directors of a PBC owe no duty to the public at all."

#### [Rashmi Ramesh](https://www.govinfosecurity.com/authors/rashmi-ramesh-i-4224)

*Assistant Editor, Global News Desk, ISMG*

Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She has previously worked at formerly News Corp-owned TechCircle, business daily The Economic Times and The New Indian Express.