Ah, the irony of our age! OpenAI, that bastion of innovation, has unveiled a new spectacle: the Safety Fellowship, a program that dangles a princely sum of $3,850 weekly before the eyes of external researchers, tasked with probing the depths of what might go awry with advanced AI. Yet, in a twist befitting a Turgenev novel, this announcement arrives mere hours after The New Yorker revealed that the very same OpenAI had quietly disbanded its internal safety teams and, with a stroke of bureaucratic genius, expunged the word “safely” from its IRS mission statement. How quaint!
- The OpenAI Safety Fellowship, unveiled on April 6, shall run from September 14, 2026, to February 5, 2027. Fellows shall be rewarded with a stipend of $3,850 per week, approximately $15,000 in monthly compute resources, and the dubious honor of mentorship from OpenAI researchers. Alas, access to the company’s internal systems remains a forbidden fruit.
- Priority research areas include safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving methods, agentic oversight, and high-severity misuse. Applications close on May 3, with fellows to be notified by July 25. A race against time, indeed!
- Ronan Farrow, that intrepid chronicler of our times, reported in The New Yorker that OpenAI had dissolved its superalignment team, AGI Readiness team, and Mission Alignment team since 2024. When queried about existential safety researchers, an OpenAI representative retorted with a shrug, “What do you mean by existential safety? That’s not, like, a thing.” How profoundly reassuring!
The fellowship, announced with great fanfare on April 6, is billed as “a pilot program to support independent safety and alignment research and develop the next generation of talent.” Fellows shall labor from Constellation’s Berkeley workspace or remotely, their efforts remunerated at $3,850 per week (over $200,000 annualized), plus roughly $15,000 in monthly compute. Applications close on May 3, and the program is not confined to AI specialists; OpenAI casts its net wide, ensnaring talents from cybersecurity, social science, and human-computer interaction alongside computer science. A veritable smorgasbord of expertise!
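For the skeptical reader, the annualized figure is simple arithmetic, though no fellow will actually collect it: the program spans roughly 21 weeks, so the real stipend total lands closer to $80,000 (a back-of-the-envelope estimate, assuming the weekly rate runs uninterrupted for the full September-to-February term):

$$
\$3{,}850 \times 52 = \$200{,}200 \ \text{(annualized)}, \qquad \$3{,}850 \times 21 \approx \$80{,}850 \ \text{(actual term)}
$$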
The timing, as they say, is everything. Farrow’s investigation, published on the very same day, laid bare the dissolution of three internal safety organizations over 22 months. The superalignment team was shuttered in May 2024, following the departure of co-leads Ilya Sutskever and Jan Leike. Leike, in a parting shot, remarked that “safety culture and processes have taken a backseat to shiny products.” The AGI Readiness team followed in October 2024, and the Mission Alignment team was disbanded in February 2026 after a mere 16 months. When a journalist sought to speak with OpenAI’s existential safety researchers, the response was a masterpiece of nonchalance: “What do you mean by existential safety? That’s not, like, a thing.”
The fellowship, it must be noted, does not replace internal infrastructure. Fellows are granted API credits and compute resources but no system access, positioning the program as arm’s-length research funding rather than a resurrection of the dissolved teams. A gesture, perhaps, but one that leaves much to be desired.
What the Fellowship Requires Fellows to Produce
The research agenda spans seven priority areas: safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving safety methods, agentic oversight, and high-severity misuse domains. By the program’s end in February 2027, each fellow must produce a substantive output: a paper, benchmark, or dataset. Academic credentials are not required; OpenAI prioritizes research ability, technical judgment, and execution capacity. A noble endeavor, if only it were not so transparently self-serving!
Why This Matters Beyond the AI Industry
As crypto.news has observed, confidence in frontier AI companies’ stated safety commitments is a market signal that influences capital allocation across AI infrastructure, AI tokens, and the DePIN and AI agent protocols at the intersection of crypto and artificial intelligence. OpenAI’s spending trajectory and the credibility of its operational priorities are scrutinized by investors evaluating AI infrastructure, a sector increasingly intertwined with blockchain-based systems. Whether external fellows, working without internal access, can meaningfully influence model development remains an open question, one that the first cohort’s research will begin to answer in early 2027. Until then, we are left to ponder the spectacle of a company outsourcing its conscience while dismantling its soul.