Last week, drama unfolded in the AI world. Sam Altman is the co-founder and CEO of OpenAI, the company behind the groundbreaking ChatGPT. On Friday, November 17th, Altman was removed as CEO of OpenAI by the company’s board of directors. Over the weekend, OpenAI’s leading investor, Microsoft, announced it was hiring him, only for Altman to be reinstated as CEO of OpenAI on Tuesday the 21st, alongside a restructured board of directors.
These events reflect more than a mere leadership change. They signal a pivotal episode in the governance of artificial intelligence, reflecting deep currents in the technology and business sectors.
Background
The catalyst for Altman’s reinstatement was extraordinary—a collective push by over 90% of OpenAI’s staff, who threatened to migrate to Microsoft unless Altman was restored as CEO and the board underwent a major restructuring. At the heart of the reshuffle is the arrival of prominent new figures on OpenAI’s board, Bret Taylor and Larry Summers, with Adam D’Angelo remaining from the previous board.
Microsoft’s endorsement of these changes is equally telling. As one of the world’s largest companies, with deep investments in AI, its support for Summers and the new board composition indicates a commitment to shaping the direction of AI development. This is a clear indication of how central AI has become to both commercial and strategic interests on a global scale.
Sam Altman’s rollercoaster return as CEO of OpenAI isn’t just corporate theater. In today’s fast-evolving world, we’re facing many challenges related to the rise of Artificial Intelligence (AI). In this post, I’m going to focus on two issues that I believe everyone should be aware of, even if you’re not a tech expert or a philosopher.
Who’s really leading the AI movement?
The first challenge stems from the ideas of social theorist Jean-François Lyotard. In his book “The Postmodern Condition,” Lyotard argues that in today’s world, knowledge and the pursuit of new ideas are heavily influenced by business interests. In a corporate world where growth and profit are the top priorities, things like research and innovation can become more about how lucrative they are, rather than how much they benefit everyone.
Applying this to the situation of Sam Altman’s return to OpenAI, we see a vivid representation of Lyotard’s theory. The dynamics at OpenAI, influenced by staff ultimatums and the interests of heavyweight investors like Microsoft, exemplify the shift from pursuing knowledge for its intrinsic intellectual value to aligning it with market demands and investor interests. The influence of funding and corporate interests on research directions and organizational priorities at OpenAI could be seen as a modern embodiment of the commodification of knowledge Lyotard warned about.
The reshuffling at OpenAI reflects a broader societal trend where intellectual endeavors are increasingly subject to the whims of market forces and capital. This situation limits the scope of intellectual freedom and innovation, as the priorities and interests of funding entities overshadow the intrinsic values and potential societal benefits of scientific and intellectual work.
Blurring the Lines of Reality
The second challenge is brought to light by Jean Baudrillard, who wrote about something called ‘hyperreality’. AI technologies like text and image generators are not just advancements in computing; they represent a fundamental change in how we construct and interact with reality. AI-generated content, from text to images, creates a hyperreal layer over our experiences, where the simulated (AI-generated content) often seems more engaging, coherent, or ‘real’ than the actual. The boundary between the virtual (AI-generated) and the actual (human-generated) content becomes increasingly indistinct, leading to a scenario that resonates deeply with Baudrillard’s prediction of a world dominated by simulacra.
This is a bit like stepping into a sci-fi movie, where it’s hard to tell what’s real and what’s not. With AI getting so good at creating things – from realistic images to writing stories or having conversations – we’re slowly moving into a world where the line between what’s made by humans and what’s made by computers is blurring. It’s like living in a world made up of illusions, where we can’t always tell if what we’re seeing or reading is from a real person or an AI. This raises big questions: How do we handle a world where it’s hard to tell real from fake? What dangers will emerge when the “hyperreal” falls into (or remains in) “the wrong hands”? Are we emotionally, mentally, and socially ready for this?
An Inflection Point
Both these thinkers, Lyotard and Baudrillard, are essentially flagging critical issues that go beyond just technology. They’re talking about how AI could change our society, our understanding of truth, and even our sense of reality. It’s not just about new gadgets and smarter computers; it’s about how these advancements could reshape everything from how we learn and communicate to how we perceive the world around us. Understanding these challenges is crucial for everyone, as it helps us prepare for and shape a future where AI benefits all of society, without losing our grip on what’s real and valuable in our human experience.
In this grand AI saga, we cannot overlook the foolish path we appear to be embarking on. We have this groundbreaking tool at our fingertips, capable of rewriting the rules of the game. AI could be our golden ticket to a world that’s more just, more equal, more peaceful. But what are we doing? We’re letting the big guns use it as just another cog in the capitalist machine.
Literally! A January AP report unveils a disturbing development: the Pentagon is fervently advancing the deployment of fully autonomous drones on the battlefield. This marks a terrifying progression in warfare, eclipsing even the grim introduction of the machine gun in 1884. Its goal of unleashing thousands of AI-controlled drones by 2026 is a chilling testament to this. This race to automate killing machines is a grotesque embodiment of capitalist values infiltrating even the most profound aspects of human life and death. It’s akin to harnessing the power of fire not to illuminate the darkness, but to burn the bridges of diplomacy and peace.
The Choice is Ours
This whole OpenAI drama is a microcosm of a larger, more tragic story. We’re witnessing Lyotard’s cautionary tale of intellectual manipulation unfold in real time. How can we remain unreflective and uncritical? Meanwhile, as Baudrillard’s eerie prediction of a simulated reality comes true right before our eyes, how is it that society stands both psychologically and socially unprepared for its implications? Through it all, we’re squandering AI’s potential for societal renaissance.
The OpenAI shuffle is more than a corporate scuffle; it’s a stark reminder of the ethical crossroads we face with AI. Are we going to let AI become a tool for the few to tighten their grip on power and wealth? Or will we harness its true potential to create a world that’s not just efficient and advanced, but also more humane and equitable?
The power to decide this lies overwhelmingly with us, the vast majority, the 99%, the People. It’s a reminder that when we, as a collective, engage in critical thinking and organize effectively, we can direct the course of our future. By doing so, we can shape a world where AI benefits all, not just a privileged few. Let’s not only hope we choose wisely but also take proactive steps to ensure that we do.