AI, Oppenheimer, and Taiwan’s Democratic Solution

Published in Common Wealth English magazine on January 23, 2024 (https://english.cw.com.tw/article/article.action?id=3607)

The recent Taiwan elections and last week’s World Economic Forum at Davos have made it clear that AI control and regulation are urgent issues. AI is at a pivotal “Oppenheimer moment,” as advances in AI technology threaten democratic states like the US and Taiwan. But Taiwan might have the solution.

The Threat to Democracies

AI technology can potentially be exploited to destabilize democracies through misinformation, propaganda, and polarization of beliefs at an unprecedented scale. In 2024, high-stakes elections in 50 countries affecting over 4 billion people will likely be the first major targets of large-scale AI interference. Beijing was already using the recent Taiwan presidential election as a testing ground, mass-posting a false “secret history” of Taiwan’s President Tsai Ing-wen and flooding YouTube, Instagram, X, and other platforms with AI-generated fake news videos (source: Taipei Times).

We will likely see similar AI-enhanced misinformation and disinformation campaigns influencing elections in the US, Russia, Indonesia, Pakistan, and Mexico.

The “Oppenheimer Moment”: Fission or Fusion?

For context, let’s revisit the 2023 blockbuster Oppenheimer, nominated for 13 Academy Awards. In this movie, director Christopher Nolan masterfully intertwined two storylines: “fission” and “fusion.” “Fission” was shot in color and shows Oppenheimer’s view of the World War II development of the nuclear bomb. “Fusion,” on the other hand, was shot in black and white and describes the events after World War II, when Oppenheimer changed his mind about nuclear technology, called for its international regulation, and was accused of being a communist spy.

The “AI Oppenheimer moment,” however, refers to the risks of advancing AI technology and how it should be controlled. Media attention to AI in the last year has often focused on doomsday scenarios of superintelligent AI eradicating humans. There was no shortage of fear-mongering and calls to pause or even ban AI research. But no one can stop the big (or small) tech companies, and there is certainly no stopping China from developing AI. Both Big Tech and China are driven by immense power and profit motives, and each has its own distinct utopian agenda.

AI for Democracies ≠ AI for Totalitarian Regimes

In truth, the more realistic risk of AI research is not Terminator AIs enslaving humankind, but rather the threat to democratic states like the US and Taiwan. Democracies, as best-selling historian and philosopher Yuval Noah Harari warns, are especially vulnerable to AI interference and hackers.

The “democracy story” is that individuals have free will and the right to pursue it. In this story, the government needs to respect these rights, rule by law, and be accountable to its citizens in free elections. However, Harari points out that since democracies are essentially conversations based on free flows of information and language, AI can be weaponized at scale to dissolve these conversations. AI can be deployed to mass-produce fake news and propaganda, and even to field chatbots that hold conversations with unsuspecting humans to influence their beliefs. In this way, AI can be used to destabilize democracies by radicalizing beliefs, intensifying debates, and exacerbating the already polarized identity politics plaguing so many “free” countries.

By contrast, AI as a propaganda machine and surveillance system can only bolster totalitarian regimes, solidifying their power and their control over information and citizens. This is the “communist story,” where collective rights trump individual rights. In this story, the central party knows what is best for the collective and can best ensure the fair distribution of wealth and power.

The AI Oppenheimer moment, therefore, really refers to how the control of AI will shape the world order and determine to what extent democracy will flourish or flounder.

AI Control: Centralized Fusion or Decentralized Fission?

Recognizing the power, omnipresence, and user-friendliness of the generative AI technologies that flooded the internet in 2023, governments and governing organizations are scrambling to understand AI technologies, their implications, and how to regulate them. Basically, there are four possible paths to AI regulation, ranging from complete centralized control to complete decentralized control. Or, to continue the Oppenheimer movie metaphor, from fusion to fission.

Centralized Fusion – The China Model Fusing Politics and Business

At one end of the control spectrum is fusion, or complete state control, as we are witnessing in China. This sits at the far left of the political spectrum. In this case, the Chinese Communist Party and Chinese corporations team up to create AI systems of control. On the hardware side, Huawei develops 5G infrastructure and surveillance technology; on the software side, Tencent leads in AI development. This centralization of government control and corporate support produces the smart-city products and services that China not only uses itself but also sells to countries with similar authoritarian conditions or aspirations.

At the other extreme is a more libertarian model.

Decentralized Fission – Libertarian AI Model

At the most radically decentralized end, entrepreneurs like Tiny Corps’ George Hotz are developing AI systems on personal computers that will be independent of government or corporate control. This is more of a far-right view. These AI systems will be personalized to, and serve the interests of, their users, inside their homes or on their devices. The idea is that these interests will not be monitored or manipulated by political or commercial organizations. The libertarian goal is to protect the individual’s privacy and freedom, with the personal AI ideally even acting as a defender against other AIs.

This fissioned view of democracy aims to make personal AIs available to everyone, reminiscent of Steve Jobs’ and Steve Wozniak’s goal of making personal computers available to everyone.

The remaining two options are more middle ground.

Semi-centralized Fusion – The Big Tech Model

In this scenario, big tech corporations like OpenAI, Microsoft, and Google develop closed AI models (ChatGPT, Copilot, and Gemini) that can be used by all but run on the companies’ cloud servers. The companies have access to all activity and data that passes through these applications on their servers.

Several big tech leaders, like OpenAI’s Sam Altman, have argued that big tech should not be the only ones deciding AI regulation and have called for more government involvement. This sounds helpful but is perhaps a little disingenuous, given that government employees often know little about high tech and will likely depend heavily on consultants from these very organizations. This view is still clearly on the centralized side.

Democratic Fission – The vTaiwan Pluralistic Model

The fourth and final option is the most democratic and would better guarantee the right of all concerned parties to be heard—not just the entrepreneurs and politicians. An interesting model for this is Taiwan’s vTaiwan platform, which serves as a model for People-Public-Private Partnerships.

First introduced by Digital Minister Audrey Tang, vTaiwan aims to create digital legislation by bringing together government ministries, elected representatives, scholars, experts, business leaders, civil society organizations, and citizens to discuss and reach a consensus on issues that affect everyone’s lives. To date, more than 28 cases have been discussed through the vTaiwan process, and 80% of them have led to some decisive government action. Hopefully, the platform will prove similarly effective for more complex technological issues later this year, when it will be involved in crafting AI-related regulations for the Executive Yuan and the National Science and Technology Council.

The Democratic Solution Is Pluralistic Regulation

In the 1950s, Oppenheimer wanted the newly formed UN to be the impartial international body regulating nuclear arms. But this is not a feasible solution for the problem of AI regulation in 2024, if for no other reason than that the UN has great difficulty reaching any consensus and, when it does, has little direct power over nations or commercial entities.

For democratic countries, the approach that best safeguards their rights and freedoms is pluralistic AI regulation. A democratic process and platform like vTaiwan would combine global and local assemblies comprising big tech, government, academic, and citizen representatives and facilitate a four-stage process of proposal, opinion, reflection, and finally legislation.

Conclusion

In essence, the AI Oppenheimer moment is about who should have control over AI. The darker options, like Nolan’s bleak fusion story filmed in black and white, are centralized control by governments and Big Tech. The most democratic solution is a pluralistic combination of all concerned parties—like Nolan’s fission story shot in a diversity of colors.

The stakes are high, and the world order potentially hangs in the balance. Powered by AI technology and surveillance, totalitarian regimes are poised to expand and proliferate. The same cannot be said for democracies, whose very nature makes them vulnerable to polarizing forces.

In this defining moment, global democracies need to look toward innovative and inherently democratic models like Taiwan’s vTaiwan. Democratic engagement and technological advancement can, and indeed must, ensure that AI serves as a tool for unity and progress, not division and discord.