TL;DR
AI may add around $13trn to global GDP by 2030, but it will also bring many harms that are hard to quantify.
The current AI regulation and policymaker debate, exemplified by the EU AI Act, does not weigh the risks well and is often not even aware of the key ones.
The top 3 risks are: i) AI that kills humans; ii) AI that destroys public institutions and trust; iii) AI that exacerbates private and public inequality along social, political, and economic dimensions.
AI will be like the printing press, which brought huge benefits (increased education, scientific discoveries, new inventions, political and religious freedom), but also costs (religious warfare, the destruction of political and social order, large amounts of false information, the use of new tech to colonize land and kill tens of millions of people, etc).
To maximize the benefits and minimize the costs, we need to have more study, discussion, and debate today, weighing the difficult tradeoffs that come from regulation and more investment in AI.
Use these unbreakable steel chains and shackle him to this high peak. [Prometheus] stole the blossom of your craft, the blaze of fire, the spark of every art and gave it to the mortals. Such is the crime for which we, gods, must receive recompense. As for him, he should learn to accept the rule of the Deity and cease his mortal-loving ways.
Aeschylus. Prometheus Bound (Cambridge Greek and Latin Classics), Ed. Mark Griffith. Cambridge: CUP, 1983.
Perhaps the two greatest technologies of the 21st century are the internet and artificial intelligence (AI), which build off each other and boost other forms of scientific knowledge and developments in engineering. McKinsey estimates that AI may deliver around $13 trillion of additional economic output by 2030, increasing global GDP by about 1.2% annually. The internet and AI are similar to step-change technologies in human civilization such as fire, the wheel, the printing press, hydraulics, steam engines, nuclear reactors, electricity and motors, integrated circuits, and computers. However, the magnitude of the step change that the internet and AI will bring will be even larger, with likely benefits including progress on fundamental human problems such as housing scarcity, hunger, health and educational imbalances, and crime.
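As a rough back-of-the-envelope check on how those two figures relate (the ~$85trn baseline world GDP below is my own assumption for illustration, not McKinsey's number), an extra ~1.2% of growth per year compounds to roughly $13trn of additional annual output by 2030:

```python
# Back-of-the-envelope check: does ~1.2% extra annual growth
# roughly compound to ~$13trn of additional output by 2030?
# The baseline world GDP figure is assumed, purely for illustration.
baseline_gdp_trn = 85.0   # assumed world GDP in $trn at the start of the period
extra_growth = 0.012      # ~1.2% additional annual growth attributed to AI
years = 12                # roughly 2018 -> 2030

additional_output = baseline_gdp_trn * ((1 + extra_growth) ** years - 1)
print(f"~${additional_output:.1f}trn of additional annual output")  # ~$13trn
```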
Yet the internet and AI bring massive harm and risks too. These are poorly understood and not debated enough. The closest analogy I can think of is the printing press, which brought momentous changes and benefits (increased education, scientific discoveries, new inventions, political and religious freedom), but also costs (religious warfare, the destruction of political and social order, large amounts of false information, the use of new tech to colonize land and kill tens of millions of people, etc).
The first century of full dissemination, from roughly 1450 to 1550, caused massive disruption, and that change continues today, almost 500 years later, as new books, journal articles, and newspaper pieces keep challenging the existing economic, political, and social order (albeit now with higher velocity and greater variance, since they spread through online digital publishing tools and the internet). I expect the internet and AI to follow the same arc; we are barely three decades into a multi-century reckoning that will be hard to predict, and yet we will want to aggressively harvest the benefits of AI technology while minimizing the harm and cost.
I help build and invest in large-scale AI systems that serve billions of people and therefore have a front-row seat to emerging issues in this space. I’m distraught by how little tech and internet policymakers understand the issues that come from simple online digital systems (online video, social media, fintech, eCommerce, etc.), let alone the large-scale AI systems being built off them. To intelligently debate these, one needs enough of a technical background in CS/computer engineering/product creation, but also insights from law, philosophy, economics, psychology, and multiple other disciplines. I see poorly conceived laws attempting to regulate the internet, mostly in the EU and China, with the US occasionally attempting to follow along but getting blocked by sensible judges.
I argue here that citizens and policymakers must evaluate two propositions that are in tension: i) the internet and AI will cause many harms and need to be thoughtfully regulated to minimize those harms; ii) over-regulation has high costs and could lead to technological and economic decline relatively quickly (within 10-20 years), where entire countries and regions become digital vassals to the jurisdictions that promote AI systems through education and large investments. While the tradeoffs in tech policy are even more multidimensional than this tension, the solutions will have to be both/and instead of either/or.

The most important attempt to regulate AI is currently happening through the EU AI Act, which is well-meaning in its intent but defective in the areas it chooses to focus on. Hence, it will have a limited and likely negative impact on balance, by blocking promising technology, ignoring the most harmful, and failing to make the large investments that the EU needs. The EU AI Act is misguided in that it bans a few narrow and less consequential types of AI, and then designates as high-risk the AI systems that involve: Biometric identification and categorization of natural persons; Management and operation of critical infrastructure; Education and vocational training; Employment, workers management and access to self-employment; Access to and enjoyment of essential private services and public services and benefits; Law enforcement; Migration, asylum and border control management; Administration of justice and democratic processes (Annex III).
I view only 2 of these areas as extremely important (Management and operation of critical infrastructure; Access to and enjoyment of essential private services and public services and benefits). The rest are marginal, misguided, or far less important than the likely, imminent harms that AI systems will bring.
The field of research into the “Harms of AI” is nascent and tends to focus either on very long-term, long-tail existential risk from AGI or on more moderate risks from algorithmic discrimination. Papers like Acemoglu’s “Harms of AI” are helpful, but ultimately too focused on toy mathematical models, often with unrealistic assumptions, instead of observed and likely imminent harms.
Below I lay out what I think are the major harms of AI, in three categories of risk. The first set could lead to millions or even billions of deaths and is where the most public attention and policymaker debate should be. Surprisingly, outside of a few prescient AI researchers like Stuart Russell, few policymakers are willing to discuss these risks, owing to a strange myopia in places like DC, Brussels, and Beijing and the grip of secretive defense programs and the National Security State in all three. The second set of risks would destroy public institutions, from trust in media sources that report neutral facts to the distribution of benefits and costs by public agencies. The third set of risks is longer-term but could still lead to substantial problems; these risks arise from inequalities of access and from a lack of focus on, and investment in, how citizens understand and use AI technology.
My list is not exhaustive. I hope it’s a better starting point than the current debate (or the EU AI Act), which does not prioritize risk in a meaningful way and explicitly excludes from consideration the top risks I list below. Note that AI systems embodying all of the risks stated below are currently in production, in one or more countries in the world. These are not speculative risks (like existential AGI risks), but ongoing ones whose danger is increasing.
The first set of risks: AI that kills humans
Automated nuclear weapons: I was dumbfounded when a researcher in the US security policy establishment pointed to papers suggesting that parts of the US and Russian nuclear arsenals may already be automated, with humans potentially out of the loop. These systems are closely guarded state secrets that should be declassified and debated in legislatures. For more information on the nuclear command, control, and communications (NC3) infrastructure, see this report from the Arms Control Association, this SIPRI paper on Artificial Intelligence, Strategic Stability and Nuclear Risk, and this curiously naive whitepaper from NTI on Assessing and Managing the Benefits and Risks of Artificial Intelligence in Nuclear-Weapon Systems. I know the least about the current status of this risk, but it seems the most urgent and compelling. As a final reminder, even an accident that leads to a nuclear winter could cause billions of humans to die, so this is the most dire risk.
Other Weapons of Mass Destruction (WMD): Unfortunately, nuclear weapons aren’t the only form of WMD that AI can augment. AI tools can be used to generate better and more lethal chemical and biological weapons, and that space is unregulated, as these two examples show. First, an AI research group took a system that was intended to develop drugs to cure people and used it instead to design dangerous biochemical weapons, as a poorly conceived “thought experiment” whose results they then published. Second, the controversial, likely unethical, and dangerous practice of gain-of-function research, in which labs attempt to create novel and deadlier viruses for study, can be sped up and boosted by computational AI tools.
Conventional lethal autonomous weapons systems (LAWS): LAWS are drones or robots with the ability to kill humans, often with conventional guns or explosives. My understanding is that the US and China have the most advanced LAWS programs in the world, with the UK, Russia, Turkey, South Korea, and Israel not far behind. Most regulations around defense programs or AI fail to mention LAWS, an oversight that has led to unbridled and dangerous development. More background reading on LAWS is available, along with an account of the confusion over the US DoD’s policy on LAWS, which does not ban LAWS and hence permits them. In particular, I view the EU AI Act as a failure for not regulating automated nuclear weapons, other AI-boosted WMD, and LAWS.
Mass control systems (power and utility regulation): The usage of AI to run power plants, data centers, sewage systems, traffic control, and other mass infrastructure systems is still relatively new and has little oversight. While the benefits from these use cases are obvious and substantial, it’s unclear what work is being done on safeguard systems and redundancy to prevent shutdowns, malfunctioning, and failure.
Small autonomous systems (AVs, drones, etc): Perhaps the smallest of the first set of risks, yet one that has already injured and killed humans, comes from autonomous systems like self-driving cars (e.g. Tesla FSD) or sensors in airplanes (the Boeing 737 Max and the angle-of-attack (AOA) sensor failures that caused crashes). Here too, the benefits could vastly outweigh the risks, with perhaps far fewer people getting injured or dying if more cars, trucks, and planes were automated; still, the technical standards, inspection systems, and liability regime need to be established.
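As a toy illustration of the safeguard and redundancy logic the last two risks call for, here is a minimal sketch of cross-checking redundant sensors and handing control back to a human when they disagree. The sensor names, threshold, and values are my own illustrative assumptions, not any vendor's actual design:

```python
# Minimal sketch of a redundancy safeguard: cross-check redundant sensor
# readings and disengage the automated system when they disagree.
# The tolerance and readings are illustrative assumptions.
from typing import Optional

MAX_DISAGREEMENT_DEG = 5.0  # hypothetical tolerance between redundant sensors

def select_angle_of_attack(sensor_a_deg: float, sensor_b_deg: float) -> Optional[float]:
    """Return a trusted reading, or None to signal 'disengage automation'."""
    if abs(sensor_a_deg - sensor_b_deg) > MAX_DISAGREEMENT_DEG:
        return None  # sensors disagree: do not let the automated system act
    return (sensor_a_deg + sensor_b_deg) / 2

reading = select_angle_of_attack(4.2, 24.9)
if reading is None:
    print("Sensor disagreement: disengage automation and alert the operator")
else:
    print(f"Trusted angle of attack: {reading:.1f} degrees")
```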
The second set of risks: AI that destroys public institutions and trust
Deepfakes and fake media (the loss of truth): Deepfakes are text (articles, books, social media posts), images, and videos that appear to be authentic and authored or captured by humans, but are in fact doctored or generated by AI systems. As examples, consider these deepfakes that make Speaker of the House Nancy Pelosi seem like she is drunk in public, or porn sites that place the faces of celebrity men and women on pornographic videos. Deepfakes used to take a lot of effort to make, but in 2022, organizations like (the ironically named) Stability AI released tools that allow anyone in the world to make deepfakes quickly, even in minutes.
While trust in the media is already low, an upcoming surge of deepfakes in the 2020s will cause citizens to distrust all media and generally become even more cynical about evidence in the form of text, images, and video. The long-term corrosion of truth and shared facts will likely have a tremendous negative impact on public institutions like news broadcasters, legislatures, and public agencies. While there are some early tools to help detect deepfakes, this is an arms race between creators and detectors. Governments need to invest a lot more in this space and to regulate the circulation of deepfakes that defame institutions and individuals.
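Beyond detection tools, one complementary direction is provenance: publishers cryptographically sign what they release, and anything that fails verification is treated as unverified. Below is a minimal sketch of the idea using an HMAC over the content bytes; real provenance efforts such as C2PA use certificate-based signatures rather than a shared secret, so this is an illustration of the concept, not a production scheme:

```python
# Minimal sketch of provenance-based media authentication: a publisher
# signs content it releases; a verifier later checks the signature.
# A shared-key HMAC keeps the sketch short; real standards use
# public-key certificates.
import hmac, hashlib

PUBLISHER_KEY = b"example-shared-secret"   # illustrative key, not a real scheme

def sign(content: bytes) -> str:
    return hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign(content), signature)

video = b"...raw bytes of a published video..."
tag = sign(video)
print(verify(video, tag))                  # True: content is as published
print(verify(video + b"tampered", tag))    # False: edited or fabricated
```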
Algorithmic decision systems for public benefits and criminal justice: There is a range of harms that could come from automated decision systems that apportion public benefits like welfare, evaluate tax returns, decide on criminal culpability, remove children from the homes of parents, lend money to some groups but not others, and so on. The benefit of AI systems here is that they can use a lot more data to automate decisions and, on average, may outperform biased human decision makers. The downside is that these systems could systematically discriminate and do worse than the human alternatives, so they need heavy scrutiny and inspection, along with human-led appeals processes. These harms have been studied more extensively in the last few years but will expand.
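One concrete form that this scrutiny can take is routinely measuring outcomes by group. Below is a minimal sketch of such a disparate-impact check over a hypothetical audit log of automated benefits decisions; the records, group labels, and the 0.8 review threshold are all illustrative assumptions rather than a legal standard:

```python
# Minimal sketch of a disparate-impact check on an automated decision
# system: compare approval rates across groups in an audit log.
# All data and the 0.8 threshold are illustrative.
from collections import defaultdict

decisions = [  # hypothetical audit log of (group, approved)
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())
print(rates, f"impact ratio: {ratio:.2f}")  # flag for human review if ratio < 0.8
```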
Government surveillance and control of citizens: Until recently, it was impossible to build an automated surveillance system to track thousands, let alone millions or billions, of people in order to control their social behavior or even their political activity. The CCP in China, however, has invested heavily in exactly this kind of automated surveillance, and it’s possible that more autocratic governments and even democratic states will follow, with Clearview AI as one example of a private entity that collects this information and sells it to police departments and government agencies throughout the US and EU. Quis custodiet ipsos custodes? So far, none of the so-called “privacy” bills in the US or EU apply to government surveillance, which is often the most intrusive and problematic kind.
Lack of social alignment on the objective functions of AI systems: AI systems usually work to promote a set of end goals, often mathematically stated as an “objective function” (study). Without clear social and public alignment on these goals, and without even transparency about them, private or government entities could build systems with self-seeking or destructive end goals, or simply poorly specified systems with unintended consequences, like the Monkey’s Paw scenario. This is similar to current digital systems that use points and gamification to increase engagement and usage, or to certain social media systems that amplify anger, rage, and social outrage as unintended side effects, but on a much larger scale and with a much wider range of goals. To counter this, it’s important to have a disclosure regime for objective functions. That matters more than the current muddled debate about algorithmic transparency, which focuses on every aspect of an algorithm, often too complicated for outsiders to understand, instead of on the end goals and results of AI systems.
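To make the point concrete, here is a toy sketch of how the choice of objective function, rather than the model internals, determines what a system ends up amplifying; the posts, scores, and penalty weight are invented purely for illustration:

```python
# Toy sketch: the same candidate posts ranked under two different
# objective functions. All items and weights are made up; the point
# is that the stated objective, not the model internals, determines
# what gets amplified.
posts = [
    # (title, predicted_engagement, predicted_outrage)
    ("local news explainer",    0.40, 0.05),
    ("inflammatory rumor",      0.90, 0.95),
    ("friend's vacation photo", 0.55, 0.02),
]

def engagement_only(post):
    _, engagement, _ = post
    return engagement                      # objective: maximize clicks / time spent

def engagement_minus_outrage(post, penalty=0.5):
    _, engagement, outrage = post
    return engagement - penalty * outrage  # objective: engagement, penalized for outrage

print([p[0] for p in sorted(posts, key=engagement_only, reverse=True)])
print([p[0] for p in sorted(posts, key=engagement_minus_outrage, reverse=True)])
```

Under the first objective the inflammatory post ranks first; under the second it falls behind benign content, even though the underlying model and data are identical. This is why disclosing the objective is the higher-leverage transparency ask.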
The third set of risks: AI that exacerbates private and public inequality, in social, political, and economic dimensions
Imbalance of private vs public investment: As some AI critics have pointed out, most of the cutting-edge research, compute, and production AI systems today are controlled by large private companies like Google, Meta, Amazon, etc. The root cause of this is an imbalance in investment: private-sector VCs may have invested $40-75bn+ annually in the field, while large tech companies have R&D budgets of $10-30bn each and may be spending a majority of them on AI research. In contrast, total annual government spending on AI research across the major countries of the world is on the order of $500mm to $4bn (study). Assuming $60bn of normalized VC funding and $35bn for the 7 largest private companies in the AI space, that is $95bn of private investment versus $4bn of public investment.
What this means is that the large-scale AI systems of the future, running cities, manufacturing plants, hospitals, cars, and media systems, will all be controlled by a small group of private companies, with their boards and stockholders being the main beneficiaries of the surplus wealth generated. Hence, even over the next decade, if $13trn of value is created, a disproportionate slice of it may accrue to a relatively small group of people for their justified effort and investment, while governments and the public remain derelict in their duty to invest. Taxes and distributional transfers of this wealth will be extremely difficult due to the power imbalances and the morally justified ownership claims of the minority who invested while legislatures declined to.
The value of task automation goes to capital and not labor: AI will destroy a few job categories but is more likely to automate many tasks and make humans much more efficient, productive, and wealthier. Laborers displaced from jobs by automation are often forced to compete with other workers for whatever jobs are left. For example, clerical workers who have been replaced by automation may subsequently seek employment in sectors that have not been automated, such as retail work. Their entry into the retail sector causes wages in that sector to drop as clerical and retail workers undercut one another for employment. However, automation doesn’t have to destroy jobs; it could instead transform them into higher-paying ones. The policy choices made along the way may lead to upskilling and a higher-paid workforce, while neglect could lead to deskilling and a low-wage underclass.
Technical education and economic inequality: Many of the best jobs in the AI field go to workers with strong math and computer science skills. The field of CS has seen increasing demand and the highest wages for entry- and mid-level skilled workers (while slowly changing the higher-level jobs too), yet a huge skills gap persists. This is because most states and countries in the world provide no systematic education in CS or even computational thinking, often just offering remedial courses in math for curricula that haven’t been updated in decades (the status quo arithmetic-to-algebra, and maybe geometry-to-calculus, track).
AI workers in the future will need to be able to code, work with hardware, do advanced statistical analysis, use discrete math, and have a range of machine learning and programming skills that current educational systems from K-16 fail to convey. Hence an elite of technical CS and math students will continue to dominate the field, to the detriment of spreading this knowledge and the resulting wealth to a larger part of humanity. Note that this isn’t just a STEM or STEAM observation: the core is specifically the math and computer science tied to building AI systems, which complement and overlap with a STEM curriculum but may be different.
I’m still optimistic about AI and the massive benefit it will bring. Yet that does not mean I’m a blind booster or Panglossian about the large risks looming on the horizon. To maximize the benefits and minimize the costs, we need to have more study, discussion, and debate today, weighing the difficult tradeoffs that come from regulation and more investment in AI.
References & Further Reading
Primer on AI and Machine Learning (Part 1) — Beginners Level (Non-Technical)
EU AI Act (Stanford – 2021) – Act Text and Annex
Stanford AI Index Report (2021)
Artificial Intelligence: An Introduction to the Legal, Policy and Ethical Issues (Dempsey 2020)
Artificial Intelligence Policy: A Primer and Roadmap (Calo, 2017)
AI and Democratic Values Report (2021)
The Ghost of AI Governance Past, Present and Future: AI Governance in the European Union