This week President Biden signed a sweeping executive order on the use and development of Artificial Intelligence. While many commentators have praised its ambitious scope, the order is long on platitudes, and basic economic analysis suggests it is business as usual for the Biden administration: usurping authority, brow-beating private-sector companies, slowing innovation, and advancing a divisive progressive agenda in the name of “equity.”

Although the administration claims authority from the Defense Production Act, very little of the executive order is even remotely related to national defense. It uses boilerplate language about “serious risk,” “national economic security,” “national public health,” “ensuring safety,” “ensuring appropriate screening,” and much more.

These aspirations have little connection with what this executive order will do.

The Biden administration signaled from day one (Executive Order 13985, Advancing Racial Equity and Support for Underserved Communities Through the Federal Government) that it would engage the entire machinery of the federal government to promote rent-seeking for “disadvantaged” groups – defined however the administration sees fit. It recently doubled down on this agenda. The new Executive Order 14110, on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, continues to advance the equity agenda.

Give them kudos for consistency!

From their attempt to “forgive” student loans to handing out tens or even hundreds of billions of dollars to favored groups they call disadvantaged – from distressed farmers to women and minority business owners to companies advancing “climate justice” through renewable energy production or electric vehicle development – the Biden administration clearly believes it knows who should win and who should lose.

The same is true of this EO on artificial intelligence. The order inserts government bureaucrats and agencies into the development and use of AI. The administration wants to slow and restrict AI development – directing large AI companies to submit their models and applications to government officials so their safety can be “independently verified.” Of course, political incentives being what they are, these safety evaluations will be used to redirect and tweak AI models toward the priorities of the current administration and its ubiquitous “disadvantaged” groups.

Without details, evidence, or examples, the Biden administration insists that it cannot and will not “tolerate the use of AI to disadvantage those who are already too often denied equal opportunity and justice. From hiring to housing to healthcare, we have seen what happens when AI use deepens discrimination and bias, rather than improving quality of life.” In response, administration officials intend to put their thumbs on the scale to make sure their favored groups, labeled as disadvantaged, gain special status, funding, access, and priority through AI models.

The lack of nuance on the topic of equity is mind-numbing. The order’s framing is also reductively simplistic: it erases individuals as moral agents by subsuming them under whatever group or class identity happens to be politically convenient.

Economic models of rent-seeking demonstrate that these requirements will divert resources away from productive activity, toward lobbying politicians and regulators for favorable treatment. Restrictions on AI development, despite the administration’s claims to the contrary, will almost certainly make the AI space less competitive and more difficult for smaller and newer firms to operate in – further entrenching the economic size, political influence, and social clout of current massive tech companies.

The Biden administration is going about AI governance all wrong. Instead of allowing legislators to create clear, general rules based on observed, direct harm from AI development, it has taken the precautionary principle to an unhealthy extreme. This EO creates rules, restrictions, and demands on AI developers based on hypothetical, abstract, and even imaginary harms. But all these precautions are costly – in both time and money – and will inevitably slow US companies’ advance in what appears to be a critical new technology.

Concern about the strength and application of AI in national security and great-power rivalry should lead to an opposite approach, known as “permissionless innovation.” The EO gets it right when it states “America already leads in AI innovation—more AI startups raised first-time capital in the United States last year than in the next seven countries combined.” But the principles of this order, as they are developed into regulatory tools by the administrative state, are a clear threat to this creative lead in AI by American companies.

How does that serve American interests?

Just as a strong economy built on the rule of law, private property, and free enterprise prepared the US for a global war in the 1940s, unleashing US innovation in software and AI algorithms by reducing rules and regulations will create a far more robust technological base from which to compete with other countries. It will also help us combat hacking, electronic espionage, and other forms of technological sabotage.

Rather than averting danger, this executive order will put the US at a disadvantage in the race to develop AI. Instead of making AI “safer” and more “equal,” these rules allow the federal government and its agents to direct the development of AI to benefit its favored interest groups at the expense of everyone else.

Rather than lauding their own foresight and wisdom with words like “landmark” and “the most sweeping actions ever,” Biden administration officials should be ashamed that Executive Order 14110 ever saw the light of day.
