Econ 4 AI

Research in Economics for AI Governance and Safety

Frontier AI Development: A Challenge for Humanity That Economists Can Help Address

Disclaimer:

We assume here that the development of frontier AI, and more specifically of an Artificial General Intelligence, might pose catastrophic or existential risks if left unchecked. This may sound like a strong assumption, and readers should make up their own minds about it before diving into what follows. If you are not yet familiar with the topic, we advise you to stop here and go through our recommended readings first.


Why can economics contribute to addressing the risks from frontier AI?


The history of public policy is marked by numerous failures, despite well-intentioned efforts, and it seems inevitable that AI governance will encounter similar setbacks. To limit such failures, it is essential to develop more theoretical and empirical models wherever possible, to weigh the trade-offs and externalities of specific regulations, and to better anticipate their outcomes. This is, among other things, the work of economists.


We think AI governance needs more rigorous theoretical models behind many of the policy proposals in AI safety. Proposals that ignore trade-offs and technical or institutional constraints might fail or backfire harshly. Moreover, some AI policy proposals could exacerbate existing risks and problems or introduce new harms. While many people are now aware of the AI alignment problem from a technical perspective, we think we could also face an AI regulatory alignment problem if policy proposals are not carefully crafted with an understanding of their broader impacts.


We believe this is currently a missing link in AI governance. We urgently need more economists, from both academia and the policy sphere, working in this area to help develop the effective and comprehensive models that underpin well-founded AI policies.


Finally, beyond their role in crafting effective public policies, economists possess skills and expertise that could help address some technical problems within AI alignment research directly. In particular, they have developed tools and conceptual frameworks (mechanism design, game theory, preference elicitation, etc.) that can be used to advance AI safety research and, more specifically, that are useful for the alignment problem.
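As one hedged illustration of this connection (our own toy sketch, not part of the Econ4AI agenda), the preference elicitation tools economists use every day are closely related to reward learning from human feedback: fitting a Bradley-Terry random-utility model to pairwise comparisons is essentially what a reward model does. The alternatives, utilities, and parameters below are all hypothetical and chosen only to keep the example self-contained.

```python
# Toy sketch: recover latent "utilities" of alternatives from pairwise
# preference data with a Bradley-Terry model, using plain NumPy.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 alternatives (e.g. candidate model responses)
# with unknown utilities we want to recover from comparisons.
true_utility = np.array([0.0, 0.5, 1.0, 2.0])
n_items = len(true_utility)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Simulate comparisons: P(i preferred to j) = sigmoid(u_i - u_j).
comparisons = []  # list of (winner, loser) index pairs
for _ in range(2000):
    i, j = rng.choice(n_items, size=2, replace=False)
    if rng.random() < sigmoid(true_utility[i] - true_utility[j]):
        comparisons.append((i, j))
    else:
        comparisons.append((j, i))

# Fit utilities by gradient ascent on the Bradley-Terry log-likelihood.
u_hat = np.zeros(n_items)
lr = 1.0
for _ in range(400):
    grad = np.zeros(n_items)
    for winner, loser in comparisons:
        p = sigmoid(u_hat[winner] - u_hat[loser])
        grad[winner] += 1.0 - p
        grad[loser] -= 1.0 - p
    u_hat += lr * grad / len(comparisons)

# Utilities are identified only up to a constant, so anchor the first at 0.
u_hat -= u_hat[0]
print("true utilities:     ", np.round(true_utility, 2))
print("estimated utilities:", np.round(u_hat, 2))
```

The estimated utilities approximately recover the true ones (up to sampling noise), which is the same logic that underlies both discrete-choice econometrics and preference-based reward modelling in alignment work.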


Thus, economists have a wealth of tools and knowledge that can greatly contribute to both AI governance and AI safety. The "Economics for AI Governance & Safety (Econ4AI)" website aims to bring together economists and other enthusiasts interested in these issues, to develop a research agenda in this field, and to foster collaborations.


Example of an AI policy proposal: "Third-party safety audit requirements for frontier AI companies"

We use the policy proposal "Third-party safety audit requirements for frontier AI companies" to illustrate how even a seemingly simple AI policy proposal is likely to require more theoretical and conceptual research to ensure that it effectively addresses AI risks while promoting innovation and other social and economic benefits.

Economists and policy researchers might be interested in several issues: