Econ 4 AI
Research in Economics for AI Governance and Safety
Frontier AI Development: A Challenge for Humanity That Economists Can Contribute To
Disclaimer:
We assume here that the development of frontier AI, and more specifically of an Artificial General Intelligence, might pose catastrophic or existential risks if left unchecked. This may sound like a strong assumption, and readers should make up their minds about it before diving into what follows. If you are not familiar with the topic, we advise you to stop here and go through our recommended readings first.
Why can economics contribute to handling risks from frontier AI?
The history of political science is marked by numerous policy failures, despite well-intentioned efforts, and it seems inevitable that AI governance will encounter similar setbacks. To minimize these failures, it is essential to develop more theoretical and empirical models wherever possible, to weigh the trade-offs and externalities of specific regulations, and to better anticipate their outcomes. This is, among other things, the work of economists.
We think AI governance needs more rigorous theoretical models behind its many AI safety policy proposals. Proposals that ignore trade-offs and technical or institutional constraints might fail or backfire badly, and some could even exacerbate existing risks and problems or introduce new harms. While many people are now aware of the AI alignment problem from a technical perspective, we may also face an AI regulatory alignment problem if policy proposals are not carefully crafted with an understanding of their broader impacts.
We believe this is currently a missing link in AI governance. We urgently need more economists, from both academia and the policy sphere, working in this area of research to help develop the effective and comprehensive models that underpin well-founded AI policies.
Finally, in addition to their role in crafting public policies that actually solve the problems they target, economists possess skills and expertise that could help address technical problems directly within AI alignment research. In particular, they have developed tools and conceptual frameworks (mechanism design, game theory, preference elicitation, etc.) that can advance AI safety research and that are especially useful for the alignment problem.
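To make this concrete, here is a minimal sketch of the kind of game-theoretic framing economists bring: a stylized two-firm "race vs. safety" game solved by brute-force search for pure-strategy Nash equilibria. All payoff numbers are hypothetical assumptions chosen only to illustrate the method, not estimates of anything.

```python
# A stylized game: two frontier labs each choose to invest in safety
# ("safe") or to cut corners and race ("race"). All payoffs are
# hypothetical, illustrative assumptions.

from itertools import product

ACTIONS = ["safe", "race"]

# payoffs[(a1, a2)] = (payoff to firm 1, payoff to firm 2).
# Racing yields a private edge, but mutual racing raises accident risk
# and hurts both firms -- a classic prisoner's-dilemma structure.
payoffs = {
    ("safe", "safe"): (3, 3),
    ("safe", "race"): (1, 4),
    ("race", "safe"): (4, 1),
    ("race", "race"): (2, 2),
}

def pure_nash_equilibria(payoffs):
    """Brute-force search for pure-strategy Nash equilibria."""
    equilibria = []
    for a1, a2 in product(ACTIONS, repeat=2):
        u1, u2 = payoffs[(a1, a2)]
        # An equilibrium requires no profitable unilateral deviation.
        stable1 = all(payoffs[(d, a2)][0] <= u1 for d in ACTIONS)
        stable2 = all(payoffs[(a1, d)][1] <= u2 for d in ACTIONS)
        if stable1 and stable2:
            equilibria.append((a1, a2))
    return equilibria

print(pure_nash_equilibria(payoffs))  # [('race', 'race')]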
In this stylized game the unique equilibrium is mutual racing, even though mutual safety is better for both firms; mechanism design then asks how instruments such as audits and penalties could reshape the payoffs so that the safe outcome becomes the equilibrium.

Thus, economists have a wealth of tools and knowledge that can greatly contribute to both AI governance and AI safety. The "Economics for AI Governance & Safety (Econ4AI)" website aims to bring together economists and other enthusiasts interested in these issues, to develop a research agenda in this field, and to foster collaborations.
Example: "Third-party safety audit requirements for frontier AI companies" AI Policy Proposal
We use the policy proposal "Third-party safety audit requirements for frontier AI companies" as an example to show how even a seemingly simple AI policy proposal needs more theoretical and conceptual research to ensure that it effectively addresses AI risks while preserving innovation and other social and economic benefits.
Economists and policy researchers might be interested in several issues:
Private-sector vs. state-run auditing organisations: Evaluate the trade-offs between programs that rely on private-sector auditors and programs that rely on public-sector auditors. Which kind of auditing structure would better address AI risks? How can we avoid regulatory capture, where audit firms become aligned with the interests of the AI companies they audit? Does the system scale as more and more auditing firms enter the market? Should different AI auditing firms specialize in specific sets of risks or failure modes?
Increased costs and barriers to entry: Examine the potential for market concentration resulting from differential audit costs. Is there a risk of power concentrating among already well-established AI firms because of the auditing costs faced by smaller firms? Could audit requirements for AI system updates discourage desirable, speedy updates, including those related to AI safety? How can the costs of these audits be modeled for AI companies? (A toy cost model is sketched after this list.)
Rapid AI progress: New sets of risks can arise rapidly, and the standards used in audits can quickly become outdated. How can we design audit requirements that are indexed to state-of-the-art frontier AI risk frameworks? How can we avoid requirement structures that are too slow to update? How can we prevent lock-in effects caused by bureaucratic, slow-to-update auditing structures and mechanisms?
Lack of enforcement and follow-up: How do different penalty structures (e.g., fines, license revocations) affect company behavior regarding ongoing compliance? Examine effective incentive structures and policy mechanisms for ensuring ongoing compliance post-audit. Can market-based mechanisms create competitive advantages for companies that consistently comply with safety audits? (A simple deterrence sketch follows after this list.)
Overreliance on third-party assessments: Is there a risk that companies rely too heavily on third-party assessments for their safety protocols, neglecting their own continuous risk assessment and safety management processes? If so, what mechanisms should the requirements put in place to address this concern?
Divergence in international standards requirements: Examine challenges and solutions for harmonizing AI safety audit requirements across different jurisdictions.
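On the barriers-to-entry question above, even a back-of-the-envelope model shows why a fixed audit cost can concentrate the market: the same cost is a rounding error for an incumbent and a crippling burden for a small entrant. All figures below are purely hypothetical assumptions.

```python
# Toy model: a fixed per-audit cost weighs far more heavily on small
# entrants than on incumbents. All figures are hypothetical assumptions.

AUDIT_COST = 2_000_000       # assumed fixed cost per audit cycle (USD)
AUDITS_PER_YEAR = 4          # assumed audits triggered by model updates

firms = {                    # assumed annual revenue by firm size (USD)
    "incumbent": 5_000_000_000,
    "mid-size startup": 100_000_000,
    "small entrant": 10_000_000,
}

annual_audit_cost = AUDIT_COST * AUDITS_PER_YEAR
for name, revenue in firms.items():
    burden = annual_audit_cost / revenue
    print(f"{name:>16}: audit burden = {burden:.1%} of revenue")
```

Under these assumptions the audit burden is 0.2% of revenue for the incumbent but 80% for the small entrant, which is the differential-cost effect the question asks researchers to model properly.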
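On the enforcement question, a stylized Becker-style deterrence model illustrates how detection probability and penalty size interact: a firm keeps complying after its audit only when the expected penalty exceeds the savings from cutting corners. Again, every parameter here is a hypothetical assumption for illustration.

```python
# Stylized deterrence model: compliance pays off when the expected
# penalty for non-compliance exceeds the cost of staying compliant.
# All parameters are hypothetical assumptions.

def complies(compliance_cost, detection_prob, fine):
    """True if the expected penalty outweighs the savings from defecting."""
    return detection_prob * fine >= compliance_cost

compliance_cost = 5_000_000   # assumed annual cost of staying compliant (USD)

for detection_prob in (0.05, 0.25, 0.50):
    for fine in (10_000_000, 50_000_000, 200_000_000):
        decision = complies(compliance_cost, detection_prob, fine)
        print(f"p(detect)={detection_prob:.2f}, fine=${fine:>11,}: "
              f"{'complies' if decision else 'defects'}")
```

Even this toy version makes the policy trade-off visible: weak monitoring must be offset by very large fines to sustain compliance, which is one reason penalty structures deserve careful modeling rather than ad hoc choices.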