The Drawbacks Of International Law In Governing Artificial Intelligence


Artificial Intelligence (AI) systems are being deployed across industries and sectors of the economy, making their governance a genuinely global problem. At first glance, international law seems like an obvious way forward: a body of law designed to address cross-border issues. But this is far from reality. The international law lens is in fact highly superficial, relying on categories that do not reflect the political economy and dynamics of AI.


Mauricio Figueroa argues why we need to look beyond the conventions of international law to regulate AI.


I. The Nature of International Law

Traditionally, public international law has been primarily – but not exclusively – concerned with the obligations of states, the interactions of international organisations, and the recognition of individuals as rights bearers. Private international law, by contrast, has been concerned with market dynamics, handling contractual transactions, jurisdictional issues, and commercial rights. The generation, commercialisation, and adoption of algorithmic systems was, until recently, largely a matter for the latter, grounded in principles and legal institutions of private law such as torts, copyright, or contract law.


The landscape, however, began to shift as AI’s potential risks and impacts gained attention, prompting international bodies to act. UNESCO issued its Recommendation on the Ethics of Artificial Intelligence in November 2021, adopted by all 193 Member States and intended to be universally applicable despite its non-binding nature. In a similar vein, the United Nations Secretary-General’s High-Level Advisory Body on Artificial Intelligence released its final report, Governing AI for Humanity, on September 19, 2024. Its recommendations are likewise non-binding, but the report signals that AI governance is now a matter of global relevance and calls on member states to engage in international efforts to address such a complex global issue. Furthermore, the Council of Europe’s recent adoption of the Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law (CETS No. 225) might be seen as a landmark in the evolution of legal frameworks around AI, as it is the first binding agreement with a global focus.


II. Wrong Targets

The divergence of interests among states, dictated by disparate economic structures and varying levels of technological sophistication, profoundly complicates the drafting and adoption of a global instrument to govern AI. This difficulty is compounded by the dominance of large technology firms, commonly referred to as Big Tech, which wield significant influence over the AI landscape and often operate beyond the regulatory reach of states. Crafting international treaties that effectively regulate these corporations is thus fraught with difficulties, as such instruments must tackle, or at least acknowledge, the interplay between state obligations and corporate power.


For instance, the Framework Convention falls short of touching the substance of AI-related risks. In its 36 articles, it encapsulates a broad consensus around notions established in the international discourse around AI: protecting human dignity, individual autonomy, transparency, accountability, preserving privacy, risk assessments, procedural safeguards, remedies, and so forth. It does not introduce innovative mechanisms or profound insights into managing AI’s complex socio-legal implications.


But the issue, at its core, is not a matter of limitations within this particular Convention, but rather the very nature of AI. AI systems are part of a highly complex landscape in which a handful of corporations exercise power well beyond the realm of corporate affairs. This power operates within the new economic logic that scholars like Julie E. Cohen and Manuel Castells have referred to as informational capitalism.


The challenge lies in the fact that domestic legal frameworks, both directly and indirectly, support the prevailing informational capitalist economy: contract law through terms of service and non-disclosure agreements; intellectual property regimes that protect corporate secrecy; corporate law that governs financial allocations; complex tort law that limits extra-contractual liability; and even privacy, in the form of data protection statutes that confer predominantly individual rather than collective rights, even though collective rights could address issues such as data collection, processing, and web scraping more effectively.


Public international law’s approach to AI regulation, which predominantly places the burden on States to enforce compliance, tends to overlook the profit-driven dynamics and influence of major technology powerhouses, and, on a deeper level, the stark disparities in enforcement capabilities among nations, particularly regarding the expertise and resources needed to establish effective regulatory oversight.


The disparity in oversight and compliance is particularly evident in emerging economies within the Global South or Majority World, where the capacity to regulate algorithmic decisions, and to contend with the corporations behind them, is often inadequate. Information collection and processing within algorithmic systems increasingly mediates the citizen-state relationship, yet national and subnational governments find it difficult to enforce corrections or even to understand the full extent of the technology’s impacts. Big Tech corporations, with their extensive resources and global reach, present a major challenge to national governance structures, making the enforcement of international AI governance mostly aspirational.


The significant influence and power of multinational technology corporations pose a unique challenge: these entities often command more resources than the countries in which they operate, and more often than not they delegate corporate activities to outsourced firms in the Global South. Cases such as OpenAI’s use of Kenyan content moderators illustrate how the enforcement of AI-related rights and rules in emerging economies is certainly challenging.


Moreover, as Chinmayi Arun points out with precision, the regulatory discourse surrounding AI must contend with a globalised operational framework in which companies outsource data annotation to one nation, test algorithms in another, and deploy products with societal risks across multiple jurisdictions. Legal frameworks that fail to interrogate the power dynamics of these private actors, and the economic systems that propel their actions, risk perpetuating regulatory blind spots. International law, in particular, appears ill-suited to engage with these complexities, a deficiency rooted in its historical detachment from the analytical tools of political economy.


III. A Word on Political Economy

To grapple with the pressing legal challenges posed by AI, lawyers and legal scholars must confront the political economy underpinning these systems. Sadly, despite valuable academic efforts, the term often escapes practitioners and scholars alike, largely because it is unfamiliar. Its marginalisation may also arise, in part, from the unfounded assumption that it entails a wholesale critique of capitalism as the driver of AI’s development and the cause of its negative externalities. Such a reading misses the mark: it would reduce complex social, political, and technological phenomena to a general critique of profit-driven motives.


First, while the various harms associated with AI systems may certainly be characterised as consequences of profit maximisation, in line with a capitalism-focused approach, the lens of political economy interrogates the why, who, and how: why these harms arise, who benefits and who is harmed, and how policy, governance, and power relations shape technological outcomes.


Secondly, a capitalist critique may well frame digital technologies and their development and deployment as tools for capital accumulation. Yet, political economy reveals how AI is co-constituted by both economic and political forces, rather than being purely driven by market dynamics.


Thirdly, while capitalism critiques may highlight global exploitation, political economy shows how AI systems reinforce dependencies and inequalities between nations, and the monopolisation of AI infrastructure by a few corporations.


Finally, political economy recognises that capitalism is not monolithic. Nor does it necessarily aim to overthrow capitalism outright; instead, it offers a more granular analysis of capitalism’s intersections with other forces, allowing for critique without fatalism and reflecting on spaces for reform, resistance, and alternative approaches.


With notable exceptions, international law rarely engages with general capitalism-focused critiques and has yet to become sensitive to specific political economy approaches. The latter are crucial for understanding the AI landscape and its affordances.


IV. Ways Forward

The recent interest and involvement of international law scholars and practitioners in the governance of algorithmic systems is notable on different levels. However, initiatives like the first global Framework Convention or global recommendations by international bodies seem to be missing the wider picture. The lifecycle of AI systems – from design and development, through deployment, to eventual decommissioning – is intertwined with the political economy of major tech companies. This relationship complicates any attempt to build regulatory paradigms and highlights the challenges of establishing a governance model that adequately addresses AI and its broad implications.


The real value of recent international law involvement in this field lies in the opportunity it presents for widespread dialogue: within the legal community at large, beyond legal silos, but also with other disciplines, urging a re-evaluation of traditional legal frameworks in light of AI’s complex and dynamic nature. This collaborative approach is essential for developing legal responses that are not only technically effective but also socially responsible, ensuring that AI development, commercialisation, and deployment align with the values and principles that international bodies have enshrined.


The increasing prominence of AI within the domain of international law offers an enlarged space for enriched dialogue across legal subdisciplines, as well as with disciplines beyond the legal realm.


A more granular understanding of the political economy of AI and the lifecycle of AI systems will not only highlight the potential relevance of international law and its instruments, but also reveal its overall shortcomings and the limits of its applicability. Public international law will be part of the solution but not the solution for AI governance.

About the Author - Mauricio Figueroa is a legal researcher and the host of the Society for Computers and Law ('SCL') Podcast, Privacy and Technology Laws Around the World. SCL is the leading educational charity for the tech law community, with a UK-wide vision and global significance. Their mission is to inform and educate legal and technology professionals, academics, students and the wider audience on the impact of IT on law and legal practice. SCL works to promote ideals of best practice, thought leadership, and the fostering of a global tech law community. This article was first published on their website and has been reproduced with their permission. Find out more about SCL and the work they do here

Writer: Paul Andrews - CEO Family Business United | Apr 3, 2025 | 7 min read
