The new wave of artificial intelligence (“AI”) brings promises as well as threats. By assisting workers across industries, it can raise productivity and boost real wages. By making use of large, underutilized datasets, it can improve outcomes in services including retail, health, and education. Meanwhile, the risks range from deepfakes and privacy abuse to unappealable algorithmic decisions, large-scale intellectual property infringement, and wholesale job losses.
Both the risks and the potential benefits seem to grow by the day as AI developers roll out new platforms and tools for consumers and companies alike. For example, OpenAI recently released new models it said could reason, performing complex calculations and drawing conclusions. Against this background, there are rising calls for new AI-specific regulations.
Most AI Issues Are Already Regulated
A Senate committee in Australia is about to report on the opportunities and impacts of the uptake of AI. Separately, the government – following in the footsteps of the likes of the European Union, the United States, and others – is consulting on mandatory guardrails for AI in high-risk settings, which would function as a sort of checklist for what developers should consider, alongside a voluntary safety standard.
But how necessary is new regulation? In reality, most of the potential uses of AI are already covered by existing laws and regulations designed to do things such as protect consumers, protect privacy, and outlaw discrimination. These laws are far from perfect, but where they fall short the best approach may be to fix or extend them rather than introduce special extra rules for AI. AI can certainly raise challenges for existing laws – for example, by making it easier to mislead consumers or to apply algorithms that help businesses collude on prices. But the key point is that laws to control these things exist, as do regulators experienced in enforcing them.
Making Existing Laws Work for AI
One of Australia’s great advantages is the strength and expertise of its regulators, among them the Competition and Consumer Commission, the Communications and Media Authority, the Australian Information Commissioner, the Australian Securities and Investments Commission, and the Australian Energy Regulator. Their job should be to show where AI is covered by the existing rules, to evaluate the ways in which AI might fall foul of those rules, and to run test cases that make the applicability of the rules clear. This approach will help build trust in AI, as consumers see they are already protected, while also providing clarity for businesses.
AI might be new, but the established consensus about what is and is not acceptable behavior has not much changed.
In some situations, existing regulations will need to be amended or extended to ensure behaviors facilitated by AI are covered. Approval processes for vehicles, machinery, and medical equipment are among those that will increasingly need to take account of AI. And in some cases, new regulations will be needed. But this should be where we end up, not where we begin. Trying to regulate AI simply because it is AI could, at best, be ineffective. At worst, it could stifle the development of socially desirable uses of AI.
Many uses of AI will create little if any risk. Where potential harm exists, it will need to be weighed against the potential benefits of the use. The risks and benefits ought to be judged against real-world, human-based alternatives, which are themselves far from risk-free. New regulations will only be needed where existing regulations – even when clarified, amended or extended – are inadequate. And where they are needed, they should be technology-neutral wherever possible. Rules written for specific technologies are likely to quickly become obsolete.
The Last Mover Advantage
Finally, there is a lot to be said for becoming an international “regulation taker.” Jurisdictions such as the European Union are leading the way in designing AI-specific regulations. Product developers worldwide will need to meet those new rules if they want access to the EU and other big markets. If other nations developed their own idiosyncratic AI-specific rules, developers might ignore relatively small markets – like Australia – and go elsewhere. This means that, in those limited situations where AI-specific regulation is needed, the starting point should be the overseas rules that already exist.
There is an advantage in being a late or last mover. This does not mean that countries like Australia should not be at the forefront of developing international standards. It merely means they should help design those standards with other countries in international forums rather than striking out on their own. The landscape is still developing. The aim should be to give ourselves the best chance of maximizing the gains from AI while providing safety nets to protect ourselves from adverse consequences. Existing rules, rather than new AI-specific ones, are a good place to start.
Stephen King is a Professor of Economics at Monash University.