The rise of AI and the future of lawtech

Charlie Bromley-Griffiths, Senior Legal Counsel at Conga, writes exclusively for NODE Magazine

As artificial intelligence (AI) is more widely adopted, additional legislation and guidelines are emerging to protect personal data and ensure models are developed responsibly, with a focus on risk management and bias prevention. Businesses and regulators naturally share concerns about protecting personal privacy and keeping data secure, not least because there are so many regional laws to contend with.

As legal counsel for a global company, I know that wrestling with various AI frameworks can feel overwhelming at times. Trying to align all the different rules, definitions and standards is a legal minefield. With AI evolving faster than ever, it is important that organisations stay on top of developments, legislation and regulatory changes. Failure to do so could prove catastrophic.

AI has already muddied the waters in the world of law: last year it emerged that two AIs had successfully negotiated a legally binding non-disclosure agreement without any human involvement. This was done in a matter of minutes and required only a human signature at the end. It raises all kinds of questions, from the possibility of job losses through automation to how much risk is involved in giving AI control over drafting contracts. If AI is to be given this kind of responsibility, then it needs to be legislated for, but in a globalised world this raises many more questions.

The international situation

There are 195 nations in the world. That is 195 different potential legal frameworks covering AI. Many countries are beginning to introduce legislation, with the European Union (EU) prioritising transparency and algorithmic fairness through its AI Act, part of a wider set of policy measures including the AI innovation package and a coordinated plan on AI. The EU says these will guarantee the fundamental rights of people and businesses as well as strengthen uptake, investment and innovation in AI across the EU. The framework takes a 'risk-based approach', which considers how much risk each system poses; systems deemed a clear threat are banned outright.

Meanwhile, the United States relies on a sector-by-sector approach. The White House published a blueprint for an AI Bill of Rights back in 2022, which set out five principles:

  1. Safe and effective systems
  2. Algorithmic discrimination protections
  3. Data privacy
  4. Notice and explanation
  5. Human alternatives, consideration and fallback

The issue, however, is that this blueprint is not an enforceable, legally binding document; it is merely a series of suggestions. It may be a step in the right direction, but whether the United States will follow through remains to be seen. While the blueprint remains only a blueprint, AI legislation in the USA will stay fractured.

This worldwide inconsistency creates challenges for developers and consumers alike. Imagine an AI used in loan approvals. In one country, regulation might require the system to be tested for bias; in another, nothing might prevent it from unfairly rejecting applicants from certain demographics. Common legal standards are crucial to ensure fairness and ethical treatment for everyone, everywhere. In terms of AI legislation, therefore, international collaboration will be crucial in making AI work for everyone.

Collaboration is key

In 2019, Fei-Fei Li, Co-Director of the Stanford Institute for Human-Centered AI, emphasised the importance of international collaboration in developing ethical guidelines for AI, stating:

“We need bold leadership, a national vision and a values-driven framework for international standards, policies and principles. This requires an unprecedented collaboration and commitment of international, federal, state and local governments, as well as academia, non-profits and corporations.”

More consistent regulations would protect consumers and developers alike. However, we should be cognisant that achieving global consensus on AI regulation is no small task; it demands extensive collaboration among nations with differing cultural, legal and economic interests. The pragmatist in me understands the competing concerns that could complicate any standardisation effort. As it did with the GDPR, the international community needs to find a 'gold standard' for AI regulation and, at the very least, model its laws around that.

Elon Musk has suggested that AI will become the most disruptive force in the history of work. This effect on the working world is a clear reason why global legislation is necessary; without it, using AI for international needs will be impossible. Without clear frameworks, one country's AI will be built around one set of legislation, and when it tries to work with another country's AI, built around a different set, the two will be incapable of coming together to create a legally binding document. Confusion will reign.

Through international efforts, we can establish common ground for responsible AI development, ensuring this technology serves humanity as a whole.

Organisations have dramatically increased their AI budgets in the past few years. A report on the state of AI by McKinsey found that in most industries, organisations are likely to invest more than 5 percent of their digital budgets in some form of AI. With this level of investment, companies will want to be sure that they stay on the right side of the law, and with many operating across different regions, a settled legal framework will be crucial to ensure AI use is both ethical and lawful.

It is best practice to keep up to date with all AI-related legislation in the regions where a business operates and to ensure that any further AI implementation follows it strictly. Businesses must also keep an eye on impending legal developments and be in a position to adapt their services accordingly. To do this successfully, any company looking to invest in AI should keep its legal team in the loop and ensure everyone is on the same page, preventing miscommunication. AI is an ever-changing field, and without a strong change management strategy, businesses could end up creating more issues for themselves and their customers than AI can fix.

Charlie Bromley-Griffiths

Charlie Bromley-Griffiths is Senior Legal Counsel at Conga.
