Artificial intelligence, cybersecurity, and evolving legal trends
By R.M. Dhanushka Surendra
Information technology exerts considerable influence on our day-to-day lives. Because its growth is rapid and revolutionary, new ventures have been introduced to society over the past decades. The basic legal principle that “ignorance of the law is no excuse” remains vital in any legal system, even in a tech-driven era. Artificial Intelligence (AI) and cybersecurity now play major roles in that era, making it crucial to understand the legal implications of the tools we work with.
This article breaks down key ideas in AI, cybersecurity, and the need for updated laws to tackle new challenges. As Stanford’s John McCarthy defined it, AI is “the science and engineering of making intelligent machines.” The history of AI goes back to Alan Turing’s “Turing Test” of 1950, which evaluated a machine’s ability to exhibit intelligent behaviour. Sooner rather than later, the world will be governed by AI-based tools.
Cybersecurity
Cybersecurity involves protecting computer systems, data, networks, and programmes from unauthorised access, modification, disclosure, and destruction. It aims to safeguard our digital lives against cybercrime and fraudulent activities.
Many countries have laws to protect citizens from cybercrimes, but the question is: “Are existing laws sufficient to address the new challenges brought by the development of AI?”
AI tools and legal concerns
There were recent media reports on the distortion of the “Karaneeya Metta Sutta” using AI tools. Today, tools like ChatGPT and Google Gemini are widely used, though they have limitations. It has been noticed that these tools sometimes provide biased answers in favour of certain ethnic groups. For example, if we ask, “Why does ChatGPT provide a biased answer on racism?”, it replies: “ChatGPT is trained on vast amounts of text data from the internet, which includes a wide range of perspectives and biases. If certain viewpoints are more prevalent in the training data, this can influence the responses generated by the model.”
Accordingly, it generates output based on its training data. The reliability of that data is one concern; whether it was collected with or without the consent of its authors is another.
These AI tools generate text, images, and videos based on vast amounts of pre-trained data. This raises legal questions about data ownership and the consent of the original authors: who owns the data, who can access it, and under what authority? Training AI models on large datasets can also lead to intellectual property infringements. Who is liable for these violations: the AI developer or the user? Current laws need to address these issues, and new laws may be required.
Misinformation, disinformation, and deep fakes
Today, information plays a vital role. In this context, misinformation, disinformation, and deep fakes have become serious concerns. Well-trained deep-fake algorithms can create realistic videos and images of people doing or saying things they never did. This technology can be misused to spread false information, damage reputations, and incite hatred. While freedom of speech is a fundamental right in most countries, the misuse of AI to violate the rights of others is a growing problem.
The need for updated laws
Sri Lanka recently passed the Online Safety Act to protect individuals from harm caused by prohibited online statements and to safeguard against the misuse of online accounts and bots. However, amendments are needed to address AI-based crimes and misuses. Some countries have already updated their laws to address these issues.
In the case of Authors Guild et al. v. OpenAI Inc. et al., copyright infringement was at issue. The plaintiffs’ principal allegation was that the defendants used their copyrighted works, such as books, to train large language models, and that this amounted to large-scale theft of those works.
If we type “summarise Madol Doowa or A Tale of Two Cities” into ChatGPT, it will provide a summary. Again, the question arises: is this legal? Authors may publish their work on websites, whether for commercial or non-commercial purposes, but if those sites are drawn on without the authors’ consent, this may amount to digital theft and a breach of intellectual property rights.
A recent development in this regard is Elon Musk vs. OpenAI, in which Musk sued OpenAI over its profit-making use of AI-based tools, alleging a departure from its founding mission. This is a clear signal of the chaos ahead if boundaries are not defined and legal frameworks are not introduced.
Redefining information security
The pillars of information security, namely confidentiality, integrity, and availability, need redefining in the AI era.
• Confidentiality: Keeping data secret and accessible only to authorised parties is challenging when AI tools use training data without owners’ consent.
• Integrity: Ensuring data correctness is difficult if AI models are trained with inaccurate information, leading to false responses.
• Availability: Ensuring data availability is essential, but the trustworthiness of AI-generated information remains a concern.
In light of these recent developments, the definitions underpinning cybersecurity need to be revisited, and new dimensions introduced to address the challenges of AI.
Current laws may not be sufficient to address the implications of AI and cybersecurity advancements. A thorough review and the establishment of new legal boundaries are essential to prevent potentially catastrophic outcomes. Moreover, given the principle that ‘ignorance of the law is no excuse,’ it is recommended that society be made aware of these developments.
(The writer is an attorney-at-law and a senior lecturer in IT at the UCL Campus, Sri Jayewardenepura Kotte.)