You may have read the article published in the previous edition of the SriLankaNZ newspaper, “What is ChatGPT and how it may impact our lives”. I thought of writing this article as a continuation of that topic after attending a conference on Artificial Intelligence (AI) and its regulation.
Three decades ago, AI helped moviemakers produce blockbusters such as The Matrix. Today, however, we experience AI everywhere, whether knowingly or with no clue at all. In the last few months, ChatGPT has been the talking point everywhere. Just to refresh: ChatGPT and similar AI chatbots can write essays, make up phone scripts, draft movie scripts, and even attempt exams, although ChatGPT still struggles with some of them. Many tech leaders are calling for AI to be regulated. Sam Altman, the head of the company that created ChatGPT, wants governments to regulate it. Many people think that regulatory intervention by governments will be critical to mitigating the risks of increasingly powerful models without killing useful innovation. Some of these creators understand how powerful AI is. The question, however, is how we can control it; or, harder still, can we control it at all? Others may ask why we should care.
AI is all around us: we already use it when we follow Google Maps, talk to Siri or Alexa, type with predictive text, or unlock our phones with face ID, and the Google algorithm that makes this SriLankaNZ article pop up in your feed is another example, to name a few. Let us try to understand AI by comparing it to humans. Humans can think, and that is exactly what AI tries to mimic or replicate: systems that think and solve problems the way we humans do.
The idea is simple: recreate human-like thinking to solve problems. This brings us to the term machine learning, which is essentially learning from experience on the go, beyond learning only from previous data. These systems recognise patterns and use them to solve problems with little human intervention. Imagine you are scrolling through your feed and you see a video about Sri Lankan beaches. Once you ‘like’ that video, Sri Lankan beach videos start popping up for you. What the algorithm has done is use machine learning to understand what you like and then recommend similar content.
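For curious readers, the ‘liked a beach video, now see more beach videos’ idea can be sketched in a few lines of code. This is only a toy illustration with made-up video titles and tags; real recommendation systems are vastly more complex. Here each candidate video is simply scored by how many tags it shares with videos the user has liked:

```python
# Toy recommender: rank candidate videos by tag overlap with liked videos.
# Titles, tags, and the scoring rule are invented for illustration only.

def recommend(liked, candidates):
    """Return candidate titles that share at least one tag with liked videos,
    ranked by how many tags they share."""
    liked_tags = set()
    for video in liked:
        liked_tags.update(video["tags"])
    # Score = number of tags a candidate shares with anything the user liked.
    scored = [(len(liked_tags & set(v["tags"])), v["title"]) for v in candidates]
    scored.sort(reverse=True)
    return [title for score, title in scored if score > 0]

liked = [{"title": "Sri Lankan beaches", "tags": {"sri lanka", "beach", "travel"}}]
candidates = [
    {"title": "Best beaches in Sri Lanka", "tags": {"sri lanka", "beach"}},
    {"title": "City cooking show", "tags": {"food", "cooking"}},
    {"title": "Travel vlog: Galle", "tags": {"sri lanka", "travel"}},
]
print(recommend(liked, candidates))  # the cooking show is filtered out
```

The two beach/travel videos are recommended and the unrelated cooking show is not, which is the whole trick, at toy scale: match patterns in what you liked before.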
Now AI has come a long way; computers beating humans at chess is history. AI can be categorised in many ways, but broadly into three stages. The first is Artificial Narrow Intelligence, where a system is given one task and does just that; examples include smart appliances, self-driving cars, streaming apps, and even healthcare tools. This looks simple: we give a machine a task, and it does it. In the second stage, AI can rival humans because it can do many different things at once; many have called ChatGPT a step towards Artificial General Intelligence. Then we have the third stage, Artificial Super Intelligence, where machines go beyond human intelligence, which raises serious concerns: they would have minds of their own, like the villain robots of science-fiction movies. The current emergence of AI has raised many questions, including ethics, bias, responsibility and explainability. The critical question is: can it get out of hand, and are we too late to control it?
In March this year, Twitter owner Elon Musk and Apple co-founder Steve Wozniak were among the signatories of an open letter asking AI labs to pause the development of the most powerful AI systems for six months. The letter argued that powerful AI systems should be developed only once we are confident that their effects will be positive and their risks manageable.
Tech leaders believe AI is growing fast, fast enough to have negative impacts on society and humanity. AI is not without its own set of problems. The first is how it is used: facial recognition to unlock your phone sounds great, but what happens when the same technology is used to spy on you? The other problem is bias. You would expect machines to be neutral; however, they are made by us, and we humans are inherently biased. The machines we make can therefore amplify the existing biases in society. AI is only as good as the data it is fed: if the data is biased, so is the machine. Consider an AI algorithm trained on data from men only. When that same algorithm is applied to women, the results will be biased.
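The ‘trained on men only’ problem can be shown with a deliberately trivial example. The numbers below are invented and the ‘model’ is just an average-based cutoff, not any real algorithm; the point is only that a rule learned from one group can systematically mislabel another:

```python
# Toy, made-up illustration of 'biased data in, biased results out'.
# We 'train' a trivial rule on men's heights only, then apply it to women.

def train_threshold(heights_cm):
    """Learn a cutoff: anyone below the training average is labelled 'short'."""
    return sum(heights_cm) / len(heights_cm)

men = [172, 175, 178, 180, 183]    # training data: men only (invented numbers)
women = [158, 162, 165, 168, 171]  # the model never saw data like this

cutoff = train_threshold(men)      # average learned from men alone
labelled_short = [h for h in women if h < cutoff]

# Because the cutoff reflects only the men's data, it labels every one
# of these women 'short' - the bias in the data became bias in the output.
print(f"cutoff={cutoff:.1f} cm, women labelled short: {len(labelled_short)}/{len(women)}")
```

Here all five women fall below a cutoff that was never meant for them. Real AI systems fail in the same way, just less visibly, when their training data leaves a group out.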
Compare AI with social media: we thought about regulating social media only after things went out of control. AI, too, is a double-edged sword that can be used for both good and bad.
By Dr. Amal Punchihewa – Palmerston North