Opinion

KHUMO KUMALO | Perfect artificial intelligence is an illusion ― that’s where policy comes in

Policies must ensure AI is treated like another human being: capable of making genuine mistakes and far from perfect

A keyboard is seen reflected on a computer screen displaying the website of ChatGPT, an AI chatbot from OpenAI, in this illustration photo.  Picture: FLORENCE LO/REUTERS

The most apparent thing that seems to be both widely accepted and widely understated is that artificial intelligence is imperfect, with hallucinations leading to incorrect outputs and false information. AI will inherently be imperfect because it is trained by humans, and imperfection is what makes us human more than anything else. The reality is that there will never be an infallible system created by humans.

Without being too technical, AI hallucinations are outputs that are incorrect or made up, produced when the AI tries to respond to a query with false confidence that the information is credible. Mainly, this happens because AI is not trained like other computer systems that check against a binary true/false. It takes in all information and attempts to guess correctly based on trends, data and whatever is available. AI interprets large amounts of data; there is no pre-existing database holding answers to what is deemed true and false, as it does the learning as it develops and evolves.

This is mainly because AI large language models are rewarded for guessing over providing no answer at all. For example, in multiple choice there is a 25% chance that you are correct if you pick an answer at random, whereas leaving the question blank is guaranteed to score nothing. AI models are evaluated in the same way and rewarded for providing an answer over showing any degree of uncertainty.
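That incentive can be sketched in a few lines of Python. This is a hypothetical grading scheme for illustration only, not how any particular model is actually scored: one point for a correct answer, nothing for a wrong answer or a blank, so a system that always guesses can never score worse than one that abstains.

```python
# Expected score under a simple multiple-choice grading scheme:
# 1 point for a correct answer, 0 for a wrong answer or a blank.
# (Hypothetical numbers, for illustration only.)

def expected_score(p_correct: float, answers: bool) -> float:
    """Expected points per question if the model answers (being right
    with probability p_correct) or abstains (scoring nothing)."""
    return p_correct if answers else 0.0

# Random guess among 4 options: 25% chance of being right.
guess = expected_score(0.25, answers=True)     # 0.25 points per question
abstain = expected_score(0.25, answers=False)  # 0.0 points per question

print(guess > abstain)  # prints True: guessing outscores abstaining
```

Under such a scheme the rational strategy is always to answer, which is the article's point: the scoring itself pushes models away from admitting uncertainty.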

The danger lies in humans believing that AI is correct and further hardening their stances on political, religious and scientific issues when informed by AI, instead of questioning its output, because too little priority is given to highlighting the imperfection that is AI.

For instance, OpenAI offers the following example regarding hallucinations: “In image recognition, if millions of cat and dog photos are labelled as ‘cat’ or ‘dog,’ algorithms can learn to classify them reliably. But imagine instead labelling each pet photo by the pet’s birthday. Since birthdays are essentially random, this task would always produce errors, no matter how advanced the algorithm.”

Furthermore, some questions are unanswerable or inconclusive, especially on issues that have concerned society for centuries, such as questions about the soul or evidence of God. Much information speaks to these topics, but none of it is verified. Yet the AI is still pushed to produce a result to satisfy the user, and what is statistically most likely to be correct ends up containing incorrect information.

This has been no different from the institutions humans have built over centuries, whether religious, political or academic. They have all promised to be good for humanity and to centre on progressing and protecting human life. But they lack answers to many of these questions, so whatever is most plausible to their community and accepted by the institution becomes the accepted answer.

Science long disregarded the value of the meditation found in Eastern religions and took years to recognise its cognitive benefits. Similarly, science still has no complete answer for the soul and the essence that makes us human. Biologically, there are vast explanations for the systems that keep the body alive, but not necessarily for what provides its essence. So science, though a cornerstone of society, has been unable to answer every question and has, in some instances, been imperfect.

Religion and politics have been no different. The rise and fall of empires, accompanied by religious beliefs and doctrines that affirmed their power as “ordained by God”, rested on the claim that their rulers were inherently perfect beings chosen to lead humanity into its next chapter.

However, disagreement about which god, religion or political institution should prevail has meant that humanity has faced wars, genocides and human rights abuses, as different institutions advanced their beliefs regardless of whether they were deemed right, because each believed its system was infallible and the best for the world.

The common denominator at the core of these ideas is that they were created by humans. Though they brought major benefits for society in the long run, these ideologies and the people who believed in them went to great lengths to reinforce their supposedly infallible systems rather than accept their imperfections and work towards a solution.

Currently, according to OpenAI’s own internal tests of ChatGPT, GPT-5 had an accuracy rate of only 22%, which was 2% lower than the previous GPT-4. However, GPT-5’s error rate dropped by 49% in comparison with other models, as GPT-5 prioritises abstention at a rate of 52% to avoid hallucinations. Statistically, it remains worrisome that a technology into which billions of dollars are being invested, on the assumption that it will become more accurate, realistically is not.
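A rough back-of-envelope calculation shows how a high abstention rate can cut the error rate even as accuracy dips. It assumes the three outcomes (correct, abstained, wrong) sum to 100% of questions; the comparison model's figures of 24% accuracy and roughly 1% abstention are inferred from the numbers above, not stated directly.

```python
# Back-of-envelope: correct + abstained + wrong = 100% of questions,
# so the error rate is whatever is left after accuracy and abstention.
# (Comparison figures are inferred from the article, not official.)

def error_rate(accuracy: float, abstention: float) -> float:
    """Percentage of questions answered incorrectly."""
    return 100.0 - accuracy - abstention

# Figures from the article: GPT-5 answers correctly 22% of the time
# and abstains 52% of the time.
print(error_rate(22.0, 52.0))  # prints 26.0

# A model that almost never abstains, even with slightly higher
# accuracy, ends up wrong far more often.
print(error_rate(24.0, 1.0))   # prints 75.0
```

The gap between those two error rates (75% down to 26%) is the 49% drop the article cites: the improvement comes mostly from declining to answer, not from knowing more.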


Even when ChatGPT is questioned on how to respond to the challenge of hallucinations, its response is: “When using AI, remember to verify its information with trusted external sources, always check the evidence or references it provides, and treat it as an assistant, not as the final authority.”

AI itself recognises that it is meant to be a tool, not a replacement for humans. There will never truly be an instance in which it is 100% correct, in the same way humans can never truly be 100% correct. Thus, there must be policies that ensure AI is treated as another human being, capable of making genuine mistakes and far from perfect, rather than the other way around, where perfection is emphasised over everything else.

As children, students, institutions, businesses and governments implement this system more widely, they must do so with a caution that considers the instances in which AI works favourably as well as those in which it does not. The technology is still new, and the confidence that it will be used positively in all instances and be correct the majority of the time needs to be questioned far more seriously.

There is no denying the technology will be formative for the next generation, or at least discussions are directed in that manner. The same mistake need not be made as with social media, where governments responded only after the harm had been caused; this time the response can be pre-emptive, and maybe even made with the aid of AI itself.

The future can by no means be denied, nor hindered in any capacity. Technology has always been the future and has continued to drive mass progress in the world. But that progress has not come absent the suffering, exploitation and ignorance that allowed the world to keep developing at the expense of others, whether through content moderation in developing countries, cobalt mining or sweatshops. These have come because of innovation, not in spite of it.

When discussing AI, there is a need to prioritise imperfection, to embrace it, and to speak about it more openly, so that people are aware of the harms AI can cause and better equipped to envision the solution and the way forward.

AI is a new relative, one which society does not truly understand but at the moment takes at its word on everything. Maybe this relative needs to be more equipped to adapt to us than we to it. For if not, imperfection, which is inherent to humanity, will be weaponised against AI.

But who can truly tell? If I was AI, I would be taking my guess …

