Can ChatGPT be trusted? True or false - five myths about the reliability of artificial intelligence

Text: Antti Kivimäki
1. ChatGPT and other AI services are objective
FALSE
ChatGPT always gives a slightly different answer to the same question. Like other generative AI algorithms, it has randomness built into it. But the truth doesn't change, so its answers can't really be "true" if they keep changing.
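That built-in randomness comes from sampling: instead of always picking the single most likely next word, a generative model draws from a probability distribution over candidates. A minimal sketch of the idea (the vocabulary and probabilities here are invented for illustration, not ChatGPT's actual mechanism):

```python
import random

# Toy next-word distribution: a language model assigns a probability
# to each candidate word, then samples one rather than always taking
# the most likely. This sampling step is where the randomness lives.
vocabulary = ["true", "false", "uncertain", "disputed"]
probabilities = [0.4, 0.3, 0.2, 0.1]

def sample_next_word(rng):
    # random.choices draws one word according to the weights,
    # so repeated calls can return different words.
    return rng.choices(vocabulary, weights=probabilities, k=1)[0]

rng = random.Random()
answers = {sample_next_word(rng) for _ in range(50)}
print(answers)  # typically several different words, not just one
```

Asking the same "question" fifty times yields several different "answers", which is exactly why identical prompts to a generative system produce varying output.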
The same problem exists in image recognition services. We tried several of them and found that they gave different answers and recognized different numbers of things in the images. One service predicted with certainty that there was a fire engine in one picture, but the other services didn't detect it, and neither did I.
Answers from AI services shouldnât be thought of as the truth but as a point of view. They have their own interpretations of things, just like humans do.
2. AI generates answers by itself
FALSE
Humans are involved in many ways in the production and processing of information by AI services. Moderators screen ChatGPT's responses and remove ones that are deemed inappropriate. Humans also annotate material for machine learning, marking different features (like cats or fire engines) so the AI systems can learn to recognise them. Many annotators and moderators work in the Global South for a low wage and often in poor conditions.
Generative AI companies are also hiring poets to make the responses flow better and sound more beautiful. Users influence AI systems too: when they tweak their questions to ChatGPT to get a better answer, that helps the machine learning system calibrate its responses.
3. AI is political
TRUE AND FALSE
Few things in the world are truly apolitical. As AI is used more and more in society, there are lots of ideas and discussions about where it should and shouldnât be applied. For example, screenwriters in Hollywood went on strike because they were concerned that AI would be used to replace them.
But we also have a tendency to see AI as more aware and more human than it is. Although ChatGPT produces politically charged sentences, it doesn't "realise" that it's talking about a politically sensitive topic. It simply organizes and processes data statistically.
4. ChatGPT is politically left-leaning
TRUE AND FALSE
Many studies have investigated the values in ChatGPT's responses, and they've found that its responses skew to the left, though "left" and "right" were measured by US standards. There are a couple of theories to explain these findings.
One possibility is that there are more left-wing articles and posts on the internet, so the data used to train ChatGPT might have been biased. The skew could also come from moderation, if right-wing responses by ChatGPT are more likely to be seen as politically incorrect and get flagged by moderators. Or it could be something else entirely; the truth is that it's very difficult to say anything exact about these complex algorithmic systems.
But I'm also not very convinced by these studies because of some weaknesses in how they were done. Even small differences in how you ask a question can get very different responses from ChatGPT, and the studies weren't designed to deal with this. Some of them also didn't repeat things enough to account for the random variation in ChatGPT's responses.
5. Artificial intelligence dramatically increases productivity
FALSE
AI has proven useful for processing large data sets: for example, an image recognition system can quickly distinguish the contents of millions of images. AI systems can also help with many routine tasks, like formulating an email with a friendly tone. But these benefits are partly illusory.
AI does a good job of sifting, classifying and aggregating, but it often doesn't produce anything very useful. If you ask a machine learning algorithm to find ten groups in the data, it will find ten groups, but they might not be sensible groups. It's up to the human user to assess the meaningfulness of the responses. If you include the time needed for fact-checking, traditional processes might be quicker than using AI.
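The "ten groups" point is easy to demonstrate with a toy experiment (mine, not the author's): a bare-bones k-means clustering, run on structureless uniform random data, still dutifully returns exactly the ten clusters it was asked for.

```python
import numpy as np

def kmeans(points, k, iterations=20, seed=0):
    """Bare-bones k-means: returns k centroids and a cluster label per point."""
    rng = np.random.default_rng(seed)
    # Start from k randomly chosen data points as initial centroids.
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iterations):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            if (labels == j).any():
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids, labels

# Uniform random points: there is no real group structure here at all...
data = np.random.default_rng(42).uniform(size=(500, 2))
centroids, labels = kmeans(data, k=10)
# ...yet exactly k centroids come back, whether or not the "groups" mean anything.
print(len(centroids))
```

The algorithm partitions the data because it was told to, not because the partition is meaningful; judging that is left to the human.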
When people are hired for expert work, the hope is that they'll be so proficient in their field that there won't be much need to supervise their work. AI certainly doesn't yet have the depth of expertise or the ability to make overall judgements. That means the AI always has to be monitored by a human with the skills to sceptically evaluate its output.