
AI promised the world. It’s not delivering.

For those who know the story of Elizabeth Holmes and Theranos, it may be hard to remember a time when the company seemed unstoppable. While her name is now permanently associated with fraud and deception, the truth is that for a time, the company founded by a 19-year-old Holmes in 2003 seemed poised to change the world. Its promise to revolutionise the healthcare industry by providing fast, accurate and painless blood tests caught the attention of many and led to the company’s peak valuation of nine billion dollars in 2014. Potentially paradigm-shifting technology, combined with Holmes’ captivating public persona, made Theranos one of the most celebrated startups of its era. 

There was only one problem. It was all a lie. 

Despite claims that it could run a full range of blood tests from a pinprick of blood, the company never developed the technology and instead engaged in a variety of deceptive practices to hide this fact. Of course, as is so often the case, the truth eventually came out, leading to the downfall of a company which had once been praised for its “phenomenal rebooting of laboratory medicine”.1 Indeed, Theranos and Holmes now serve as a prime example of a company both overpromising and underdelivering—or in this case, failing to deliver at all. 

One of the most interesting facts about Holmes and Theranos comes not from their downfall, but from the origin of the company. While Holmes may have lied about plenty regarding the company, her stated motivation for creating Theranos seems noble on its face: its attempts to create a blood testing process which used minimal amounts of blood stemmed from Holmes’ fear of needles—a fear which many can relate to. Unfortunately, at the beginning of the venture, Holmes was told by multiple experts in the field that her hope of creating a full suite of tests which worked from a pinprick of blood was not viable2—advice she ignored, and which would later be proven correct. This, I think, is the most interesting part of the Theranos story: despite being told that its dream was impossible, the company continued to sell that impossible promise. 

another impossible promise

On August 8, 2025, OpenAI unveiled GPT-5, the long-awaited next generation of its large language model chatbot, claiming it could provide “PhD-level” abilities.3 The world’s richest and most controversial man, Elon Musk, took the claim a step further, hyping his own company’s AI, Grok, as being “better than PhD level in everything”. In May of the same year, Mark Zuckerberg touted the ability of AI chatbots to replace human relationships and friendships.4 Zuckerberg has made similarly lofty claims about Meta’s other technologies, arguing that in the future, anybody who doesn’t own and use AI glasses will “be at a disadvantage”.5

Increasingly, AI is being integrated into every aspect of our daily lives, with its loudest proponents claiming that it will solve all our problems. In the fast-food industry, the owner of KFC, Pizza Hut and Taco Bell claimed it was adopting an “AI-first mentality”6 (though the company is reportedly rethinking the approach after a customer used the AI to order 18,000 glasses of water).7 Interested in learning a new language? Duolingo believes that AI can help the process, with its CEO claiming AI can make employees “four or five times” as productive8 (though once again, the company’s adoption of the technology has led to a significant backlash from customers who doubt its effectiveness9). Keen to play some games to relax? EA—the publisher of a wealth of major franchises including EA FC (formerly FIFA) and Battlefield—recently announced a 50-billion-dollar sale, relying heavily on the promise of AI to streamline development costs (though gamers and developers alike are less than thrilled). Everywhere you look, AI promises the world. But promises aren’t reality—and there are plenty of good reasons to be suspicious of those with a vested interest in the success of AI. 

the unfortunate truth

As a media scholar (and one of the PhD-level people that OpenAI is aiming to replace), I am deeply sceptical of AI. Many of my doubts stem from fundamental issues with how the technology works. While the title “artificial intelligence” implies a level of thought, and the term “large language model” (LLM) seems to indicate an understanding of language, the reality is that these tools neither think nor understand the meaning of words. 

A full explanation of how they work is beyond the scope of this article, but at the most basic level, LLMs and generative AI treat language less like a system of meaning and more like a complex math equation. Your prompt is one side of the equals sign, and the technology attempts to “solve” for the most likely response. In addition to being extremely power intensive (with negative environmental impacts10), this process is also the reason that, despite the hyped improvements in more recent models, AI continues to suffer from widespread “hallucinations”11—instances where the chatbot either regurgitates inaccurate information or invents entire falsehoods. Indeed, OpenAI itself has admitted that hallucinations are not an engineering flaw in LLMs but are “mathematically inevitable”.12 
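For readers curious what “solving for the most likely response” looks like, here is a toy sketch in Python. It is a drastic simplification—real LLMs use neural networks trained on billions of examples, not simple word counts—but it captures the core idea: the program picks the statistically most likely next word without knowing what any word means.

```python
from collections import Counter, defaultdict

# A toy "language model": count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def most_likely_next(word):
    # "Solve" for the most probable continuation. The program has no idea
    # what a "cat" is; it only knows which words tend to follow others.
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("the"))  # prints "cat" — it follows "the" most often
```

Notice that if the corpus contained a falsehood repeated often enough, the model would confidently reproduce it—a miniature version of the hallucination problem described above.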

is AI making us dumber?

The issues caused by these hallucinations are significant and may further exacerbate societal problems rather than solve them. A recent report indicated that 45 per cent of AI responses based on news articles contained at least one “significant” error—with a whopping 81 per cent of responses having some form of issue.13 In this age of misinformation, relying on AI seems like a recipe for disaster. More importantly, current research points towards AI having a negative effect on its users, “eroding critical thinking skills”.14 Furthermore, while AI is often thought of as neutral, numerous studies15 have exposed biases in AI models16—an unsurprising reality when one acknowledges that the biases of their creators can filter in. 

I could go on and on about the issues with AI (and indeed, some of my poor friends have had to endure my rants on the topic). Ultimately, however, all these criticisms can be summed up in one sentence: the reality of AI falls drastically short of the promise its creators espouse. 

With all this in mind, I should acknowledge that I am sympathetic to those who want to believe the promise of AI. The world we live in is fundamentally broken in so many ways, with political polarisation, environmental destruction and unspeakable injustice occurring daily. And that’s before acknowledging the more mundane tasks that AI could help with. The promise of a “magic bullet” technology that can ease any of the issues we face—just like the promise of a needle-free blood test—is enticing. And it is true that this technology can help in certain situations. As a tutor to international students, I have found machine translation a helpful tool for conveying complex ideas discussed in our courses (though it still has imperfections that need correcting). My friends who work in software engineering are adamant that AI can make the tedium of coding less strenuous (which is understandable, considering coding, like LLMs, also treats language as a sort of math). AI-assisted live transcription is also potentially revolutionary for the hard of hearing. But these are individual solutions to individual problems—and we should not be forced to swallow all the issues with these AI models in order to benefit from them. 

no silver bullet?

The reality is, there is no single solution that will solve all our problems. AI cannot create. Every response it gives is based on the existing work of talented artists, writers and experts, whom it often fails to credit properly. Working as a tutor, I have seen its negative effects firsthand—students inadvertently turning in assignments with invented information and incorrect sources. In seeing AI as the solution to their problems, they have only created more—and greater—problems. 

This, more than anything, is the danger of AI. Proponents like Zuckerberg and Altman want you to believe that it can enhance—or even replace—human connection, but the opposite is true. If you want to learn, create or connect, you can’t do so through AI. You should go to the source, read what others are saying and listen to the experts who have dedicated their lives to solving these problems. Step outside the tech bubble these companies want to trap you in and connect with the real world.

The truth is, no one machine can save the world, nor can any one individual. So don’t give in to the promise of the technology. Connect with reality. Connect with others.  

Ryan Stanton is a PhD graduate from the University of Sydney. A media and communications scholar, he is constantly torn between wanting to believe the promise of new technologies and being disappointed by the reality. 

1. <web.archive.org/web/20200821154614/https:/www.medscape.com/viewarticle/814233_6>

2. <web.archive.org/web/20170410172642/http:/www.vanityfair.com/news/2016/09/elizabeth-holmes-theranos-exclusive>

3. <web.archive.org/web/20170410172642/http:/www.vanityfair.com/news/2016/09/elizabeth-holmes-theranos-exclusive>

4. <au.pcmag.com/ai/110902/need-more-friends-mark-zuckerberg-says-ai-is-the-answer>

5. <fortune.com/2025/07/31/mark-zuckerberg-meta-ray-ban-smart-glasses-ai/>

6. <foxbusiness.com/lifestyle/taco-bell-pizza-hut-going-ai-first-fast-food-innovations>

7. <bbc.com/news/articles/ckgyk2p55g8o>

8. <cnbc.com/2025/09/17/duolingo-ceo-how-ai-makes-my-employees-more-productive-without-layoffs.html>

9. <groktop.us/duolingos-ai-first-disaster-a-cautionary-tale-of-what-happens-when-you-replace-rather-than-partner/>

10. <unep.org/news-and-stories/story/ai-has-environmental-problem-heres-what-world-can-do-about>

11. <newscientist.com/article/2479545-ai-hallucinations-are-getting-worse-and-theyre-here-to-stay/>

12. <computerworld.com/article/4059383/openai-admits-ai-hallucinations-are-mathematically-inevitable-not-just-engineering-flaws.html>

13. <techspot.com/news/110002-ai-assistants-misrepresent-news-stories-almost-half-time.html>

14. <phys.org/news/2025-01-ai-linked-eroding-critical-skills.html>

15. <ohchr.org/en/stories/2024/07/racism-and-ai-bias-past-leads-bias-future>

16. <blogs.lse.ac.uk/wps/2025/01/09/gender-bias-ai-and-deepfakes-are-promoting-misogyny-online/>
