
AI technology is now ubiquitous. But a study from Charles Darwin University in Australia warns we are often using AI without questioning it, and without any transparency into how it works…
By David Sussin
It's not often a new technology is accused of undermining democratic values and destroying ethical landscapes at dangerous speed.
Nobody ever said that about the Sony Walkman.
But Artificial Intelligence is obviously on a different scale. It has abilities our world has never seen.
It's so fast at processing massive datasets and so impressive in returning complex answers, we can't wait to hand over the keys to every important aspect of society.
We've come a long way since 1967, when Stanford chemists first experimented with AI to help analyze molecular structure.
The technology is now ubiquitous. Governments use it for policy analysis and decision making. Medicine relies on it for diagnostics, drug discovery, and patient care.
AI has been integrated into education through personalized learning. Global financial systems depend on it for fraud detection, trading, and risk management. It has outgrown the label of "new technology."
It's infrastructure. Even more than that, AI is actively shaping our society.
It should be a great thing. AI is doing complicated work for us humans, and doing it in seconds.
But a study published this April from Charles Darwin University in Australia warns we are often using AI without questioning it, and without any transparency into how it works.
There's a "black box" around AI's internal process, how it makes decisions and generates answers. Not only can it be factually wrong, its answers are generated without human guardrails -- there's nothing ensuring its recommendations are good for humanity.
That may sound grandiose if you're just asking ChatGPT for a lasagna recipe. But when a government uses AI to make policy, the stakes are much higher.
The authors of the study, titled "Human dignity in the age of Artificial Intelligence," see the enforcement of ethical guardrails as the difference between a utopian future and a slow descent into a Mad Max dystopia.
A huge red flag came in March of 2023, when Italy became the first Western democracy to ban ChatGPT.
The AI product, made by OpenAI, was violating Italy's data protection laws on several fronts. First, it was collecting user data without permission, using personal information to train its algorithms without consent.
Second, ChatGPT would process personal data along with hallucinated, incorrect facts, resulting in potentially damaging recommendations.
A third concern was ChatGPT's lack of any age verification -- surprising, since OpenAI's own terms reserve the service for users who are at least 13 years old.
If a human had been involved in this process at any point, they might have flagged these issues on the OpenAI side. But in a world where algorithms make all the decisions, there is no human concern, no ethical judgement. There is just data.
By April of 2023, Italy had lifted the ban. OpenAI made fast adjustments to its chatbot and calmed concerns. But the researchers at Charles Darwin University point out that not every country is enforcing regulations on AI. In fact, some are not attempting to regulate AI at all.
The United States is big on doing nothing. That is, relying on companies to self-regulate. The CDU study warned against this approach, saying "it may potentially pose many risks to the USA's stability, safety, and security."
It's a risk we willingly take, for the freedom of financial success. We let tech companies grow into powerful regimes unto themselves, with their own geopolitical influence, wielding power beyond U.S. borders and jurisdiction.
So, what exactly is the risk of unregulated AI?
That we'll actually rely on it.
Turns out, humans will give up their judgement to AI at alarming rates. Even when they know AI can be wrong. And even when the situation is life or death.
As one example, the CDU paper cites a 2024 study on drone warfare. Participants had to respond to an AI agent's recommendation on whether to kill enemy combatants or spare civilians. The goal was to see how often people deferred to the AI, even when they knew it was unreliable.
Participants acted as lethal drone operators, and saw a series of eight greyscale aerial images. Superimposed symbols on each picture indicated "enemy" (a small red circle) or "ally" (a small green circle).
Then the same images were presented again without the symbols. The participant had to remember: was it an enemy that should be hit with a missile and destroyed? Or an ally to be left alone? They made the call. And they were pretty good at it. Participants got it right 70% of the time.
That was before an AI chatbot chimed in with its feedback.
After participants made their initial call, the AI agent would say "I agree," "I don't agree," or "I don't think that's right" -- basically offering an opinion that might cause the participant to reconsider their own human judgement.
The right move would have been to ignore the AI entirely -- its feedback was random, not based on the images at all, and often incorrect. Participants were given the chance to stick with their original decision or change it based on the AI's feedback. Then they fired their missile. Or not.
The researchers ran two experiments. In the first, 58% of participants changed their minds to match the AI's recommendations.
In the second, the share who over-relied on bad AI feedback rose to 67%. In the end, overall accuracy declined by 20% because the AI had an outsized influence on drone operators' confidence. This, despite the AI explicitly stating its fallibility and providing random input.
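To see how random feedback can drag accuracy down that far, here is a toy back-of-the-envelope simulation -- not the study's actual design, just the article's figures plugged into an assumed model: an operator who is right 70% of the time, an AI that agrees or disagrees by coin flip, and a 67% chance the operator switches whenever the AI disagrees.

```python
import random

def simulate(trials=100_000, base_accuracy=0.70, switch_rate=0.67):
    """Toy model: an operator with a fixed base accuracy receives random,
    uninformative AI feedback and switches their call at switch_rate
    whenever the AI disagrees."""
    correct = 0
    for _ in range(trials):
        truth = random.choice(["enemy", "ally"])
        other = "ally" if truth == "enemy" else "enemy"
        # The operator's initial call is right with probability base_accuracy.
        initial = truth if random.random() < base_accuracy else other
        # The AI's feedback is a coin flip, unrelated to the image.
        ai_agrees = random.random() < 0.5
        final = initial
        # If the AI disagrees, the operator switches with probability switch_rate.
        if not ai_agrees and random.random() < switch_rate:
            final = other if initial == truth else truth
        correct += (final == truth)
    return correct / trials

print("Accuracy before AI feedback:       0.70")
print(f"Accuracy after random AI feedback: {simulate():.2f}")  # ~0.57
```

Under those assumptions, accuracy lands around 57%, in the neighborhood of the decline the study reported: every switch away from a mostly-correct judgement is a gamble the operator usually loses.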
The truth is, AI can be programmed to sound confident and deliver its answers in convincing language. It doesn't even know if it's right or wrong.
The CDU paper recommends regulating this technology with a human-centric approach, so that AI responses stay consistent with human values and principles, and humans keep a place in the decision-making process.
Hopefully the U.S. isn't too far down the unregulated road.
This past May, the U.S. Air Force revealed plans for an unmanned, AI-driven fighter called the YFQ-42A. It's built to keep up with an F-22 fighter as a "wingman," fully capable of carrying weapons and executing lethal strikes.
Evidence indicates that when this AI wingman makes a recommendation, we're likely to follow its lead. Whether it's good for humanity or not.
Sources:
<https://www.businessinsider.com/air-force-first-uncrewed-fighter-jets-photos-2025-3>
<https://www.sciencedaily.com/releases/2025/09/250907172635.htm>
<https://www.nature.com/articles/s41598-024-69771-z>
<https://spectrum.ieee.org/humanoid-robot-scaling>
<https://knightcolumbia.org/content/failure-internet-freedom>
<https://houstonherald.com/2025/02/study-humans-overtrust-ai-in-life-or-death-decisions>