- Pascal's Chatbot Q&As
86% of executives believe agentic AI brings new compliance risks, and a staggering 95% have already encountered incidents such as AI-driven privacy violations or inaccurate predictions.
The article identifies specific risks: hallucinations, data propagation errors, cascading technical failures, and power constraints — all worsened by a lack of training, oversight, and digital literacy.
The Dangers of Overreliance on AI in Banking — A Wake-Up Call for the Financial Sector
by ChatGPT-4o
The article “Experts Warn Banks About the Dangers of Overreliance on AI” by Penny Crosman, published in American Banker (August 15, 2025), presents a sobering look at how the rapid integration of generative and agentic AI into banking is creating new risks that industry leaders cannot afford to ignore. While artificial intelligence offers transformative potential in terms of automation, efficiency, and cost savings, the article makes clear that the technology's misuse or misinterpretation may produce disastrous outcomes — from operational breakdowns and cascading errors to regulatory noncompliance and systemic vulnerabilities.
This essay will summarize the key arguments of the article, highlight the most surprising, controversial, and valuable statements made, and conclude with reflections on how banks, regulators, and AI developers should respond to these growing concerns.
Summary of the Article
The piece draws attention to a new wave of AI applications in banking: agentic AI — systems that operate autonomously, taking actions and making decisions without direct human oversight. According to research cited from the Infosys Knowledge Institute, 86% of executives believe agentic AI brings new compliance risks, and a staggering 95% have already encountered incidents such as AI-driven privacy violations or inaccurate predictions.
Key voices in the article — from fintech CEOs to cloud consultants and bank technology leaders — issue strong warnings: while large language models (LLMs) like ChatGPT are excellent at synthesizing knowledge and mimicking intelligence, they lack true reasoning capabilities. Yet business leaders unfamiliar with the technology's limitations are increasingly treating AI outputs as strategic advice, which may lead to flawed decision-making. The article identifies specific risks: hallucinations, data propagation errors, cascading technical failures, and power constraints — all worsened by lack of training, oversight, and digital literacy among personnel.
Most Surprising Statements
“We’re running out of electricity.”
Blair Sammons, a director at Promevo, states that power, not cost or data storage, is the next great IT bottleneck due to AI’s extreme energy consumption. This shifts the conversation about AI scalability from abstract ethics or financial cost to a hard infrastructure crisis. It is a tangible, near-term limitation that few business leaders have factored into their AI roadmaps.
“LLMs are like impossible-to-tire-out college summer interns.”
Trevor Barran, CEO of FNCR, likens LLMs to inexperienced interns who work tirelessly but lack business context or understanding. This analogy is striking and useful — it cuts through technical jargon and drives home the point that banks must not mistake AI outputs for genuine human reasoning.
“I have one issue in this model, I have one issue in that model, now I have 412 issues.”
Sammons’s warning about exponential error propagation is a visceral reminder of how AI can cause compounding damage in financial systems. It challenges the assumption that AI scales linearly, and reveals how complexity can spiral beyond human comprehension.
Most Controversial Statements
“Business leaders take AI answers as something thoughtful and intelligent.”
This critique implies that many C-suite executives are using AI as a crutch, not a tool. It suggests a dangerous overestimation of AI’s cognitive capabilities and underestimation of its limitations — particularly by those without technical backgrounds. Implicitly, it critiques a trend of tech overconfidence among business leaders.
“Younger generations are not as computer literate as expected.”
Sammons challenges the common assumption that digital natives are well-prepared for an AI-driven workforce. The suggestion that many can navigate Instagram but not Excel contradicts the tech-optimist narrative often used to justify digital transformation.
“These problems will work themselves out because AI makes too much money.”
Sammons closes the article on a somewhat controversial note: while he acknowledges AI’s current risks, he predicts the market will resolve them due to profit motives. This techno-optimistic view may understate the need for deliberate regulatory or institutional guardrails.
Most Valuable Statements
“Data quality, data remediation, agent remediation and agent quality are all critical.”
This quote from Briana Elsass of BMO pinpoints the backbone of responsible AI deployment: clean, validated, and well-managed data. In the context of autonomous agents making decisions, this becomes not just a best practice but a legal and operational necessity.
“AI can't think for you. You have to have a base-level knowledge that lots of people just don't have.”
Sammons’s observation highlights a significant workforce readiness gap. It’s a call for widespread education in digital literacy, critical thinking, and AI fluency — not just among engineers, but across all roles in a bank.
“Even one little screw-up now cascades into potentially massive issues.”
The warning about cascading errors in interconnected AI systems reflects a systemic risk: a localized AI error can now spread rapidly across systems and functions, creating broad operational and reputational damage.
Reflection and Recommendations
This article is not an indictment of AI’s value in banking, but a call for maturity, responsibility, and humility. The financial industry is no stranger to complex systems, but the speed and scale of agentic AI adoption have outpaced internal checks and human capacity for oversight. Overreliance on generative models — especially by non-technical executives — risks turning powerful tools into brittle liabilities.
For Banks:
- Establish internal AI governance teams with cross-disciplinary expertise.
- Avoid “black box” deployments; always pair AI systems with human-in-the-loop processes.
- Invest in training programs for all employees to boost AI literacy.

For Regulators:
- Define standards for AI explainability, auditability, and error management.
- Require banks to disclose and evaluate AI-driven decisions that impact customers.

For AI Vendors:
- Emphasize transparency, documentation, and user education.
- Design systems with fail-safes and layered responsibility models.
Conclusion
American Banker’s article underscores an essential truth: AI may be powerful, but it is not magic. The risks it brings to banking are real, urgent, and intensifying as systems become more autonomous and opaque. Success in this new era will require more than just technical tools — it demands wisdom, oversight, and the courage to ask hard questions before the outputs are trusted. As one expert in the piece put it: AI can assist with thinking, but it must never replace it.
