Dan J. Harkey

Master Educator | Business & Finance Consultant | Mentor

Artificial Intelligence: The Threat the Government Can’t Destroy (A Technical Read)

How truth-finding machines, born from code, are prying open a century of obscurity and secrecy.

by Dan J. Harkey


Introduction: Truth vs. Framed Narrative

For much of the past 115 years, American governance and finance have evolved within institutions whose decisions often occurred behind closed doors, backed by complex rules, jargon, and shifting narratives.  The result has been recurring cycles of war finance, fiat expansion, debt accumulation, and policy opacity that citizens struggle to interrogate.

Artificial Intelligence (AI), by contrast, is built to parse complexity at scale.  Properly governed and publicly accountable, AI can expose contradictions, follow money trails, and illuminate patterns that human institutions either miss or prefer to leave unexamined.  That prospect, realized at scale, is precisely why AI is the most disruptive accountability technology of our time.

From the Panic of 1907 to the Federal Reserve (1913): The Birth of Modern Opacity

The modern architecture of American monetary power traces to the Panic of 1907, when the collapse of the Knickerbocker Trust triggered bank runs that were quelled only after J.P. Morgan orchestrated a private rescue.  The crisis catalyzed political momentum for a central bank, culminating in the Federal Reserve Act, signed by President Woodrow Wilson on 23 December 1913.  Proponents argued that a central bank would furnish an “elastic currency,” stabilize panics, and improve credit flows.  Critics worried that centralized monetary authority would entrench elite control and enable policy discretion to be divorced from democratic accountability.  Both views matter.  Elastic currency did mitigate panics, but discretion also enabled decades of opaque interventions whose costs and beneficiaries can be hard to trace.

The historical record is unambiguous about the Act’s aims and structure—twelve regional Reserve Banks, a Washington-based Board of Governors, and, later, the Federal Open Market Committee (FOMC) to run open-market operations.  Yet the longer arc—from the New Deal reforms through wartime finance, post-1971 changes in dollar convertibility, and multiple asset bubbles—reveals a steady expansion of technocratic discretion.  This evolution, while often stabilizing in the short term, fostered an environment in which policy narratives (about price stability, full employment, “temporary” liquidity support) frequently outran public comprehension.

AI’s promise is precisely here: to reconstruct histories of decisions, data, and outcomes in a way citizens can interrogate.

A Century of Complexity: Where Corruption and Waste Hide

Complexity is not merely a technical condition—it is a political shield.  Corruption thrives where audits are slow, records are fragmented, and oversight relies on samples rather than the whole population.  Consider the scale of improper payments across U.S. agencies: the Government Accountability Office (GAO) estimated $162 billion in improper payments in FY2024 (about 84% overpayments), concentrated in Medicare, Medicaid, SNAP, the Earned Income Tax Credit, and the Restaurant Revitalization Fund.  Not all improper payments are fraud, but all fraud produces improper payments, and the sheer magnitude underscores how much activity escapes timely detection.

Healthcare programs illustrate the tension: for years, the Centers for Medicare & Medicaid Services (CMS) has pushed predictive analytics through its Fraud Prevention System to shift from “pay-and-chase” to prepayment anomaly detection (e.g., the “Impossible Day” and peer group outlier models).  These systems ingest millions of claims daily, score risk, and trigger edits or investigations.

That is AI as a preventive audit—and it demonstrably improves the government’s ability to prioritize scarce investigative resources.

AI as an Integrity Engine: How It Exposes Corruption

1) Whole-Population Transaction Scans (Anomaly Detection)

Unlike traditional audits, which sample a subset, AI scans entire transaction populations, flagging outliers such as round-trip payments and ghost vendors, thereby enhancing oversight and exposing hidden corruption patterns.
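To make the idea concrete, here is a minimal sketch of whole-population scanning in Python.  It is not any agency's production system: the payment records, the robust z-score outlier rule, and the round-trip check are illustrative assumptions standing in for far richer models.

```python
from statistics import median

def robust_z(values):
    """Robust z-scores using median and MAD, so a handful of huge
    payments cannot hide by inflating the mean they are judged against."""
    m = median(values)
    mad = median(abs(v - m) for v in values) or 1.0
    return [0.6745 * (v - m) / mad for v in values]

def flag_outliers(payments, threshold=3.5):
    """Scan the WHOLE payment population (not a sample) and return
    payments whose amount is a statistical outlier."""
    scores = robust_z([p["amount"] for p in payments])
    return [p for p, z in zip(payments, scores) if abs(z) > threshold]

def flag_round_trips(payments):
    """Flag pairs where A pays B and B pays A the same amount —
    a classic round-trip pattern used to fake activity."""
    seen = {(p["payer"], p["payee"], p["amount"]) for p in payments}
    return [p for p in payments
            if (p["payee"], p["payer"], p["amount"]) in seen]
```

The point of the sketch is the coverage model: every record is scored, so a scheme must look normal against the entire population, not just against whatever an auditor happens to sample.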

2) Ownership and Network Analysis (Who Owns What, Really?)

Fraud and kleptocracy exploit layered corporate structures and cross-border entities.  The EU-funded DATACROS project is a concrete tool that analyzes corporate ownership across 200+ countries, surfacing anomalies linked to money laundering, collusion, and corruption.  Its evolution (DATACROS I–III) integrates machine learning, NLP, and image processing with business registries, procurement data, real estate records, and even satellite imagery—precisely the kind of multimodal fusion that exposes networks designed to stay hidden.
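The core mechanic of such tools is a graph walk: follow ownership edges upward until you reach a beneficial owner, and flag structural red signs along the way.  The sketch below is a simplified illustration, not DATACROS itself; the company names, jurisdiction codes, and thresholds are hypothetical.

```python
# Hypothetical ownership edges: entity -> list of (parent, jurisdiction).
OWNERS = {
    "LocalContractor Ltd": [("HoldCo A", "NL")],
    "HoldCo A": [("HoldCo B", "CY")],
    "HoldCo B": [("Beneficiary Trust", "KY")],
}

SECRECY = {"KY", "VG", "PA"}  # illustrative secrecy-jurisdiction codes

def ownership_chain(entity, owners):
    """Walk the ownership graph upward, collecting the full chain
    and guarding against cycles (shells that own each other)."""
    chain, seen = [], set()
    while entity in owners and entity not in seen:
        seen.add(entity)
        parent, juris = owners[entity][0]  # simplified: first owner only
        chain.append((parent, juris))
        entity = parent
    return chain

def risk_flags(entity, owners):
    """Flag long chains and hops through secrecy jurisdictions."""
    chain = ownership_chain(entity, owners)
    flags = []
    if len(chain) >= 3:
        flags.append("layered_structure")
    if any(j in SECRECY for _, j in chain):
        flags.append("secrecy_jurisdiction")
    return flags
```

A real system fuses many registries and handles fractional, multi-parent ownership, but the principle is the same: layering that defeats a human reviewer is a short traversal for a graph algorithm.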

3) Text Analytics at Audit Scale (NLP over Laws, Contracts, and Case Files)

Large Language Models (LLMs) turn thousands of pages of legislation, contracts, and audit files into searchable, comparable maps of hidden clauses and conflicts.  Brazil’s Federal Court of Accounts (TCU) built ChatTCU and complementary tools (e.g., GABI, INACIA) to assist auditors in retrieving case information, interpreting regulations, and automating meeting summaries.  The OECD has recognized TCU’s maturity; the code has been shared with other public bodies, amplifying audit capacity across institutions.
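Underneath LLM audit assistants sits a retrieval layer that makes a document corpus queryable in the first place.  The toy inverted index below illustrates that layer only; it is an assumption-laden stand-in, not TCU’s actual pipeline, and the sample clauses are invented.

```python
import re
from collections import defaultdict

def build_index(documents):
    """Build an inverted index: term -> set of document ids.
    This is the searchable 'map' that lets an assistant pull
    every contract containing a given clause in one pass."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for term in re.findall(r"[a-z]+", text.lower()):
            index[term].add(doc_id)
    return index

def search(index, query):
    """Return ids of documents containing ALL query terms."""
    terms = re.findall(r"[a-z]+", query.lower())
    if not terms:
        return set()
    result = set(index.get(terms[0], set()))
    for t in terms[1:]:
        result &= index.get(t, set())
    return result
```

Production systems replace exact terms with semantic embeddings, but the auditing payoff is identical: a clause buried in one contract among thousands becomes a one-line query instead of a month of reading.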

4) Predictive Oversight (Where Will Corruption Likely Occur?)

AI can forecast corruption risk in public contracts by prioritizing audits where historical indicators (change orders, cost overruns, and bidder collusion signals) cluster.  This is already in use in Latin America and Europe, as documented by OECD’s integrity program.
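The mechanics of such prioritization can be sketched as a scoring model over those indicators.  The weights, bias, and field names below are illustrative assumptions; a deployed system would fit them on labeled audit outcomes rather than hard-code them.

```python
import math

# Illustrative weights — in practice, learned from past audit results.
WEIGHTS = {
    "change_orders": 0.8,      # per change order on the contract
    "cost_overrun_pct": 0.05,  # per percentage point over budget
    "single_bidder": 1.5,      # 1 if only one bid was received
}
BIAS = -3.0

def risk_score(contract):
    """Logistic risk score in [0, 1]; higher means audit sooner."""
    z = BIAS + sum(WEIGHTS[k] * contract.get(k, 0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def prioritize(contracts, top=3):
    """Rank contracts so scarce auditors start with the riskiest."""
    return sorted(contracts, key=risk_score, reverse=True)[:top]
```

Even this crude model captures the policy shift the article describes: audit effort is allocated by predicted risk rather than by rotation or complaint.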

5) Market Surveillance (Insider Trading & Accounting Manipulation)

In capital markets, the SEC’s Market Abuse Unit uses data analytics to detect “improbably successful trading” and relational patterns among accounts.  The Netflix insider trading ring (2016–2019) was uncovered using these tools, leading to civil and parallel criminal actions.  Beyond trading, the SEC’s EPS Initiative applies risk-based analytics to identify earnings management schemes; cases against firms like Gentex and Healthcare Services Group illustrate how pattern analysis in quarterly data flushes out manipulation.

Case Studies: What AI Has Already Exposed

  • EU Corporate Ownership Anomalies — DATACROS

Investigators and journalists use DATACROS to flag shell companies, politically exposed persons (PEPs) linked to them, and offshore chains associated with procurement collusion.  The project’s public and restricted interfaces support civic oversight and due-diligence investigations alike—an example of AI extending integrity beyond government to watchdogs and the media.

  • U.S. Medicare/Medicaid — CMS FPS

CMS’s FPS2 runs models against ~11 million daily fee-for-service claims, enabling prepayment edits and postpayment pattern detection, integrated with case management.  Public reporting and practitioner commentary show measurable improvements in lead quality and earlier case initiation—saving taxpayer funds before they are lost.

  • U.K. Welfare Fraud — Machine Learning in Universal Credit

The Department for Work & Pensions (DWP) has deployed ML to flag inconsistent benefits claims, with documented savings and improved targeting; however, watchdogs and academics have raised fairness and bias concerns, underscoring the need for transparent safeguards and explainability.  The lesson: AI can improve oversight, but governance must keep pace.

  • U.S. Securities Markets — Insider Trading & EPS Management

AI-enabled analytics have expanded SEC enforcement from reactive tips to originating cases from data.  The Netflix matter exemplifies how account-level clustering and profitability patterns can flag insider activity; EPS analytics then surface quarter-to-quarter manipulations in accruals, reserves, and contingencies.  

Why AI Is a Structural Threat to Corruption

Scale and speed.  AI eliminates the informational chokepoints that corruption exploits.  It reads everything (transactions, filings, contracts), links everyone (ownership graphs, social graphs), and remembers indefinitely (versioned audit trails).  That means long-running schemes become visible as continuous anomalies rather than isolated “errors.”

Transparency by Design.  When built into public dashboards and oversight workflows, AI can notify citizens when budgets, contract terms, or spending patterns deviate from plan—real-time civic alerting rather than annual PDF reports few people read.  OECD’s integrity outlook stresses both the opportunity and the caution: many countries lack data quality and monitoring capacity; AI amplifies integrity only if inputs are reliable and governance is strong.

Independent Replicability.  AI increases the verifiability of claims.  The same dataset, processed by different teams, should yield comparable risk signals, reducing dependence on gatekeepers.  The EU’s trajectory with DATACROS—bringing academics, journalists, and authorities into the same tooling ecosystem—embeds plural oversight into the data layer.

Guardrails: Truth Must Be Governed

AI is powerful, but not infallible.  When governments deploy AI, three risks stand out:

·       Bias and Disparate Impact.  As seen in the U.K. DWP debate, ML systems can over-refer certain groups, particularly when training data reflect historic inequities.  Any integrity tool must pair model audits with human review and publish methodology to the extent possible without enabling adversaries.

·       Opaque Models.  Black‑box models can conceal policy choices within code, replacing one opacity with another.  Public bodies should favor explainable AI, publish evaluation metrics, and maintain appeal routes for affected parties.  OECD working papers and integrity indicators emphasize implementation gaps—strategy without monitoring is performative.

·       Co-optation and Censorship.  Power may attempt to weaponize AI to surveil dissent rather than fraud.  The counter is plural deployment—courts of accounts, ombuds, journalists, civil society, and open data pipelines that enable independent replication of integrity analyses.

The Historical Pivot: From Narrative Control to Data Control

If the long arc since 1913 has been about centralizing monetary discretion to fight panics, the next arc must be about decentralizing accountability to fight corruption: releasing structured data, enabling third-party audits, and embedding AI into constitutional oversight rather than solely in executive agencies.  The GAO’s figures remind us of the fiscal stakes; CMS’s FPS and SEC analytics show the technical feasibility; DATACROS and TCU’s ChatTCU demonstrate how audit AI can spread across institutions and borders.

Case Study: Netflix Insider Trading Ring

Background:

Between 2016 and 2019, a group of Netflix employees and associates exploited non-public subscriber growth data to trade ahead of earnings announcements.

How AI Helped:

The SEC’s Market Abuse Unit used data analytics and clustering algorithms to detect:

  • Unusual profitability patterns across multiple accounts.
  • Coordinated trading behavior timed precisely before earnings releases.
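The two detection steps above can be sketched in a few lines: collect the accounts that trade in a short window before each announcement, then flag account pairs that show up together repeatedly.  This is a simplified proxy for the clustering the SEC’s analytics perform, with invented accounts, day numbers, and thresholds.

```python
from collections import defaultdict

def pre_announcement_trades(trades, earnings_days, window=3):
    """Map each earnings day to the accounts that traded within
    `window` days before it (dates as integer day numbers here)."""
    by_event = defaultdict(set)
    for t in trades:
        for e in earnings_days:
            if 0 < e - t["day"] <= window:
                by_event[e].add(t["account"])
    return by_event

def coordinated_accounts(by_event, min_events=2):
    """Return pairs of accounts that traded ahead of the same
    announcement at least `min_events` times — repetition is what
    separates a lucky trade from a coordinated ring."""
    pair_counts = defaultdict(int)
    for accounts in by_event.values():
        accts = sorted(accounts)
        for i in range(len(accts)):
            for j in range(i + 1, len(accts)):
                pair_counts[(accts[i], accts[j])] += 1
    return {pair for pair, c in pair_counts.items() if c >= min_events}
```

One well-timed trade proves nothing; the same pair of accounts trading before announcement after announcement is the statistical signature the real analytics hunt for.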

Outcome:

Civil charges and parallel criminal prosecutions followed, illustrating how AI-driven surveillance can uncover insider trading schemes that traditional tip-based enforcement might miss.

Key Insight:

AI doesn’t just flag anomalies—it connects dots across accounts, timing, and profitability, making complex collusion visible.

Case Study: CMS Fraud Prevention System (FPS)

Background:

Medicare and Medicaid programs process millions of claims daily, making them prime targets for billing fraud and improper payments.  Historically, oversight relied on “pay-and-chase” audits—audits that detected fraud only after funds were disbursed.

How AI Helped:

The Fraud Prevention System (FPS) uses machine learning to:

  • Score every claim in real time for fraud risk.
  • Flag anomalies like “Impossible Day” billing (more hours billed than exist in a day).
  • Detect outlier patterns compared to peer providers.
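The “Impossible Day” check in the list above reduces to simple arithmetic over the whole claim stream.  This is a minimal sketch under assumed field names, not CMS’s actual model:

```python
from collections import defaultdict

MINUTES_IN_DAY = 24 * 60

def impossible_days(claims):
    """Sum billed minutes per (provider, date) and flag providers
    who billed more time than exists in a single day."""
    totals = defaultdict(int)
    for c in claims:
        totals[(c["provider"], c["date"])] += c["minutes"]
    return {key for key, mins in totals.items() if mins > MINUTES_IN_DAY}
```

No single claim in such a scheme looks wrong; only aggregation across every claim a provider files that day exposes the impossibility, which is why sampling-based audits missed it for years.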

Impact:

Since its launch, FPS has saved billions of taxpayer dollars by preventing fraudulent payments before they occur.  It also prioritizes investigative resources, reducing audit lag and improving enforcement efficiency.

Key Insight:

AI transforms oversight from reactive to proactive—catching fraud before money leaves the Treasury.

Conclusion: Truth as a Public Utility

Artificial Intelligence, properly governed, is truth infrastructure.  It cannot make policy virtuous, but it can make vice visible.  And once visible, vice becomes contestable—by courts, by voters, by markets.  That’s why AI is the threat no government can easily destroy: because its core output is a verifiable explanation in a world saturated with false narratives.  The morass that began with the Federal Reserve’s creation, centralized discretion in the name of stability, can be balanced by centralized data discipline and decentralized audit capacity.  When truth becomes computable and accessible, corruption loses its best friend: complexity without scrutiny.
