Summary
“We will look good in the boss’s eyes if we turn in the slickest report that contains the usual bells and whistles. Then we can return to social media, make some personal posts, and relax for a while.”
Modern organizations rarely fail for lack of activity. More often, activity becomes a flurry rather than results-oriented, performance-driven work. Organizations fail because they become exquisitely skilled at producing the appearance of performance while quietly shedding responsibility for outcomes.
In that world, “work” is defined as following policies and procedures, generating documentation, updating dashboards, and circulating status reports. The organization remains busy in the flurry, the paper trail remains immaculate, yet the underlying mission—learning, safety, customer welfare, patient care—stagnates or deteriorates.
Getting managers and employees on board with results-oriented leadership is crucial because it aligns daily activity with the organization's core purpose and keeps work connected to meaningful outcomes.
This is a universal organizational problem, and it emerges wherever people are taught to pursue extrinsic rather than intrinsic motivation in their roles.
The Bureaucratic Pattern: Process Worship and Paper Trail Accountability
The characteristic bureaucratic activity in question is simple: make the compliance artifact the deliverable. The organization treats policy adherence (checklists, sign-offs, forms, audits, attestations) as evidence of competence and "good management," even when the problem it is intended to address remains unresolved. Over time, procedures become not merely means to an end but ends in themselves. In such an environment, compliance staff can feel their efforts are valued even as the outcomes the paperwork was meant to secure go unaddressed.
This dynamic is especially potent where outcomes are hard to measure, slow to emerge, or politically dangerous to own. Recognizing the gap between documented activity and genuine impact is the first step toward meaningful change.
The result is a culture of paper-trail treadmill accountability: everyone can demonstrate procedural correctness, but no one is truly accountable for outcomes.
Incentives for Completed Reporting: The Hidden Operating System of Bureaucracy
Incentives are not a footnote; they are the operating system. They answer the only questions that matter inside institutions: what is rewarded, what is quietly ignored, what is punished, and what is safe? When rewards are attached to measurable artifacts (metrics, reports, completion rates, target attainment) and punishments are attached to being associated with failure, rational people optimize for artifacts and avoid ownership of outcomes. Policymakers who recognize this can redesign incentive structures to reward genuine, results-oriented effectiveness rather than mere compliance.
This is why we observe a recurring signature across sectors: once a metric becomes high-stakes, it becomes susceptible to gaming. That does not always mean overt fraud. More commonly, it means classification games, date changes, definitional tweaks, or “informal” workarounds that preserve plausible deniability. The system produces numbers that satisfy leadership and regulators, while the organization can truthfully claim it “followed established procedures.” That is precisely the kind of environment that allows an institution to be simultaneously “busy” and “ineffective.”
C. Northcote Parkinson captured this tendency in his best-known law:
“Work expands to fill the time available for its completion.”
Education Sector: Three Real-World Examples of Incentives Driving Results Avoidance
1) Atlanta Public Schools: When reported scores became the mission, truth became the enemy
The Atlanta Public Schools testing scandal is a vivid case of incentives turning measurement into the objective. Georgia's governor-appointed investigators concluded that cheating occurred throughout the district, finding "organized and systemic wrongdoing" extending years before 2009. The investigation described educators altering answer sheets, erasing and correcting mistakes, and engaging in coordinated misconduct: behaviors that produce reported gains while decoupling the system from actual learning.
What distinguished this from “bad actors” was the incentive environment: test outcomes were treated as a primary indicator of institutional success and reputation. When reputational reward and job security are tied tightly to test metrics, the system predicts the behavior: people protect the numbers. The organization can remain procedurally active—administering tests, compiling results, issuing reports—while the underlying educational outcome is undermined.
2) England’s “Off-Rolling”: Protect the league-table position by removing the risk
Ofsted defines off-rolling as removing a pupil from the school roll (often by encouraging a parent to withdraw the child) when the removal is primarily in the school’s interest rather than the pupil’s. The mechanism is bureaucratically elegant because it can occur through legitimate administrative pathways—transfers, home education, “managed moves”—creating the appearance of compliance while optimizing the accountability system.
Ofsted-commissioned research found that off-rolling is perceived by many education professionals as being triggered by league table pressures and inspection ratings, and that it often occurs through an informal process in which documentation is assembled around behavior and correspondence. In effect, incentives transform student welfare into an input, and performance statistics into the output. The organization remains “in compliance” with processes while sidestepping responsibility for educating its most challenging students.
3) Texas “Leaver” and Dropout Accounting: When coding replaces knowing where students went
Texas developed extensive leaver-reporting systems and documentation requirements precisely because student "leavers" can be misclassified, thereby distorting dropout and graduation metrics. TEA's leaver-records data integrity materials describe how the state moved toward electronic auditing and standards designed to identify districts at high risk of inaccurate dropout records, reflecting a recognition that incentives around accountability metrics can lead to inaccurate data and misclassification.
The bureaucratic pattern here is the shift from genuine responsibility to documentation and coding. The institution can produce compliant records, each student assigned a leaver reason code with proper documentation, while the fundamental question remains unanswered: did the student graduate, transfer, or disappear? The disconnect shows how reporting artifacts can mask real outcomes.
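The auditing idea behind TEA's data-integrity push can be sketched as a simple screen over leaver records: flag districts whose share of hard-to-verify leaver codes is unusually high. The code labels, threshold, and district data below are hypothetical illustrations, not TEA's actual definitions or rules.

```python
# Hypothetical leaver-record integrity screen: flag districts whose share
# of hard-to-verify leaver codes exceeds a threshold. Codes and threshold
# are illustrative, not TEA's actual definitions.

HARD_TO_VERIFY = {"transfer_no_records_request", "home_schooling",
                  "moved_out_of_country"}

def flag_districts(leaver_records, threshold=0.5):
    """leaver_records: list of (district, leaver_code) pairs.
    Returns districts where hard-to-verify codes exceed `threshold`."""
    totals, suspect = {}, {}
    for district, code in leaver_records:
        totals[district] = totals.get(district, 0) + 1
        if code in HARD_TO_VERIFY:
            suspect[district] = suspect.get(district, 0) + 1
    return sorted(d for d, n in totals.items()
                  if suspect.get(d, 0) / n > threshold)

records = [
    ("District A", "graduated"), ("District A", "graduated"),
    ("District A", "transfer_no_records_request"),
    ("District B", "graduated"), ("District B", "home_schooling"),
    ("District B", "moved_out_of_country"),
]
print(flag_districts(records))  # ['District B']
```

The point of such a screen is not to prove misconduct but to direct scarce audit attention toward records that are easiest to game.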
Cross-Sector Parallels: Why the Pattern Repeats Everywhere
Education is not unique. The pattern appears wherever outcomes are difficult and incentives are misaligned.
- Healthcare access metrics: The VA OIG found widespread scheduling practices that produced misleading wait-time reporting (e.g., using the wrong desired date to “zero out” wait times, and “fixing” appointments) and emphasized that unreliable reported wait times and systemic problems persisted when leadership accountability and data integrity were weak.
- Safety-critical operations: The Columbia Accident Investigation Board found that “organizational causes” mattered profoundly and highlighted cultural traits and practices detrimental to safety, including barriers to communicating critical safety information and decision processes operating outside formal rules—precisely the kind of environment in which compliance can exist while accountability dissipates.
- Financial services: The CFPB described how sales targets and compensation incentives led employees to open unauthorized accounts, and the DOJ described sustained pressure to meet unrealistic sales goals, resulting in millions of unauthorized accounts and falsified records. Incentives turned a customer-serving institution into a metric-serving institution.
The lesson is consistent: institutions do what they are paid to do—financially, reputationally, and politically. When the reward system prizes defensible activity and penalizes bad news, “performance” becomes performative.
How to Fix It: Incentive Design That Produces Reality, Not Theater
If incentives created the problem, incentives must be central to the solution. The goal is not to abolish procedures—procedures can be essential in safety, finance, and education. The goal is to ensure procedures remain a tool, not a shield.
· Reduce single-metric domination.
When one number governs careers and reputations, gaming becomes rational. Balance score-based indicators with multi-year outcomes, independent verification, and qualitative evidence that is difficult to counterfeit. The need for auditing systems—whether in VA scheduling or TEA leaver records—exists precisely because single-point metrics are vulnerable to distortion.
· Pair outcome metrics with integrity metrics.
Track not only outcomes (graduation rates, proficiency, attendance) but also data integrity signals (anomalies, audit flags, unusual withdrawal patterns). Ofsted’s attention to off-rolling and TEA’s emphasis on documentation standards are examples of systems trying to add integrity checks where incentives created distortion.
· Reward surfacing problems early.
A system that punishes bad news will always receive good news—until reality arrives catastrophically. Columbia’s findings emphasize the danger of blocked communication and informal decision chains; the antidote is cultural and structural: protect dissent, elevate technical authority, and incentivize risk reporting as a matter of professionalism rather than disloyalty.
· Name accountable owners for outcomes.
Diffuse responsibility breeds paperwork. If a dozen people “own the process” and no one owns the result, reporting proliferates and performance weakens. Accountability must be tied to outcomes that matter (student learning trajectories, verified attainment, safety margins), not just to completing steps.
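The "pair outcome metrics with integrity metrics" idea above can be sketched as a peer-comparison screen: trust a school's reported outcome only if its pre-census withdrawal rate is not a statistical outlier relative to peers. The school names, rates, and z-score cutoff are hypothetical illustrations, not Ofsted's or any regulator's actual method.

```python
# Hypothetical integrity signal paired with an outcome metric: flag schools
# whose pupil-withdrawal rate before the census date is an outlier relative
# to peers. Data and cutoff are illustrative only.
from statistics import mean, pstdev

def integrity_flags(withdrawal_rates, z_cutoff=1.5):
    """withdrawal_rates: dict of school -> fraction of roll withdrawn
    before the census date. Flags schools more than z_cutoff standard
    deviations above the peer mean."""
    mu = mean(withdrawal_rates.values())
    sigma = pstdev(withdrawal_rates.values())
    if sigma == 0:  # all schools identical: nothing to flag
        return []
    return sorted(s for s, r in withdrawal_rates.items()
                  if (r - mu) / sigma > z_cutoff)

rates = {"School A": 0.02, "School B": 0.03, "School C": 0.02,
         "School D": 0.02, "School E": 0.12}  # E withdraws 12% pre-census
print(integrity_flags(rates))  # ['School E']
```

A flag like this does not replace the outcome metric; it qualifies it, so that an improved league-table position achieved by shedding pupils triggers scrutiny rather than reward.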
Closing
Ultimately, performance systems do precisely what they were designed to do: they produce the numbers that earn the rewards. So the question isn't why people game the system; it's why leaders built a system worth gaming. When survival depends on scores, the system will protect scores, even if students are collateral damage.
A dashboard centralizes key metrics and data visualizations on a single screen so that leaders can monitor status at a glance and decide quickly. When careers depend on dashboards, dashboards become more valuable than reality.
What gets rewarded gets repeated. What gets punished gets hidden. And what gets measured gets managed—even when it’s the wrong thing. A bureaucracy rewarded for “following procedure” will always choose procedure over truth. That is why incentives write the real policy; everything else is just documentation.
If you want results, pay for results—and stop paying people to look like they got them. Make truth safer than fiction. Make performance the most rewarded and least risky path. Otherwise, the institution will continue to do what it has been trained to do: produce paperwork, produce metrics, and avoid responsibility for outcomes—while insisting, with a straight face, that it “followed the process.”
Footnotes
· U.S. Department of Veterans Affairs, Office of Inspector General. Review of Alleged Patient Deaths, Patient Wait Times, and Scheduling Practices at the Phoenix VA Health Care System (Report No. 14-02603-267, 26 August 2014). https://www.vaoig.gov/sites/default/files/reports/2014-08/VAOIG-14-02603-267.pdf
· NASA. Columbia Accident Investigation Board Report, Executive Summary (Aug. 2003; PDF reposted Mar. 2024). https://www.nasa.gov/wp-content/uploads/2024/03/sept4-caib-report-executive-summary.pdf
· NASA Technical Reports Server. Report of the Presidential Commission on the Space Shuttle Challenger Accident (Rogers Commission), Volume 1 (1986 record page). https://ntrs.nasa.gov/search.jsp?R=19860015255
· Consumer Financial Protection Bureau. CFPB Fines Wells Fargo $100 Million for Widespread Illegal Practice of Secretly Opening Unauthorized Accounts (8 September 2016). https://www.consumerfinance.gov/about-us/newsroom/consumer-financial-protection-bureau-fines-wells-fargo-100-million-widespread-illegal-practice-secretly-opening-unauthorized-accounts/
· U.S. Department of Justice. Wells Fargo Agrees to Pay $3 Billion to Resolve Criminal and Civil Investigations into Sales Practices Involving the Opening of Millions of Accounts without Customer Authorization (21 February 2020). https://www.justice.gov/archives/opa/pr/wells-fargo-agrees-pay-3-billion-resolve-criminal-and-civil-investigations-sales-practices
· Ofsted (UK). Off-rolling: exploring the issue (10 May 2019). https://www.gov.uk/government/publications/off-rolling-exploring-the-issue
· Governor’s Special Investigators (Georgia). Special Investigation Into Test Tampering in Atlanta’s School System (30 June 2011; full text). https://archive.org/stream/215252-special-investigation-into-test-tampering-in/215252-special-investigation-into-test-tampering-in_djvu.txt
· Georgia Public Policy Foundation (hosts the report and excerpts). The Atlanta Public Schools Cheating Scandal (16 April 2015, referencing the 2011 investigators’ report). https://www.georgiapolicy.org/news/the-atlanta-public-schools-cheating-scandal/
· Ofsted blog. What is off-rolling, and how does Ofsted look at it on inspection? (10 May 2019). https://educationinspection.blog.gov.uk/2019/05/10/what-is-off-rolling-and-how-does-ofsted-look-at-it-on-inspection/
· Ofsted/YouGov. Exploring the issue of off-rolling (May 2019 PDF). https://assets.publishing.service.gov.uk/media/5fb541488fa8f54aafb3c30d/Ofsted_offrolling_report_YouGov_090519.pdf
· UK House of Commons Library. Off-rolling in English schools (21 February 2020). https://commonslibrary.parliament.uk/research-briefings/cbp-8444/
· Texas Education Agency. 2005 Leaver Records Data Integrity Manual (20 September 2005; PDF). http://www.simplycateredtoyou.com/system/files/2005LeaverRecordsDIManual.pdf
· Texas Education Agency (PEIMS Data Standards). Leaver Reason Codes and Documentation Requirements (web page). http://ritter.tea.state.tx.us/peims/standards/weds/app_leaver_reason_codes_and_documentation_requirements.html