
💸 The Fake Dollar Bill Problem: Why AI Projects Fail Without Modeling Normality First

  • Writer: beatrizkanzki
  • Jun 3
  • 3 min read
Image: A cashier comparing a bill to a wall of taped dollar bills

🧠 What’s the Real Problem in Many AI Projects?

Over the years, working in finance, public health, and as an AI consultant, I’ve seen a repeating pattern: clients come in with a long list of “bad behaviors” or anomalies they want AI to detect in text, images, or videos. The request is passed to engineers, who start building models to catch those anomalies. Once the models are delivered, the client brings a new list. The cycle repeats.


The issue? There’s rarely a solid understanding of what the normal process looks like in the first place.


This is what I call The Fake Dollar Bill Problem — and it’s more common than you think.


🏪 Let Me Tell You a Story...

A few years ago, I went to a local store in a small town with a $100 US bill. As I paid, I noticed several $100 bills taped to the wall near the cash register. The cashier compared my bill to those on the wall, called someone else to help verify, and finally completed my purchase.

This seemed a bit odd, so I started thinking...

Have you ever seen this kind of wall display, with taped-up dollar bills, at a bank?

Of course not. Why? Because banks train their staff using real bills, not fake ones.

They know that counterfeiters can use endless variations to create fake bills. So instead of trying to learn all the ways a bill can be fake, they train staff to know exactly how the real thing looks and feels.


⚠️ What Does This Have to Do With AI?

Everything.

In anomaly detection and machine learning, this is known as the Principle of Modeling Normality:

To detect anomalies effectively, you must first model the behavior or distribution of normal data. Anything that deviates significantly from that model is flagged as an anomaly.
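To make the principle concrete, here’s a minimal sketch in Python. It assumes scikit-learn is available and that “normality” can be summarized as numeric feature vectors; the bill measurements below are invented purely for illustration:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(seed=0)

# Hypothetical measurements of genuine bills (e.g., ink density and
# paper-texture scores): real examples cluster tightly around the norm.
genuine_bills = rng.normal(loc=[1.0, 0.5], scale=0.05, size=(500, 2))

# Train ONLY on genuine examples. No counterfeits are ever shown.
detector = OneClassSVM(nu=0.01, gamma="scale").fit(genuine_bills)

# Two new bills: one close to the learned norm, one far from it.
candidates = np.array([
    [1.01, 0.49],  # looks like the real thing
    [0.60, 0.90],  # deviates significantly -> flagged
])
print(detector.predict(candidates))  # +1 = normal, -1 = anomaly
```

Like the bank teller, the detector never studies fakes. It learns the boundary of the genuine and flags whatever falls outside it.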

💃 Let’s Dance: A Practical AI Scenario

Imagine a client comes to you from the world of professional ballroom dancing. They want AI to help evaluate performances for an upcoming competition.

They provide you with video clips to “train the model” — but all the clips show mistakes:

  • A dancer falling due to a misstep

  • Pairs colliding mid-routine

  • Dancers off rhythm

  • Poor posture

  • Inappropriate choreography for the music

If your instinct is to build a model to detect those faults — STOP.

This is a classic Fake Dollar Bill Problem.


✅ What You Should Do Instead

Instead of training the model on dozens of errors, ask the client for footage of dancers executing each routine flawlessly — tango, rumba, merengue, salsa, waltz, and quickstep.

Why?

Because:

  • The list of possible errors is infinite.

  • A model trained on anomalies will constantly need retraining for new mistakes.

  • You’ll end up stuck in an expensive, never-ending cycle of patching.

By training the model on examples of correct form and movement, it learns to recognize the standard. Any deviation — posture, rhythm, movement pattern — becomes detectable without needing to label every single anomaly type.
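Here’s a toy sketch of what that could look like, not a production pipeline. Suppose each video frame has already been reduced to a numeric pose vector (say, joint angles from an off-the-shelf pose estimator; that preprocessing step is assumed here). A simple normality model, in this case PCA reconstruction error, then scores how far a frame drifts from flawless form:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(seed=1)

# Stand-in pose vectors (34 features per frame) from flawless routines:
# correct movement is assumed to live near a low-dimensional subspace.
flawless = rng.normal(size=(1000, 20)) @ rng.normal(size=(20, 34))
flawless += 0.01 * rng.normal(size=flawless.shape)  # natural variation

# Learn the structure of CORRECT movement only.
model = PCA(n_components=20).fit(flawless)

def anomaly_score(frames: np.ndarray) -> np.ndarray:
    """Reconstruction error: large means movement the model never saw."""
    reconstructed = model.inverse_transform(model.transform(frames))
    return np.linalg.norm(frames - reconstructed, axis=1)

# Threshold set from the flawless footage itself.
threshold = np.quantile(anomaly_score(flawless), 0.99)

# A fall, collision, or bad posture shifts the pose vector off the
# learned subspace and raises the score, with no error labels needed.
off_form = flawless[:1] + 3.0
print(anomaly_score(off_form) > threshold)  # -> [ True]
```

The same scoring function covers mistakes nobody anticipated, because anything off the learned standard raises the reconstruction error.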


And the only reason to retrain later? If the definition of correct changes (e.g., new dance styles or updated judging criteria).



🟨 Conclusion: Stop Chasing Errors — Start Learning the Norm

AI systems are only as smart as the process you teach them to understand. If you only feed them what not to do, you risk creating fragile, reactive solutions that constantly need repair.

The better path? Model the real thing first. 

When AI understands “what good looks like,” it can flag the bad — no matter how it changes over time.


💬 Let’s Talk: Have you ever dealt with a “Fake Dollar Bill Problem” in your AI projects? Do you scope your models by learning what’s right, or chasing what’s wrong?

Leave a comment or message me directly; I’d love to hear your perspective. Let’s build smarter, not harder. 🚀