
The Data Scientist


Why Continuous Model Improvement Is the Backbone of U.S. Fraud Strategy – Interview with Viraj Soni

Date Published: 26th Feb 2022

The fraud threat landscape is evolving faster than ever: adaptive, adversarial, and driven by automation. For someone like Viraj Soni, who has built and refined fraud strategies for leading U.S. banks, the reality is clear: a static model is a vulnerable one. In this Q&A, we explore why constant model iteration is a necessity in an economy now dominated by card-not-present (CNP) transactions, and what it takes to maintain effective, compliant, and resilient systems.

Q: Viraj, you’ve worked on fraud models across multiple institutions. Why is ongoing model refinement not just a “best practice,” but a baseline requirement today?

A: Because the U.S. is the most targeted fraud market in the world, and we’re operating in an ecosystem where CNP volumes now surpass card-present (CP) volumes, which inherently removes layers of physical verification. Fraudsters exploit this anonymity using bots, synthetic identities, and real-time social engineering. If your model isn’t evolving weekly or monthly, it’s aging out by design. Fraudsters iterate in days. So must we.

Q: What are the key indicators that a model is due for rework or tuning?

A: A few red flags:

  • Drift in capture rate despite stable volume
  • Uplift in false positives near threshold bands
  • Disputes increasing in low-risk segments
  • Operations re-escalating known fraud MOs you thought were resolved

You can have a high AUC and still underperform if your feedback loop is stale. This is where tools like shadow scoring, the Population Stability Index (PSI), and early segment-level KPI divergence are critical for catching degradation before losses manifest.
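To make the PSI monitoring Soni mentions concrete, here is a minimal sketch of a PSI computation in Python. The quantile binning scheme and the usual 0.1/0.25 rule-of-thumb thresholds are common industry conventions, not details taken from the interview:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline score distribution and a recent one.

    Common rule of thumb (a convention, not a standard): PSI < 0.1 is
    stable, 0.1-0.25 warrants investigation, > 0.25 signals real drift.
    """
    # Bin edges come from the baseline distribution's quantiles
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range scores

    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)

    # Small floor avoids log-of-zero in empty bins
    eps = 1e-6
    expected_pct = np.clip(expected_pct, eps, None)
    actual_pct = np.clip(actual_pct, eps, None)

    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))

# Hypothetical model scores: baseline at deployment vs. a drifted week
rng = np.random.default_rng(42)
baseline = rng.normal(0.30, 0.10, 50_000)
recent = rng.normal(0.35, 0.12, 50_000)
print(round(population_stability_index(baseline, recent), 3))
```

Run on a schedule against each week's score distribution, this gives an early numeric signal of the drift described above, well before losses show up in chargeback data.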

Q: How do you evaluate whether new models are meaningfully better—beyond just offline metrics?

A: AUC and precision-recall curves are necessary, but not sufficient. You need to anchor model value to business-aligned metrics:

  • Incremental fraud $ captured
  • False positive reduction in revenue-heavy cohorts
  • Approval rate uplift in historically over-protected merchant segments
  • Operational case reduction and prioritization accuracy

And then test in production via champion/challenger testing, not just in sandboxes. The model lives in a volatile environment—only live performance tells the full story.
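One common way to implement the champion/challenger routing described above is deterministic, hash-based traffic assignment, so a given customer is always scored by the same model and the comparison stays clean. A minimal sketch; the 10% challenger share and the function name are illustrative assumptions:

```python
import hashlib

def assign_model(entity_id: str, challenger_share: float = 0.10) -> str:
    """Deterministically route an entity to the champion or challenger.

    Hashing the entity ID keeps a customer on the same model across
    transactions. The 10% share is an example, not a recommendation.
    """
    digest = hashlib.sha256(entity_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "challenger" if bucket < challenger_share else "champion"

# Same ID always routes the same way; the overall split tracks the share
ids = [f"cust-{i}" for i in range(10_000)]
share = sum(assign_model(i) == "challenger" for i in ids) / len(ids)
print(f"challenger share ≈ {share:.3f}")
```

Because assignment depends only on the ID, both models can be compared on live fraud dollars captured and false-positive rates per cohort, the business-aligned metrics listed above.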

Q: Let’s talk about compliance. How do you balance agility in model refresh with regulatory pressure around governance and explainability?

A: You can’t cut corners on compliance. Every score needs:

  • Traceable reason codes
  • SHAP-based feature impact mapping
  • Fair lending audits (even for fraud models if they impact onboarding or access)
  • Documentation of version history, performance shifts, and monitoring logs

The OCC, CFPB, and even the FDIC are now closely examining how fraud controls impact customer inclusion and complaint volume. A model that performs well but violates explainability or fairness thresholds is a liability.
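To illustrate the "traceable reason codes" requirement, here is a hypothetical sketch that translates per-feature score contributions (such as the per-prediction values a SHAP explainer produces) into human-readable reason codes. The feature names, codes, and contribution values are invented for illustration:

```python
# Hypothetical mapping from model features to reason codes; both the
# features and the codes are illustrative, not from any real system.
REASON_CODES = {
    "email_age_days": "R01: recently created email address",
    "device_velocity_24h": "R02: unusual number of devices in 24h",
    "geo_mismatch": "R03: billing/shipping geography mismatch",
    "txn_amount_zscore": "R04: transaction amount atypical for account",
}

def top_reason_codes(contributions: dict, n: int = 3) -> list:
    """Rank per-feature contributions (e.g. SHAP values) and return the
    top-n positive drivers as reason codes for an alert or decline."""
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    return [REASON_CODES[name] for name, value in ranked[:n]
            if value > 0 and name in REASON_CODES]

# Contribution values as they might come from an explainer (invented)
contribs = {"email_age_days": 0.42, "geo_mismatch": 0.17,
            "device_velocity_24h": 0.03, "txn_amount_zscore": -0.11}
print(top_reason_codes(contribs))
```

Persisting the ranked codes alongside each score is what makes the decision auditable later, whether for a regulator, an analyst, or a customer dispute.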

Q: With fraud losses often tied to synthetic identity, refund abuse, or account takeovers, how do you future-proof models against novel threats?

A: You never truly “future-proof”—you create adaptive infrastructure:

  • Hybrid models (ML + business rules + graph-based link analysis)
  • Modular feature stores with real-time enrichments
  • Alert feedback ingestion pipelines tied directly into retraining cycles
  • Scenario-based stress testing (e.g., rapid email churn, device spoofing at scale)

The best fraud models aren’t locked artifacts—they’re learning systems embedded in operations. They evolve because the fraud evolves.
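The "rapid email churn" stress scenario mentioned above can be exercised with a small replay harness like the following sketch. The sliding window, threshold, and class name are assumptions for illustration, not a production design:

```python
from collections import deque

class EmailChurnMonitor:
    """Sliding-window count of distinct emails seen on one account.

    A toy stress-testing harness: replay a synthetic rapid-email-churn
    scenario and confirm the velocity feature crosses its alert
    threshold. Window size and threshold are illustrative values.
    """
    def __init__(self, window_seconds=3600, threshold=3):
        self.window = window_seconds
        self.threshold = threshold
        self.events = deque()  # (timestamp, email)

    def observe(self, ts, email):
        self.events.append((ts, email))
        # Drop events that have aged out of the window
        while self.events and self.events[0][0] < ts - self.window:
            self.events.popleft()
        distinct = len({e for _, e in self.events})
        return distinct >= self.threshold  # True means raise an alert

monitor = EmailChurnMonitor()
# Scenario: attacker rotates emails every five minutes after a takeover
alerts = [monitor.observe(t * 300, f"user{t}@example.com") for t in range(5)]
print(alerts)  # stays quiet for two events, then the churn trips the alert
```

Replaying scenarios like this against each candidate model release is one way to verify that a retrain has not quietly lost coverage of a known attack pattern.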

Q: What’s the broader message for financial institutions and U.S. economic infrastructure?

A: Fraud isn’t just a cost center—it’s a systemic risk vector. The U.S. has an open financial architecture. That’s a strength for innovation—but a weakness if defenses stagnate. To keep money flowing, commerce frictionless, and consumers safe, we need sophisticated, resilient fraud models that act as gatekeepers—not gate blockers.

And this isn’t optional. Without constant model improvement, we invite systemic abuse. With it, we build trust, reduce economic leakage, and protect national infrastructure from compromise.

Q: Final word?

A: Fraud is a moving target—and a distributed problem. To stay ahead, your models must learn faster than the fraud, your feedback loops must be sharp, and your governance must be airtight. It’s not about one brilliant model. It’s about continuous discipline and strategic speed.

Author Bio:

Dr. Stylianos (Stelios) Kampakis is the CEO of The Data Scientist and The Tesseract Academy. He is a data scientist, AI expert, and tokenomics specialist with over 10 years of experience. He has worked with startups and major organizations such as the US Navy and Vodafone. With 3 published books and 2 patents pending, he specializes in AI, blockchain, and predictive analytics.