TL;DR Series: Topic 2 of 5

When the Board Discovered They’re Liable for Code They’ve Never Seen

Mar 9, 2026

About This Series

Software supply chain risk rarely shows up as a single bad decision or dramatic failure. More often, it emerges quietly from ordinary work done by capable people inside well-functioning teams. Modern software development is fast, layered, and highly abstracted, spanning open source, proprietary code, third-party services, and increasingly, AI-assisted contributions. That complexity makes risk harder to see in the moment, even when teams are doing everything “right.”

This series uses short, TL;DR-style, day-in-the-life vignettes, each designed to be read in under five minutes, to make those dynamics tangible. The stories reflect real-world patterns we see across engineering, legal, and security teams, not to assign blame or stoke fear, but to surface where modern workflows create blind spots. The goal is simple: to help software risk leaders quickly recognize familiar situations in their own organizations and better understand where governance needs to meet velocity, without sacrificing either.

An Ordinary Thursday

The quarterly board meeting was running ahead of schedule.

The CEO had just finished the revenue slide. The numbers were strong. Growth up. Churn down. Pipeline healthy. The CFO had walked through forecasts with quiet confidence. The product roadmap looked ambitious but credible. The deck flowed the way good board decks do… linear, controlled, reassuring.

No one raised their voice. No one challenged assumptions too hard. It was the kind of meeting that suggested operational maturity.

Then the General Counsel cleared her throat and held up a printed letter.

“It’s from a plaintiff’s firm representing a collective of open-source copyright holders.”

The room did not react at first. Not fully. It took a few seconds for the comment to register.

  • Open source.
  • Copyright.
  • Collective.

The former regulator on the board – the one who rarely spoke unless something touched governance – leaned back in his chair and folded his hands.

“What are they alleging?”

“That portions of our production codebase contain material derived from GPL-licensed software,” the General Counsel replied. “Specifically, code introduced over the past two years.”

The CTO looked up sharply.

“That doesn’t make sense,” he said. “We don’t ship GPL dependencies in production. We’ve always been careful about that.”

The General Counsel nodded. “They’re not referring to dependencies. They’re alleging copied or derivative code in the codebase.”

The CFO broke in. “What percentage of the codebase?”

No one answered. Because no one in the board meeting could know.

Velocity Without Visibility

For two years, the company had encouraged engineering teams to adopt AI coding tools.

It wasn’t reckless. It wasn’t unsupervised. It was strategic.

Product velocity had become a competitive advantage. AI-assisted development shortened cycles, reduced friction, accelerated prototyping. The metrics were compelling: pull request throughput up, feature delivery faster, backlog burn-down improved.

Investors liked the story. The board liked the story. The company had not banned AI tools. It had not restricted them. It had not formally governed them either; they were simply part of the workflow.

pull_request_throughput: ↑ 30%
feature_delivery_cycle: ↓ 20 days
backlog_burndown_rate: ↑
// AI tools: refactoring, test gen, syntax translation
// Provenance tracking: not instrumented

Engineers used them for refactoring. For test generation. For edge cases. For syntax translation. For documentation scaffolding. Nothing dramatic. Nothing obviously risky.

It felt like running spellcheck, and no one logs spellcheck.

The Question That Changed the Temperature

The former regulator spoke again. “If this becomes material litigation,” he said evenly, “what did we know, and when did we know it?”

It was not an accusation. It was worse than that. It was a governance question.

The General Counsel answered first.

“We do not have a formal AI usage policy,” she said. “We have informal guidance. But nothing board-approved.”

“And do we track which portions of the codebase were AI-assisted?”

“No.”

“Do we track provenance of snippet-level contributions?”

“No.”

“Have we ever audited for derivative open-source material beyond declared dependencies?”

A pause.

“No.”

The CEO looked at the CTO.

“Is this isolated?”

The CTO hesitated. “I don’t know.”

When Technical Risk Becomes Fiduciary Risk

Up until that moment, AI-assisted development had been an engineering story.

Now it was a board story. Directors do not think in commits. They think in exposure.

They think in disclosure thresholds, regulatory obligations, and shareholder litigation risk.

The letter itself was measured. It did not demand immediate damages. It requested engagement.

Because parts of the product shipped as client-side JavaScript, the disputed code was exposed in browser-delivered bundles. Distinctive functions, comments, and code structures were easy to compare, tying the overlap to specific files and releases and establishing its original provenance.

If that were not devastating enough, the letter also referenced remedies available under copyright law. Injunctions. Statutory damages. Discovery.

If the allegation proved credible, it would not remain a technical matter.

It would become material.

The Gap No One Meant to Leave

There had been no malicious intent. No developer had deliberately copied GPL code into production. No executive had ignored a known warning. The gap was structural.

Modern development workflows optimize for speed and abstraction. Code suggestions appear inline. Tests generate automatically. Refactors propagate instantly.

But provenance does not surface by default.

If code compiles, passes tests, and clears review, it is treated as safe.

The system asks: “Does it work?”

It rarely asks: “Where did it come from?”

The Internal Audit

Within 48 hours, the company engaged external counsel and initiated an internal review. Engineering began combing through pull requests from the past twenty-four months.

They searched commit messages for phrases like “AI suggestion” or “generated.” They asked developers to recall which snippets had originated externally.
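A search like that amounts to a simple keyword filter over commit subjects. A minimal sketch, assuming a hypothetical phrase list rather than the team’s actual criteria (in practice the messages would come from `git log --format=%s`):

```python
# Illustrative sketch: flag commit messages that hint at AI-assisted
# changes. The phrase list is an assumption for illustration only.
AI_PHRASES = ("ai suggestion", "generated", "copilot", "ai-assisted")

def flag_ai_commits(messages):
    """Return the messages whose text contains any AI-related phrase."""
    return [m for m in messages if any(p in m.lower() for p in AI_PHRASES)]

sample = [
    "Refactor auth middleware (AI suggestion)",
    "Fix null check in billing",
    "Add generated unit tests for parser",
]
print(flag_ai_commits(sample))  # the first and third messages match
```

The weakness is obvious from the sketch: it only finds AI assistance that someone happened to mention, which is exactly why developer memory became the fallback.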

They ran additional scans against the codebase, this time looking not for declared dependencies but for structural similarity.
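Structural-similarity scanning is what commercial SCA tools do at scale; a toy sketch using hashed k-token shingles conveys the idea (this is a simplification for illustration, not any vendor’s actual algorithm):

```python
# Toy sketch of snippet-level similarity: two code fragments are
# structurally similar if they share many hashed runs of k tokens,
# regardless of what dependencies are declared.
def fingerprints(code, k=5):
    """Hash every run of k consecutive tokens into a fingerprint set."""
    tokens = code.split()
    return {hash(" ".join(tokens[i:i + k])) for i in range(len(tokens) - k + 1)}

def similarity(a, b, k=5):
    """Jaccard overlap of the two fingerprint sets, in [0.0, 1.0]."""
    fa, fb = fingerprints(a, k), fingerprints(b, k)
    if not fa or not fb:
        return 0.0
    return len(fa & fb) / len(fa | fb)

ours = "def parse ( line ) : return line . strip ( ) . split ( ',' )"
theirs = "def parse ( line ) : return line . strip ( ) . split ( ';' )"
print(round(similarity(ours, theirs, k=3), 2))  # substantial overlap despite the edit
```

Even this toy version shows why the scan was unsettling: a one-token edit does not hide shared structure.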

What they discovered was not catastrophic. But it was unsettling.

They could not confidently assert provenance for meaningful portions of the codebase.

Not because it was tainted. Because it was untracked.

Reactive vs Instrumented Organizations

When AI code risk surfaces, the difference isn’t intent… it’s instrumentation.

Reactive Organizations

  • Discover exposure through an external letter, not internal monitoring
  • Cannot quantify what percentage of code was AI-assisted
  • Rely on developer memory to reconstruct provenance
  • Treat AI usage as a productivity tool, not a governance domain
  • Draft policy only after legal risk becomes visible

Instrumented Organizations

  • Continuously scan for snippet-level similarity and license conflicts
  • Log and trace AI-assisted contributions at commit level
  • Integrate provenance scanning into CI/CD, not post-incident audits
  • Elevate AI governance to board-visible controls before litigation forces it

Boardroom Reality

At the next board meeting, the tone had shifted. Revenue still grew, product still shipped, customers still renewed, but the conversation opened differently.

What controls are now in place?

What disclosures are required?

How are we documenting oversight?

The directors were not trying to assign blame. They were trying to establish defensibility, because in governance, intent is less important than process.

And process had been informal.

This Could Happen to Anyone

The story does not end with a dramatic verdict. It ends with discomfort.

With process changes. With policy drafts. With long calls between legal and engineering.

No one is fired. No villain emerges.

Just a recognition: the systems built for velocity did not include visibility, and in the absence of visibility, directors found themselves responsible for code they had never seen and risks they had never discussed.

The problem was not recklessness. It was assumption.

  • That if something feels small in the moment, it will remain small.
  • That if no dependency is declared, no obligation exists.
  • That if engineers are careful, governance is automatic.

But it isn’t.

And this could happen in any boardroom where AI tools quietly became normal before oversight did.

Some organizations are closing that gap. They do not ban AI-assisted development; instead, they instrument it. They treat provenance as a first-class signal, not an afterthought.

Speed remains, but it becomes observable.

James Spooner, Head of Software Security and Quality Services

James leads FossID’s Software Security and Quality Services, specializing in software supply chain integrity, application security testing, and Software Composition Analysis. With deep experience in software engineering and security consulting, he has led the delivery of complex security and quality initiatives across enterprise and regulated environments. James brings a practical, engineering-led perspective to helping organizations identify, quantify, and remediate software risk across modern development lifecycles.
