An Ordinary Tuesday
Linda’s Tuesday started the way most of her Tuesdays did.
She joined the sprint standup from her kitchen table, coffee cooling beside her laptop. Cameras were mostly off. Jira tickets scrolled past on a shared screen. The dashboard redesign was still on track for Thursday’s internal demo. No blockers. No drama.
Linda worked as a front-end developer at a Series B fintech startup – growing fast, hiring aggressively, shipping even faster. The company prided itself on clean execution. Clean cap table. Clean IP. Clean architecture. It was the kind of motto that showed up on diligence slides, not Slack channels.
Over the past six months, the engineering team had leaned hard into AI-assisted development. The productivity gains were undeniable. Internal metrics showed the front-end team shipping roughly 40% faster than the previous year. Velocity was now a competitive advantage – one the company talked about openly with investors.
That morning’s sprint plan reflected that confidence. The dashboard work was tagged low risk. Front-end only. No new dependencies. Just UI polish and a few edge cases.
Linda picked up the ticket assigned to her name and got to work.
9:14 AM / Standup Ends, Work Begins
The task was straightforward: add a date-range selector to a reporting widget. Most of the component already existed, but one edge case was annoying – handling time zone offsets when users toggled between preset ranges and custom dates.
Linda had written date pickers before. Everyone had. But this one was just annoying enough to interrupt her flow.
She paused, highlighted the relevant function, and opened her AI code assistant.
“Handle date range selection with time-zone offset in React”
The suggestion came back instantly. Clean. Idiomatic. A few helper functions. No imports she didn’t recognize. No obvious red flags. It slotted neatly into her existing component.
She scanned it – not line by line, but with the practiced eye of someone who had reviewed thousands of similar code snippets. It made sense. It worked. She pasted it in, ran the tests, and watched everything go green.
No new dependency. No license header. No external import.
Just code.
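To make the task concrete, here is a minimal sketch of the kind of helper such a prompt might return. Everything here is illustrative, not the actual suggestion from the story: the function name, the preset names, and the fixed-offset approach (which, notably, ignores DST transitions) are all assumptions.

```typescript
// Illustrative only: resolve a preset range to UTC boundaries for a user
// at a fixed UTC offset (minutes, in the convention used by
// Date.prototype.getTimezoneOffset: UTC-5 => 300).
type Range = { startUtc: Date; endUtc: Date };

function presetToUtcRange(
  preset: "last7" | "last30",
  nowUtc: Date,
  offsetMinutes: number
): Range {
  const days = preset === "last7" ? 7 : 30;

  // Shift "now" into the user's local clock to find their calendar date.
  const localNow = new Date(nowUtc.getTime() - offsetMinutes * 60_000);
  const localMidnight = Date.UTC(
    localNow.getUTCFullYear(),
    localNow.getUTCMonth(),
    localNow.getUTCDate()
  );

  // End of range: the user's upcoming local midnight, expressed in UTC.
  const endUtc = new Date(
    localMidnight + 24 * 3_600_000 + offsetMinutes * 60_000
  );
  const startUtc = new Date(endUtc.getTime() - days * 24 * 3_600_000);
  return { startUtc, endUtc };
}
```

The point is not the implementation; it is that a snippet like this arrives with no imports, no header, and no visible trace of where its logic came from.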
10:37 AM / The Invisible Decision
If you were reconstructing the story later, this is where you might point and say, that’s the moment.
But nothing about it felt momentous at the time.
Linda could have rewritten the logic herself. She could have Googled a Stack Overflow thread. She could have asked someone else on the team if they’d solved this before. In theory, she could even have pinged legal.
In practice, none of those options made sense.
The company had encouraged AI tools. The team used them daily. Her AI code assistant suggestions passed review all the time. There was no policy that said, “treat this code differently.” No workflow prompt that asked where it came from. No signal – implicit or explicit – that this particular moment required extra scrutiny.
Best practice, as it existed then, offered no guardrail here.
So, Linda moved on.
11:24 AM / Code Review
By late morning, the pull request was up.
The diff was small. Contained. Easy to reason about. The reviewer left a single comment:
“LGTM – AI code assistant wins again.”
A thumbs-up emoji followed.
The CI pipeline ran clean. Linting passed. Tests passed. The build artifact deployed to staging without issue.
From the system’s point of view, this was a textbook success.
1:21 PM / Lunch and Context Switching
Linda grabbed lunch and skimmed Slack. Another team was struggling with a backend migration. Someone joked that front-end work was “the easy part.”
No one meant anything by it. Front-end changes didn’t touch core logic. They didn’t alter data models. They didn’t pull in heavyweight dependencies.
They felt safe.
4:06 PM / The Demo
The dashboard demo went smoothly.
The new date selector worked exactly as intended. Product was happy. Sales liked the flexibility. A few polish notes were added to the backlog, but nothing urgent.
The ticket was marked Done. The branch merged. The feature shipped with the next release.
Linda closed her laptop feeling good about the day. She’d unblocked a small annoyance, shipped value, and stayed ahead of the sprint. Nothing heroic. Nothing risky.
Just a normal Tuesday.
Six Months Later
The email arrived on a Monday morning.
It wasn’t marked urgent. It wasn’t addressed to Linda. It landed in a shared legal inbox and was forwarded to engineering leadership later that afternoon.
The subject line was neutral: Notice of Copyright Infringement.
The body was calm. Formal. Precise.
It alleged that a portion of the company’s front-end codebase contained material copied verbatim from a GPL-licensed library. Forty-seven lines, according to their analysis. Enough to establish derivation. Enough to trigger obligations the company had never agreed to.
The analysis traced the copied lines to specific commits. One of those commit dates was a Tuesday in March.
POV: Legal Counsel
From legal’s perspective, the issue wasn’t dramatic – it was procedural.
The code comparison was clear. The licensing terms were unambiguous. The question wasn’t whether there was exposure, but how far it went.
How many files contained AI-assisted code?
Which ones might share the same provenance?
Was this an isolated incident, or a systemic risk?
There was no malice implied. No accusation of negligence. Just an uncomfortable reality: the company could not confidently answer basic questions about the origin of some of its own code.
POV: Engineering Manager
Engineering leadership reacted with a mix of disbelief and dread.
No new dependency had been added. No license scanner had flagged anything. The code had passed review, testing, and deployment without issue.
When someone asked, “Which files were AI-assisted?” there was no clean answer.
AI usage wasn’t tracked at the file level. It wasn’t logged. It wasn’t discouraged – but it wasn’t instrumented either.
What followed felt less like debugging and more like forensics.
Developers were asked to recall which snippets had come from where. PR histories were combed for clues that had never mattered before. The atmosphere shifted subtly. No one was accused, but everyone felt implicated.
POV: VC Associate
By the time the issue surfaced in diligence, it had a different shape.
It appeared as a bullet point in a risk memo:
Open-source licensing exposure – unresolved.
No context about the Tuesday. No nuance about intent. Just a question mark where certainty was expected.
The same code that had once demonstrated engineering velocity now introduced friction into a funding round.
The Gap
It’s tempting to tell this story as a morality play: AI copied GPL code, the company paid the price.
But that framing misses the point.
The real antagonist here wasn’t the AI code assistant. It wasn’t Linda. It wasn’t even the license itself.
It was the gap between velocity tooling and governance reality.
Modern development workflows are optimized for speed, reuse, and abstraction. They assume that if code compiles, tests pass, and reviews are clean, the system is healthy.
Provenance – the question of where code comes from and what obligations travel with it – often sits outside that loop.
In Linda’s workflow, nothing invited that question. Not the editor. Not the review process. Not CI/CD.
The absence of signals was itself the signal.
Fallout Without Villains
Inside the company, the effects rippled outward.
Engineering morale dipped as developers wondered whether everyday work had put the company at risk. Legal felt exposed, responsible for problems they had no visibility into. Trust frayed – not because anyone had done something wrong, but because systems had quietly failed to align.
Leadership faced uncomfortable trade-offs: how much to disclose, how fast to remediate, how to balance transparency with legal strategy.
All of this traced back to a small, rational decision made on an ordinary Tuesday.
How This Actually Happened
Not as a single failure, but as a sequence:

1. A developer hits a minor point of friction and asks an AI assistant for help.
2. The assistant returns clean, idiomatic code with no imports, no license header, no attribution.
3. Review, linting, testing, and CI all evaluate correctness – none of them evaluate origin.
4. The feature ships, and no record ties the snippet to its source.
5. Months later, an external analysis makes a connection the pipeline was never built to make.

At no point does the system ask the question it eventually needs answered: where did this code come from?
This Could Happen to Anyone
The problem wasn’t behavior. It was structure.
Linda didn’t violate policy.
The team didn’t cut corners.
The company didn’t act recklessly.
As AI-assisted development becomes normal, the absence of provenance awareness becomes a systemic risk – one that only reveals itself long after the moment has passed.
The question isn’t who failed. It’s what modern workflows silently assume.
How Teams Prevent This
Some organizations are starting to close the gap – not by slowing down, but by updating their systems.
They treat provenance as a first-class concern.
They recognize that “front-end only” doesn’t mean “low risk.”
They build governance into the flow, not on top of it.
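As one small example of what “governance in the flow” can look like, here is a hedged sketch of a CI gate that flags changed files containing common open-source license markers for human review before merge. The marker list and function names are invented for illustration; real scanners are far more thorough, and – as this story shows – a copied snippet often carries no header at all, which is why marker scanning is only one layer alongside provenance logging and snippet matching against known corpora.

```typescript
// Illustrative CI gate: surface files whose text contains well-known
// license marker phrases so provenance gets a human look before merge.
const GPL_MARKERS = [
  "GNU General Public License",
  "SPDX-License-Identifier: GPL",
  "Free Software Foundation",
];

function findLicenseMarkers(source: string): string[] {
  return GPL_MARKERS.filter((m) =>
    source.toLowerCase().includes(m.toLowerCase())
  );
}

// In CI, run this over each changed file (path -> contents) and fail the
// build, or request review, for any file that matches.
function shouldBlockMerge(files: Record<string, string>): string[] {
  return Object.entries(files)
    .filter(([, text]) => findLicenseMarkers(text).length > 0)
    .map(([path]) => path);
}
```

A check like this costs seconds per build. Its real value is cultural: it makes “where did this code come from?” a question the pipeline asks routinely, instead of one legal asks six months later.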
The lesson isn’t to fear AI-assisted development. It’s to observe it.


