On March 31, 2026, Anthropic accidentally published the complete source code of its Claude Code CLI to the public npm registry, exposing approximately 512,000 lines of TypeScript across 1,906 files.
But it's just one of many "oopsie daisies" that happened recently. And since everyone in the tech world nowadays seems to be racing to ship the crappiest software or make the worst decision possible, I decided to put together a little best-of. After all, AGI is happening as I write this, and coding has largely been solved.
The Claude Code leak
As mentioned, the Claude Code CLI source leaked. Thank you for making it open source, Anthropic. No, but seriously, how did it happen? The discovery came after the AI upstart released version 2.1.88 of the Claude Code npm package. A user on X named Chaofan Shou was the first to flag that a shipped source map file could be used to recover Claude Code's source code.
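How does a source map leak code? Bundlers can inline the complete original sources in the map's `sourcesContent` field, so anyone holding the `.map` file can simply write them back out. A minimal sketch of the idea (file names and paths here are my own illustrations, not anything from Anthropic's package):

```typescript
// Sketch: recover original sources from a source map whose bundler
// inlined them into the `sourcesContent` field. The map file name
// ("bundle.js.map") is a hypothetical example.
import { readFileSync, writeFileSync, mkdirSync } from "fs";
import { dirname, join } from "path";

interface SourceMap {
  sources: string[];
  sourcesContent?: (string | null)[];
}

function extractSources(mapPath: string, outDir: string): number {
  const map: SourceMap = JSON.parse(readFileSync(mapPath, "utf8"));
  let written = 0;
  map.sources.forEach((src, i) => {
    const content = map.sourcesContent?.[i];
    if (content == null) return; // a map may omit the original text
    // Strip leading "../" segments so files land inside outDir
    const dest = join(outDir, src.replace(/^(\.\.\/)+/, ""));
    mkdirSync(dirname(dest), { recursive: true });
    writeFileSync(dest, content);
    written++;
  });
  return written;
}
```

This is exactly why production bundles usually either skip source maps entirely or strip `sourcesContent` before publishing.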
After this event, we could finally dig into the code and find out whether it was vibe coded or a fine piece of software, carefully engineered. What we found is that it feels more like a fully instrumented system that observes how you behave while using it. Here are some of the things that stood out:
- A big old regex to classify your language (not deep AI understanding)
- Hesitation tracking during permission prompts
- An "Undercover Mode" that strips Anthropic-internal info from commits/PRs when employees contribute to open source (huh?)
- High coding standards: a 3,167-line function with 12 levels of nesting at its deepest
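To be clear about what "big old regex" means in practice: keyword matching, not understanding. A hypothetical reconstruction of the idea (the patterns and language set below are purely illustrative, not the leaked code):

```typescript
// Hypothetical sketch of regex-based language detection, in the
// spirit of what the leaked code reportedly does. Patterns and
// languages are illustrative placeholders, not Anthropic's rules.
const LANGUAGE_HINTS: [string, RegExp][] = [
  ["french",  /\b(le|la|les|est|avec|pour)\b/i],
  ["spanish", /\b(el|los|una|para|con)\b/i],
  ["german",  /\b(der|die|das|und|nicht|ist)\b/i],
];

function guessLanguage(text: string): string {
  // First pattern that matches wins; order is the tie-breaker.
  for (const [lang, pattern] of LANGUAGE_HINTS) {
    if (pattern.test(text)) return lang;
  }
  return "english"; // default fallback
}
```

Cheap, fast, and wrong at the edges, which is the whole joke: the "AI understanding" is a lookup table of stop words.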
But the most hilarious part was the DMCA (Digital Millennium Copyright Act) takedowns. The fact that an AI company is DMCAing people for infringing on its copyright is chef's kiss.
Github's downfall
Like Disney destroying the Star Wars franchise, Microsoft destroyed Github. And it's not just about the direction Github is taking. I mean that Github isn't working properly anymore. It fell below 90% uptime. Seriously, it sounds like a joke. We're talking about an application that the vast majority of developers use daily, and it now collapses under its own weight. I guess it's just a consequence of two things:
- The AI era: nowadays, bad software is better than no software. That's the philosophy we've adopted.
- Microsoft: that's it, there's really nothing else to say, even though we'll talk about it thoroughly in the next part.
Github truly was a great thing back in the day, but since the AI boom, it has mostly become a mine Microsoft digs for the gold that is your data. And if you actually care about privacy and quality, the question is not if you should migrate, but when.
Microsoft Notepad and Copilot, Copilot, Copilot...
Speaking of Microsoft, let's keep going down. Notepad. Yes, Windows Notepad had an RCE flaw because markdown support was added in February 2026. Attackers found a way to hide malicious commands inside markdown links. The risk for you was that the app could be tricked into launching a local script or executable, effectively giving an attacker control over your PC.
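For context, the usual defense here is to allow-list URL schemes before a renderer ever hands a link to the OS. A minimal sketch of that idea (my own illustration, not Microsoft's actual fix):

```typescript
// Sketch: scheme allow-listing for rendered markdown links. A renderer
// that passes link targets straight to the OS can be tricked into
// launching local scripts or executables (file:// or custom schemes).
const SAFE_SCHEMES = new Set(["http:", "https:", "mailto:"]);

function isSafeLink(href: string): boolean {
  try {
    // URL parsing normalizes the scheme (e.g. "FILE://" -> "file:")
    return SAFE_SCHEMES.has(new URL(href).protocol);
  } catch {
    return false; // relative or malformed targets: refuse to launch
  }
}
```

Failing closed on anything that doesn't parse as an absolute, allow-listed URL is the boring fix that would have made this a non-story.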
But overall, I think Microsoft is busy offering AI as the solution to every problem it faces. Let's talk about Copilot. But wait, which one? Github Copilot, the AI coding agent? Microsoft Copilot, the generative AI chatbot thing? Or maybe we're talking about the Copilot PC?
Honestly, I could name a few other Copilot things, but you get the idea. I mean, it's so bad that we're now talking about a big Linux migration among Windows users. Everyone wants to escape Windows 11, Copilot, and spyware by Microslop. We'll take them.
NPM is on fire
As the JavaScript ecosystem has grown, it has become the #1 target for "low-effort, high-impact" hacking. One of the worst cases we've seen so far is the Axios hack. It's the perfect example of why package managers are problematic and why we should stop overusing them.
Axios is a widely used library (100,000,000 weekly downloads) that suffered a supply chain attack after two newly published versions of the npm package introduced a malicious dependency delivering a trojan. A fake "plain-crypto-js" dependency was pulled in when installing 1.14.1 and 0.30.4. The compromised versions were published using the stolen npm credentials of the primary Axios maintainer (yes, social engineering at its peak).
The whole purpose of this malware was to scan infected machines to steal credentials and secret keys.
Axios was a big one, but it's only one among so many other supply chain hacks that happened over the last few years. Modern package managers have effectively automated dependency hell, creating a "black box" where a single command can silently inherit thousands of unvetted sub-dependencies, trading long-term security and architectural stability for the instant gratification of rapid development.
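One cheap way to make that black box slightly less opaque is to enumerate everything your lockfile actually pins, so a surprise dependency like a fake "plain-crypto-js" stands out in a diff. A sketch for npm's v2/v3 `package-lock.json` format (the lockfile path is whatever your project uses):

```typescript
// Sketch: list every package pinned in an npm v2/v3 lockfile, so a
// newly injected dependency shows up when you diff against a known-good
// list in CI. Assumes the lockfile's "packages" map keyed by
// node_modules paths (lockfileVersion 2 or 3).
import { readFileSync } from "fs";

interface Lockfile {
  packages?: Record<string, unknown>;
}

function listLockedPackages(lockPath: string): string[] {
  const lock: Lockfile = JSON.parse(readFileSync(lockPath, "utf8"));
  return Object.keys(lock.packages ?? {})
    .filter((p) => p.startsWith("node_modules/")) // skip the root "" entry
    .map((p) => p.slice("node_modules/".length));
}
```

Commit the output once, then fail the build when it changes unexpectedly. It won't catch a compromised version of a package you already trust, but it would have flagged a brand-new transitive name appearing out of nowhere.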
The Amazon Kiro incident
In February, Amazon decided to take a path no one has taken before: solving problems with AI. Who would have expected this to go wrong...
Amazon has recently been on a layoff streak because, as Amazon CEO Andy Jassy said: "We want to operate like a startup". I bet you didn't see that one coming. CEOs will find any excuse to lay people off these days.
And you know, if there are fewer people, I guess the rest have to use AI. But not just any AI tool like Claude Code or OpenAI Codex. Amazon actually tracked the use of these tools closely and blocked them if you weren't using the in-house tool: Kiro.
Now comes the funny part no one expected. In December 2025, Kiro caused a 13-hour AWS outage by deleting the production environment. The whole goal here was to fill the 10% workforce gap caused by recent layoffs with AI. Instead, they added cognitive debt. What a marvelous idea.
After experiencing this, you and I would certainly say: "Let's slow down here, we messed up big time. Maybe AI is not the solution". A pretty natural response to the situation. But that's not the direction Amazon decided to take. Instead, their decision was:
From now on, every AI code submitted by Junior and Mid-level engineers will be reviewed and have to be approved by a Senior engineer.
And as we all know, AI loves to produce a lot of lines of code. Honestly, I would not want to be a senior engineer at Amazon, because I imagine they do nothing but review sloppy code.
I still don't understand why some companies find it so important to move fast with a blindfold on, because I sincerely think there is no benefit to doing that. Producing 150K LOC a week will not make your product better.
Conclusion
I wanted to make this little best-of because I think it says a lot about how we ship and develop software these days. Beyond the sarcasm I displayed throughout this post, I wanted to highlight something: a lot of people, including me, are having a hard time finding their bearings in this new world. I feel like we can't properly define what a software engineer is in today's world. That's worrying, and I think big AI companies played a huge role in this destabilization. But I still believe this is a transition period. Things have to slow down at some point. That's a message of hope for people struggling to find their place.