Some weeks the industry simply will not let you rest.
This was one of them.
Between mass layoffs, accidentally leaked source code, satellites redefining global connectivity, and a quantum computer crossing a historic threshold — the first week of April made it clear the pace is not slowing.
Here are the five things that stood out most to me, and why I think they matter for those of us working in technology, telecommunications, and AI.
1️⃣ Oracle cuts 30,000 roles to fund its $50B AI bet
The news arrived by email, without warning, on a Tuesday morning.
Oracle began the largest round of layoffs in its history — between 20,000 and 30,000 positions eliminated. Entire teams in SaaS, Cloud, sales, and operations cut by up to 30%.
The reason was not poor performance. It was debt.
The company is building data centers at a scale that is hard to take in: $38B in Texas and Wisconsin, $18B in New Mexico, $16B newly financed in Michigan — all to support OpenAI workloads. To fund this, Oracle said it wants to raise another $50B through debt and equity.
The stock is down ~25% year to date. Shareholders are nervous. And Larry Ellison is still pushing ahead.
For teams working on telco and banking infrastructure, this raises questions worth asking: what happens to contracts and dependencies with providers under liquidity stress? Consolidation is accelerating, and organizations that rely on a single cloud vendor will feel that pressure.
The bet might pay off. But the cost of being wrong at this scale is enormous.
Putting $50B in before you see returns is not crazy in this market — it is the price of entry. The question is who can wait it out.
Sources: CNBC — Oracle layoffs and AI spending, Reuters — job cuts, Business Insider — employee email
2️⃣ AI models are turning into a commodity — and that changes the rules
For two years the conversation was the same: GPT or Claude? Gemini or Llama? Which benchmark wins?
That conversation is fading.
IBM put it plainly this week: in 2026, the competition is not between models, it is between systems. The model is no longer the differentiator. What matters is how it integrates — with your own data, your business tools, your real operational flows.
In day-to-day AI work — automating deployments, helping design architecture, generating technical documentation — what matters is not whether the model scores 0.3 points higher on a benchmark. It is whether it plugs into existing pipelines, understands network context, and can follow runbook steps without breaking things.
Value has shifted to those who can orchestrate complex systems: combining models, tools, APIs, and workflows into something that runs in production.
For engineering teams in telco and banking, this is both opportunity and warning: those with clean data, tidy pipelines, and well-documented architectures will move fast. Those still untangling legacy chaos will fall behind — not for lack of model access, but for lack of something to integrate them with.
AI is already infrastructure. The gap is no longer who uses it — it is who knows what to use it for.
Sources: IBM Think, IBM Newsroom — enterprise AI
3️⃣ Satellites + 5G: connectivity without blind spots is no longer science fiction
This is one of the most interesting trends — and among the least discussed outside technical circles.
NTN (non-terrestrial network): 5G extended with non-terrestrial segments (for example satellite) so the same standards family can orchestrate terrestrial and space-based coverage.
Hybrid NTN-5G networks, combining terrestrial infrastructure with LEO (low Earth orbit) satellite constellations, are moving from experiment to market reality. Growth is projected at 34% a year through 2034. Everyone knows Starlink, but the bigger story is how these networks will integrate with existing terrestrial networks.
A network that blends terrestrial, LEO, and GEO (geostationary) layers cannot be run with playbooks built for 4G or even 5G Standalone (SA). Orchestration complexity explodes: handoffs between layers, latency management, resilience when any segment fails. That takes advanced automation — not just more engineers.
The direct-to-cell model — where the phone connects straight to the satellite without specialized hardware — is already rolling out with some vendors. It is not an incremental improvement: it shifts the business model for traditional operators.
Teams already building network automation and AI-driven operations will be able to run these networks. Those that are not will try to hire their way out at an unsustainable pace.
Without networks, chips, and data centers, the AI revolution simply does not exist. And networks are about to get a lot more complex.
Sources: Deloitte — next-gen satellite internet (2026), Deloitte — 2026 telecommunications industry outlook
4️⃣ Quantum computing: the threshold almost no one is taking seriously yet
IBM made a claim this week that struck me as important — and it went relatively unnoticed in the noise.
2026 will be the first year a quantum computer can solve a problem better than any classical method available. Not as a lab experiment. As a real capability.
For people working network infrastructure and security, this is not abstract.
The algorithms protecting critical communications today — RSA, ECC, what we use in VPNs, authentication, financial transactions — are mathematically vulnerable to quantum attacks at scale. Not today. But the horizon has a date.
Cryptographers also warn about harvest now, decrypt later: an adversary can store ciphertext today and break it years later once large-scale quantum hardware exists. That is why moving to PQC is not only “for when quantum arrives” — it reduces the long-term value of traffic already on the wire.
The standard being built is called Post-Quantum Cryptography (PQC). NIST published the first algorithms in 2024. What comes next is implementation — and in telco networks, that is a years-long project, not months.
The question that should show up in architecture conversations now is: when do we start our PQC assessment? Critical networks — banking, government, mission-critical — cannot wait until the quantum horizon is on top of us to plan.
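Step one of that assessment is usually a crypto inventory: listing where each algorithm is used and flagging what is quantum-vulnerable. A minimal sketch, with a hypothetical inventory invented for this example; real assessments pull this data from TLS scans, HSM configurations, and certificate stores:

```python
# Sketch of a crypto inventory for a PQC assessment.
# The systems and algorithm assignments below are hypothetical examples.

QUANTUM_VULNERABLE = {"RSA", "ECDSA", "ECDH", "ECDHE", "DH", "DHE"}
PQC_READY = {"ML-KEM", "ML-DSA", "SLH-DSA"}  # NIST FIPS 203 / 204 / 205

def classify(algorithm: str) -> str:
    if algorithm in PQC_READY:
        return "pqc-ready"
    if algorithm in QUANTUM_VULNERABLE:
        return "quantum-vulnerable"
    return "review-manually"

inventory = {
    "vpn-gateway": "ECDHE",   # key exchange for site-to-site tunnels
    "core-signing": "RSA",    # firmware and transaction signing
    "pilot-tls": "ML-KEM",    # post-quantum key encapsulation pilot
}

report = {system: classify(alg) for system, alg in inventory.items()}
```

The output of an exercise like this is a migration priority list, and given harvest-now-decrypt-later, long-lived confidential traffic belongs at the top of it.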
“We have moved beyond theory. Today we are using quantum computers for real industry use cases.” — Jamie Garcia, IBM
Sources: IBM Quantum blog, NIST — Post-Quantum Cryptography
5️⃣ Anthropic leaked its own source code — and the response was worse than the mistake
This story has it all: a simple technical error, a reaction that backfired, unreleased capabilities exposed to the world, and an operational security lesson that applies far beyond AI startups.
On March 31, a security researcher named Chaofan Shou found something while updating Claude Code — Anthropic’s coding tool, with more than $2.5B in annualized revenue. The official npm package included a debug file pointing to a ZIP in the company’s cloud storage.
That ZIP held the full source: ~500,000 lines, 1,900 TypeScript files. Internal architecture, unreleased features, roadmap, model performance data.
Within hours: more than 41,500 forks on GitHub. Impossible to put back in the bottle.
The root cause was painfully simple: a debug file shipped by mistake in the npm build. A misconfigured .npmignore. One line.
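The standard mitigation here, stated as general npm practice rather than any knowledge of Anthropic's actual setup, is to replace the `.npmignore` denylist with an explicit `files` allowlist in `package.json`: anything not listed, stray debug artifacts included, simply never ships.

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "files": [
    "dist/",
    "README.md"
  ]
}
```

Pair it with `npm pack --dry-run` in CI, which prints the exact file list that would be published, so a surprise inclusion fails review instead of reaching the registry.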
What came next was worse.
Anthropic issued DMCA takedown notices to get GitHub to remove repositories containing the leaked code. The problem: the sweep hit more than 8,100 repositories — including legitimate forks of Anthropic’s own public repo, blocking developers who had nothing to do with the leak. The backlash was immediate. Anthropic ended up walking back most of the takedowns.
What did the code reveal?
Among other things: a “persistent assistant” that runs in the background when the user is idle, the ability to review past sessions and carry learnings across conversations, and something that caught the security community’s attention — a feature called “Anti-Distillation Mode,” designed to inject fake tools into the API when it detects a competitor might be recording traffic to train their own models.
Days earlier, Fortune had reported that Anthropic also accidentally exposed ~3,000 internal files, including a draft about its next model — known as “Mythos” or “Capybara” — which the document said would pose unprecedented cybersecurity risks.
Two security incidents in less than a week. While the company prepares its IPO.
The lesson is not only for Anthropic. The release pipeline is as critical as the code itself. Speed pressure at AI companies is so high that process rigor slips — and the cost of that slip can be huge. It applies to a $350B startup and to any ops team shipping a production change.
The leak does not sink Anthropic. But it gives every competitor a free engineering education — and reminds us all why review processes exist.
Sources: The Register — Claude Code source leak, TechCrunch — incident overview, TechCrunch — GitHub takedowns, Fortune — Mythos and cybersecurity risk, Fortune — second leak and source code
🧭 What I am taking away from this week
Five stories. One pattern.
The industry is in maximum-bet mode — and at that pace, mistakes scale with it, strategic and operational alike.
Organizations that move fast without losing rigor are the ones with clear processes, architectures ready for the complexity ahead, and teams with judgment about when to accelerate and when to brake.
Speed without process is not agility. It is risk stacking up until it blows.
Which of these five threads hits closest to your role right now — infrastructure, AI integration, networks, cryptography, or release discipline? If you have a sixth angle, I would like to read it in the comments.
✍️ Claudio from ViaMind
“Dare to imagine, create, and transform.”
Also available in Spanish: Tendencias IA & Tecnología – Semana del 6 Abr 2026.