🔥 Can Your IT Team Survive Without an AI Agent in 2026?

AI agents are taking over operational workflows in telco and banking. Not as an experiment, but as a real advantage.

It’s 1am. Sometimes 3am — when fewer users are connected and the impact risk is lower.

First, validation: dashboards show the service is running normally. All green. You can proceed.

The first script starts. You monitor it. Review the output. Log it in the ticket. Manual work in between. Another script. More monitoring.

When execution finishes, the change isn’t closed. Revalidation comes — another team confirming everything is still stable. That takes time. Always more than estimated.

Approved. Draft the stakeholder summary. Close the CR.

It’s 5am. Or 5:30.


But if something went off-script — a behavior different from staging, a step that didn’t respond the same way — the change most likely gets rolled back. And rollback isn’t free: it doubles the impact, consumes the remaining time, and puts the team against the clock.

By 6am the service has to be stable. Just like before. No excuses.

If things went badly: post-mortem report. Meetings. Explanations.

This isn’t a failure story. It’s how things are done today in most telco teams — with good engineers, defined processes, and still all the weight of coordination falling on people awake at 3am.

And honestly, I don’t use AI for this today either. But every time I finish one of those changes, the question is hard to ignore: how much of this could stop depending on me?


1️⃣ The work nobody puts on the roadmap

The problem isn’t that telco teams aren’t good. It’s that a huge part of their energy goes into coordination that nobody sees.

Change requests that need five approvals and three emails before starting. Scripts that someone runs manually in production because “that’s how it’s always been done.” Revalidations that take twice the estimated time because the other team has its own queue. Stakeholder summaries drafted at 5am with the adrenaline still running.

All of that exists. And almost none of it appears in any backlog — because technically it’s “already being handled.”

Operational debt accumulates just like technical debt. Only nobody measures it.


2️⃣ It’s not just telco — this is happening everywhere

When I tell this story to colleagues in other sectors, the reaction is always the same: “that happens to us too.”

And there’s something important to clarify before going further: the problem isn’t just the big platform upgrade. The problem is any change in production, and that includes modifying a configuration parameter, adjusting a timeout, changing an environment variable, activating a feature flag.

A one-line change. The same process around it.

Planning, impact analysis, approvals, coordination with other teams, maintenance window, execution, validation, revalidation, stakeholder communication, CR closure. The execution itself can take two minutes. Everything else can take days — and the night of the change, hours.

That’s what makes the process heavy. Not the technical complexity of the change, but the organizational weight surrounding it.

In banking it’s exactly the same. An adjustment to the parameters of a payment service, a configuration change in the risk engine, a certificate update: any of those requires change control, prior validations, a revalidation team, and a compliance summary. If something goes wrong before branches open, the impact is immediate and visible.

In critical SaaS platforms — billing, provisioning, internal tools — the challenge isn’t technical execution. It’s guaranteeing that the change was coordinated, validated, executed in the right order, and that someone was watching the dashboards at the right moment. Today that depends on people. An agent can take over that monitoring and escalate only when there’s a real anomaly.
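
Just to make “escalate only when there’s a real anomaly” concrete: a back-of-the-napkin sketch in Python, assuming some metrics source feeds the agent a recent baseline for the signal it watches. Every name and threshold here is hypothetical; the point is the shape of the decision, not a real implementation.

```python
# Hypothetical sketch: escalate only when a dashboard metric deviates
# clearly from its recent baseline, instead of paging a human for every blip.
from statistics import mean, stdev

def is_real_anomaly(history: list[float], current: float, sigmas: float = 3.0) -> bool:
    """Flag the current reading only if it sits well outside the recent baseline."""
    if len(history) < 10:          # too little data to judge: stay conservative, escalate
        return True
    baseline, spread = mean(history), stdev(history)
    return abs(current - baseline) > sigmas * max(spread, 1e-9)

# Invented example: error rate held around 0.2% all night, then jumped to 2.1%.
history = [0.0020, 0.0021, 0.0019, 0.0020, 0.0022,
           0.0018, 0.0020, 0.0021, 0.0020, 0.0019]
current = 0.021

if is_real_anomaly(history, current):
    print("escalate: metric outside baseline")   # this is where the on-call gets paged
else:
    print("log and keep watching")               # no human needed
```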

The pattern cuts across sectors: different industry, different stack, same weight of coordination. The complexity isn’t in the change itself; it’s in everything that needs to happen to make that change safe, traceable, and communicated.


3️⃣ What an agent could change in that flow

I won’t go into implementation details — that depends on each organization, each stack, each risk level.

But it’s worth showing where the clearest opportunities are:

Before the change: validate that the CR complies with the policy, cross-check with the change calendar, confirm staging passed tests. Today a person does that by reviewing three different systems. An agent can do that chain in seconds and only escalate if there’s a real exception.
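
Just to make that chain concrete without prescribing an implementation, a minimal sketch in Python. The three check functions are hypothetical stand-ins for whatever systems actually hold the policy, the change calendar, and the staging results in your organization:

```python
# Hypothetical sketch of the pre-change gate: run every check, collect the
# failures, and involve a human only when there is a real exception.

def cr_complies_with_policy(cr_id: str) -> bool:
    return True   # stand-in for a query against the change-policy system

def calendar_is_clear(window: str) -> bool:
    return True   # stand-in for a lookup in the shared change calendar

def staging_tests_passed(cr_id: str) -> bool:
    return True   # stand-in for a check of the staging test results

def pre_change_gate(cr_id: str, window: str) -> list[str]:
    """Return the list of exceptions; an empty list means the change can proceed."""
    checks = {
        "CR violates change policy": cr_complies_with_policy(cr_id),
        "window conflicts with change calendar": calendar_is_clear(window),
        "staging tests did not pass": staging_tests_passed(cr_id),
    }
    return [reason for reason, ok in checks.items() if not ok]

exceptions = pre_change_gate("CR-1234", "2026-03-01 01:00-05:00")
if exceptions:
    print("escalate to engineer:", exceptions)   # only real exceptions reach a human
else:
    print("all pre-checks green, the change can start")
```

The design point is the return value of the gate: the engineer sees only the exceptions, never the three systems.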

During the change: monitor the output of each step, compare against expected behavior, automatically log in the ticket. The engineer is still present — but with eyes on what matters, not copying logs by hand.
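
What that supervision could look like, again with invented names: the command and the expected pattern would come from the runbook, and the ticket log would post to a real ticketing system instead of a list. A sketch, not a recipe:

```python
# Hypothetical sketch: run one change step, compare its output against the
# expected pattern from the runbook, and append everything to the ticket log.
import re
import subprocess

def run_step(name: str, command: list[str], expected: str, ticket_log: list[str]) -> bool:
    """Execute one step, log it, and report whether the output matched expectations."""
    result = subprocess.run(command, capture_output=True, text=True)
    ok = re.search(expected, result.stdout) is not None
    ticket_log.append(f"{name}: exit={result.returncode} matched={ok}\n{result.stdout}")
    return ok

ticket_log: list[str] = []   # in reality: an API call to the ticketing system

# The echo command stands in for a real runbook step.
if run_step("check service status", ["echo", "service: active"],
            r"service: active", ticket_log):
    print("step OK, continuing")
else:
    print("output deviates from the runbook, pausing for engineer review")
```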

After the change: generate the draft stakeholder summary, document the executed steps, update the CR status. Work that today depends on someone having the energy to write it at 5am.
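
The summary is the most mechanical part of all. A minimal sketch, assuming the agent kept a structured log of each step during the night (the field names are invented for illustration):

```python
# Hypothetical sketch: turn the structured step log into a draft summary,
# so nobody has to reconstruct the night from memory at 5am.
def draft_summary(cr_id: str, steps: list[dict]) -> str:
    failed = [s for s in steps if not s["ok"]]
    lines = [f"Change {cr_id} executed: {len(steps)} steps, {len(failed)} deviations."]
    for s in steps:
        lines.append(f"- {s['time']} {s['name']}: {'OK' if s['ok'] else 'DEVIATION'}")
    lines.append("Service validated and stable." if not failed
                 else "The deviations above require follow-up.")
    return "\n".join(lines)

steps = [
    {"time": "01:12", "name": "pre-checks", "ok": True},
    {"time": "01:40", "name": "apply config change", "ok": True},
    {"time": "02:05", "name": "revalidation", "ok": True},
]
print(draft_summary("CR-1234", steps))
```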

If something goes wrong: the full context is already in the log. The rollback has history. The post-mortem starts with facts, not memory.

In none of these cases does the agent replace the engineer. What it replaces is the burden of being simultaneously the one who executes, monitors, documents, and communicates — all at once, at 4am.


4️⃣ The roles that are changing — and the ones that won’t survive the change

There’s a conversation that IT teams are avoiding.

It’s not “are we going to be replaced?” That question is easy to dismiss. The hard question is different: what happens to professionals whose value lies in executing the process, not designing it?

The engineer who knows how to run scripts, follow the runbook, and document the change: that profile is going to have less and less room. Not because they’re bad, but because that’s exactly what an agent can do more consistently, without getting tired, without the mistakes that creep in on a night shift.

What can’t be delegated as easily is judgment. Knowing that this specific change, even though it seems simple, touches a component that had issues last month. Knowing that the proposed maintenance window coincides with an important commercial event. Knowing when to stop even though the process says you can continue.

That knowledge isn’t in any runbook. It’s in years of operating real systems.

The problem is that not everyone will make that transition. Some teams will keep doing the same thing they do today — until the gap with those who already automated becomes too visible to ignore. And by then, catching up will be harder than starting now.

I’m not saying this as criticism. I’m saying it because I see it in the world I work in. The gap between teams that are evolving how they operate and those waiting for someone to tell them what to do already exists. It’s still small. It won’t be for much longer.


Closing

I’m still executing changes at 3am. Still coordinating with the revalidation team. Still drafting stakeholder summaries when it’s almost dawn.

I don’t have an agent doing that today. And I’m not going to say the solution is simple or within reach of any team tomorrow.

But what I do know, from inside this world, is that the conversation has already changed. Before, the question was whether AI could do something useful in operations. Now the question is when your team will stop doing manually what others have already automated.

That doesn’t generate anxiety in me. It generates urgency to understand the problem well before someone else solves it for me.

If you work in IT operations — in telco, banking, any industry where a poorly coordinated change has real consequences — I think it’s worth asking yourself that question now, calmly, before the pressure forces you to answer it in a rush.


✍️ Claudio from ViaMind

“Dare to imagine, create and transform.”


Tags: AI agents, IT operations, telco automation, change requests, platform upgrades, banking, IT roles 2026

