Is Your IT Team Fighting Fires Instead of Building Products? Your Infrastructure Might Be the Problem


Have you ever thought about server optimization in your business? A sprawling server infrastructure often feels like a safety net: more machines, more reliability. In practice, the opposite is often true. According to the Uptime Institute, a single underutilized server can drain thousands of dollars per year once you factor in power, cooling, licenses, and maintenance. And those are just the direct expenses; the real cost is the time your IT team spends managing an overgrown infrastructure instead of developing your product.

In this article, Miron Jakubowski, Tech Lead at Fabres, shares his experience from a server environment optimization project for a healthcare client and shows how to approach infrastructure consolidation — from identifying the problem, through the conceptual phase, to safe production deployment. 

Is Your Infrastructure a Bottleneck for Your Business? 

A McKinsey study published in the Harvard Business Review reveals a surprising finding: companies lose an average of 33% of after-tax profit when they ship six months late, compared with just 3.5% when they overrun their development budget by 50%. Time to market has a greater impact on financial results than the cost of development itself.

What if your infrastructure is the very bottleneck that’s slowing down every deployment? 


 

According to the DORA 2021 report (Google DevOps Research and Assessment), elite teams deploy code 973 times more frequently than low performers, have 6,570 times shorter lead time from commit to production, and restore system functionality 6,570 times faster after an incident. These aren’t subtle differences — it’s a chasm that directly translates into business competitiveness. 

When Should You Consider Server Optimization?

Not every sprawling infrastructure needs an overhaul. However, there are warning signs that indicate your architecture has become overkill: 

Long deployment times 

If deploying a fix to production takes an hour or more, something’s wrong. In one project I worked on, deployment alone (not counting diagnosis and bug fixing) took around 60 minutes. For critical bugs, that’s an eternity — and as McKinsey’s research shows, every hour of delay has a real impact on business results. 

Manual logins to multiple machines

If updating the operating system or configuration requires SSH-ing into each server separately, that’s a sign you’re missing automation. With 12 servers, it’s not just time-consuming — it’s prone to human error. 
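The fix is to script the fan-out instead of logging in by hand. Below is a minimal Python sketch of that idea: one call runs the same command on every host in parallel. The hostnames and the `ssh` invocation are illustrative assumptions, not the client's actual setup, and the runner is injectable so the logic can be exercised without real servers.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_on_hosts(hosts, command, runner=None):
    """Run the same shell command on every host and collect exit codes.

    `runner` is injectable so this can be tested without real SSH; the
    default shells out to the standard `ssh` client.
    """
    if runner is None:
        def runner(host, cmd):
            return subprocess.run(
                ["ssh", host, cmd], capture_output=True, text=True
            ).returncode
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = {h: pool.submit(runner, h, command) for h in hosts}
    return {h: f.result() for h, f in futures.items()}
```

With 12 servers, one call like `run_on_hosts(all_hosts, "sudo apt-get update")` replaces 12 separate logins, and the loop never fat-fingers a hostname.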

Difficulty maintaining environment consistency

The more servers you have, the greater the risk that your test environment differs from production in ways that only reveal themselves after deployment. And then the midnight phone calls begin. 

Unjustified architectural complexity

Sometimes infrastructure grows organically, without a clear plan. It’s worth asking yourself: do we really need this many machines to run two frontend applications, one API, and a database? 

Where to Start: The Conceptual Phase 

Reducing servers isn’t something you can do on the fly. Before you touch any configuration, you need solid groundwork. Think of it like an architect drawing up blueprints before the construction crew shows up with excavators. 

Workshops with the team and stakeholders 


 Start with workshops where you map the current architecture and define requirements. Key questions include: How do servers communicate internally? Which elements must remain in a private network? How do you provide public access to applications without exposing sensitive components? What are the client’s security requirements? 

In the project I led, consultations with the Chief Security Officer and the client’s lead architect were crucial for developing a solution acceptable to all parties. Without this phase, it’s easy to end up with something that works technically but won’t pass security gates. 

Documentation through diagrams 

The output of your workshops should be detailed architecture diagrams — a visualization of what connects to what. This isn’t bureaucracy for its own sake. These diagrams become your map during implementation and help you quickly locate the source of problems when something goes wrong. And with infrastructure changes, something always goes wrong. 

Identifying what to keep 

Optimization doesn’t mean replacing everything. Identify the foundations that work well and should stay in place. In my case, it was the core cloud platform — we changed the architecture and number of servers, but didn’t migrate to a different provider. Sometimes the art lies in knowing what not to touch. 

Key Architectural Decisions Before Server Optimization

Every project is different, but there are proven patterns that help with infrastructure optimization. 


Server consolidation by environment 

 Instead of distributed machines (separate servers for each application in each environment), consider a simpler model: one server per environment, hosting all applications. In the project I led, we went from 12-13 servers down to 3 — one each for dev, test, and production. A seemingly small change, but the effects were immediate. 

This requires appropriate sizing, of course — a server hosting all applications needs higher specs. That’s why cost savings aren’t proportional to the reduction in machine count (in my case it was 10-20%, not 75%), but the management benefits are enormous. Suddenly, instead of monitoring twelve machines, you have three. That changes everything. 
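The arithmetic behind that non-proportional saving is easy to sketch. The prices below are made-up round numbers for illustration only; real figures depend entirely on your provider and sizing.

```python
# Hypothetical monthly prices, for illustration only.
small_server = 120.0   # one modest distributed machine
large_server = 400.0   # one beefier consolidated machine

before = 12 * small_server   # 12 distributed machines
after = 3 * large_server     # one larger machine per environment

savings = 1 - after / before  # fraction saved per month
```

With these numbers the bill drops by roughly 17%, squarely in the 10-20% range: a 75% cut in machine count, but each survivor costs more. The management win, three dashboards instead of twelve, doesn't show up on the invoice at all.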

Redesigning the network layer 

Changing the number of servers forces you to rethink your network architecture. Key elements include: load balancers for traffic distribution, private networks for application servers and databases (not directly accessible from the internet), and clearly defined entry points for external traffic. It’s a bit like redesigning a city’s road system — you need to ensure traffic flows smoothly while keeping unauthorized visitors out of restricted areas. 
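One way to sanity-check such a design before building it is to model the topology as data and assert the invariant you care about: only the load balancer owns a public address. The component names below are illustrative, not the client's actual inventory.

```python
# Toy model: only the load balancer sits in the public subnet;
# app servers and the database stay on the private network.
topology = {
    "load-balancer": {"public": True,  "forwards_to": ["app-server"]},
    "app-server":    {"public": False, "forwards_to": ["database"]},
    "database":      {"public": False, "forwards_to": []},
}

def directly_exposed(topology):
    """Components holding a public IP of their own."""
    return {name for name, spec in topology.items() if spec["public"]}

def reachable_from(topology, entry):
    """Everything traffic can reach starting at the given entry point."""
    seen, stack = set(), [entry]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(topology[node]["forwards_to"])
    return seen
```

The database is still reachable through the load balancer path, as it must be, but `directly_exposed` proves nothing sensitive answers the internet directly.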

Web Application Firewall (WAF) 

While optimizing, it’s worth considering adding a security layer that may not have existed before. A WAF is essentially an intelligent filter that analyzes incoming traffic and automatically blocks common attack patterns — vulnerability scanning, exploitation attempts through known weaknesses in popular frameworks. 

This is especially important if your application is publicly exposed. Attackers routinely scan the internet for vulnerable systems — a WAF cuts off this traffic before it reaches your actual application. One additional component, but incomparably greater peace of mind. 
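To make the idea concrete, here is a toy filter in Python. A real WAF ships thousands of managed rules (the OWASP Core Rule Set, for example) and inspects headers and bodies, not just the URL, so treat this purely as a sketch of the pattern-matching principle; the signatures below are illustrative.

```python
import re

# A handful of signatures for common probe traffic (illustrative only).
BLOCK_PATTERNS = [
    re.compile(r"\.\./"),                       # path traversal probe
    re.compile(r"(?i)\bunion\b.+\bselect\b"),   # naive SQL injection probe
    re.compile(r"(?i)<script"),                 # reflected XSS probe
]

def waf_allows(path_and_query: str) -> bool:
    """Return False for requests matching a known attack signature."""
    return not any(p.search(path_and_query) for p in BLOCK_PATTERNS)
```

Legitimate traffic passes untouched; the background noise of automated scanners gets dropped before your application ever sees it.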

Deployment process optimization 

While making infrastructure changes, it’s worth rethinking your entire deployment pipeline. A common problem: code rebuilds from scratch for every deployment to every environment. With components that take a long time to compile, this significantly extends deployment time. 

A better approach: build the code once (on the development environment), then deploy the ready artifact — a compiled, ready-to-run application package — to subsequent environments. This eliminates the risk of the same code behaving differently on test and production due to differences in the build process. Fewer surprises, more predictability. 
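The guarantee you want from build-once-deploy-many is that the bytes reaching production are the bytes you tested. A minimal way to enforce that, sketched in Python with a hypothetical `promote` helper, is to record a checksum at build time and verify it at every promotion:

```python
import hashlib

def checksum(artifact: bytes) -> str:
    """Digest recorded once, at build time."""
    return hashlib.sha256(artifact).hexdigest()

def promote(artifact: bytes, expected: str, environment: str) -> str:
    """Deploy an already-built artifact, refusing anything that no
    longer matches the digest recorded at build time."""
    if checksum(artifact) != expected:
        raise ValueError(f"artifact drift detected before {environment}")
    return f"deployed {expected[:8]} to {environment}"
```

Test and production now receive the identical artifact by construction; any rebuild or tampering along the way fails loudly instead of behaving differently at 2 AM.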

Safe Deployment: Staged Approach and Risk Minimization 

 You have your architecture, you have your diagrams, you have the green light from stakeholders. Now comes the hardest part — implementing changes without exposing users to downtime. 

Choosing the right timing for server optimization  

Schedule the work for a lower-traffic period. For the Scandinavian client project, we chose July — peak vacation season in that part of Europe, which meant lower system load and greater tolerance for potential issues. Timing is half the battle. 

Staged deployment: dev → test → production 

Never deploy infrastructure changes directly to production. The sequence should always be the same: 

Development environment — here you can experiment and make mistakes. It’s your laboratory where you test the new architecture without pressure. If something breaks, no one outside the team will notice. 

Test environment — now considering the team’s needs. Communicate planned work so you don’t block testing of new features. Some downtime is unavoidable, but it can be coordinated with the development schedule. 

Production — the highest level of caution. Here, every minute of downtime has a real impact on users and the business. 
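That ordering is simple enough to enforce in code rather than by convention. A small sketch (the stage names mirror the sequence above; the gate itself is an illustrative helper, not part of any particular CI tool):

```python
STAGES = ["dev", "test", "production"]

def next_allowed_stage(completed):
    """Given the stages a change has already passed, return the only
    stage it may be deployed to next (dev -> test -> production)."""
    for stage in STAGES:
        if stage not in completed:
            return stage
    return None  # already everywhere
```

Wire a check like this into your pipeline and "just this once, straight to prod" stops being physically possible.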

Parallel infrastructure strategy 

For production changes, an approach I call “the bridge” works well: for a period of time, you maintain two infrastructures in parallel. The old one handles traffic, the new one is ready to take over. The switch happens through DNS record changes — an elegant solution that allows you to quickly revert to the previous configuration if problems arise. You have a safety net. 

In my project, production downtime was about 5-10 minutes — the time needed for DNS propagation and verification that the new infrastructure was working correctly. Users barely noticed. 
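The verification step of a DNS cutover is easy to automate: poll until the record points at the new infrastructure, then run your health checks. A hedged sketch, with the resolver and sleep injectable so the wait logic is testable; in production `resolve` might wrap `socket.gethostbyname`, and the hostname, IPs, and timings below are illustrative.

```python
import time

def wait_for_cutover(resolve, hostname, new_ip,
                     timeout=600, interval=5, sleep=time.sleep):
    """Poll DNS until `hostname` resolves to the new infrastructure's IP,
    returning the seconds waited, or raise if the timeout expires."""
    waited = 0
    while waited <= timeout:
        if resolve(hostname) == new_ip:
            return waited
        sleep(interval)
        waited += interval
    raise TimeoutError(f"{hostname} still not pointing at {new_ip}")
```

While this loop runs, the old infrastructure keeps serving traffic, which is exactly what makes the bridge safe: if the new side misbehaves, you point the record back and nobody notices.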

Preparing for the unexpected 

Even the best-tested configuration can behave differently in production. Have a Plan B and people on standby ready to react quickly. In my case, one of the frontend applications had trouble starting on the new infrastructure — but because we were prepared, we fixed it within minutes. The real magic is making everything look like it went smoothly — even if there was some improvisation behind the scenes. 

 

Measurable Results: Numbers That Speak for Themselves 

After completing the project, it’s worth measuring the effects. In my case, the key metrics looked like this: 

Deployment time dropped from about 60 minutes to 2-5 minutes. This isn’t a theoretical value — a week after implementation, we received a bug report at 10:00 AM, and by 10:30 AM the fix was in production. Including diagnosis and writing the solution. With the old infrastructure, the deployment phase alone would have taken as long as this entire process. The client was stunned — in a good way. 

Infrastructure costs dropped by 10-20%. This isn’t proportional to the server reduction (from 12 to 3) because the remaining machines have higher specs — but every IT budget saving is money that can go toward product development. 

Maintenance automation became possible. Servers now update automatically during scheduled windows — nights, weekends. The team stopped thinking about infrastructure in terms of daily maintenance and can focus on what really matters: delivering value to users. The infrastructure disappeared from view — and that’s the best sign it’s working as it should. 
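Automating "nights and weekends" starts with a guard your update job can call before touching anything. The window below (weekends, plus weeknights 01:00-05:00) is an illustrative assumption, not the client's actual schedule:

```python
from datetime import datetime

def in_maintenance_window(now: datetime) -> bool:
    """True during low-traffic periods: any time on weekends, or
    weeknights between 01:00 and 05:00 (illustrative window)."""
    if now.weekday() >= 5:          # Saturday or Sunday
        return True
    return 1 <= now.hour < 5        # weeknight window
```

A scheduled job that begins with `if not in_maintenance_window(datetime.now()): exit()` can run every hour and still only ever act when users are asleep.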

Working with Less Popular Cloud Platforms 

It’s worth mentioning an additional challenge you might encounter: working with less popular cloud platforms. In my project, the infrastructure was hosted on Open Cloud Telecom — a solution with features similar to AWS, but with a much smaller documentation base and fewer examples. 

With popular platforms (AWS, Azure, GCP), most problems can be solved by searching Stack Overflow or official documentation. With less common solutions, you often need to experiment and learn through trial and error. That’s why the testing phase on the development environment is even more important — that’s where you’ll discover platform-specific behaviors before they become production fires. Better to spend an extra day on dev than an hour firefighting in production. 

Summary 

Server optimization is a project that requires careful planning and staged implementation. The key elements of success are: 

Solid conceptual phase — workshops, diagrams, stakeholder consultations. Without this foundation, it’s easy to end up with a solution that works technically but doesn’t meet business expectations. 

Thoughtful architecture — server consolidation, network layer redesign, deployment process optimization. Simpler solutions are usually better. 

Safe deployment — staged approach (dev → test → prod), parallel infrastructure strategy, preparation for the unexpected. Plan B isn’t paranoia, it’s professionalism. 

The results can be significant: dramatically faster deployments, lower costs, easier maintenance. And most importantly — infrastructure stops being an obstacle to product development and becomes what it should be: an invisible foundation that simply works. And lets your team finally focus on what they do best.  

If you have questions about infrastructure optimization for your project, contact Bartek or Wojtek, our Strategic Partnership Managers, to walk through the design process with a team that has delivered some of the most demanding implementations in Europe.  

What else? Read our case studies and discover how we work. 

 

 

 
