
Everything Sucks – Managing IT Risks: Strategies for IT Professionals

As someone who has worked in the IT industry for many years, I have realized that technology is far from perfect. In fact, I would go so far as to say that everything sucks when it comes to technology.

IT professionals constantly deal with a never-ending barrage of issues, from unexpected hardware failures to software bugs and infrastructure breakdowns. It is Murphy’s Law all the way.

And while we often joke about the shortcomings of operating systems like Windows, even the most reliable and robust systems like Linux are not immune to bugs and glitches. The sheer complexity of software development means that dozens of bugs are likely lurking in every thousand lines of code, making it impossible to catch them all.

It is everything

But it’s more than just problematic software. Even the best hardware can fail unexpectedly, despite companies spending large sums on the latest and greatest equipment. Mean Time Between Failures (MTBF) might offer some guidance, but it’s often a source of delusion rather than certainty.

And when it comes to infrastructure, the fragility of the Internet can be mind-boggling. For example, one broken fibre cable in Egypt caused widespread disruption to millions across Africa, the Middle East, and South Asia. Given the countless potential points of failure and the constant threat of cybercriminals, it’s a miracle that the Internet works at all.

And let’s not even go into all the problems around Border Gateway Protocol (BGP), which is a fundamental protocol that helps keep the Internet running. It is based on trust rather than security. This means that every network operator must trust the information provided by others, even if they have no direct relationship with them. What could possibly go wrong, right?

But not all is lost

Despite all these challenges, there are ways to mitigate the risks and prepare for the worst.

It’s important to perform risk analyses and prioritize accordingly. While protecting against every potential threat is impossible, it’s crucial to focus on the most significant risks and allocate resources where they matter most.

Performing risk analysis is a critical step for any IT professional in preparing for the worst. It involves identifying potential risks and evaluating the likelihood of those risks occurring, as well as the potential impact they could have. By conducting a risk analysis, IT professionals can better understand where their systems and infrastructure are vulnerable and prioritize resources accordingly.

Risk Matrix

One common risk analysis method uses a risk matrix, which assigns likelihood and impact scores to various risks to determine their overall risk level. Once the risks have been identified and evaluated, IT professionals can develop strategies to mitigate them and prepare for the worst.
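A risk matrix like this can be sketched in a few lines of code. The score bands, category names, and example risks below are illustrative choices for the sketch, not a standard:

```python
# Minimal risk-matrix sketch: likelihood and impact each get a score,
# and their product determines the overall risk level.
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
IMPACT = {"minor": 1, "moderate": 2, "severe": 3}

def risk_level(likelihood: str, impact: str) -> str:
    """Combine likelihood and impact scores into an overall risk level."""
    score = LIKELIHOOD[likelihood] * IMPACT[impact]
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# Example register of identified risks: (description, likelihood, impact).
risks = [
    ("primary database failure", "possible", "severe"),
    ("office printer jam", "likely", "minor"),
]
for name, likelihood, impact in risks:
    print(f"{name}: {risk_level(likelihood, impact)}")
```

Sorting such a register by level is then enough to decide where mitigation resources should go first.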


For example, a company that relies heavily on a particular system might identify the failure of that system as a significant risk. It could then develop a backup plan, such as redundant systems or backup servers, to minimize the impact of a potential failure.

It is a continuous process

It’s important to note that risk analysis is an ongoing process. Risks can change over time, and new ones can emerge, so it’s essential to regularly review and update risk analyses to ensure that IT professionals are always prepared for the worst.

IT professionals must acknowledge technology’s flaws and take action to prepare for potential risks. By performing risk analyses and prioritizing resources, we can develop effective strategies to minimize the impact of unexpected challenges and ensure critical systems remain operational. Let’s make risk analysis and mitigation strategies a priority in our work and ensure technology works for us.

Shift Left

Note: This article was originally written for my blog in Portuguese back in 2021.

Shift Left is a practice in software development where the aim is to find defects as early in the process as possible. A study from NIST shows that the cost of finding and fixing defects increases exponentially the later they are found in the development cycle. Therefore, the ideal scenario is to find defects as early as possible, ideally in the design phase.

As a DevOps professional, a large part of my work has been focused on code quality in our business unit, which primarily develops code for embedded systems, FPGAs, industrial automation, and industrial robot controllers. Given the nature of our products, the cost of developing high-quality code is immense, and each release cycle is exceptionally long.

To solve these problems, our team has been working to implement a Shift Left approach by following these steps:

Writing Unit Tests Concurrently with Code

Writing unit tests concurrently with the code is the most critical part of Shift Left. Developers should not wait for the next phase, testing, to see if there are any obvious bugs in the code. Instead, most testing should be done in the implementation phase through unit tests that must be run constantly. This also helps ease the test team’s workload and lets them focus on more important things than testing the basics.
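As a minimal illustration of tests written alongside the code, here is a sketch using Python’s standard-library unittest module (the function and its behaviour are invented for the example):

```python
import unittest

def parse_version(text: str) -> tuple:
    """Parse a 'major.minor.patch' string into a tuple of ints."""
    parts = text.strip().split(".")
    if len(parts) != 3:
        raise ValueError(f"expected 'x.y.z', got {text!r}")
    return tuple(int(p) for p in parts)

# The tests live next to the function and run on every change,
# e.g. with: python -m unittest
class ParseVersionTest(unittest.TestCase):
    def test_parses_release(self):
        self.assertEqual(parse_version("2.10.3"), (2, 10, 3))

    def test_rejects_short_version(self):
        with self.assertRaises(ValueError):
            parse_version("2.10")
```

Because the tests exist from day one, an obvious regression is caught on the developer’s machine rather than weeks later by the test team.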

Code Review

The next step is code review. Instead of merging directly into the trunk, developers should create a branch, write the code (including tests), and send it for review. Code review allows for the early detection of bugs and prevents issues from propagating to later stages of development, saving time and resources. It also lets developers learn from one another and share best practices, resulting in better code quality and more effective teamwork. See Google’s best practices for code review.
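The branch-and-review flow can be sketched with plain Git commands. Repository, file, and branch names here are illustrative, and the review itself happens on whatever hosting platform the team uses:

```shell
set -e
# Set up a throwaway repository so the sketch is self-contained.
repo=$(mktemp -d)
cd "$repo"
git init -q -b main .
git config user.email dev@example.com
git config user.name Dev
echo "print('hello')" > feature.py
git add feature.py
git commit -q -m "Initial commit"

# Work happens on a short-lived branch, never directly on the trunk:
git switch -c fix/issue-123
echo "def add(a, b): return a + b" >> feature.py
git add feature.py
git commit -q -m "Add helper plus tests"

# Push the branch and open a merge/pull request instead of merging locally:
#   git push -u origin fix/issue-123
git log --oneline main..HEAD   # commits awaiting review
```

Only once a reviewer approves does the branch get merged into the trunk, which keeps unreviewed code off main.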


Human beings don’t like having their work criticized, so pre-commit hooks can take some of the human element out of that criticism. With pre-commit, a set of checks can be configured to run against the code about to be committed: linters, tools that check for leaked secrets, style formatters, and others. Pre-commit blocks the commit if the code fails these checks on the developer’s machine, ensuring that the basics are covered before the code goes into review.
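A configuration along these lines might look like the following .pre-commit-config.yaml sketch. The hook repositories and ids are real, but the rev pins below are examples and should be updated to current releases:

```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0          # example pin; update to the latest release
    hooks:
      - id: trailing-whitespace
      - id: check-merge-conflict
  - repo: https://github.com/psf/black
    rev: 24.3.0          # Python code formatter
    hooks:
      - id: black
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.2         # scans staged changes for leaked secrets
    hooks:
      - id: gitleaks
```

Running pre-commit install once per clone wires these checks into the Git commit hook.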

Text Editor and Plugins

The text editor/IDE the developer uses is as far left as it gets. Developers can use various plugins and tools to improve the code. For instance, Microsoft Visual Studio Code is an excellent editor with many useful extensions, including language servers, linters, and AI-based assistants like Copilot, Sourcery, and Tabnine. SonarLint, the IDE companion to SonarQube, can analyze code and flag issues as soon as the file is saved.

By following the steps outlined above, software development teams can significantly reduce the number of defects that make it to the testing phase. While no single solution guarantees perfect code, combining the tools and techniques discussed and a strong focus on code quality can help minimize the total project delivery cost and time. For further information on how these steps can help reduce defects and improve the overall quality of code, I strongly recommend reading Steve McConnell’s book Code Complete. Take action today to improve your software development process and achieve better outcomes for your team and business.