
Code Quality Metrics with Sonar, Part I

What is Sonar and Why is it Needed?

I was fortunate to attend the 2011 edition of No Fluff Just Stuff. One of my favorite presentations was Matthew McCullough's talk on Sonar. So when the issue of code metrics came up at one of our partners, Sonar seemed like the right tool to use.

Our partner wanted to explore ways to measure code quality and enforce quality standards. Their goals were to take quantitative measurements of their code quality and analyze those metrics to come up with a set of benchmark measurements. They also wanted to use Sonar to discourage bad practices.

How can Sonar help?

Sonar can help achieve those goals, and it also provides tools to instantly evaluate and monitor any project's standing with respect to these benchmarks. It can help decision makers determine which issues, if tackled, provide the biggest increase in quality, and it helps them assess areas of risk within their current software.

Sonar, according to Wikipedia, is “an open source software quality platform. Sonar uses various static code analysis tools such as Checkstyle, PMD, FindBugs, Clover to extract software metrics, which then can be used to improve software quality”.

One of Sonar's main strengths is that its users are not restricted to developers or the technically savvy; it can also provide helpful information to project managers, technical leads, and IT leadership within an organization. This is made possible by Sonar's plug-in architecture.

Plugins like SQALE and Technical Debt provide relevant information to managers and the business. They give a high-level overview of the project's standing as it relates to quality and cost.

At the same time, Sonar has popular development plugins like PMD, Checkstyle, Emma, and many others.

Why collect metrics in the first place?

This is the first post in a series describing how Sonar can be used. Each part in this series will include some technical details of how to incorporate Sonar into a development environment. Topics will include how to launch a Sonar analysis from the QuickBuild CI server using an Ant task, and how to collect metrics from Java projects that have JUnit unit tests, gather Emma coverage data, and pass it all to Sonar.
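As a preview of where the series is headed, here is a minimal sketch of what the Ant side of such a setup might look like. It assumes the sonar-ant-task jar is available locally; the task attributes, property names, and paths shown (for example the source and classes directories and the jar location) are placeholders, and exact names vary between Sonar and sonar-ant-task versions, so treat this as illustrative rather than a working recipe.

```xml
<project name="example" default="sonar" basedir="." xmlns:sonar="antlib:org.sonar.ant">

  <!-- Where the Sonar server is running; adjust for your environment -->
  <property name="sonar.host.url" value="http://localhost:9000" />

  <!-- Source and compiled-class locations to analyze (placeholders) -->
  <property name="sonar.sources" value="src/main/java" />
  <property name="sonar.binaries" value="build/classes" />

  <!-- JUnit results and Emma coverage reports are wired in through additional
       sonar.* properties; their exact names depend on the plugin versions,
       so they are omitted here. -->

  <target name="sonar">
    <!-- Make the Sonar Ant task available (jar path is a placeholder) -->
    <taskdef uri="antlib:org.sonar.ant" resource="org/sonar/ant/antlib.xml">
      <classpath path="lib/sonar-ant-task.jar" />
    </taskdef>

    <!-- Run the analysis and publish the results to the Sonar server -->
    <sonar:sonar key="org.example:example" version="1.0" />
  </target>
</project>
```

In a QuickBuild configuration, a target like this would simply be invoked as another Ant build step after compilation and unit tests have run.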

Before I get into the implementation details, we need to talk about why measuring metrics is important. I'm not interested in the “cool new tools” out there if they don't provide significant value to our partners and steer them toward success.

Here are some reasons why it is valuable to track metrics:

  • You cannot improve what you don’t measure: The following three scenarios illustrate how the existence of a metric can help you decide your next action, plan it, and evaluate its results:
    • Scenario 1: Your team doesn't collect code metrics from its projects, so your code base could be getting worse and worse without anyone noticing. You might only start noticing once the technical debt (we will elaborate on this concept later) has reached a level where it's too expensive to address, given time and budget constraints. Collecting code metrics continuously gives your team the ability to keep the technical debt of your code base under control. For example, you can make it a rule that certain metrics are not allowed to cross defined thresholds; whenever a threshold is crossed, you are notified immediately through your continuous build.
    • Scenario 2: Time and time again I've witnessed teams start refactoring because they are convinced the code base is bad in terms of performance, brittleness, instability, or difficulty to maintain and extend. While the intentions are good, without measurements we don't know which part of the code base is responsible for the issues we encounter. Hence, there is a good chance changes will be applied to the wrong code, or that we refactor the right code in the wrong way, or that we only fix part of the problem. This is where metrics and tools like Sonar can help: Sonar points out the parts of the code that are causing problems. Once these issues are identified, they can be prioritized and added to the backlog. Sonar helps teams identify and address issues with confidence.
    • Scenario 3: Another team has remarkable instincts for identifying and correcting issues, but they fail to track how many issues were fixed on their triumphant voyage. Let's face it, managers and team leaders would appreciate a clear idea of how many improvements were made with their resources and budget, and they also want to know which issues still need to be fixed in the future. Now, if you preserve a snapshot of metric values before the voyage, you can report something like: "Before, our code base was 75% compliant with the company's best practices; now it's at 95%."
  • What you don’t measure, you cannot prove: This matters when the technical team needs buy-in from stakeholders to spend time and resources fixing the code base. The way we present the information is very important: stakeholders won’t have any idea what you mean by “swallowed exceptions,” tightly coupled classes, or large amounts of duplication. But they can understand a statement like "our code carries 14% technical debt that will take X days to fix," backed by numbers from SQALE showing that the product is weak in terms of portability, security, and efficiency. This way you're allowing management to make informed decisions and prioritize accordingly, instead of fielding unsubstantiated requests for resources and time to fix the issues.
  • Broken Window Theory: This theory, as applied to software in the book The Pragmatic Programmer, states:
Don't leave "broken windows" (bad designs, wrong decisions, or poor code) unrepaired. Fix each one as soon as it is discovered. If there is insufficient time to fix it properly, then board it up. Perhaps you can comment out the offending code, or display a "Not Implemented" message, or substitute dummy data instead. Take some action to prevent further damage and to show that you're on top of the situation. We've seen clean, functional systems deteriorate pretty quickly once windows start breaking. There are other factors that can contribute to software rot, and we'll touch on some of them elsewhere, but neglect accelerates the rot faster than any other factor. You may be thinking that no one has the time to go around cleaning up all the broken glass of a project. If you continue to think like that, then you’d better plan on getting a dumpster, or moving to another neighborhood. Don’t let entropy win.

Continuously collecting and reviewing software metrics can help identify and fix “broken windows” before they spread. The longer bad design and bad code are left unfixed, the more vulnerable your code is to accumulating additional hacks. However, if you keep an eye out for the symptoms of broken windows, like highly coupled classes, very complex methods and classes, and low unit test coverage, then you can act on them before they attract more “badness.”

  • Prevention is the best medicine: Fixing issues takes time once you start identifying them in your code, but once this practice is in place you will prevent the same problems from occurring in new code. For example, I found that one of our projects had a large number of “Integer Instantiation” violations, which meant that we were wasting memory whenever we created Integer objects through the constructor (see the sketch after this list). It took us some time to clear them all out, but we were eventually able to remove them completely from the code. This is a rather simple violation, but the point is that we didn't introduce new instances of the same problem we were trying to fix. This gives you a morale boost, knowing that your newer code is better than your older stuff.
  • Planning and Prioritizing: Having measurements for each area of concern, and for what it would cost to fix, is extremely valuable when you want to plan and prioritize the effort. Suppose you have a large amount of technical debt that needs a concentrated effort to fix: measurements can be used to predict the effort each issue will take to address. That is a reactive approach to technical debt, which you can replace with a proactive one. For example, the team can decide that whenever developers change a piece of code, they also fix any technical debt items related to that code, killing two birds with one stone. This approach also helps with testing and quality assurance, since technical debt fixes tied to new features will already be verified by the testers.
  • Technical Debt Resolution: Getting technical debt under control becomes easier once you start measuring and monitoring it. Ward Cunningham had this to say about it in his 1992 experience report, The WyCash Portfolio Management System:
Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite… The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation, object-oriented or otherwise.

Technical debt is the collection of “grey decisions” we make for the sake of meeting deadlines or coding something that “just works,” which we then forget to refactor into “white code.” As the grey accumulates over time, it can turn your code quality from a faint shade of grey into a much darker one. For me, the strongest and clearest sign of technical debt is when my team starts talking about rewriting a code base rather than refactoring it little by little. When we opt for rewriting instead of refactoring, it’s an indication that the technical debt has exceeded the level at which fixing the issues is still feasible, and/or that we have so many issues tangled with each other that we don’t know where to begin addressing them. Uncle Bob argues against rewriting legacy code in his post called The Big Redesign In the Sky and prefers to fix the code base by applying smaller changes all the time. I’d say that we need to keep an eye on technical debt all the time and address it promptly, before it reaches an unmanageable level where it will be hard to convince the rest of the team NOT to rewrite.
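As a concrete illustration of the “Integer Instantiation” violation mentioned in the list above, here is a minimal Java sketch of the pattern the rule flags and the fix we applied; the class name and values are made up for illustration.

```java
public class IntegerInstantiationExample {

    public static void main(String[] args) {
        // Flagged pattern: the constructor always allocates a brand-new object,
        // even for small values that the JDK already caches.
        Integer flagged = new Integer(42);

        // Preferred fix: valueOf() reuses cached instances for small values
        // (-128 to 127 by default), avoiding needless allocations.
        Integer preferred = Integer.valueOf(42);

        // Autoboxing compiles down to Integer.valueOf(), so this is equivalent.
        Integer autoboxed = 42;

        System.out.println(flagged + " " + preferred + " " + autoboxed);
    }
}
```

Once Sonar reports zero occurrences of a rule like this, any reappearance in the continuous build is an immediate signal that a new “broken window” has been introduced.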

What's next?

In this post I made the case for collecting and using software metrics on projects, and touched a little on what makes Sonar valuable in this area. In my next post, I will give an overview of the main Sonar features I find most helpful and intriguing. After that, I will go into the technical details of incorporating Sonar into our development environment.
