How we implemented SonarQube and realized its great potential


We would like to share our experience of integrating SonarQube, a platform for continuous code quality analysis and measurement, into the existing development processes of the DPO system (an add-on to the Alameda depository and clearing accounting system) at the National Settlement Depository.

The National Settlement Depository (Moscow Exchange Group of Companies) is one of the key financial infrastructure companies; it stores and keeps records of securities of Russian and foreign issuers worth more than 50 trillion rubles. The growing volume of operations carried out by the system, together with the continuous growth of its functionality, requires maintaining high quality of the systems' source code. One of the tools for achieving this goal is the SonarQube static analyzer. In this article, we describe our successful experience of seamlessly integrating SonarQube into the existing development processes of our department.

Briefly about the department

Our area of responsibility covers the following modules: payments to NSD clients, electronic document flow (EDF), processing of trade repository messages (registration of over-the-counter transactions), electronic interaction channels between clients and NSD, and much more. In general, a large share of the work on the technical side of operations. We work on a request basis: tellers' requests are processed by analysts, who gather the customer's requirements and present us with their vision of how it should fit into the system. Then the standard scheme follows: code development, testing, trial operation, and delivery of the code to the production environment for the end customer.

Why SonarQube?

This is our department's first experience of implementing a code quality control platform: previously we did it manually, with code review only. But the growing volume of work requires automating this process. In addition, the team includes less experienced employees who are not yet fully familiar with the internal development regulations and tend to make more mistakes. To keep code quality under control, it was decided to introduce a static analyzer. Since SonarQube was already in use in some NSD systems, the choice did not take long. Colleagues from other divisions had previously used it to analyze the code of microservices in Alameda (NSD's own depository and clearing accounting system), in CFT (an information system for accounting, balance, and preparation of mandatory and internal reporting), and in some other systems. For a start, we decided to experiment with the free edition of SonarQube. So let's move on to our case.

Implementation process

We have:

  • an automated build of the system in TeamCity;
  • a configured process of merging code from a feature branch into the master branch via merge requests in GitLab (development follows GitHub Flow);
  • SonarQube configured to analyze the DPO system's code on a schedule.

Our goal: to implement automatic code analysis in the CI/CD processes of AVE.

What we needed to set up: an automatic static-analysis check of the code on every merge request to the main branch.

That is, the target picture looks like this: as soon as a developer pushes changes to a feature branch, an automatic check for new errors in the code is started. If there are no new errors, the changes are allowed in; otherwise, the errors have to be fixed. Even at this initial stage, we were able to identify a certain number of errors in the code. The system has very flexible settings: it can be configured to fit developers' specific tasks, each system, and each programming style.
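
For illustration, here is a minimal sketch of what such a check step could look like when triggered from a merge request pipeline, assuming the sonar-scanner CLI is available on the build agent. The project key, server URL, token variable, and the *.sql inclusion pattern are placeholders rather than our real settings; newer SonarQube versions pass the token via sonar.token instead of sonar.login.

```python
# A sketch of the scan step a CI job (TeamCity or a GitLab runner) could execute
# on the checked-out feature branch. All names below are illustrative placeholders.
import os
import subprocess

SONAR_HOST = os.environ.get("SONAR_HOST_URL", "https://sonarqube.example.local")
SONAR_TOKEN = os.environ["SONAR_TOKEN"]   # analysis token stored as a CI secret
PROJECT_KEY = "dpo-plsql"                 # hypothetical project key

# Run the scanner on the checked-out branch; only *.sql sources are analyzed.
subprocess.run(
    [
        "sonar-scanner",
        f"-Dsonar.host.url={SONAR_HOST}",
        f"-Dsonar.login={SONAR_TOKEN}",
        f"-Dsonar.projectKey={PROJECT_KEY}",
        "-Dsonar.sources=.",
        "-Dsonar.inclusions=**/*.sql",
    ],
    check=True,  # a non-zero exit code fails the CI step
)
```

The same command can just as easily be wired into a TeamCity build step as into a GitLab CI job.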

Configuring QualityGate in SonarQube

We came across the Quality Gate feature while digging through the depths of the Internet. Initially, we used a different approach, more complex and not quite correct. We ran the SonarQube scan twice: we scanned the feature branch and the branch we were going to merge it into, and then compared the number of errors. This method was not stable and did not always give the correct result. Then we learned that instead of running SonarQube twice, you can set a limit on the allowed number of errors (a Quality Gate) and analyze only the branch you are uploading, comparing it against that limit.
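
Below is a rough sketch of how such a single-scan check can be automated, assuming the scanner has just finished and left its report-task.txt file behind. The endpoints (api/ce/task and api/qualitygates/project_status) are part of the standard SonarQube Web API; the host and token names are our placeholders, not the real project settings.

```python
# A sketch of checking the Quality Gate verdict of a single analysis.
import os
import sys
import time
import requests

SONAR_HOST = os.environ.get("SONAR_HOST_URL", "https://sonarqube.example.local")
AUTH = (os.environ["SONAR_TOKEN"], "")  # the token is passed as the user name, empty password

# The scanner records the id of the server-side analysis task in report-task.txt.
with open(".scannerwork/report-task.txt") as f:
    props = dict(line.strip().split("=", 1) for line in f if "=" in line)

# Wait until the server has finished processing the submitted analysis.
while True:
    task = requests.get(
        f"{SONAR_HOST}/api/ce/task", params={"id": props["ceTaskId"]}, auth=AUTH
    ).json()["task"]
    if task["status"] in ("SUCCESS", "FAILED", "CANCELED"):
        break
    time.sleep(5)

if task["status"] != "SUCCESS":
    sys.exit("SonarQube did not finish processing the analysis")

# Ask for the Quality Gate verdict of exactly this analysis.
gate = requests.get(
    f"{SONAR_HOST}/api/qualitygates/project_status",
    params={"analysisId": task["analysisId"]},
    auth=AUTH,
).json()["projectStatus"]

print("Quality Gate status:", gate["status"])
sys.exit(0 if gate["status"] == "OK" else 1)  # a non-zero exit fails the pipeline
```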


For now, we still use a rather basic check. It should be noted that SonarQube does not support some programming languages, including Delphi. At the moment, we analyze only the PL/SQL code of our system.

It works like this:

  • We analyze only PL/SQL code for our project.
  • The Quality Gate in SonarQube is configured so that the number of errors does not increase from commit to commit.
  • On the first run the number of errors was 229. If a commit brings in more errors than that, the merge is not allowed.
  • Later, as errors are fixed, the Quality Gate can be reconfigured.
  • New conditions can also be added to the analysis, for example test coverage; a sketch of setting such conditions through the API follows this list.
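
As an illustration, here is a hedged sketch of how such conditions could be created through the Web API instead of the UI. The gate name, the 229 threshold on the violations metric, and the coverage example are illustrative assumptions; depending on the SonarQube version, api/qualitygates/create_condition expects either gateName or gateId.

```python
# A sketch of configuring gate conditions through the Web API.
import os
import requests

SONAR_HOST = os.environ.get("SONAR_HOST_URL", "https://sonarqube.example.local")
AUTH = (os.environ["SONAR_TOKEN"], "")

GATE_NAME = "DPO gate"  # hypothetical Quality Gate name

# Fail the gate if the total number of issues grows beyond the current baseline.
requests.post(
    f"{SONAR_HOST}/api/qualitygates/create_condition",
    data={"gateName": GATE_NAME, "metric": "violations", "op": "GT", "error": 229},
    auth=AUTH,
).raise_for_status()

# Extra conditions can be added later in the same way, e.g. minimum test coverage.
requests.post(
    f"{SONAR_HOST}/api/qualitygates/create_condition",
    data={"gateName": GATE_NAME, "metric": "coverage", "op": "LT", "error": 80},
    auth=AUTH,
).raise_for_status()
```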

The scheme of work is as follows.


The comment left by the check script shows that the number of errors in the feature branch has not increased, so everything is OK.


The Merge button becomes available.


The comment left by the check script shows that the number of errors in the feature branch has grown beyond the allowed limit, so everything is BAD.


The Merge button turns red. At the moment, merging changes with erroneous code is not strictly prohibited; it is left to the discretion of the responsible developer. In the future, such commits to the main branch can be blocked entirely.
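
For completeness, here is a sketch of how the check script could post its verdict into the merge request as a comment, assuming it runs inside a GitLab merge request pipeline. The GITLAB_BOT_TOKEN variable is hypothetical; the verdict text would come from the Quality Gate check sketched earlier.

```python
# A sketch of reporting the check result back to the merge request via GitLab's notes API,
# using the predefined CI variables of a merge request pipeline.
import os
import requests

api_url = os.environ["CI_API_V4_URL"]        # e.g. https://gitlab.example.local/api/v4
project = os.environ["CI_PROJECT_ID"]
mr_iid = os.environ["CI_MERGE_REQUEST_IID"]
token = os.environ["GITLAB_BOT_TOKEN"]       # hypothetical bot token stored as a CI secret

def post_mr_comment(text: str) -> None:
    """Leave a note on the merge request so the verdict is visible to the reviewer."""
    requests.post(
        f"{api_url}/projects/{project}/merge_requests/{mr_iid}/notes",
        headers={"PRIVATE-TOKEN": token},
        data={"body": text},
    ).raise_for_status()

post_mr_comment("SonarQube: the number of errors in the feature branch has not increased, OK to merge.")
```

If, in addition, the GitLab project option requiring pipelines to succeed before merging is enabled, a failed check turns from a warning into a hard block on the Merge button.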


Working through the errors ourselves

Next, you need to review all the errors detected by the system, because SonarQube analyzes code against its own strict standards. What it considers an error may not actually be one in our code. So each finding has to be checked and marked: is it really an error, or is no fix needed in our circumstances? This is how we reduce the number of errors. Over time, the system will learn to take these nuances into account.
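
When a finding is confirmed to be harmless, it can be marked directly in the SonarQube UI or, if there are many of them, through the Web API. The sketch below relies on assumptions: the issue key is a placeholder that would normally come from api/issues/search, and on newer SonarQube versions the equivalent of "won't fix" is the accept transition.

```python
# A sketch of marking a reviewed finding as a false positive via the Web API,
# so it no longer counts against the Quality Gate.
import os
import requests

SONAR_HOST = os.environ.get("SONAR_HOST_URL", "https://sonarqube.example.local")
AUTH = (os.environ["SONAR_TOKEN"], "")

ISSUE_KEY = "AYx...placeholder"  # taken from the issue list in the UI or from api/issues/search

requests.post(
    f"{SONAR_HOST}/api/issues/do_transition",
    data={"issue": ISSUE_KEY, "transition": "falsepositive"},
    auth=AUTH,
).raise_for_status()
```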

What we ended up with

Our goal was to understand whether it makes sense in our case to automate code verification. The result lived up to expectations. SonarQube lets us work with the languages we need, performs fairly competent analysis, and can learn from developers' hints. Overall, we are pleased with our first experience with SonarQube and plan to develop further in this direction. We expect that in the future it will save us time and effort on code review and make the review better by removing the human factor. Perhaps along the way we will discover the platform's shortcomings, or, on the contrary, become even more convinced that it is a great tool with great potential.

In this overview article, we talked about our first acquaintance with the SonarQube static analyzer. If you have questions, write them in the comments. If the topic is of interest, in a new publication we will describe in more detail how to set everything up correctly and write the code for such a check.

Author of the text: atanya

Source: habr.com
