Fear and Loathing DevSecOps

We had two code analyzers, four dynamic testing tools, our own homegrown utilities and 250 scripts. Not that all of this was needed for the current process, but once you start implementing DevSecOps, you have to go all the way.

Source. Characters created by Justin Roiland and Dan Harmon.

What is SecDevOps? What about DevSecOps? What are the differences? What is Application Security about? Why doesn't the classic approach work anymore? Yuri Shabalin of Swordfish Security knows the answers to all of these questions. He answers them in detail below and walks through the problems of moving from the classic Application Security model to a DevSecOps process: how to approach integrating secure development into the DevOps process without breaking anything, how to go through the main stages of security testing, which tools can be used, how they differ, and how to configure them properly to avoid pitfalls.


About the speaker: Yuri Shabalin is Chief Security Architect at Swordfish Security. He is responsible for implementing SSDL and for the overall integration of application analysis tools into a single development and testing ecosystem. He has 7 years of experience in information security and has worked at Alfa-Bank, Sberbank and Positive Technologies, a company that develops software and provides services. He has spoken at international conferences such as ZeroNights, PHDays, RISSPA and OWASP.

Application Security: what is it about?

Application Security is the area of security that deals with the security of applications. It is not about infrastructure or network security, but about what we write and what developers work on: the flaws and vulnerabilities of the application itself.

SDL, or SDLC (Security Development Lifecycle), was developed by Microsoft. The diagram shows the canonical SDLC model, whose main goal is the participation of security at every stage of development, from requirements all the way through release into production. Microsoft realized that there were too many bugs in production, that their number kept growing, and that something had to be done about it, and proposed this approach, which became canonical.

Application Security and SSDL are not aimed at detecting vulnerabilities, as is commonly believed, but at preventing their occurrence. Over time, the canonical Microsoft approach has been improved and developed, and it now goes much deeper into the details.

The canonical SDLC is elaborated in great detail in various methodologies: OpenSAMM, BSIMM, OWASP. The methodologies differ, but are generally similar.

Building Security In Maturity Model

I like BSIMM (Building Security In Maturity Model) the most. The methodology divides the Application Security process into 4 domains: Governance, Intelligence, SSDL Touchpoints and Deployment. The domains contain 12 practices, which are broken down into 112 activities.

Each of the 112 activities has 3 maturity levels: beginner, intermediate and advanced. You can study all 12 practices section by section, select the things that matter to you, figure out how to implement them and gradually add elements, for example static and dynamic code analysis or code review. You draw up a plan and work through it calmly as part of implementing the selected activities.

Why DevSecOps

DevOps is one big overall process, and security needs to be taken care of within it.

Initially, DevOps assumed that security checks were part of the process. In practice, security teams were far smaller than they are now, and they acted not as participants in the process but as a control and oversight body: they imposed requirements on it and checked the quality of the product at the end of the release. This is the classic approach, in which the security team sat behind a wall from development and did not take part in the process.

The main problem is that information security is separate from development. Usually there is some kind of isolated security perimeter containing 2-3 large and expensive tools. Once every six months the source code or the application is handed over for testing, and once a year there is a pentest. All this leads to production release dates being pushed back, while a huge number of findings from automated tools are dumped on the developers. It is impossible to triage and fix all of this, because even the results from the previous six months have not been dealt with, and here comes a new batch.

In our company's work we see that security teams in all areas and industries understand that it is time to catch up with development and run in the same wheel with it, in Agile. The DevSecOps paradigm fits perfectly into the agile development methodology: implementation, support and participation in every release and iteration.

Transition to DevSecOps

The most important word in the Security Development Lifecycle is "process". You need to understand this before thinking about buying tools.

Just including tools in the DevOps process is not enough - communication and understanding between process participants is important.

People are more important than tools

Often, planning a secure development process starts with choosing and buying a tool and ends with attempts to integrate that tool into the current process, attempts that remain just that. This leads to sad consequences, because every tool has its own peculiarities and limitations.

A common case: the security department chooses a good, expensive tool with a wide range of features and comes to the developers to embed it into the process. But it doesn't work out: the process is organized in such a way that the limitations of the already purchased tool do not fit into the current paradigm.

First, describe what result you want and what the process will look like. This will help to understand the roles of the tool and security in the process.

Start with what's already in use

Before buying expensive tools, look at what you already have. Every company has security requirements that apply to development, and there are checks and pentests. Why not transform all of this into a form that is understandable and convenient for everyone?

Usually the requirements are a paper tome that sits on a shelf. There was a case when we came to a company to look at its processes and asked to see the security requirements for the software. The specialist responsible for them searched for a long time:

- Just a moment, somewhere in my notes there was a path to where this document lives.

As a result, we received the document a week later.

For requirements, checks and everything else, create a page, for example in Confluence; it is convenient for everyone.

It is easier to reshape what is already there and use it as a starting point.

Use Security Champions

Usually, in an average company there is one security specialist per 100-200 developers, who performs several functions and physically does not have time to check everything. Even if he tries his best, he alone will not review all the code that development produces. For such cases, the concept of Security Champions was developed.

A Security Champion is a person within the development team who is interested in the security of your product.

A Security Champion is an entry point into the development team and a security evangelist rolled into one.

Usually, when a security officer comes to the development team and points out an error in the code, he receives a surprised answer:

- And who are you? I have never seen you before. Everything is fine on my end: my senior colleague put "approve" on the code review, we move on!

This is a typical situation, because there is much more trust in seniors or simply in teammates with whom the developer constantly interacts at work and in code review. If the Security Champion, rather than a security specialist, points out the error and its consequences, his word will carry more weight.

Developers also know their own code better than any security specialist. For a person who has at least 5 projects in a static analysis tool, it is usually difficult to remember all the nuances. Security Champions know their product: what interacts with what and what to look at first, so they are more effective.

So consider introducing Security Champions and expanding the influence of the security team. For the champion himself this is also useful: professional development in a new field, broader technical horizons, stronger technical, managerial and leadership skills, and higher market value. There is also an element of social engineering here: champions are your "eyes" inside the development team.

Testing stages

The 80/20 rule says that 20% of the effort gives 80% of the results. That 20% consists of application analysis practices that can and should be automated. Examples of such activities are static analysis (SAST), dynamic analysis (DAST) and open source analysis. I will tell you more about these activities and the tools, about the peculiarities we usually run into when introducing them into the process, and about how to do it right.

Main tool problems

I will highlight the problems that are relevant for all tools and deserve attention, and analyze them in more detail here so as not to repeat them later.

Long analysis time. If all the tests and the build take 30 minutes from commit to release, the security checks can take a whole day. No one will slow down the process like that. Take this into account and draw your conclusions.

High False Negative or False Positive rates. All products are different, they use different frameworks and their own coding style. On different code bases and technologies, tools can show different False Negative and False Positive levels. So check what gives a good and reliable result for your company and your applications.

Lack of integrations with existing tools. Evaluate tools in terms of integration with what you already use. For example, if you have Jenkins or TeamCity, check that a tool integrates with that software, and not with GitLab CI, which you do not use.

Lack or excessive complexity of customization. If the tool does not have an API, then why is it needed? Everything that can be done in the interface should be available through the API. Ideally, the tool should have the ability to customize checks.

No product development roadmap. Development does not stand still: we constantly adopt new frameworks and features and rewrite old code in new languages. We want to be sure that the tool we buy will support new frameworks and technologies. Therefore, it is important to know that the product has a real and sensible development roadmap.

Process Features

In addition to the peculiarities of the tools, consider the peculiarities of the development process. For example, interfering with the development workflow is a typical mistake. Let's see what else should be considered and what the security team should pay attention to.

To avoid disrupting development and release deadlines, create different rules and different show stoppers (criteria for stopping the build when vulnerabilities are present) for different environments. For example, if we understand that the current branch is going to a development environment or to UAT, we do not stop the pipeline and say:

- You have vulnerabilities here, you are not going any further!

At this point, it's important to tell developers that there are security issues to look out for.

The presence of vulnerabilities is not a barrier to further testing, whether manual or integration. On the other hand, we do need to improve the security of the product somehow, and make sure developers do not simply ignore what security finds. So sometimes we do this: when the build rolls out to the development environment, we simply notify the developers:

- Guys, you have problems, please pay attention to them.

At the UAT stage we again show warnings about the vulnerabilities, and at the stage of going to production we say:

- Guys, we warned you several times and you did nothing; we are not letting this go to production.

When it comes to static and dynamic analysis, show and warn only about vulnerabilities in the code and features that have just been written. If a developer moved a button by 3 pixels and we tell him that there is an SQL injection somewhere that urgently needs fixing, that is wrong. Look only at what is being written now and at the change that is coming into the application.
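
Here is a minimal sketch of how such a policy might look as a small Python helper invoked from the pipeline; the report format, the finding fields and the environment names are assumptions for illustration. It only warns on dev and UAT, blocks on the way to production, and only considers findings introduced by the current change.

```python
import sys

# Hypothetical scan output: each finding carries a severity and a flag
# saying whether it was introduced by the current change (diff-based scan).
FINDINGS = [
    {"id": "SQLI-1", "severity": "critical", "new": True},
    {"id": "XSS-7", "severity": "medium", "new": False},
]

# Per-environment show stoppers: which severities block the build.
SHOW_STOPPERS = {
    "dev":  [],                    # never block, only notify
    "uat":  [],                    # never block, only notify
    "prod": ["critical", "high"],  # block the release on critical/high findings
}

def gate(environment: str, findings: list[dict]) -> int:
    new_findings = [f for f in findings if f["new"]]   # only the current change
    blocking = [f for f in new_findings
                if f["severity"] in SHOW_STOPPERS[environment]]
    for f in new_findings:
        print(f"[security] {f['id']} ({f['severity']}) found in this change")
    if blocking:
        print(f"[security] build stopped for {environment}: "
              f"{len(blocking)} blocking finding(s)")
        return 1                                       # non-zero exit fails the step
    return 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "dev", FINDINGS))
```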

Say we have a functional defect: the application does not work the way it should, money is not transferred, clicking a button does not lead to the next page, or the product does not load. Security defects are the same kind of defects, only in the context of security rather than application functionality.

Not all software quality issues are security issues. But all security problems are related to the quality of the software. Sherif Mansour, Expedia.

Since vulnerabilities are defects like any other, they should live in the same place as all other development defects. So forget about reports and scary PDFs that no one reads.

When I worked at a development company, I once received a report from static analysis tools. I opened it, was horrified, made some coffee, leafed through 350 pages, closed it and went back to work. Big reports are dead reports. Usually they go nowhere: emails get deleted, forgotten or lost, or the business says it accepts the risk.

What to do? We simply convert the confirmed defects we found into a form convenient for development, for example add them to the backlog in Jira. The defects are prioritized and fixed in order of priority, alongside functional and test defects.
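
For example, here is a minimal sketch of pushing a confirmed finding into Jira through its REST API; the instance URL, the technical user, the project key and the field values are assumptions for illustration.

```python
import requests

JIRA_URL = "https://jira.example.com"   # assumed Jira instance
AUTH = ("appsec-bot", "secret")         # assumed technical user

def create_security_defect(summary: str, description: str, priority: str) -> str:
    """File a security finding as a regular defect in the tracker developers already use."""
    issue = {
        "fields": {
            "project": {"key": "APP"},       # assumed project key
            "issuetype": {"name": "Bug"},
            "summary": f"[Security] {summary}",
            "description": description,
            "priority": {"name": priority},  # e.g. "Highest" for critical findings
            "labels": ["security", "sast"],
        }
    }
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=issue, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["key"]                # e.g. "APP-1234"

if __name__ == "__main__":
    key = create_security_defect(
        "SQL injection in /login",
        "Found by the SAST scan of commit abc123; see the analyzer for the data flow.",
        "Highest",
    )
    print("Created", key)
```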

Static Analysis - SAST

This is analysis of code for vulnerabilities, but it is not the same as SonarQube. We check not only patterns or style. The analysis uses a number of approaches: by vulnerability tree, by data flow, by analyzing configuration files. All of this works on the code itself.

Pros of the approach: it identifies vulnerabilities in code at an early stage of development, when there are no test environments or finished builds yet, and it supports incremental scanning: scanning only the section of code that has changed, only the feature we are working on right now, which reduces scan time.

The main con is possible lack of support for the languages you need.

Integrations that, in my subjective opinion, the tools should have:

  • CI tools: Jenkins, TeamCity and GitLab CI.
  • Development environments: IntelliJ IDEA, Visual Studio. It is more convenient for a developer not to dig through an unfamiliar interface that still has to be learned, but to see all the vulnerabilities the tool has found right at his workplace, in his own development environment.
  • Code review: SonarQube and manual review.
  • Defect trackers: Jira and Bugzilla.

The picture shows some of the leading static analysis tools.

It is not the tools that matter but the process, so there are Open Source solutions that are also fine for getting the process started.

Open Source SAST tools will not find a huge number of vulnerabilities or complex data flows, but they can and should be used while building the process. They help you understand how the process will be organized: who responds to bugs, who reports on them, who is responsible. If you are at the initial stage of building the security of your code, use Open Source solutions.

How can this be integrated if you are at the beginning of the journey and have nothing: no CI, no Jenkins, no TeamCity? Let's look at integration into the process.

Integration at the VCS level

If you have Bitbucket or GitLab, you can integrate at the level of the version control system.

On a pull request or commit event, you scan the code and show in the build status whether the security check passed or failed.

Feedback. Of course, feedback is always needed. If you ran the checks on the security side, put the results in a drawer, told no one anything, and then dumped a pile of bugs at the end of the month, that is neither right nor good.
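
A minimal sketch of such feedback, assuming GitLab and a hypothetical scan result: after scanning the commit, a small script posts a security status back onto that commit through the GitLab commit status API, so the result is visible right next to the build status on the merge request.

```python
import requests

GITLAB_URL = "https://gitlab.example.com"   # assumed GitLab instance
PROJECT_ID = 42                             # assumed project id
TOKEN = "glpat-..."                         # assumed access token

def report_scan_status(sha: str, passed: bool, details_url: str) -> None:
    """Attach a security check status to a commit so it shows up next to the build status."""
    resp = requests.post(
        f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}/statuses/{sha}",
        headers={"PRIVATE-TOKEN": TOKEN},
        data={
            "state": "success" if passed else "failed",
            "name": "security/sast",
            "description": "SAST scan of the changed code",
            "target_url": details_url,      # link to the full scan report
        },
    )
    resp.raise_for_status()

if __name__ == "__main__":
    report_scan_status("abc123def", passed=False,
                       details_url="https://sast.example.com/scans/1337")
```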

Integration with code review system

Once, we set an AppSec technical user as the default reviewer in a number of important projects. Depending on whether errors were found in the new code, this reviewer sets the pull request status to "accept" or "needs work": either everything is OK, or the code needs rework, with links to exactly what to rework. For the version being released, we disabled merging if the security check was not passed. We plugged this into the manual code review, and the other participants in the process could see the security statuses for that particular change.

Integration with SonarQube

Many teams have a quality gate for code quality. It is the same here: you can build the same kind of gate, only for SAST tools. It will be the same interface and the same quality gate, only called a security gate. And if you already have a process built around SonarQube, you can easily integrate everything into it.
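
As a sketch, and assuming the project key and token, such a gate can be polled from a pipeline step through SonarQube's web API and turned into a pass/fail exit code:

```python
import sys
import requests

SONAR_URL = "https://sonarqube.example.com"   # assumed SonarQube instance
TOKEN = "squ_..."                             # assumed analysis token

def security_gate(project_key: str) -> int:
    """Fail the pipeline step if the SonarQube quality (security) gate is red."""
    resp = requests.get(
        f"{SONAR_URL}/api/qualitygates/project_status",
        params={"projectKey": project_key},
        auth=(TOKEN, ""),                     # the token is passed as the user name
    )
    resp.raise_for_status()
    status = resp.json()["projectStatus"]["status"]   # "OK" or "ERROR"
    print(f"Gate status for {project_key}: {status}")
    return 0 if status == "OK" else 1

if __name__ == "__main__":
    sys.exit(security_gate("my-service"))
```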

Integration at the CI level

Here, too, everything is quite simple:

  • Run on a par with autotests and unit tests.
  • Division by development stage: dev, test, prod. Different rule sets can be enabled, and different fail conditions: stop the build or do not stop the build.
  • Synchronous or asynchronous start: either we wait for the security tests to finish, or we just launch them and move on, and later get a status saying whether everything is good or bad (see the sketch after this list).
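
A minimal sketch of the asynchronous variant, assuming a hypothetical scanner REST API: one pipeline stage kicks off the scan and moves on, and a later stage polls for the verdict.

```python
import time
import requests

SCANNER_URL = "https://sast.example.com"   # hypothetical scanner API

def start_scan(project: str, commit: str) -> str:
    """Kick off the scan and return immediately with a scan id (asynchronous start)."""
    resp = requests.post(f"{SCANNER_URL}/api/scans",
                         json={"project": project, "commit": commit})
    resp.raise_for_status()
    return resp.json()["scan_id"]

def wait_for_verdict(scan_id: str, timeout_s: int = 3600) -> str:
    """Called from a later pipeline stage: poll until the scan finishes."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        status = requests.get(f"{SCANNER_URL}/api/scans/{scan_id}").json()["status"]
        if status in ("passed", "failed"):
            return status
        time.sleep(30)
    return "timeout"

if __name__ == "__main__":
    scan_id = start_scan("my-service", "abc123def")
    print("Scan started:", scan_id)   # the build carries on; the verdict is checked later
```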

That is how it looks in a perfect rosy world. In real life it is not like that, but we strive for it. The result of the security checks should look similar to the results of unit tests.

For example, we took a large project and decided that from now on we would scan it with SAST. OK, we fed the project to SAST, it produced 20,000 findings, and we made the strong-willed decision that this is fine: those 20,000 findings are our technical debt. We put the debt in a box, slowly work through it and file bugs in the defect tracker. We can hire a company, do everything ourselves, or have Security Champions help us, and the technical debt will shrink.

All newly appearing vulnerabilities in new code should be eliminated in the same way as failures in unit tests or autotests. Roughly speaking: the build started and ran, two tests and two security checks failed. OK, we look at what happened, fix one, fix the other, run it again; everything is fine, no new vulnerabilities appeared, no tests failed. If the issue is deeper and needs to be understood properly, or fixing the vulnerability touches large layers of what lies under the hood, then a bug is filed in the defect tracker, prioritized and fixed. Unfortunately, the world is not perfect and tests sometimes do fail.
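
A minimal sketch of this "technical debt versus new findings" logic, under the assumption that each finding has a stable fingerprint: everything present in a stored baseline counts as debt, and only findings missing from the baseline fail the build.

```python
import json
import sys

BASELINE_FILE = "security-baseline.json"   # assumed: fingerprints of the accepted debt

def load_baseline() -> set[str]:
    with open(BASELINE_FILE) as f:
        return set(json.load(f))

def check_new_findings(findings: list[dict]) -> int:
    """Fail only on findings whose fingerprint is not part of the recorded technical debt."""
    baseline = load_baseline()
    new = [f for f in findings if f["fingerprint"] not in baseline]
    for f in new:
        print(f"[new] {f['rule']} at {f['file']}:{f['line']}")
    print(f"{len(findings) - len(new)} known findings (technical debt), {len(new)} new")
    return 1 if new else 0

if __name__ == "__main__":
    # Hypothetical scan output: a JSON list of findings with fingerprints.
    with open(sys.argv[1]) as f:
        sys.exit(check_new_findings(json.load(f)))
```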

An example of a security gate: it is the analogue of a quality gate, only in terms of the presence and number of vulnerabilities in the code.

We integrate with SonarQube: the plugin is installed, and everything is very convenient and cool.

Development environment integration

Integration options:

  • Starting a scan from the development environment even before the commit.
  • View results.
  • Analysis of results.
  • Synchronization with the server.

This is what getting results from the server looks like.

In the IntelliJ IDEA development environment, an additional item simply appears showing which vulnerabilities were found during the scan. You can immediately edit the code, see the recommendations and the flow graph. All of this sits right at the developer's workplace, which is very convenient: you do not need to follow extra links and look anywhere else.

Open Source

This is my favorite topic. Everyone uses Open Source libraries: why write a pile of crutches and reinvent the wheel when you can take a ready-made library where everything is already implemented?

Of course, this is true, but libraries are also written by people, they also carry certain risks, and they also have vulnerabilities that are reported periodically, if not constantly. That is why the next step in Application Security is the analysis of Open Source components.

Open Source Analysis - OSA

This class of tools covers three major areas.

Finding vulnerabilities in libraries. The tool knows that we are using some library and that the CVE database or bug trackers contain vulnerabilities affecting this version of the library. If you try to use it, the tool will warn you that the library is vulnerable and advise you to use another version that has no known vulnerabilities.

Analysis of license cleanliness. This is not very popular in our country yet, but if you work with foreign markets, you can periodically get hit for using an open source component in a way its license does not allow it to be used or modified. Or, if we have modified it and use it, we must publish our code. Of course, no one wants to publish the code of their products, but you can protect yourself from this too.

Analysis of components used in production. Imagine a hypothetical situation: we have finally finished development and shipped the latest release of our microservice to production. It lives there happily for a week, a month, a year. We do not rebuild it, we do not run security checks, everything seems fine. But suddenly, two weeks after the release, a critical vulnerability comes out in an Open Source component that we use in this particular build, in production. If we do not record what we use and where, we simply will not see this vulnerability. Some tools can monitor vulnerabilities in the libraries currently used in production. This is very useful.
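
A minimal sketch of that record-keeping, assuming the build can produce a flat list of resolved components: at deploy time we store the component manifest per release, so a scheduled job can later re-check exactly those versions against newly published advisories without rebuilding anything.

```python
import datetime
import json
import pathlib

MANIFEST_DIR = pathlib.Path("deployed-manifests")   # assumed shared storage

def record_deployment(service: str, version: str, components: list[dict]) -> pathlib.Path:
    """Store the exact component versions that went to production with this release."""
    MANIFEST_DIR.mkdir(exist_ok=True)
    manifest = {
        "service": service,
        "version": version,
        "deployed_at": datetime.datetime.utcnow().isoformat() + "Z",
        "components": components,   # e.g. [{"name": "jackson-databind", "version": "2.9.8"}]
    }
    path = MANIFEST_DIR / f"{service}-{version}.json"
    path.write_text(json.dumps(manifest, indent=2))
    return path

if __name__ == "__main__":
    # A scheduled job can later read these manifests and re-run the Open Source
    # analysis against a fresh vulnerability database.
    record_deployment("payments", "1.4.2",
                      [{"name": "jackson-databind", "version": "2.9.8"}])
```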

Features:

  • Different policies for different stages of development.
  • Monitoring of components in production.
  • Control of libraries within the organization's perimeter.
  • Support for various build systems and languages.
  • Analysis of Docker images.

A few examples of the leaders in the field of Open Source analysis.

The only free one is Dependency-Check from OWASP. You can turn it on in the early stages, see how it works and what it supports. The rest are mostly cloud or on-premise products, but they still reach out to the Internet for their databases. They do not send your libraries themselves, but the hashes and fingerprints that they calculate, to their servers in order to get news about vulnerabilities.
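
Since Dependency-Check can write a JSON report, a small sketch like the one below is enough to turn that report into a pipeline verdict at the early stages; the report file name and the severity threshold are assumptions.

```python
import json
import sys

SEVERITY_ORDER = ["LOW", "MEDIUM", "HIGH", "CRITICAL"]

def check_report(path: str, fail_on: str = "HIGH") -> int:
    """Read a Dependency-Check JSON report and fail on findings at or above the threshold."""
    with open(path) as f:
        report = json.load(f)
    threshold = SEVERITY_ORDER.index(fail_on)
    bad = []
    for dep in report.get("dependencies", []):
        for vuln in dep.get("vulnerabilities", []) or []:
            severity = vuln.get("severity", "LOW").upper()
            if severity in SEVERITY_ORDER and SEVERITY_ORDER.index(severity) >= threshold:
                bad.append((dep.get("fileName"), vuln.get("name"), severity))
    for file_name, cve, severity in bad:
        print(f"{severity}: {cve} in {file_name}")
    return 1 if bad else 0

if __name__ == "__main__":
    # e.g. after: dependency-check.sh --project my-service --scan . --format JSON
    sys.exit(check_report(sys.argv[1] if len(sys.argv) > 1 else "dependency-check-report.json"))
```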

Process Integration

Perimeter control of libraries downloaded from external sources. We have external and internal repositories. For example, we run Nexus internally, and we want to make sure that no components with a "critical" or "high" vulnerability status get inside our repository. You can set up proxying with the Nexus Firewall / Lifecycle tools so that such libraries are cut off and do not end up in the internal repository.

CI integration. On the same level as autotests and unit tests, with division into development stages: dev, test, prod. At each stage you can download any libraries and use anything, but if there is something with a "critical" status, you should probably draw the developers' attention to it at the stage of going to production.

Integration with artifact repositories: Nexus and JFrog.

Integration with the development environment. The tools you choose should integrate with development environments. The developer must have access to the scan results from his workplace, or the ability to scan and check the code for vulnerabilities before committing it to version control.

CD integration. This is a cool feature that I really like and have already mentioned: monitoring the emergence of new vulnerabilities in production. It works like this.

There are public component repositories somewhere outside, and there is our internal repository, which should contain only trusted components. When a request is proxied, we check that the downloaded library has no vulnerabilities. If it falls under policies that we have set and have necessarily agreed upon with development, we do not pull it in and push back, asking for a different version. Accordingly, if there is something really critical and bad in a library, the developer will not receive it even at the installation stage; let him use a higher or lower version.

  • When building, we check that no one has slipped in anything bad, that all components are safe, and that no one has brought anything dangerous in on a flash drive.
  • We only have trusted components in the repository.
  • When deploying, we once again check the package itself (war, jar, DLL or Docker image) for compliance with the policy.
  • Once in production, we monitor what is happening there: whether critical vulnerabilities appear or not.

Dynamic Analysis - DAST

Dynamic analysis tools are fundamentally different from everything discussed so far. They imitate a user working with the application. If it is a web application, we send requests imitating a client, click the buttons on the front end, send artificial data through forms (quotes, brackets, characters in different encodings) and look at how the application behaves and processes external data.

The same approach also allows checking for well-known vulnerability patterns in Open Source components. Since DAST does not know which Open Source we are using, it simply throws "malicious" patterns at the application and analyzes the server's responses:

- Yeah, there is a deserialization problem here, but not here.

There are big risks here, because if you run this security test against the same environment that the testers use, unpleasant things can happen.

  • High load on the network and the application server.
  • No integrations.
  • The possibility of changing settings of the analyzed application.
  • No support for the required technologies.
  • Difficulty of configuration.

We had a situation when we finally launched AppScan: we spent a long time getting access to the application, received 3 accounts and were delighted: finally, we would check everything! We launched a scan, and the first thing AppScan did was get into the admin panel, click through all the buttons, change half of the data, and then kill the server altogether with its form-mailing requests. Development and Testing said:

- Guys, are you kidding?! We gave you accounts, and you took the environment down!

Consider the possible risks. Ideally, prepare a separate environment for security testing that is at least somewhat isolated from everything else, and check the admin panel only manually, as a rule. That is pentesting, the remaining percentage of effort that we are not considering now.

You could consider using this as a substitute for load testing: at the first stage, turn on the dynamic scanner in 10-15 threads and see what happens. Usually, as practice shows, nothing good.

A few tools that we usually use.

Worth highlighting is Burp Suite, the "Swiss Army knife" of any security professional. Everyone uses it and it is very convenient. A demo version of the new enterprise edition has now been released. Whereas before it was just a standalone utility with plugins, the developers are now finally building a large server from which several agents can be managed. It's cool, I recommend trying it.

Process Integration

The integration is fairly straightforward and simple: start the scan after the application has been successfully deployed to the test environment, and scan again after integration testing has passed. A sketch of launching such a scan follows the checklist below.

If the integrations do not work or there are stubs and mock functions, scanning is meaningless and useless: no matter what pattern we send, the server will respond in the same way.

  • Ideally, use a separate test environment.
  • Before testing, record the login sequence.
  • Test the administration interface only manually.
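
As a sketch of kicking off such a scan from the pipeline once the application is deployed, here is a minimal example driving OWASP ZAP through its Python API client; the target URL, the API key and the locally running ZAP daemon are assumptions.

```python
import time
from zapv2 import ZAPv2   # pip install python-owasp-zap-v2.4

TARGET = "https://test-stand.example.com"   # assumed isolated security test environment

# Connect to a ZAP daemon started elsewhere, e.g.: zap.sh -daemon -port 8090
zap = ZAPv2(apikey="changeme",
            proxies={"http": "http://127.0.0.1:8090", "https": "http://127.0.0.1:8090"})

def run_dast() -> list[dict]:
    zap.urlopen(TARGET)                     # make sure the target is in the site tree
    spider_id = zap.spider.scan(TARGET)     # crawl the application first
    while int(zap.spider.status(spider_id)) < 100:
        time.sleep(5)
    scan_id = zap.ascan.scan(TARGET)        # then run the active scan (attack patterns)
    while int(zap.ascan.status(scan_id)) < 100:
        time.sleep(10)
    return zap.core.alerts(baseurl=TARGET)  # collected findings

if __name__ == "__main__":
    alerts = run_dast()
    high = [a for a in alerts if a.get("risk") == "High"]
    print(f"{len(alerts)} alerts, {len(high)} high risk")
```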

Process

A few general words about the process as a whole and about how each tool works in particular. All applications are different: for one, dynamic analysis works better; for another, static analysis; for a third, Open Source analysis, pentests, or something else entirely, such as events from a WAF.

Every process needs to be controlled.

To understand how the process works and where it can be improved, you need to collect metrics from everything you can get your hands on, including production metrics, metrics from tools and defect trackers.

Any information is helpful. You need to look, in various cross-sections, at where a particular tool is more useful and where the process specifically sags. It may be worth looking at development response times to see where the process can be improved time-wise. The more data you have, the more cross-sections you can build, from the top level down to the details of each process.
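
A minimal sketch of one such cross-section, assuming security defects are tracked in Jira with a "security" label: a scheduled job counts open and recently resolved security defects per project, so the numbers can be recorded and trends plotted later.

```python
import datetime
import requests

JIRA_URL = "https://jira.example.com"   # assumed Jira instance
AUTH = ("appsec-bot", "secret")         # assumed technical user

def security_defect_metrics(project: str) -> dict:
    """Collect a few simple cross-sections of security defects for one project."""
    def count(jql: str) -> int:
        resp = requests.get(f"{JIRA_URL}/rest/api/2/search",
                            params={"jql": jql, "maxResults": 0}, auth=AUTH)
        resp.raise_for_status()
        return resp.json()["total"]

    base = f"project = {project} AND labels = security"
    return {
        "project": project,
        "collected_at": datetime.date.today().isoformat(),
        "open": count(f"{base} AND resolution = Unresolved"),
        "resolved_last_30d": count(f"{base} AND resolved >= -30d"),
        # "Highest" as the critical priority name is an assumption of this sketch.
        "critical_open": count(f"{base} AND priority = Highest AND resolution = Unresolved"),
    }

if __name__ == "__main__":
    print(security_defect_metrics("APP"))
```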

Since every static and dynamic analyzer has its own API, its own launch methods and principles, and some have schedulers while others do not, we are writing a tool, AppSec Orchestrator, that provides a single entry point to the entire process around a product and lets you manage it from one place.

Managers, developers and security engineers get one entry point from which they can see what is running, configure and launch scans, get scan results and submit requirements. We try to get away from pieces of paper and translate everything into the human form that development already uses: pages in Confluence with statuses and metrics, defects in Jira or other defect trackers, and embedding into a synchronous / asynchronous CI / CD process.

Key takeaways

The tools are not what matters. Think about the process first, then bring in the tools. The tools are good but expensive, so you can start with the process and fine-tune the interaction and understanding between development and security. From the security side, there is no need to "stop" everything indiscriminately. From the development side, if something is mega super critical, it must be fixed, not ignored.

Product quality is the common goal of both security and development. We are doing one thing: trying to make sure that everything works correctly and that there are no reputational risks or financial losses. That is why we promote the DevSecOps and SecDevOps approach: to establish communication and make the product better.

Start with what is already there: requirements, architecture, partial checks, trainings, guidelines. It is not necessary to apply all practices to all projects immediately; move iteratively. There is no single standard: experiment and try different approaches and solutions.

Put an equals sign between security defects and functional defects.

Automate everything that moves. Anything that does not move, push it and automate it too. If something is done by hand, that is not a good part of the process; perhaps it is worth reconsidering and automating it as well.

If your security team is small, use Security Champions.

Perhaps what I have described will not suit you and you will come up with something of your own, and that is fine. But choose tools based on the requirements of your process. Do not rely on the community saying that this tool is bad and that one is good; it may be the other way around for your product.

Requirements for the tools:

  • Low False Positive.
  • Adequate analysis time.
  • Ease of use.
  • Availability of integrations.
  • Understanding the product development roadmap.
  • Ability to customize tools.

Yuri's talk was chosen as one of the best at DevOpsConf 2018. To get acquainted with even more interesting ideas and practical cases, come to DevOpsConf in Skolkovo on May 27 and 28, as part of the RIT++ festival. Even better, if you are willing to share your experience, submit your talk proposal by April 21st.

Source: habr.com
