Technical details of the Capital One bank hack on AWS

On July 19, 2019, Capital One received the message that every modern company fears: a data breach had occurred. It affected more than 106 million people: roughly 140,000 US Social Security numbers, about one million Canadian Social Insurance numbers, and around 80,000 bank account numbers were exposed. Unpleasant, right?

Unfortunately, the break-in itself did not happen on July 19 at all. As it turned out, Paige Thompson, aka Erratic, carried it out between March 22 and March 23, 2019, almost four months earlier. In fact, it was only with the help of external consultants that Capital One was able to find out that something had happened at all.

The former Amazon employee has been arrested and faces a $250,000 fine and five years in prison... but a bad aftertaste remains. Why? Because many companies hit by hacks try to shrug off their responsibility for hardening their own infrastructure and applications amid rising cybercrime.

Either way, you can easily google this story. We won't dwell on the drama; instead, let's talk about the technical side of the matter.

First, what happened?

Capital One had about 700 S3 buckets, which Paige Thompson copied and exfiltrated.

Secondly, is this another case of a misconfigured S3 bucket policy?

No, not this time. Here she gained access to a server with a misconfigured firewall and performed the entire operation from there.

Wait, how is that possible?

Well, let's start with how she got into the server, although we don't have many details. We were only told that it happened through a "misconfigured firewall". So it could be something as simple as misconfigured security groups, a web application firewall configuration (Imperva), or a host-based firewall (iptables, ufw, shorewall, etc.). Capital One only admitted the fault and said it had closed the hole.

Stone said Capital One did not initially notice the firewall vulnerability, but quickly responded once it became aware of it. It certainly helped that the hacker allegedly left key identifying information publicly available, Stone said.

If you're wondering why we don't dig deeper into this part, understand that with such limited information we can only speculate, which makes little sense given that the hack hinged on whatever gap Capital One left open. Unless they tell us more, we would just be listing every possible way Capital One could have left its server exposed, combined with every possible way someone could exploit each of those openings. These gaps and methods range from wildly stupid blunders to incredibly sophisticated patterns. Given that range, this would turn into a long saga with no real conclusion. So let's focus on analyzing the part where we actually have facts.

So first takeaway: know what your firewalls allow.

Establish a policy or a proper process to ensure that ONLY what needs to be open is open. If you use AWS resources such as Security Groups or Network ACLs, the checklist to verify can obviously get lengthy... but since many of these resources are created automatically (e.g., via CloudFormation), it is also possible to automate their auditing. Whether it's a homegrown script that scans new objects for vulnerabilities or a security audit step in your CI/CD pipeline, there are many simple ways to avoid this.
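As a minimal sketch of such an automated audit (the rule dictionaries below use the shape that EC2's `DescribeSecurityGroups` API returns; in a real script you would fetch them with boto3, and the group IDs here are made up):

```python
# Minimal security-group audit sketch. Assumed input shape: the dicts
# returned by EC2 DescribeSecurityGroups. In practice you would fetch them:
#   boto3.client("ec2").describe_security_groups()["SecurityGroups"]

def find_open_ingress(security_groups):
    """Return (group id, from-port) pairs for ingress rules open to 0.0.0.0/0."""
    findings = []
    for sg in security_groups:
        for rule in sg.get("IpPermissions", []):
            for ip_range in rule.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    findings.append((sg["GroupId"], rule.get("FromPort")))
    return findings

if __name__ == "__main__":
    # Hypothetical sample data: one rule wide open, one properly scoped.
    sample = [{
        "GroupId": "sg-hypothetical1",
        "IpPermissions": [
            {"FromPort": 443, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
            {"FromPort": 22, "IpRanges": [{"CidrIp": "10.0.0.0/8"}]},
        ],
    }]
    for group_id, port in find_open_ingress(sample):
        print(f"{group_id}: port {port} is open to the entire internet")
```

A check like this can run on a schedule or as a CI/CD gate that fails the pipeline when a new wide-open rule appears.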

The “funny” part of the story is that if Capital One had closed the hole from the start… nothing would have happened. Frankly, it's always shocking to see how something really quite simple becomes the sole cause of a company being hacked. Especially one as big as Capital One.

So, the hacker is inside - what happened next?

Well, after breaking into an EC2 instance… a lot can go wrong. You're practically walking on a knife's edge if you let someone get that far. But how did she get into the S3 buckets? To understand that, let's talk about IAM Roles.

So, one way to access AWS services is to be a User. Okay, that's pretty obvious. But what if you want to give other AWS services, such as your application servers, access to your S3 buckets? That's what IAM roles are for. They consist of two components:

  1. Trust Policy: which services or people can assume this role?
  2. Permissions Policy: what does this role allow?

For example, say you want to create an IAM role that allows EC2 instances to access an S3 bucket. First, the role gets a Trust Policy so that EC2 (the entire service) or specific instances can "assume" the role; assuming a role means they can use its permissions to perform actions. Second, the Permissions Policy allows the service/person/resource that has assumed the role to do something with S3, be it accessing one specific bucket... or more than 700, as in the case of Capital One.
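As an illustration (these are not Capital One's actual policies, just a minimal sketch in standard IAM JSON), a trust policy that lets the EC2 service assume the role looks like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
```

And an over-broad permissions policy of the kind that makes 700+ buckets reachable might look like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:ListAllMyBuckets", "s3:ListBucket", "s3:GetObject"],
    "Resource": "*"
  }]
}
```

Least privilege would mean scoping `Resource` to the specific bucket ARNs the server actually needs (e.g., a hypothetical `arn:aws:s3:::my-app-bucket/*`) instead of `*`.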

Once you are inside an EC2 instance that has an IAM role attached, you can obtain its credentials in several ways:

  1. You can request instance metadata at http://169.254.169.254/latest/meta-data

    Among other things, this address exposes the instance's IAM role name and its temporary access keys (under /iam/security-credentials/). Of course, only from inside the instance.

  2. Use AWS CLI...

    If the AWS CLI is installed, it automatically picks up credentials from the IAM role, if one is present. Then it's just a matter of working THROUGH the instance. Of course, if the Trust Policy had been open enough, Paige could have done everything directly.
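A minimal sketch of option 1 (the endpoint and paths are the real IMDSv1 metadata paths; the fetch only works from inside an instance, so the parsing below is demonstrated against a sample payload with made-up values but real field names):

```python
import json
from urllib.request import urlopen  # only answers from inside an EC2 instance

IMDS = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

def fetch_role_credentials():
    """From inside an instance: get the role name, then its temporary keys."""
    role = urlopen(IMDS, timeout=2).read().decode()
    return parse_credentials(urlopen(IMDS + role, timeout=2).read().decode())

def parse_credentials(payload):
    """Extract the usable keys from the metadata service's JSON response."""
    doc = json.loads(payload)
    return {
        "access_key": doc["AccessKeyId"],
        "secret_key": doc["SecretAccessKey"],
        "token": doc["Token"],          # session token for the temporary keys
        "expires": doc["Expiration"],
    }

# Sample response shape (real field names, fabricated values):
SAMPLE = """{
  "AccessKeyId": "ASIAEXAMPLE",
  "SecretAccessKey": "not-a-real-secret",
  "Token": "not-a-real-token",
  "Expiration": "2019-03-23T00:00:00Z"
}"""
```

These temporary credentials are exactly what the AWS CLI picks up automatically, which is why being on the box is enough.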

Thus, the essence of IAM roles is that they allow one resource to act ON YOUR BEHALF on OTHER RESOURCES.

Now that you understand IAM roles, we can talk about what Paige Thompson did:

  1. She gained access to the server (EC2 instance) through a breach in the firewall

    Whether it was security groups/ACLs or their own web application firewall, the hole was probably fairly easy to close, as stated in the official records.

  2. Once on the server, she was able to act "as if" she were the server herself.
  3. Because the server's IAM role allowed S3 access to those 700+ buckets, she was able to access them

From that moment on, all she had to do was run the List Buckets command and then the Sync command from the AWS CLI (`aws s3 ls` and `aws s3 sync`)...

Capital One estimates the damage from the hack at $100 to $150 MILLION. Preventing this kind of damage is why companies invest so much in cloud infrastructure protection, DevOps, and security experts. And how valuable and cost-effective is moving to the cloud anyway? So much so that even in the face of ever more cybersecurity challenges, the overall public cloud market grew by 42% in the first quarter of 2019!

Moral of the story: check your security; audit regularly; respect the principle of least privilege in your security policies.

(See the full legal report here.)

Source: habr.com
