Update 11/09/2017:
Two days ago, AWS released new S3 security features, which include an indicator on each bucket that shows whether it is publicly accessible. This is determined from your bucket ACLs and bucket policy. However, individual objects can still be publicly accessible through object ACLs, which AWS does not flag. This security feature removes the need to audit S3 buckets for public access, at least at the bucket level.
--------------------------
Way back in 2013, security researchers Robin Wood and H D Moore discovered that a large number of S3 buckets on the internet were publicly accessible to anyone. Similar holes in S3 buckets are still being found by other researchers today. One such example, Schoolzilla exposing data on 1.3 million students through a publicly accessible S3 bucket, can be found
here. There are several similar examples from bug bounty programs. One from HackerOne is detailed
here.
So, the goal of this post is to provide an understanding of such S3 bucket misconfigurations and help you identify/audit whether your buckets are publicly accessible.
Your S3 bucket can be made publicly accessible in the following three ways:
1)
Using Access Control Lists (ACLs):
Each individual user or predefined group of users can be granted permissions to access the bucket/object. These permissions can be READ, WRITE, READ_ACP, WRITE_ACP and FULL_CONTROL. If these permissions are tied to individual users, that is fine. However, they can also be tied to any of the predefined groups, namely AllUsers, AuthenticatedUsers and LogDelivery. You can read about them in detail
here.
Issue:
Out of these, any permission granted to the AllUsers group makes the bucket publicly accessible, and any permission granted to the AuthenticatedUsers group makes it accessible to anyone authenticated to AWS, which effectively makes it public as well. These groups appear as Everyone and AWS Authenticated Users in the Bucket ACL, as shown in the image:
Such buckets can simply be accessed using the URL
https://<bucket_name>.s3.amazonaws.com OR
https://s3.amazonaws.com/<bucket_name>
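Spotting such grants can be scripted. Below is a minimal sketch of the check; the grant dictionaries mirror the shape of the "Grants" list returned by boto3's get_bucket_acl, and fetching a real ACL (shown in the comment) would require boto3 and AWS credentials.

```python
# Group URIs AWS uses for the two "public" predefined groups.
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(grants):
    """Return (group URI, permission) pairs granted to public groups.

    `grants` has the shape of boto3's response, e.g.:
        grants = boto3.client("s3").get_bucket_acl(Bucket="my-bucket")["Grants"]
    """
    return [
        (g["Grantee"]["URI"], g["Permission"])
        for g in grants
        if g["Grantee"].get("Type") == "Group"
        and g["Grantee"].get("URI") in PUBLIC_GROUPS
    ]
```

Any non-empty result means the bucket is effectively public at the ACL level.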
2)
Using custom S3 policies:
An S3 bucket policy looks as follows:
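For illustration, here is a representative policy of that shape (the Sid, bucket name, account ID, user name and IP range below are all placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUserUploads",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::123456789012:user/example-user"},
      "Action": ["s3:PutObject", "s3:PutObjectAcl"],
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}}
    }
  ]
}
```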
It is in JSON format and contains the following important components:
Principal --> used to attach this policy to a user or Amazon resource. Here, the principals are specified using their Amazon Resource Name (ARN).
Action --> used to specify the S3 actions that are allowed by the policy. Here, the PutObject and PutObjectAcl actions are allowed.
Sid --> simply an identifier given to the statement.
Resource --> used to specify the resource that this policy applies to. It could be the bucket, all objects, or specific objects within the bucket. Again, this is specified using the bucket's Amazon Resource Name (ARN).
Condition --> used to state conditions for accessing the resource. For example, access can be limited by source IP address. This is optional.
There can be several combinations of the above, creating as many policies as required. We will be focusing on the mistakes in setting up policies that can lead to security issues.
Issues:
- A few critical mistakes can be made while selecting the Principal value. If the value assigned is "*" or {"AWS": "*"}, then effectively any AWS user, not just users of the current account, can perform the S3 actions specified for the resource, essentially making the bucket and/or its objects public.
- Similarly, care must be taken while assigning the S3 Actions in a policy. If the value assigned is "*" or "s3:*", then every action, including putting and deleting objects, can be performed by the assigned Principal (user).
- In the same manner, the Resource value can be a concern if all resources are selected instead of just the bucket/objects in question, and restrictive Conditions are not effective if they are not specified correctly. For example, an IPAllow condition is nullified if no IP address is ever entered. Yes, this can happen in real life.
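These checks can also be scripted. The sketch below flags the mistakes above on an already-parsed policy document; note that boto3's get_bucket_policy returns the policy as a JSON string in its "Policy" field, which would need json.loads() first. It assumes a straightforward policy layout and is illustrative, not exhaustive.

```python
def policy_issues(policy):
    """Return a list of human-readable findings for the common mistakes above.

    `policy` is a parsed policy document (a dict), e.g.:
        policy = json.loads(
            boto3.client("s3").get_bucket_policy(Bucket="my-bucket")["Policy"]
        )
    """
    issues = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        # Wildcard Principal: any AWS user matches, making the resource public.
        principal = stmt.get("Principal")
        if principal == "*" or principal == {"AWS": "*"}:
            issues.append("wildcard Principal: any AWS user can use this statement")
        # Wildcard Action: every S3 operation is allowed, including Put/Delete.
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if any(a in ("*", "s3:*") for a in actions):
            issues.append("wildcard Action: every S3 operation is allowed")
        # IPAllow-style condition that was never filled in with an address.
        cond = stmt.get("Condition", {})
        if "IpAddress" in cond and not cond["IpAddress"].get("aws:SourceIp"):
            issues.append("IpAddress condition present but no aws:SourceIp set")
    return issues
```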
3)
Using Web hosting and policies:
Web hosting makes your S3 bucket browsable from a web browser and eases access for regular users. You can enable web hosting from the Properties section of the S3 bucket, as shown in the image below.
However, you still need to set a policy on the S3 bucket for the actual access as described
here. A simple policy with the action s3:GetObject can be sufficient for this. The contents of the bucket will then be served from the website endpoint, a URL of the form http://<bucket_name>.s3-website-<region>.amazonaws.com.
The S3 policy named PublicReadForGetBucketObjects in that article allows such access, and you can specify exactly which objects/resources can be accessed using a web browser. This added benefit has its own security implications.
Issue:
The policy associated with it, as described in the article, sets the Principal value to "*" and the S3 Action to "GetObject" by default. This essentially makes the resources accessible to any user, even outside the account (i.e. public), as well as browsable. Care must be taken while configuring such access. A simple IPAllow condition with a whitelisted IP address can effectively secure a bucket with such access.
Those are all the ways you can accidentally make your S3 bucket publicly accessible.
Okay. So, if you have a large number of S3 buckets to audit for such public access, you will need to script this. You can use the AWS CLI or the boto3 library in Python to do so. If you want to audit your S3 buckets without reinventing the wheel and writing your own script, you can directly use NCC Group's tool Scout2.
That's it, folks. This post was only meant to cover these simple misconfigurations. I might publish a script to audit S3 public access soon. Anyway, I hope you found this educational.