Thursday, December 7, 2017

Protecting your phone identity

Hello after a while; it has been many months since my last post.
Anyway, I want to share some insights about phone/SMS identity attacks, inspired by this T-Mobile complaint on Reddit. In it, the customer is the victim of a "phone porting attack", which requires only some PII about the target and a bit of social engineering against a gullible carrier customer care agent.
Following on this, I read a few other articles on phone identity theft, and there is one good incident analysis article by Coinbase. There, the attacker obtains some PII about the user, calls customer care to set up call forwarding, and eventually ports the number to another carrier. The incident took place more than a year ago, but it is still relevant because phone-number-based identity remains very insecure. Some important takeaways:

  • It is fairly easy to obtain PII on individuals online, which is then used to social engineer customer care.
  • Customer service at mobile carriers is the weak link here. However, you can defend against it with a customer service PIN/password (you need to request one from your carrier). This also protects against "SIM swap fraud" (an older attack), as elaborated here. In short, a "SIM swap" or "SIM split" is simply calling customer service, armed with the victim's PII, to get a new SIM card issued in their name.
  • Personal and professional accounts/security overlap considerably. Facebook, Twitter, Google and several other personal/professional accounts can have their password recovery hinged on SMS verification, and that setting should be proactively changed.
  • Having a VoIP number that is used solely for account verification/resets, instead of your real number, can prove very useful in the above case. Google Voice has no customer care line and is not susceptible to these phone network attacks.
  • Using a password manager to generate long, random passwords also helps, as the article notes, in the case that your SMS 2FA is compromised.
  • And as we all know, moving away from SMS 2FA is a necessity, especially given the difficult-to-detect SS7 protocol attacks (the app SnoopSnitch can detect them, but it requires root).
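On that last point: app-based 2FA (TOTP, RFC 6238) needs only a shared secret and a clock, with nothing travelling over the phone network. A minimal sketch of how such codes are generated (the function name and parameters are mine, not from any particular app):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 of the current 30-second time counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    now = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(now // step))
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

An attacker who ports your number gets your SMS codes, but not these: the secret never leaves your device.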
In many such incidents, the goal is to steal money from bank or trading accounts the user owns, since these are easily tied to phone numbers for "security". With the Equifax breach opening the door to identity theft for almost everyone in the U.S., phone identity theft can be much easier and more lucrative for attackers than standard phishing. This is something we should all be aware of and take precautions against, at least until the phone carriers can do their job better.


Tuesday, May 23, 2017

Is your S3 bucket public?

Update 11/09/2017:
Two days ago, AWS released New S3 Security Features, which include an indicator on each bucket showing whether it is publicly accessible. This is determined from your bucket ACLs and bucket policy. However, individual objects can still be publicly accessible through object ACLs, which AWS does not flag. This feature removes the need to audit S3 buckets for public access, at least at the bucket level.

--------------------------
Way back in 2013, security researchers Robin Wood and H D Moore discovered that a large number of S3 buckets were publicly accessible to anyone on the internet. Similar holes in S3 buckets are still being found by other researchers: one example, Schoolzilla exposing 1.3 million students' data through a publicly accessible S3 bucket, can be found here. There are several other examples from bug bounty programs; one from HackerOne is detailed here.

So, the goal of this post is to provide an understanding of such S3 bucket misconfigurations and to help you identify/audit whether your buckets are publicly accessible.

Your S3 bucket can be made publicly accessible in the following three ways:

1) Using Access Control Lists(ACLs):
Each individual user or predefined group of users can be granted permissions on the bucket/object. These permissions can be READ, WRITE, READ_ACP, WRITE_ACP and FULL_CONTROL. If these permissions are tied to individual users, they are well and good. However, they can also be tied to any of the predefined groups, namely AllUsers, AuthenticatedUsers and LogDelivery. You can read about them in detail here.
Issue:
Out of these, any permission granted to the AllUsers group makes the bucket publicly accessible, and any granted to the AuthenticatedUsers group makes it accessible to any user authenticated to AWS, in any account, which effectively makes it public too. These groups appear as Everyone and AWS Authenticated Users in the Bucket ACL, as shown in the image:


Such buckets can simply be accessed using the URL
https://<bucket_name>.s3.amazonaws.com      OR
https://s3.amazonaws.com/<bucket_name>
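Such grants are also easy to spot programmatically. A sketch, assuming the Grants structure that boto3's get_bucket_acl call returns (fed here with a plain dict rather than a live call):

```python
# Group URIs that expose a bucket to everyone / any authenticated AWS user
RISKY_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(acl):
    """Return the ACL grants made to the AllUsers or AuthenticatedUsers groups."""
    return [g for g in acl.get("Grants", [])
            if g.get("Grantee", {}).get("URI") in RISKY_GROUPS]
```

Any non-empty result for a bucket you did not intend to be public is worth investigating.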

2) Using custom S3 policies:
An S3 bucket policy looks as follows:
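For illustration, here is a minimal example of such a policy with the same components discussed below (the account ID, user, bucket name and IP range are made up):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUploadsFromOffice",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::123456789012:user/uploader"},
      "Action": ["s3:PutObject", "s3:PutObjectAcl"],
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}}
    }
  ]
}
```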


It is in JSON format and contains the following important components:

Principal --> used to attach this policy to a user or Amazon resource. Here, the principals are specified using their Amazon Resource Name (ARN).

Action --> used to specify the S3 commands allowed by the policy. Here, the PutObject and PutObjectAcl actions are allowed.

Sid --> simply a name given to the statement.

Resource --> used to specify the resource the policy applies to. It could be the bucket, all objects, or specific objects within the bucket. Again, this is specified using the bucket's ARN.

Condition --> used to state conditions for accessing the resource. For example, access can be limited by source IP address. This is optional.

There are many combinations of the above that can produce policies for any requirement. We will focus on the mistakes in setting up policies that lead to security issues.

Issues:
  1. A few critical mistakes can be made while selecting the Principal value. If the value assigned is "*" or {"AWS": "*"}, then effectively any AWS user, not even limited to the current account, can use the S3 actions specified for the resource, essentially making the bucket and/or its objects public.
  2. Similarly, care must be taken when assigning S3 Actions in a policy. If the value assigned is "*" or "s3:*", then every action, including putting and deleting objects, can be performed by the assigned Principal (user).
  3. In the same manner, the Resource value is a concern if all values are selected instead of the specific bucket/object, and restrictive Conditions cannot be effective if they are not written correctly. For example, an IPAllow condition is nullified if an IP address is never entered. Yes, this happens in real life.
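Mistake 1 is also easy to check for in code. A sketch that scans a policy document for statements whose Principal is wide open (the sample policy in use here is made up):

```python
import json

def risky_statements(policy_json):
    """Return the Sids of statements granted to every AWS user,
    i.e. whose Principal is "*" or {"AWS": "*"}."""
    flagged = []
    for stmt in json.loads(policy_json).get("Statement", []):
        principal = stmt.get("Principal")
        if principal == "*" or (isinstance(principal, dict)
                                and principal.get("AWS") == "*"):
            flagged.append(stmt.get("Sid", "<no Sid>"))
    return flagged
```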

3) Using Web hosting and policies:
Web hosting makes your S3 bucket browsable from a web browser, which eases access for normal users. You can enable it from the Properties section of the S3 bucket, as shown in the image below.

However, you still need to set a policy on the S3 bucket for the actual access, as described here. A simple policy allowing the action s3:GetObject is sufficient. The contents of the bucket are then hosted at a website endpoint of the form http://<bucket-name>.s3-website-<region>.amazonaws.com.
The S3 policy in the article, named PublicReadForGetBucketObjects, allows such access, and you can specify exactly which objects/resources can be accessed from a web browser. This added convenience has its own security implications.
Issue:
The policy described in the article sets the Principal value to "*" and the S3 Action to "GetObject" by default. This makes the resources browsable and accessible by any user, even outside the account, i.e. public. Care must be taken when configuring such access; a simple IPAllow condition with a whitelisted IP address can effectively restrict it.

Those are all the ways you can accidentally make your S3 bucket publicly accessible.

Okay. So, if you have a large number of S3 buckets to audit for public access, you will need to script it. You can use the AWS CLI or the boto3 library in Python. If you want to audit your buckets without reinventing the wheel and writing your own script, you can use NCC Group's tool Scout2 instead.

That's it folks. This post was meant to only go over these simple misconfigurations. I might publish a script to just audit S3 public access soon. Anyways, I hope you found this educational. 

Saturday, March 11, 2017

Story of PHP crypt()

Note: My friend uses hashing and encryption interchangeably in his argument. However, hashing is what the following code performs.

A friend of mine brought a piece of PHP code to me and said it was an amazing "encryption" method which used a randomized salt value to encrypt the password. Well, I asked, isn't that something usually done; what is so amazing about it? He then said it was because the code did not store or use the salt, but used only the hashed password value to verify it!

Initially, I smirked at that thought and then he decided to show me the code for it. It kind of looked as follows.

Hashing:
Here, the main focus is the crypt function, which hashes the password with the salt value.
The interesting part of his argument was in the verification code.

Verification:

So, the main argument, according to him, was that the verification called the crypt function with only the hash value and the original password. No salt! He sounded super confident too!

Well, that stumped me for a moment. It went against my traditional knowledge of crypto: how can the salt be random during hashing, yet never be used during verification? So, I decided to dig a bit deeper.

At first glance, it looked legitimate, as the verification only compares $pass with the output of crypt($password, $pass). Looking at just the hashed value did not help me much. It looked like this:

T$2a$10$.qckuIvEAfe8bb9lVLbBZuxvWJ7vAyyaSJ4ppAmc5C/wL38Vx7x86

This looked like a random value initially, but then you see that $2a$10$ was also prepended to the base64-encoded random value in the code. That value actually became part of the salt. Printing out the salt made it all clear.

.qckuIvEAfe8bb9lVLbBZw==

The salt was actually present inside the hashed password: bcrypt encodes its 16-byte salt as the 22 characters following the cost field, which is why only the trailing "w==" padding was missing, as if the value had been truncated. Running a few other iterations that returned a different salt and hashed value suggested the same thing. That meant the crypt function embedded the salt in its own output, and that is how the salt was actually supplied to crypt during verification. Now, it made complete sense. But wait, this is exactly how salted hashes work. It is also how Linux stores its passwords in the /etc/shadow file.
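The same embed-the-salt-in-the-output trick is easy to reproduce. A sketch in Python, using PBKDF2 from hashlib instead of bcrypt purely for illustration (the format string and parameters are mine, not from the PHP code):

```python
import hashlib
import os

def hash_password(password):
    """Hash with a random salt; the salt is embedded in the stored string."""
    salt = os.urandom(16).hex()
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt), 100_000).hex()
    return "$pbkdf2${}${}".format(salt, digest)

def verify(password, stored):
    """No separate salt needed: it is parsed back out of the stored hash,
    which is exactly what PHP's crypt() does internally."""
    _, _, salt, digest = stored.split("$")
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt), 100_000).hex()
    return candidate == digest
```

Verification takes only the password and the stored value, just like my friend's code, yet the salt is very much in use.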

The $2a specifies the algorithm used, in this case bcrypt (based on the Eksblowfish cipher). The $10 is the cost factor, i.e. 2^10 key-expansion rounds. The next 22 characters are the salt as far as crypt is concerned, and the rest is the hashed value.
For further details about the formats and algorithms used by Linux systems to store passwords, look here or at other online resources. Further details of how the crypt function can be used for salted hashes are here.

The interesting thing was that this friend is technically much more senior than me, and this incident made it clear to me that not everyone understands crypto! He had actually picked up the code from here and not put much thought into it. But this was fun for me, and probably educational for him 😉

Friday, February 24, 2017

Subdomain Takeover 2

So (referring to my previous post), after the initial post about subdomain takeover by researchers from Detectify, they were contacted by another security researcher, Szymon Gruszeck. He provided them with another method, and a POC, for taking over a subdomain, which he had found a year earlier. You can find his post here.

Screenshot of the website racing.msn.com used for his POC:

So, my TL;DR for this attack is:

Overview:
A subdomain can simply be taken over if it has a CNAME pointing to an expired domain.

Attack Scenario:
1) The attacker finds a subdomain (the victim's) whose CNAME points to a domain whose registration has expired.
2) The attacker simply buys the expired domain and phishes anyone who visits the site.

Example:
The researcher demonstrated the attack on the subdomain racing.msn.com, whose CNAME pointed to the expired msnbrickyardsweeps.com.

Further attack: 
1) The attacker can create a valid SSL certificate for the domain they now own, and victims will implicitly trust it because the browser shows the HTTPS secured symbol.

Requirements:
The victim's subdomain needs to have a CNAME pointing to an expired domain that can be bought.

Cause:
1) Victim's mistake. Again.
2) Service provider for the domain does no validation. But you can hardly blame them here.

Extent:
1) As mentioned in the previous post, most domain service providers do not verify domain ownership.
2) More importantly, the same attack can be performed with DNAME, NS and MX records.
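The core of such an audit can be sketched as a pure function, with DNS answers and registration data stubbed out as plain values (real lookups would need a resolver and WHOIS; the eTLD+1 extraction is deliberately naive):

```python
def registrable(host):
    """Naive eTLD+1: fine for a sketch, not for real multi-part TLDs."""
    return ".".join(host.rstrip(".").split(".")[-2:])

def dangling_cnames(cname_map, registered):
    """Flag subdomains whose CNAME target's base domain is unregistered,
    meaning an attacker could buy it and take the subdomain over."""
    return [sub for sub, target in cname_map.items()
            if registrable(target) not in registered]
```

Against the racing.msn.com example above, the function would flag the subdomain as long as msnbrickyardsweeps.com remained unregistered.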

Look at Detectify's blog post for more details. Have fun!




Tuesday, February 21, 2017

Subdomain Takeover used to hack President Trump's website!

On 19th Feb, President Trump's website secure2.donaldjtrump.com was hacked by an Iraqi hacker calling himself "Pro_Mast3r". The website was defaced as follows:


 Credit: g33xter
However, the interesting bit is that this hacker apparently contacted security news reporter Brian Krebs, saying that he used the Subdomain Takeover attack described here to do it.

So, here's my TL;DR about it.

Attack Scenario:
1) The victim has a website set up on a subdomain of one of several service providers like Heroku, Github, Bitbucket, etc.
2) The victim no longer uses that service but did not remove the DNS record pointing to it.
3) The attacker creates an account with the service provider, claiming the subdomain as theirs.
4) The attacker uses the subdomain to phish legitimate users.

Requirements:
1) The victim's DNS must still point to the provider's subdomain in some way.
2) The victim must no longer own/use that subdomain.

Cause:
1) Victim's mistake, obviously.
2) Service providers do not verify subdomain ownership.

Extent:
Detectify identified 17 providers (a number that later grew past 100) that did not verify subdomain ownership: Heroku, Github, Bitbucket, Squarespace, Shopify, Desk, Teamwork, Unbounce, Helpjuice, HelpScout, Pingdom, Tictail, Campaign Monitor, CargoCollective, StatusPage.io and Tumblr.

For further details, read their blog post. Enjoy.

Thursday, October 27, 2016

Beginnings...

Hello everyone! This is me trying to get started blogging about InfoSec every now and then. I had been reluctant to put in the effort until now. I am doing this just to share the knowledge I gain, and to remember the stuff I read myself! Haha.

Hopefully, with your feedback I can improve my writing skills and understanding of security concepts too!

So, let's get started...