AWS has LOTS to offer when it comes to cloud security. In this episode, Jim talks to Ranjit Kalidasan, a Senior Solutions Architect, who will discuss the Shared Responsibility Model for Cloud security and then review the key tools and observability use cases you need to know about to ensure your AWS cloud solution is protected.
Listen to other Navigating the Cloud Journey episodes here.
EP 10: Navigating the AWS Security Ecosystem
Jim: Hey everybody, Jim Mandelbaum again from Gigamon, and I have a really cool guest today: Ranjit Kalidasan, a Senior Solutions Architect at AWS. Before we begin, why don't you do a quick little intro about yourself and your background.
Ranjit: Hey, Jim, nice to be here. My name is Ranjit Kalidasan and I'm based in Boston. I've been working at AWS for about three and a half years now. I'm part of a group called the North America ISV Partners Team, and within that team I'm on the security ISV team, which means I handle security ISVs as my partners. So I have a mixed bag of partners: partners in the SIEM space, which is Security Information and Event Management, partners in the identity space, and partners in container security. I'm so glad to join the podcast with you, Jim.
Jim: This is going to be fun. I want to get AWS's perspective on some things, and the first thing I want to talk about is a little baselining. Every time I end up getting in deep with clients, we talk about the Shared Responsibility Model, because I think there's a complete lack of understanding as to what is the responsibility of AWS and what is the responsibility of the subscriber using AWS Services. So, do you want to take that from an AWS perspective?
Ranjit: Absolutely, Jim. When we talk about security in AWS, it's very important that we talk about the Shared Responsibility Model. What we mean by the Shared Responsibility Model is that there is a responsibility that AWS carries as a Cloud Service Provider, and a responsibility that the customer should take care of.
A simple way of defining the Shared Responsibility Model is: AWS takes care of the security "of" the Cloud, and the customer takes care of security "in" the Cloud. So what do we mean by "of the Cloud" and "in the Cloud"? When you consume AWS Services, we host those Services in global regions, and within the regions we have Availability Zones. These are all discrete sets of data centers where the Services are serving customers.
So, we handle the security of those data centers, from physical security, like access controls and video surveillance, through the security of the underlying infrastructure, compute, network, and storage, like the perimeter security.
Customers, in turn, are responsible for security in the Cloud, meaning they are responsible for securing their data and applications in the Cloud. We have about 280 security Services and features today that customers can leverage to secure their applications and data in the Cloud.
If you take an example, let's say a customer is using Amazon EC2, or Elastic Compute Cloud, and they host a web application in AWS. The security of managing the infrastructure where EC2 runs is AWS's responsibility, and the security of the application and the data is the customer's responsibility. That includes patching the operating system for the EC2 instances, making sure the application's security settings are all configured, and making sure encryption is configured.
Jim: Okay, before you dive deeper into that: you brought up access controls, and you were talking about access controls not only on the physical side but also on the customer side. You guys have a complete set of access controls. Maybe you can talk about that as a baseline. If I'm somebody looking at deploying new infrastructure into AWS, and I want to start with some basics around controlling access to my environment and my EC2 instances, what do you have for me to do that?
Ranjit: So, we give customers the ability to set access controls at different levels within AWS. When we talk about access controls, we have to talk about Identity and Access Management. IAM is the primary authentication Service and is used across our Services to authenticate and authorize API requests. As you know, we are a Cloud Service Provider, so anything and everything is built around APIs, and those APIs need to be authenticated and authorized. It starts with IAM. Customers can set up policies within IAM to control how they provide access to users. They also have the ability to integrate with a different identity provider using another Service we have called the Security Token Service, or STS. We always recommend that customers federate access with their current identity provider, so that they maintain the same security posture across the AWS Cloud.
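[Editor's note: a minimal sketch of the kind of least-privilege identity policy Ranjit describes attaching to IAM users, groups, or roles. The bucket name "example-app-data" is a placeholder, not from the episode.]

```python
import json

# An identity-based policy granting read-only access to a single S3 bucket.
# "example-app-data" is a hypothetical bucket name used for illustration.
read_only_s3_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReadOnlyBucketAccess",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-app-data",      # for ListBucket
                "arn:aws:s3:::example-app-data/*",    # for GetObject
            ],
        }
    ],
}

print(json.dumps(read_only_s3_policy, indent=2))
```

A document like this would typically be attached to an IAM user, group, or role with `aws iam put-user-policy` or the equivalent console workflow.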
Jim: Hold on one second. I'm going to do a self-plug there. You're talking a lot about identity and identity in the Cloud. For those of you who follow my podcast, I recommend you go back. I did an interview with Diana Volare from Netflix, and it was all around identity and access management. So we dove really deep into it and how it relates to the Cloud. So, I think that you'll get a lot of value, especially when you understand what Ranjit is talking about. I encourage you to go back and listen to that.
Sorry, that was a selfish plug to get people to listen to the other podcast. Please continue: IAM and integration with your third-party identity providers. I agree with you, that's really critical, because everything boils down to controlling access, and the identity can be a human, a Service, anything accessing the data nowadays. It can be IoT, it can be OT, it can be anything. And that means you have to control access in a more automated way.
All right, let's dig into that a little, because it's a little more than just identity. We've got controls around endpoints; we've got multi-organizational structures. Maybe you can talk a little bit about that.
Ranjit: Oh yes. So, there are different levels at which customers can set these policies within IAM, and different types of policies they can configure. First, there are identity policies: the policies attached to IAM identities like IAM users, groups, or roles.
And there are also resource-based policies. These are policies you define on the resources themselves, where the resources reside. For example, an S3 bucket policy is a policy defined at the resource level to control access to that S3 bucket. You mentioned organizations: AWS Organizations is a Service we provide for managing multiple accounts. Customers can set up multi-account organizational units (OUs) and apply policies at the organization level.
There are special policies called Service Control Policies, or SCPs, that help customers define policies to apply at the OU level. For example, customers may want to restrict their users from going into regions beyond their geographical control, or enable access to only certain Services. All these guardrails and boundaries you can set with SCPs.
There are also other types of access controls you can set, like permission boundaries. They are very similar to SCPs at the organization level, but permission boundaries work within an individual AWS account. So you have different levels where you can set policies.
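[Editor's note: a sketch of the region guardrail Ranjit gives as an SCP example. The approved-region list and the exempted global services are illustrative assumptions, not from the episode.]

```python
import json

# An SCP denying any action outside two approved regions. Global services
# such as IAM and STS are exempted via NotAction, since their requests are
# not tied to a regular region. Region names here are placeholders.
region_guardrail_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideApprovedRegions",
            "Effect": "Deny",
            "NotAction": ["iam:*", "organizations:*", "sts:*"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": ["us-east-1", "eu-west-1"]
                }
            },
        }
    ],
}

print(json.dumps(region_guardrail_scp, indent=2))
```

Attached at an OU, a policy like this bounds every account underneath it; note that SCPs only limit permissions, they never grant them.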
You also mentioned endpoints. All of our Services are web Services, which means any customer can access them over the internet using a public endpoint. But when a customer has an application running within a VPC, for example, they don't have to traverse a public endpoint to talk to AWS Services. Instead, they can define something called VPC endpoints, which give them the ability to route the traffic over the AWS backbone. That's the purpose of VPC endpoints. We also have resource policies you can attach to the endpoint, gateway endpoint policies, where you can define and control who can access those endpoints and what kinds of resources they can access through them.
When an API call is made, there is a policy evaluation logic, and it starts at the organization level. When a customer invokes an API, IAM first checks the SCPs to see whether that API access is allowed. If it is, it then checks the resource-based policies. Let's say someone is trying to access an S3 bucket; the S3 bucket policies are evaluated. Then it checks the identity policies. Basically, you need a policy that provides access at least at one of these levels, and nothing that denies the access: in AWS, an explicit deny takes precedence. For example, if an identity policy allows someone to access an S3 bucket but the bucket policy restricts it, their access is denied. So this is how access control works within AWS: there are multiple levels, and multiple pieces of logic are applied before access to a resource is granted.
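[Editor's note: a toy model of the evaluation order Ranjit walks through, an explicit deny at any level wins, and otherwise at least one allow is required. This is a deliberate simplification for intuition, not the full IAM evaluation logic.]

```python
# Simplified model of IAM policy evaluation: collect the effects from the
# SCP, resource-policy, and identity-policy levels, then apply two rules:
# (1) an explicit Deny anywhere always wins, (2) with no Allow anywhere,
# the request falls through to an implicit deny.
def evaluate(scp_effects, resource_effects, identity_effects):
    all_effects = scp_effects + resource_effects + identity_effects
    if "Deny" in all_effects:
        return "Denied"     # explicit deny takes precedence
    if "Allow" in all_effects:
        return "Allowed"
    return "Denied"         # implicit deny: nothing granted access

# Ranjit's example: the identity policy allows the S3 call,
# but the bucket (resource) policy denies it.
print(evaluate(scp_effects=[], resource_effects=["Deny"], identity_effects=["Allow"]))  # Denied
```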
Jim: And for those of us who have had those multiple levels and tried to troubleshoot them, that's always been a challenge. But there's now an easier way to figure out why, with the analyzer. Do you want to talk about Access Analyzer a little?
Ranjit: Oh, yes. IAM Access Analyzer is a Service that helps customers, first of all, monitor the access they have granted to external entities: other AWS accounts or external third parties. Access Analyzer also helps analyze those access patterns. Another feature is that customers can model an IAM policy based on recent access events. For this, Access Analyzer uses CloudTrail, so now I'm talking about another Service, CloudTrail.
CloudTrail is an auditing Service that records all API access in AWS. Whenever anyone invokes an API call against AWS Services, there will be an entry in CloudTrail: who invoked the API, what resources they accessed, and the result of that action, whether they got access to the resources or not. All these details are sent to CloudTrail. IAM Access Analyzer uses this information from recent audit events to help customers model an IAM policy. So that's another cool feature of IAM Access Analyzer.
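[Editor's note: a sketch of pulling the "who, what, and result" fields Ranjit lists out of a CloudTrail record. The sample event below is hand-written for illustration; the field names follow the CloudTrail record format.]

```python
import json

# A fabricated CloudTrail record with the fields discussed above.
sample_event = json.loads("""
{
  "eventTime": "2023-01-15T12:00:00Z",
  "eventSource": "s3.amazonaws.com",
  "eventName": "GetObject",
  "userIdentity": {"type": "IAMUser", "arn": "arn:aws:iam::111122223333:user/alice"},
  "errorCode": "AccessDenied"
}
""")

def summarize(event):
    who = event.get("userIdentity", {}).get("arn", "unknown")
    what = f'{event.get("eventSource")}:{event.get("eventName")}'
    # errorCode only appears on failed calls; its absence means success.
    result = event.get("errorCode", "Success")
    return who, what, result

who, what, result = summarize(sample_event)
print(who, what, result)
```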
Jim: It's also really good for troubleshooting when something doesn't work, because it will go out and determine where access broke down. To your point earlier: first I've got an organization, then the role that an identity assumes within that organization tries to access an S3 bucket, and all of a sudden it gets rejected. It will actually show you the stages and where the request failed, so you know where to look. Oh, I've got a policy on my S3 bucket that's default deny; it's still sitting in default mode, so we've got to go open that up. It's a really good tool not only for troubleshooting but also, as you said, for modeling and figuring things out.
Jim: Now, since we're talking about access, I want to bring up something. Every cloud vendor says, we have all these root-privilege accounts and we give you security features to protect them, yet fewer than 25% of the entities out there use them. I'm talking about MFA. It's one of the simplest things to do, turn on MFA, yet so few do it. We know AWS has MFA, but it's about more than just the root account. Maybe you can talk about MFA quickly and then dive into the uses people don't think about, because usually when we think about MFA it's: I'm coming in on my phone and authenticating just my root account. But there are other uses.
Ranjit: Right. Multi-factor authentication, or MFA, is a simple best practice that adds an extra layer of protection on top of AWS resources and accounts. In AWS you enable MFA for the AWS account and for the individual IAM users you create under the account, and you can write IAM policies with conditions that enforce MFA. You can set policies so that users need MFA to access any resources. There are different kinds of MFA options currently supported by AWS. Customers can bring virtual MFA devices like Google Authenticator, Duo Mobile, and other solutions. I personally like Duo Mobile because I don't have to enter any numbers.
Jim: Using push technologies is nice, yeah.
Ranjit: Yeah, it is nice. Another option customers can use is U2F, or Universal 2nd Factor, security devices. U2F is an open authentication standard that enables secure access to online resources. Customers can also use hardware MFA fobs. Remember that old thingy we used to carry that would generate random numbers?
Jim: I come from RSA. I know it very, very well.
Ranjit: So, these are the different options that customers can use.
Ranjit: And you also brought up an interesting point. When we talk about MFA, many people think it's only used for accessing the AWS console. But in AWS we can also use MFA for API access. This is where MFA is integrated with the Security Token Service, or STS: you can invoke an API call, GetSessionToken, and provide your MFA credentials, and it will give you temporary credentials. After that, using those credentials, you can make your API calls. So MFA is used both for the console and for API access within AWS.
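[Editor's note: a sketch of the policy-condition pattern Ranjit mentions for enforcing MFA. This is one commonly used statement shape; the `Sid` is our own label.]

```python
import json

# A deny statement that blocks all actions unless the request was
# authenticated with MFA. BoolIfExists also catches requests where the
# aws:MultiFactorAuthPresent key is absent entirely (e.g. long-term
# access keys used without an MFA-backed session).
require_mfa_statement = {
    "Sid": "DenyAllExceptWithMFA",
    "Effect": "Deny",
    "Action": "*",
    "Resource": "*",
    "Condition": {
        "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
    },
}

print(json.dumps(require_mfa_statement, indent=2))
```

Temporary credentials obtained through GetSessionToken with an MFA code carry `aws:MultiFactorAuthPresent = true`, so they pass a condition like this one.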
Jim: So that's really great, because what we're trying to do is make sure the right people and the right resources have the right access. But since we're talking about access and the right people: insider threats are always an issue. There are real insider threats, people maliciously doing things, and there are also the "I didn't know better" conversations, like those S3 buckets that get exposed to the world. We all know about them.

Jim: And I think more often than malice, it's just people making mistakes. It's the person who puts together an S3 bucket but doesn't understand that when he or she exposes something, they expose it to the world. So Zero Trust is one of the things people are adopting to try to eliminate that possibility, to minimize the impact of any one resource. What are you doing from an AWS perspective to enable those of us on that journey to put better Zero Trust models in place?
Ranjit: Sure, Jim, that's a very interesting topic. AWS views Zero Trust as a conceptual model and a set of mechanisms that focus on providing security controls around digital assets. The model does not solely or fundamentally depend on traditional network controls or network perimeters. Zero Trust begins with making no assumptions about trust for any system or access request.
So, AWS helps simplify Zero Trust implementation because our Cloud Services were built from the ground up around these principles. We have capabilities that provide the core building blocks of a Zero Trust architecture as standard features, and AWS Services expose this functionality through simple API calls; customers don't have to maintain complex infrastructure or additional software components to maintain Zero Trust within AWS. Our Zero Trust model starts with the signing of API requests, using the Signature Version 4 (SigV4) signing method. As you know, AWS is a leading Cloud provider, and we process billions of API requests every day, millions at any given point in time. Each and every one of these API requests is authenticated and protected in transit with TLS. So Zero Trust starts at the API request.
The next key aspect of the Zero Trust model is Service-to-Service interaction. When individual AWS Services need to call each other, they rely on the same mechanisms customers use today; we don't treat our Services' identities differently from customer identities. For example, for EC2 Auto Scaling to respond to a scaling request, it requires IAM authorization through what's known as a service-linked role. The service-linked role needs to be defined to enable the Auto Scaling Service to talk to the EC2 Service and invoke the scaling request to spin up additional EC2 instances. That's an example of the Service-to-Service model applying the same principles we teach customers.
Jim: Let me stop you there and ask a question on that. We talked about signing; Service-to-Service calls get signed. And as you auto-scale, those requests are automatically signed as well. That's part of the process to ensure the integrity of the instances that are auto-scaled too, correct?
Ranjit: So, every API request one Service makes to another needs to be authorized by the same principles we apply to other identities; that's the concept. To your question: every API request has to go through that process and be authorized by IAM. In this example, if the Auto Scaling Service needs to spin up additional EC2 instances for a scale-out policy, it has to invoke those API calls, and it has to go through the same policy evaluation logic we were talking about earlier.
Jim: I'm just trying to illustrate the automation piece. We're talking about leveraging the security measures for Zero Trust through the signing, through the audit, through the auto-scaling. The important thing is that we're leveraging automation, which means scaling up or down still keeps you within that Zero Trust model. Continue, please.
Ranjit: Yeah. Another key aspect is the binding of network security and identity security. Traditional network security relies on a secure perimeter and focuses on securing the network itself, so anyone who gets past the perimeter is trusted and gets access to do stuff. But in a Zero Trust environment, network-centric trust models are augmented by other techniques, which we call identity-centric controls.
So, at AWS, we don't want to make this a binary choice between network and identity. How can we bind them together so that both network and identity security contribute to a strong Zero Trust model? A classic example is how we design a VPC and VPC features like public and private subnets, so customers can place their mission-critical applications behind network boundaries and protect them using security group rules and network ACLs (NACLs).
At the same time, let's say there's an application running on an EC2 instance placed in a private subnet, and that application needs to call S3 to get or put an S3 object. It has to cross those network boundaries, so network routing and network controls have to be in place: it needs security group permissions to reach the S3 endpoint. Then we can attach an endpoint policy, where you can further refine those access controls, identifying who's making the calls and whether they're authorized. As an additional layer, we can also apply the bucket policy. So for the application running on EC2 to call an S3 bucket, it has to cross those network boundaries and controls, and it also has to have the IAM permissions defined by the IAM policies. This is where we bind the network and identity constructs together.
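[Editor's note: a sketch of the endpoint-policy layer Ranjit describes. Attached to an S3 gateway endpoint, a policy like this restricts traffic through the endpoint to a single bucket; the bucket name is a placeholder assumption.]

```python
import json

# A VPC endpoint policy allowing only get/put on one application bucket
# through the endpoint. Even workloads with broader IAM permissions
# cannot reach other buckets via this network path.
s3_endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowOnlyAppBucketThroughEndpoint",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::example-app-data/*",
        }
    ],
}

print(json.dumps(s3_endpoint_policy, indent=2))
```

The request still has to pass the security group rules, this endpoint policy, the bucket policy, and the caller's IAM policy: the network-plus-identity layering described above.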
Jim: I always say it all falls back on identity, doesn't it? It really does. Even if we're talking network access.
Jim: So, one of the things I always rely on when I'm dealing with this is a really cool tool, GuardDuty. Can you talk about GuardDuty, where the use cases for it lie, and what your thoughts are?
Ranjit: Oh, sure. GuardDuty is a threat detection Service that continuously monitors your AWS accounts and workloads for malicious activity.
GuardDuty uses machine learning, anomaly detection, and integrated threat intelligence to identify and prioritize potential threats in a customer's AWS environment. It gets these events by analyzing different data sources: AWS CloudTrail, Route 53 DNS logs, and VPC Flow Logs, and we recently announced a feature where GuardDuty can also analyze Amazon EKS (Elastic Kubernetes Service) control plane logs.
GuardDuty detections can be classified into four primary types. We start with reconnaissance, where it detects activity by an attacker: unusual IP activity, intra-VPC port scanning, unusual patterns of failed login requests, or unblocked port probing from a known bad IP address. All of that falls under reconnaissance.
The second type is instance compromise, where it flags things like cryptocurrency mining, so popular nowadays, backdoor command-and-control activity, instances running malware or conducting denial-of-service activity, or unusually high traffic patterns from EC2 instances. All of that falls into the EC2 compromise detection types.
The third type is account compromise, where we analyze API calls and identify patterns that suggest an account or a particular set of API credentials has been compromised.
The last key detection type is bucket compromise, which is all about S3 buckets: have there been any unusual activities with S3 buckets, any unusual API access patterns?
So, those are the different detection types GuardDuty provides. Very, very useful for customers.
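[Editor's note: GuardDuty finding types follow a "ThreatPurpose:Resource/ThreatFamily" naming scheme, e.g. "Recon:EC2/PortProbeUnprotectedPort". The rough mapping below onto the four buckets Ranjit describes is our own simplification, not an official taxonomy.]

```python
# Classify a GuardDuty finding type string into one of the four
# categories discussed above, using the threat-purpose prefix and the
# affected resource type. This grouping is illustrative only.
def classify(finding_type):
    purpose, _, rest = finding_type.partition(":")
    resource = rest.split("/", 1)[0]
    if purpose == "Recon":
        return "reconnaissance"
    if resource == "S3":
        return "bucket compromise"
    if resource in ("IAMUser", "IAMRole"):
        return "account compromise"
    return "instance compromise"

print(classify("Recon:EC2/PortProbeUnprotectedPort"))            # reconnaissance
print(classify("CryptoCurrency:EC2/BitcoinTool.B!DNS"))          # instance compromise
print(classify("UnauthorizedAccess:IAMUser/MaliciousIPCaller"))  # account compromise
```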
Jim: My suggestion is anytime an S3 bucket is exposed publicly it should send all kinds of alerts and flags, but that's just me.
Jim: You've talked about a couple of different things, and this is all about visibility: I need to see what's happening in my virtual networking, I need to see what's happening in my user accounts. There are lots of different tools for detecting what's happening. I don't want this to be a plug for Gigamon, but rather about the tools that are inherently available to you as an AWS customer.
So, do you want to talk about AWS Config, CloudWatch, Security Hub? We've talked about GuardDuty, and you've already talked about CloudTrail a bit, but do you want to dive deeper into those tools?
Ranjit: We spoke about CloudTrail. To reemphasize: it tracks all user activity and API activity, and it's built as a data lake for all the API audit records within AWS.
Jim: I usually refer to that as my log collector.
Ranjit: Yeah, log collector, yes. We spoke about GuardDuty. Within GuardDuty, I was talking about VPC flow logs.
So, VPC Flow Logs is a feature you can enable to capture metadata about the traffic flowing in and out of a VPC. It doesn't inspect the content, but it captures the metadata, and customers can use that metadata to analyze traffic patterns and spot unusual activities.
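[Editor's note: a sketch of what that metadata looks like, parsing one Flow Logs record in the default space-separated format. The sample line is fabricated for illustration.]

```python
# Field names of the default VPC Flow Logs record format.
FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_flow_record(line):
    """Split one default-format flow log line into a field dict."""
    return dict(zip(FIELDS, line.split()))

record = parse_flow_record(
    "2 111122223333 eni-0123456789abcdef0 10.0.1.5 203.0.113.9 "
    "49152 443 6 20 4249 1418530010 1418530070 ACCEPT OK"
)
print(record["srcaddr"], "->", record["dstaddr"], record["action"])
```

Note there is no payload anywhere in the record, only addresses, ports, byte counts, and the accept/reject verdict, which is exactly the "metadata, not content" distinction made above.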
The other Service is Amazon Inspector, which provides vulnerability management within AWS. It performs continuous vulnerability assessment, primarily on EC2 instances, but we also recently launched a feature that extends this capability to our containerized Services like ECS and EKS.
Another important Service is Amazon CloudWatch, an observability Service. It provides complete visibility into a customer's environment and applications using the key observability data: logs, metrics, and traces.
Another important Service is Amazon EventBridge, a serverless event bus that makes it easier to build event-driven applications at scale. EventBridge integrates with pretty much all of our Services, so customers can trap any kind of event and build an automation solution, or simply notify a user. There are many different use cases where EventBridge is used, and it's also now integrated with many SaaS partners, so customers can receive events from those partners and build automation on top of them.
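[Editor's note: a sketch of the kind of EventBridge event pattern such automation starts from, here selecting GuardDuty findings. The tiny matcher handles only exact-value lists, enough to show how patterns select events; real EventBridge matching is much richer.]

```python
# An event pattern that matches GuardDuty findings placed on the bus.
pattern = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
}

def matches(event, pattern):
    """Simplified matching: every pattern key must list the event's value."""
    return all(event.get(key) in allowed for key, allowed in pattern.items())

event = {
    "source": "aws.guardduty",
    "detail-type": "GuardDuty Finding",
    "detail": {"severity": 8},
}
print(matches(event, pattern))  # True
```

In a real rule, a matching event would then be routed to a target, e.g. a Lambda function that isolates the instance, or an SNS topic that pages the on-call engineer.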
Another important Service is AWS Config. As the name suggests, it's our configuration management Service. AWS Config continuously monitors and records your AWS resource configurations and lets customers automate the evaluation of recorded configurations against desired configurations. You can track the history of configuration changes and the relationships between different AWS resources, and you can use pre-built or customized rules to track and detect configuration drift. An important feature of AWS Config is Conformance Packs, which customers use to define a baseline configuration. They can deploy Conformance Packs to detect configuration changes and create mitigation actions using other Services like Lambda or Step Functions.
Jim: Can I throw one other thing out there? There's a lot of talk about Services that are log-based or metadata-based, but there's also a lot of demand now for packets, so VPC mirroring is something that comes up a lot. Do you want to take 30 seconds and talk about VPC mirroring? I think a lot of people don't understand what it really is, and its value.
Ranjit: Yeah. The actual feature name for VPC mirroring is Traffic Mirroring. Traffic Mirroring is an Amazon VPC feature you can use to copy network traffic from the elastic network interfaces of Amazon EC2 instances. Basically, you can mirror the network traffic and capture the entire IP packets for content inspection and threat monitoring. You can customize the configuration by defining a traffic filter with rules for mirroring, so you don't have to capture every packet; you can define rules for what kinds of packets, and from which IP sources, get sent to your target.
So, customers today can use open-source tools like Zeek or Suricata to monitor the traffic, or they can bring in one of our ISV offerings. I know Gigamon also has a solution, GigaVUE Cloud Suite for AWS, I believe, that integrates with Traffic Mirroring and that customers can use to monitor their traffic.
Jim: Yeah. And I think the important thing here is that overdependence on logs only gives you part of the picture. With Traffic Mirroring, AWS is giving you not only logs but also access to the packets, so you can get the entire picture of what's going on.
Jim: Now, one last thing I want to ask you about. You have a tool that I'm really surprised more people don't take advantage of. Even an individual signing up gets it, and then of course there are higher levels based on your subscription: Trusted Advisor. I don't think enough people know about it or use it. Can you talk about Trusted Advisor a bit?
Ranjit: Yeah, sure, Jim. Trusted Advisor is a very useful tool. AWS Trusted Advisor provides recommendations that help customers follow AWS best practices. It evaluates a customer's account using pre-configured checks against different principles like security, fault tolerance, performance, and Service quotas, and it provides recommendations on the best practices customers should follow. For example, if MFA is not enabled for the root account, Trusted Advisor will say it's a security best practice to go enable it.
Jim: And that S3 bucket that's exposed to the world, that will raise a flag.
Ranjit: Yes, that would raise a flag. These are the types of elements it checks and provides recommendations on. And any customer can get Trusted Advisor as a Service; it only depends on what support level they have with their AWS account. Accounts with Basic or Developer Support get the core checks, but customers on Business or Enterprise Support get the complete, comprehensive set of checks across security, fault tolerance, performance, and Service quotas. They can use Trusted Advisor to get a quick overview of where they are with their account and take action.
Jim: Yeah. What I love about it is that it breaks things down by what AWS refers to as the pillars. It goes through the different pillars, looks at best practices in each of them, and then gives you a report showing where you're deficient. As you said, with a basic developer account like mine, I still get a lot of this, and there's a lot of value in it; it's going to catch the obvious stuff, hopefully. As you move to a business account, it dives deeper: into your APIs, into how things are signed. It's going to dig a lot deeper, so it takes a little longer to run, but it gives you a much deeper inspection.
So, folks, whether you're just starting in AWS or you're already there, if you don't know about Trusted Advisor, go into the console, type "Trusted Advisor," and run it, because you're going to find a lot of things that would otherwise take you a lot of time to find. Or worse, someone knocks on your door and says, hey, your data's being exposed in the wild.
So, before we end this: are there any takeaways, any places you think listeners should go to read more about these security tools? Is there a page or a reference folks should hit?
Ranjit: Yes, Jim, there is a security landing page: customers can go to https://aws.amazon.com/security. I would definitely recommend customers look around that page to get an idea of the different security Services we have and all the resources we've collected there.
I would also recommend customers look at our security whitepaper. You talked about the pillars in the Trusted Advisor section, and you were right: we have Well-Architected pillars around security, performance efficiency, resiliency, and cost, and security is one of the key pillars. There's a whitepaper that guides customers in applying the best security principles on AWS, so I'd definitely recommend that. And there are also other resources for Zero Trust: www.aws.amazon.com/security/zero-trust will take you to those Zero Trust resources.
Jim: Yeah. And I would also say, for folks looking at Zero Trust: while AWS has its Zero Trust documentation, there's also a really good resource at NIST, because they have a Zero Trust initiative as well, as does the Cloud Security Alliance. Everybody is coming up with best practices around Zero Trust in the Cloud, and let's be honest, most people don't live in only one cloud environment. So take AWS's perspective, compare it to what NIST and the Cloud Security Alliance recommend, and come up with combined best practices for your organization. We don't want to lead you down one path; we want you to be successful in many.
Ranjit, I want to thank you very much for joining. This was a lot of fun and I gained a lot of knowledge. Is there anything we didn't cover that you want to get out before we drop?
Ranjit: No, Jim, thanks for giving me this opportunity. I really loved having this discussion with you.
Jim: It was fun. Thanks a lot for listening, everybody.
Ranjit: Thank you.