Enterprise organizations moving to the cloud often use multiple cloud solutions and vendors, which naturally requires a different approach to planning. Securing these environments, in turn, calls for a comprehensive strategy that is well documented, flexible, performant, and meets your compliance needs. Recorded live at Cisco Live 2023, Stephen Goudreault and Nitin Negi from Micron Technology break all of this down.
Listen to other Navigating the Cloud Journey episodes here.
Steve: So here we are today at Cisco Live to discuss Navigating the Cloud Journey. With me today I have Nitin from Micron. Do you want to do a brief introduction?
Nitin: Yeah. Thank you, Steve. I run cybersecurity engineering and operations here at Micron Technology. I've been working with Micron for almost the last 15 years, through a lot of evolution and transformation in the digital security space. One of the most recent efforts has been the Cloud security journey we've picked up, and I'm very excited for this podcast.
Steve: My name is Stephen Goudreault. I'm the Cloud Security Evangelist here at Gigamon. I have almost 30 years of experience in IT networking. I did 11 years on and off at TippingPoint/Trend Micro, where I had access to over 100,000 true positive PCAPs, and I got to do a lot of playing with attacks and exploits.
Steve: So, with the Cloud journey, and this is a pretty network-centric crowd, I wanted to just do a brief overview of virtualization, because I find that it mirrors the journey to the Cloud very well.
So, at first it starts with a business need. Say, hypothetically, you want to know how many days off employees took. Before computers, you'd have to count all the slips of paper that were stored in a drawer somewhere. Once computers came along, you had to go get a computer, find space for it, put software on it, and then you could run the command to figure it out.
Along comes virtualization: you can put many computers into one, and you can log in and check it. The Cloud then allows you to lift all of that into the Cloud. And once containers and serverless come along, you don't even need that anymore. You're not even emulating the operating system. You just need the database query and the data source.
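Steve's days-off example can be made concrete with a small sketch. This is a hypothetical illustration in Python with SQLite, not anything from the episode: once the data lives in a queryable store, the whole "count the slips of paper" task reduces to a single query, and in a serverless model that query plus the data source is all you need to maintain.

```python
import sqlite3

# Hypothetical days-off table: one row per day taken (names and dates invented).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE time_off (employee TEXT, day TEXT)")
conn.executemany(
    "INSERT INTO time_off VALUES (?, ?)",
    [("alice", "2023-06-01"), ("alice", "2023-06-02"), ("bob", "2023-06-01")],
)

# The business question: how many days off did employees take?
total = conn.execute("SELECT COUNT(*) FROM time_off").fetchone()[0]
per_employee = conn.execute(
    "SELECT employee, COUNT(*) FROM time_off GROUP BY employee ORDER BY employee"
).fetchall()

print(total)         # 3
print(per_employee)  # [('alice', 2), ('bob', 1)]
```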
So, when we discuss the Cloud journey, you're usually somewhere along that path. Has that mirrored your experience?
Nitin: Yeah, I mean, it's almost been a traditional versus a Cloud-native approach that most organizations have picked, and ours was no different. From a traditional environment perspective, traditional data centers have been monolithic, with tightly coupled infrastructure requirements. But that's the beauty of the Cloud: you can scale in and out, scale up and down. It's really an agile kind of methodology that the Cloud has enabled. Traditional data centers, by contrast, used to be peak-capacity models, where the selection had to be done right from the beginning, so heavy up-front investment was probably the core requirement.
With the Cloud journey that we adopted, the first step was to use forecasted models, have the right billing access controls around those forecasts, and then pick the right compute and relational models, with agile development across different teams and strong collaboration between the DevOps and DevSecOps teams.
Steve: So, what you had to do is account for the stampeding herd, the rush hour. And if you were to take that approach into the Cloud, you're probably going to have a lot of cost overruns and headaches, right? Because you should grow east-west, not tall. Grow wide, not tall. So depending on where you are on that journey, you probably have a multi-Cloud situation. And multi-Cloud can mean a number of things. It could mean on-prem, it could mean private Cloud, which is basically virtualized on-prem, or the public Cloud.
Steve: And one of the key differentiators I like to point out in the public Cloud: there's no layer one, no layer two, and most of layer three has been abstracted away from you. So some of the tools that you're taking into the public Cloud may not work or may not translate well. Have you had any experience with tools that did not translate well in a hybrid or public Cloud environment?
Nitin: Well, definitely. You know, in an environment where latency is probably the key selection criterion, there are applications which are very latency sensitive.
Steve: A manufacturing plant, a robot arm that needs to know within a certain amount of time what...
Nitin: Exactly, exactly. So those are the ones that have to run in real time. Those applications probably still have to run in your private Cloud infrastructure on premises, though ideally in a container or serverless kind of fabric. But at the same time, you still need to run high-compute analytics: taking the right actions on the amount of data that's coming in, growing your data within your storage accounts in your Cloud, and then producing the right analytics out of it. That's where your requirement for the public Cloud platform comes in as well.
So the way we see it is that the private Cloud is probably more of a scalable, container-driven solution where you can run the applications that are most core and latency sensitive, but you can pretty much migrate most of your lift-and-shift, refactored, or redesigned applications into the public Cloud as well.
Nitin: Now, to answer on the multi-Cloud: yes, we picked a multi-Cloud journey. We have to be agnostic about which Cloud platforms we're using, because we have to present that flexibility to our end users as well. You might find certain things very compelling in one versus the other. The features, compute, networking, storage, and the number of PaaS services growing across these Cloud platforms have been tremendous. So you have to provide a flexible platform for your end users so they can take advantage of the kinds of services they want, and play with them with an innovation and growth mindset.
Steve: So, kind of what I'm hearing then, from a manufacturing standpoint, is you're probably going to keep some things in the private Cloud forever for latency reasons. But for other things that aren't as latency driven, you could explore a public Cloud scenario for deployment or expansion. So it comes down to the particular use case, and some use cases will override anything a Cloud provider offers. In this case, the manufacturing machines need below a certain level of latency that you feel you can guarantee more easily in the private Cloud.
Nitin: Correct. You have to take a hybrid Cloud approach at the end of the day, to make sure you're catering to your organization's requirements. It's not necessary that you jump to the public Cloud just because it's an offering that's available to everyone. You still have to be very considerate of the right calculations and measurements across your different industry requirements.
Steve: So let's talk about some of the operators that we have. We have the classic NetOps, right? And then we have SecurityOps, DevOps, DevSecOps, and they all have some very different goals, and they don't necessarily speak the same language.
I've seen CloudOps and NetOps in the same room, speaking the same words, and they come out of it with two completely different understandings, right? So NetOps has to keep the network running at all costs. SecurityOps may tear the network down to save it. And DevOps, their goal is they've got to publish software every day, with or without any level of known or unknown risk. They'll accept any level of unknown risk as long as they're getting their software pushed out every day. How have you dealt with that in your Cloud journey?
Nitin: Yeah, so, it has always been a challenge, starting from the DevOps versus DevSecOps platforms. The two start on different sides of the spectrum. DevOps is more about agility: how to run fast and get their development done fast. And for SecOps, it has always been, how do we secure with a deny-first kind of approach? So now what has happened is there's a lot of awareness and training that has accumulated with DevOps. They've been trained on what's required for them to do the right kind of development using Terraform, or policies, or Lambda scripts. With the SecOps model, I think what they're getting into is that they're still very responsible for the guardrails around the IaaS, or PaaS, or SaaS services where they're supposed to be. They're still very valuable to the core business, defining the security around the specific service functions or control functions required for their framework. So eventually we'll see a collaboration, or cross-breeding, between the two groups, where DevOps will have more awareness and training on "how do I secure?"; it's kind of becoming a programming module within their entire curriculum. SecOps, meanwhile, is learning a little bit of DevOps, but they won't be ready to release their entire skillset around the security code functions.
Steve: Well, I think it's impossible for any one group to know it all. I worked in QA for years, and this is a very, very familiar story: teach the developers some best practices, some input validation. Don't leave your password as "admin admin." So this seems to be a very old story reinventing itself. And while I think it would be nice to have DevOps always do that, I'm not sure that's ever going to happen. It's going to be a journey, because as new graduates come along, they have to learn the old rules again. And every few years in cybersecurity, we see the exact same problems pop up over and over again. So I kind of think it's maybe up to putting in guardrails so they can't do that. What is your take on that?
Nitin: Absolutely. And I think what's very important is that with these kinds of establishments also comes the right Cloud security governance model, one that defines the right delegation, segregation of duties, and the right roles among the permission filters.
Let's assume you have a Cloud security governance model for your organization. What are its underlying foundations? Architecture is a very important necessity for it. When I say architecture, it includes a lot of things: your service level agreements, your contracts, your risk. Then comes the next aspect, the data workflows, where you do a lot of data access, data validation, data testing, and data reviews. Then comes your operating model, which is mostly around your roles and whether you take negative or positive role approaches. Then come your cost controls as well, right? You have to define the right billing access controls and automated budgetary actions, because the Cloud is not coming cheap. So everything in the core foundation of a Cloud security governance model has to be really, really well rehearsed and learned across the individual roles, whether for DevOps, or the network engineers, or the security admins, or the architects, or the business.
Everybody should play a very vital role in defining that enterprise-architecture-level view of getting the Cloud security governance model created.
Steve: It seems to me that that's something management is going to have to step in and do to try to break some of those mindsets and silos, right? Because each group still wants to fix the thing they're responsible for. And they don't always have the bigger picture.
Steve: So I want to discuss something you brought up, and that's the underlying Cloud infrastructure. Because I still think there's a serious challenge we have in the industry today: as we grow, it becomes apparent that the old way we do things doesn't really work.
And one of the things I've seen is logging and some of the Cloud-native services. We make assumptions about how things work based on RFCs that were written a long time ago, but nothing's really enforcing that. So we're seeing an ever-increasing level of sophistication, whether from threat actors or DevOps or network people, using applications and protocols that are impossible to detect from traditional logging alone, or using nonstandard ports to evade it altogether. And it's been my experience that most Cloud tools don't work if you deviate off the standard port. What has been your experience with that?
Nitin: I think, like we said, in the cybersecurity space, as long as you're able to detect and monitor, that's the core of making any strategy.
So, detection and logging are absolutely among the most important requirements in the business, so that you can take sufficient action. Now, coming back to how we do it in the Cloud: deep observability is something we have to identify, with the right platforms defined to get you the desired outcomes.
So, getting that east-west visibility is extremely important. I know everyone can do the north-south kind of visibility and define the right automated response actions. I think the necessity now, and this is where solutions like Gigamon and others come in, is the ability to get application-contextual outcomes based on logging that gives you that east-west visibility, but also the capability to feed it to your security appliances and platforms that can take more directive actions on controlling the threat landscape, right? So if you don't have sufficient logging, we're going to miss big time in how we define enterprise-level detection and preventative actions in the future.
Steve: So, it's been my observation that security vendors are focused on something very, very different. They're looking at stopping the Shellshock and Heartbleed attacks that are in flight. But what they're not necessarily looking at is some of the application protocols that are in use every day. It's really impossible to tell the difference between SMB version 1 and version 3 through logging alone.
There's a CISA advisory that says you should be turning off SMB versions one and two, but I looked at the Windows product documentation; it's a single line. It can be turned off with a single line. That also means it can be turned on with a single line, right? It's possible that if you compromise the machine, get in, and then turn on SMB version one, you could compromise and elevate your privileges even more. And I think just knowing what's out there is a huge gap we have.
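As an aside on the visibility gap Steve describes: the SMB generation actually is distinguishable on the wire, even though port-based logging can't tell the versions apart. Here is a minimal sketch, assuming raw payload bytes are available; the 4-byte protocol IDs are from the SMB/SMB2 wire formats, everything else is illustrative. (The single line Steve mentions is most likely the PowerShell `Set-SmbServerConfiguration -EnableSMB1Protocol` toggle.)

```python
# SMB1 messages begin with 0xFF 'S' 'M' 'B'; SMB2/3 messages begin with
# 0xFE 'S' 'M' 'B' (the exact 2.x/3.x dialect is negotiated later, so 3.x is
# only fully distinguishable from the negotiate exchange); SMB3 encrypted
# traffic uses a transform header beginning with 0xFD 'S' 'M' 'B'.

def smb_generation(payload: bytes) -> str:
    """Classify a raw SMB message by its 4-byte protocol ID."""
    if payload.startswith(b"\xffSMB"):
        return "SMB1"          # the legacy dialect CISA advises disabling
    if payload.startswith(b"\xfeSMB"):
        return "SMB2/3"        # shared header for SMB 2.x and 3.x
    if payload.startswith(b"\xfdSMB"):
        return "SMB3 (encrypted transform)"
    return "not SMB"

print(smb_generation(b"\xffSMBr\x00"))  # SMB1
print(smb_generation(b"\xfeSMB@\x00"))  # SMB2/3
```

Port 445 in a flow log looks identical in both cases; only payload-level inspection reveals that the legacy dialect was switched back on.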
Steve: And here at Cisco Live, talking to people on the floor, there seems to be a general consensus that logging as it has existed for the last 20 years isn't meeting the needs anymore, and something needs to come in and fix that at the next level. What is your take on that?
Nitin: Yeah, I think, as you said, logs are just logs, right? But how do you get more intelligent and smart actions out of those logs? And that's where I was going with the application profiling. There has to be a better way of analyzing these logs that translates into a better way of taking actionable outcomes. Businesses are driving their technology-enabled outcomes based on the right kind of analytics, and I think the basis for that is logs. Now, in order to identify threats and take proper action, you need to identify those applications: service abuse, port abuse, and all those things.
So you have features like packet slicing and other application context, so you have the right way of making the right selection from a logging perspective. You have to be very deliberate about which applications are really, really important to you from a logging perspective. You can have data-level, system-level, application-level, and network-level logging all coming in, but you have to really define what's critical for the organization.
Not everything relevant makes a significant use case for your incident response actions or other cybersecurity activities. So you have to be really cognizant of what's important to the organization and how intelligently you can do this kind of filtering down to the right amount of logging that's required. And I think that's a major technology gap today. We're seeing more and more vendors like Gigamon coming back and giving more valuable attributes and metadata in the feeds, which helps us take better automated responses and actions.
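The filtering Nitin describes can be pictured as a simple policy over log sources. This is a toy sketch: the level names echo the data/system/application/network split he mentions, but the records and the set of "critical" levels are invented assumptions, not a Micron or Gigamon policy.

```python
# Each record is tagged with the level of logging it came from; the policy
# keeps only the levels the organization has decided matter for response.
records = [
    {"level": "network",     "msg": "flow 10.0.0.5:443 -> 10.0.1.9:52110"},
    {"level": "application", "msg": "SMBv1 negotiate observed"},
    {"level": "system",      "msg": "cron job started"},
    {"level": "data",        "msg": "bulk export of customer table"},
]

CRITICAL_LEVELS = {"application", "data"}   # assumed per-organization decision

critical = [r for r in records if r["level"] in CRITICAL_LEVELS]
print([r["msg"] for r in critical])
# ['SMBv1 negotiate observed', 'bulk export of customer table']
```

The point is not the code but the decision it encodes: someone has to choose, ahead of time, which log levels actually feed incident response.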
Steve: So, we have the paradox of having not enough information and too much information at the same time. So it kind of sounds like we may not have the right information, and maybe a little bit of baselining is appropriate for a particular organization. Also, the security landscape has changed.
Steve: You were telling me a story about RDP, how in the past, if you saw RDP, you wouldn't really care about it, but today that's probably a much bigger problem. Do you want to elaborate on that a little bit?

Nitin: Yeah, so, I think if we go back almost 10 or 15 years, we were looking at a traditional way of using signatures, right? You'd look at the metadata in the signatures for a specific RDP session, and you'd focus more on the critical and high severities, the actionability and the likelihood of an exploit attack within that RDP session.
But now, as the industry moves into the Zero Trust kind of platform, I think insider threats are becoming even more important than external threats. That's where tracking those individual transactions should pin down the insider threats, correlating them with what you can glean, or what you can lose, from a data perspective, and with the outcomes that define the Zero Trust model from an access perspective, a role perspective, and an application access perspective.
I was taking the example of RDP: a general user might not need access into your most sophisticated or sensitive environments. That has changed tremendously in Zero Trust platforms, where you have to be really cognizant of your selection of application access, databases, and roles.
Steve: Well, I think what Zero Trust is doing is giving voice to something we've had a hard time putting our finger on, and that's that a lot of threat actors are using legitimate software and legitimate protocols that are built in, baked into the workload, and they're just using them for bad things. And if you can't see that happening... I think we fixate a little too much on the high-severity CVEs and other high-impact threats, while a lot of threat actors will use the medium-level ones and legitimate applications. I worked in cybersecurity for years, and there's always a rush to patch the high-level CVEs, but are those really the ones most exploited? Right? Look at some of the CISA advisories that came out recently: Volt Typhoon, a Chinese state threat actor, is installing proxies on Windows machines so any inbound traffic on port 9999 gets reflected back out onto a private-class IP address on another port, I believe it was 8334. Now if you're looking at a log, you're like, wow, I've got a lot of weird traffic on 9999. Okay, weird, but not a big deal. If you can actually see the application protocols in use, you're going to say, wait, I've got TLS, I've got SSH, I've got DNS, all on 9999. All of a sudden, it's like lighting a signal fire for detection, right? Whereas before, again, you have the paradox of too much data and not enough data.
Steve: So I wanted to talk about something very interesting that may be important to our listeners going to the Cloud, and that's cost models and maybe some ways to dial them in. Because if you accidentally set up a recursive search in one of your Cloud workloads, you're going to pay a lot in extra compute costs. So, what do you guys have for that?
Nitin: Well, you've got to have some kind of smart analytical tools around cost analysis that are integrated into your budgeting and forecast trends as well. Gone are the days when you did your annual forecast or annual budgetary decisions. With the Cloud, because it's such a dynamic state, you have to pick up weekly, and now some people are even doing daily, forecasting trends. So you've got to take a really hard look at how your compute, networking, and storage in the Cloud have been utilized over a period of time, and what your next forecast trends are. Now, you can use a lot of analytical tools, integrated with strong platforms and Cloud providers like Azure, AWS, and GCP, that give you very strong indicators of how cost has to be managed.
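In its simplest form, the daily forecasting Nitin describes might look like a moving average with a deviation alarm. This is a hedged sketch: the seven-day window and 20% tolerance are invented knobs, and real tooling would pull actuals from the provider's billing exports rather than a hard-coded list.

```python
def forecast_next_day(daily_spend: list[float], window: int = 7) -> float:
    """Forecast tomorrow's spend as the mean of the last `window` days."""
    recent = daily_spend[-window:]
    return sum(recent) / len(recent)

def over_budget(actual: float, forecast: float, tolerance: float = 0.2) -> bool:
    """Flag actual spend running more than `tolerance` (here 20%) above forecast,
    e.g. the runaway recursive-search scenario Steve mentions."""
    return actual > forecast * (1 + tolerance)

# Illustrative daily spend figures (any currency unit).
spend = [100.0, 102.0, 98.0, 101.0, 99.0, 103.0, 97.0]
f = forecast_next_day(spend)
print(round(f, 2))            # 100.0
print(over_budget(130.0, f))  # True: trigger an automated budgetary action
print(over_budget(105.0, f))  # False: within tolerance
```

The "automated budgetary action" on a True result is where the billing access controls come in: only designated roles should be able to act on, or override, the alarm.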
An additional thing you have to do in the Cloud is define who has access to deploy, who has the ability to control, and who has the ability to take action, monitor the Cloud's growth, and remediate unused deployments. So there's going to be a strong watch list that gets created over a period of time.
Steve: And that's a challenge, and that's a journey, because you're never going to reach some nirvana end state on that. And you know, as someone who's deployed things in the Cloud manually, it can be very frustrating the first couple of passes, because you wait 30 minutes for a deploy and it errors out. You make a change, you redeploy, you wait 31 minutes, right? It errors out. You make the other change. And after a couple of hours you're like, you know what, I'm just going to open it all up, open up all the permissions, and then it works. And once it's working, you forget to go close all the extra permissions you opened up. So is that the kind of problem you're seeing in some of your Cloud deployments? Because I know I'm guilty of it.
Nitin: Well, we took the route of, you know, audit, remediate, and deny. For us, the challenge was how to pick the services that are really, really important to the organization, basically whitelisting of services, right? When you do that, you define your guardrails really successfully around each service.
Don't just present a service as available to end users in the marketplace, right? Run them through your own Terraform, or build your own service controls around the IaaS and PaaS services available with each of these CSPs.
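The "audit, remediate, and deny" allowlist approach can be sketched as a tiny policy check. The service names and verdict strings here are invented for illustration; in practice this logic would live in Terraform policies or the CSP's own service control mechanisms rather than application code.

```python
# Assumed allowlist: only services the organization has vetted and wrapped
# in guardrails may be deployed; anything else is denied and logged for audit.
ALLOWED_SERVICES = {"storage", "compute", "managed-sql"}

def evaluate(request: dict) -> str:
    """Return a verdict for a hypothetical deployment request."""
    if request["service"] in ALLOWED_SERVICES:
        return "allow"
    return "deny (audit)"   # denied by default, flagged for review

print(evaluate({"service": "compute"}))        # allow
print(evaluate({"service": "quantum-beta"}))   # deny (audit)
```

Deny-by-default keeps every newly launched CSP offering out of the environment until someone has deliberately defined guardrails for it.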
One more thing that's really, really important when it comes to Cloud cost: you have to understand the billing access controls as well. You also have to define your forecast in terms of the consumption of the security enhancements being made. You have to calculate the weight of the risk associated with a service versus the compute. And I think you have to start embracing a lot of Cloud-native security technologies and some of the Cloud security posture management tools, but also start adding a lot of compliance strength, because that's going to help you, not just from a cost perspective, but to bring your security to an ecosystem, or eco-platform, for making the right decisions.
Steve: So, I'm personally really excited about the new DoD Zero Trust model and its 152 controls, because they're finally coming out with something actionable. And I think that's eventually going to be one of the strongest Zero Trust models that survives, because it's actually giving people a roadmap of what to do.
So as we're in the closing minutes, do you have anything else top of mind that you'd like to share?
Nitin: Zero Trust. Zero Trust is probably something very specific to every organization; it can mean different things to different organizations. For us it might mean one thing; for a financial firm it might mean something else. But at the end of the day, when you start a Zero Trust journey, you have to start with what you really want to do for that organization, like blueprinting. A blueprint should exist for you, and a forecast, a strategy, and a program should exist for you to even start and pick up that journey.
If you don't have a program, if you don't have a charter for where you want to go and what's important to you in Zero Trust, I think that's where the important slice is. For us, securing our IP is probably one of the most important things. So we have to pick up our Zero Trust journey with the mindset of what we really want to do across the board. And for that, the guardrails, whether in the private Cloud or the public Cloud, with the right amount of observability, detection, and monitoring, are really critical.
Steve: And I think in certain sectors, especially in the Fed space, you no longer have to sell them on it. They're just trying to figure out how to operationalize it.
So I'm glad we had this conversation. Thank you everybody for attending.
Nitin: Thank you. Thank you, guys.