Meet Carson Sweet. He's the Chief Cloud Security Officer at Fidelis Cybersecurity and has been in the infosec business for 30 years. In this episode, Carson shares important best practices and advice that are key to understanding how to secure the cloud, whether you're migrating, building new apps, or starting from scratch.
Listen to other Navigating the Cloud Journey episodes here.
Mike: Welcome to the Navigating the Cloud Journey podcast series. I'm Mike Valladao, your host. Today, we dive deeper into security with yet another cloud entrepreneur. Our guest today is Carson Sweet. He's a six-time cybersecurity founder over the past 30 years. In 2010, Carson co-founded a company using cloud computing and big data analytics to implement security monitoring and control within a product called Halo. The company was CloudPassage.
In May of 2021, during the peak of the pandemic, CloudPassage was acquired by Fidelis Cybersecurity. Carson continues to advocate cloud security at Fidelis, where he is currently Chief Cloud Security Officer. Welcome to our program, Carson.
Carson: Thank you, Mike. I appreciate you having me on the podcast.
Mike: Let's talk about security. What's the big deal about cloud security? How is it different?
Carson: It's a great question. We often talk about what we refer to as the 3 D's. Those are some of the main differences, very technical and operational in nature, that companies and security teams encounter when they adopt cloud and DevOps.
The 3 D's are Distributed, Diverse and Dynamic.
From a distributed perspective, when you adopt cloud Infrastructure as a Service and Platform as a Service, you have multiple cloud providers that have different security capabilities and different security features. Those cloud providers have different shared responsibility models that have to be reconciled. Plus, you have the legacy data center environment. It becomes very complex: multi-cloud and hybrid cloud environments create new issues that security teams have to deal with.
The Diverse “D” is about the technologies that are available to developers. In years past, J2EE was extremely popular, and there were a few platforms that were used to develop applications. Now there are hundreds, maybe thousands. Consider the Platform as a Service capabilities that a developer can implement via API into their application, the Infrastructure as a Service options, and the SaaS services they can interact with via API. Just the raw number of potential technologies that can appear in an application stack today is an order of magnitude greater than what it was, say, 10 years ago.
Mike: Why is that, Carson? Why are there so many? Do we really need that many?
Carson: I think it comes down to cloud giving cloud vendors the ability to put products out much more quickly. That's one of the things that's attractive about SaaS and PaaS and things like that. If you look at the SMS world as an example, where people would build gateways and go through these processes to do this, the folks at Twilio really revolutionized and completely disrupted that market. It was so popular that many vendors chased them in. And so in many cases you have vendors reacting to these services and developing their own services. It's just a very frothy market as more and more entrants come in. And honestly, cloud infrastructure and DevOps in many cases make the development of applications faster. So it's actually easier to put a product in market, and it's cheaper than it would have been 15 years ago when it would have to be a data-center-deployed SaaS product. The markets are red hot, and in many cases, vendors are reacting to disruption by cloud native companies.
Mike: I like your term of 'frothy market' because that's exactly what it is. Things are changing so quickly. So, we're still on Diverse, let's go into Dynamic. A little bit about DevOps here.
Carson: Sure, DevOps really goes hand-in-glove with cloud. The DevOps model is one that leverages automation at an extreme level. And it's very beneficial to application development teams and application performance teams. It's just a great model for development, deployment and maintenance. But it's very fast.
An extreme example I can refer to is Etsy. They literally do thousands of code and infrastructure deployments a day. It's incredibly fast, where 15 years ago you might have one giant deployment once a quarter. That rate of change is why security teams are having to change their thinking, change their tooling, and automate more. With that rate of change, you just can't use the traditional methods of risk management. It has to be automated in order to succeed.
Mike: The old waterfall method in which you came out with something on occasion doesn't work anymore. It's so much faster. I have some applications that are asking me to click two or three times a day to get the newest, latest version of software, which I find by the way, really annoying.
The fact is, they are putting better stuff in day by day. And that has shifted the way things work. You also mentioned something quickly about shared responsibility between the cloud user and the cloud provider. Explain that security breakdown.
Carson: Sure. So, if you think about the way cloud infrastructure and PaaS services work, the cloud provider, like Amazon or Google or Azure, takes more of the operations off your plate. Therefore, from a security perspective, they have to do more of the security for the components that you don't interact with anymore. For example, with Infrastructure as a Service, the user does not interact with the virtualization layer much, or at all. In a data center, security of the virtualization layer would be a responsibility of the security team. Now that the virtualization layer has essentially been outsourced as part of Infrastructure as a Service, the provider is responsible for it. And so the shared responsibility model defines the demarcation between the responsibility of the provider and the responsibility of the user.
And these models are slightly different as you go from Infrastructure as a Service to Platform as a Service, and then to SaaS, where there's more that the provider does for you. With SaaS, you show up with your data and your users. With Infrastructure as a Service, you're handed a server, perhaps, and it's your responsibility to deal with everything from the server up.
So, understanding the shared responsibility model for the specific provider that your enterprise is engaging is one of the very first things to understand as you start to see cloud adoption happen in your organization.
Mike: Let me ask a question about that. Let's say you have a financial provider like a bank. If I have something that goes wrong on my credit card, the bank will make it right for me. Are you going to have the same thing happen with your cloud provider? If they have an issue, are they going to make you whole?
Carson: That question is exactly why you need to study the specific provider's shared responsibility model, because those things are defined therein. The way the shared responsibility model works, the provider will commit to keeping security at a certain level for the infrastructure and the components that they manage for you. Beyond that, it's up to you. An analogy that I use very often is the apartment building model. The building management is responsible for the lobby, the front door, the elevator, the hallways, everything that is common, that's shared among the tenants. But once you walk through the door of your actual apartment, what happens inside is up to you. It's your responsibility. If you decide you want to be a maniac and hang the key to your apartment on the front door and you get robbed, that's your problem, right? Because it's your responsibility to take care of your part of the environment. And the shared responsibility model is very similar.
Mike: They're not going to give you that money back, because that's your responsibility. Correct?
Carson: It really depends on the contract that has been negotiated with the provider. But money being given back is not a common thing. But the more important point is that the provider does not take responsibility for all of your security. That's why it's a shared responsibility model.
Mike: And Carson, the main reason why I'm pushing this point is I want people to realize they still have to manage this. Security is a big deal. This is not something that you're just giving away to somebody else to take care of. You still have to be responsible for your own security.
Risk management. Let's talk about that.
Carson: Sure. So, risk management changes dramatically, again, because of the speed associated with cloud and DevOps, and the rate of change. I've said many times that change is the enemy of security. If you wrote a perfect application and you had a perfect platform and it was perfectly secure, which of course is theoretical, and then you never changed it, it would always remain in that secure state.
It's when you add a new feature, expand the infrastructure, or make changes to the environment. Does that change introduce a new risk that wasn't there? Has it undone some security control that needs to be in place?
And that's where risk management changes dramatically with cloud. 20 years ago, you would sit in rooms and talk about the risks of various technical changes, and there were change control meetings. Today that slow, measured cadence just doesn't exist anymore. When code is written and released automatically and there's no human involvement, the risk management components have to be automated as well. The process of risk management itself has become much less of a human exercise and much more a matter of implementing risk tolerance measurement tools, if you will, in automation. It's the speed that really changes the risk management landscape for cloud.
Mike: Now, when one first goes into AWS, Azure, or Google Cloud, what are the things they should be thinking about? When you first enter into those agreements, what should you be doing?
Carson: Well, the first thing, and we've already talked about it, is the shared responsibility model. Really understanding what you still have to do is critical.
Once there's a good handle on the responsibilities that you own in this new model, the second key thing is to remember that it's not going to be exactly the same. We've worked with a lot of companies that had a lot of difficulty, a lot of pain, and in some cases compromises, because they tried to use the same technology, the same approaches, the same architecture for a cloud environment that they would use in a data center. And the environments are simply different.
One good, very specific technical example of that is the concept of blast radius. Many people think that a cloud environment is one Amazon account: essentially the entire data center, all the servers, everything in one giant account. That's rarely the case. What typically happens is a DevOps team will implement a production account and a development account for one application. That's the most extreme version, and it's actually quite common.
Mike: Some of our users have not really gotten into this deeply. So, explain what you mean by an account. How does that work?
Carson: So an account is simply the cloud provider account that you log into to manage the resources that are there. You could think about logging into an account just as you would think of walking in the front door of the data center. The account is where all of the resources and assets and things live that you interact with and that you build your applications upon.
Mike: But if we take your example of walking into the data center: traditionally that's been one account. So in this case, how is it broken up?
Carson: It's typically broken up by application, and that's actually a benefit for security in many ways, because it reduces what we call the blast radius of a compromise. If a single application is compromised in an environment where everything is in one account, there is an ability for an attacker to traverse the environment and move out to other applications. Take the idea that all 500 applications have been moved to the cloud. Now there are 500 accounts, each with its own set of authentication and so forth.
So, it actually contains compromises. However, that's very different in terms of architecture. So back to the question: the architectures are very different, and you can't just assume that the same security architecture models can be leveraged. It is quite different, and that's important to understand. Probably the other big thing to think about very early on is APIs. The API interactions available to a security team to interact with cloud environments are very powerful. There are a lot of tools and platforms out there, like the one that we built with CloudPassage, that do that for you and simply synthesize the various data sets into something the security team can leverage to get their work done. But those APIs are going to be very important.
Another component is SOAR. The SOAR environments today are very powerful automation environments, but they rely on APIs.
Mike: And if you would, please explain what SOAR is and how that relates to your XDRs. Let's define some of these terminologies as we're using them.
Carson: Sure. So, SOAR is a category of security technologies. The acronym stands for Security, Orchestration, Automation, and Response. And that essentially is a set of tools that allow you to build automated workflows that interact with other systems.
They're external to the actual things that you're securing, but usually the security team owns and runs them. Splunk acquired a company several years ago called Phantom that was, in my opinion anyway, one of the better SOAR platforms. So that automation, and having people on your team who understand APIs, can program against APIs, and can do those integrations, that's a relatively new skill set for many security teams. So that's another good thing to think about. It's important to think about early, since it is such a difficult thing to hire for. That's something most teams want to try to get in front of.
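Carson's description of SOAR, automated workflows that enrich an alert, apply a triage rule, and then act through an API, can be sketched in a few lines. This is a minimal illustration only: the alert format, field names, and responder functions are invented for the example and are not any specific product's API.

```python
import json

def enrich(alert):
    """Add context to a raw alert (e.g., look up the asset owner)."""
    alert["owner"] = {"web-01": "app-team"}.get(alert["host"], "unknown")
    return alert

def should_contain(alert):
    """Simple triage rule: contain anything high severity."""
    return alert["severity"] == "high"

def contain(alert):
    """Stand-in for the API call that would isolate the host."""
    return {"action": "isolate", "host": alert["host"]}

def run_playbook(raw_alert_json):
    """Chain the steps the way a SOAR playbook would."""
    alert = enrich(json.loads(raw_alert_json))
    if should_contain(alert):
        return contain(alert)
    return {"action": "ticket", "host": alert["host"]}
```

In a real platform each step would be an API integration with another system (EDR, ticketing, cloud provider), which is exactly why Carson emphasizes API skills on the security team.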
Mike: And you're right. Because SOARs are really using APIs to do everything else. It could be APIs going out to microservices; you can get the data from anywhere that you need. And the APIs are kind of the glue that brings the whole thing together. So, that's something that has to be secured as well.
Is there anything in particular that you do to secure the APIs themselves?
Carson: The authentication componentry there is very important. There are a lot of authentication protocols that exist for APIs today, and they're very well-defined. It depends on the type of API implementation that you're interacting with; the API implementation itself will define what that is. But understanding how to manage the rotation of API credentials, storing those securely, not just dumping those API credentials in a plain-text file, these are the kinds of things to think about. And then for development teams, and for security teams that are helping a development team implement an API: it's not much different from a security perspective than a web application, because the user can send input into the API. They can interact with that API. They can try to send in badly formed things. They can try to execute scripts. A lot of the technology on the backend is very similar to a web application, in that it uses the same HTTPS protocol to communicate. So many of the things that you would implement in a web application, input cleansing and things like that, are a very similar set of control requirements for implementing an API.
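Two of Carson's points, keeping API credentials out of plain-text files and cleansing input the way you would for a web application, can be illustrated with a minimal Python sketch. The environment variable name and the username pattern here are assumptions for the example, not any standard.

```python
import os
import re

def load_api_key():
    """Pull the credential from the environment (or a secrets manager),
    never from a plain-text file checked into the repo."""
    key = os.environ.get("SERVICE_API_KEY")  # variable name is an example
    if not key:
        raise RuntimeError("API credential not configured")
    return key

# Whitelist pattern for one field; reject badly formed input before the
# backend ever sees it, just as a web application would.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_-]{1,32}$")

def validate_username(value):
    if not USERNAME_RE.fullmatch(value):
        raise ValueError("invalid username")
    return value
```

The same idea extends to every field an API accepts: validate against what is allowed rather than trying to blacklist what is dangerous.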
Mike: Let's move on to some of the pain points and landmines. What would you advise listeners to avoid?
Carson: Again, the biggest challenge that we've seen is attempting to reuse a lot of the data center tech in the cloud. The cloud environments are very different; the architectures are different. A good example of that is agent-based technology. One of the things that we had to do very early on was address a lot of the pain points that people have with agents. Security agents tend to be larger, and they tend to communicate from some sort of a control system into the agent. If you think about a cloud environment, you'd have to open up the environment to push data into an agent. And then the agents themselves: the amount of memory that some of these things use is pretty immense. We've seen agents, and it's not uncommon, that require two gigs of RAM. And if you think about a cloud environment, a cloud application where that big giant Sun E10K server has been replaced with 6 or 700 very small, very efficient instances, that's a very good example.
It's very difficult to justify the extra cost of adding two gigabytes of memory to every one of those EC2 instances just so your security agent will run. That's a really good example of how the architecture drives new requirements for security tools. There's a broad set of tools out there that have worked well, and a lot of vendors have done a great job of engaging cloud and evolving their technologies to be cloud-aware and to operate efficiently and effectively in the cloud. That's really the thing to look at: are my vendors using exactly the same tech that's in the data center? We've seen that fail many times.
Mike: Let me stop you there if I can, because there are a couple of, I think, salient points here. First of all, you're right, it's not exactly the same as it was in a legacy environment, so we have to do things a little bit differently. Sometimes that does mean new tools. And sometimes, however, there are ways that you can ease in. You might be able to use some of the same analytics, for example, and bring them into a centralized location.
It really depends on what you're doing. However, you mentioned something here about "don't continue to use the same tools". Would you suggest that you build your own?
Carson: That's a great question, Mike, and we've seen a lot of that. I'll go back to one of my analogies: building your own tool set is like getting a puppy. It seems like a great idea when they're small and they're cute and they're fun, but when they turn into work, it may not be so much fun. There was one major financial organization that tried to build their own toolset, and they had a massive, very public compromise because their tool failed.
And it's very difficult to do. It's not just building the tool, and it's not that the teams don't have the skill sets, of course they do. But what very quickly happens is those tools have to be maintained. They have to evolve as the cloud providers' APIs change. As new services are added, you have to build new content to evaluate them; new API endpoints need to be added. It is a constant process to keep those tools up. And what eventually happens is the security team starts to look a lot more like a development team. And then from an operational perspective, it's very hard. It's hard enough to get security people to run your security organization, but to get security people who can develop applications, and developers that you have to add to your team, you become a product company really quickly, and it's an extremely expensive way to go. Given the tools that are out there today, and given that most of the good cloud-native tools have great APIs, you can put together a great infrastructure by selecting the right set of solutions and integrating those with API integrations. But building your own tool set, we've never seen that turn out well.
Mike: I'll push back on that a little, because I have seen it turn out well for a few very select customers. We hear the term FANG: the Facebooks, the Googles, the Apples. They do this very well, but on the other hand, they can drop hundreds of millions of dollars to build those tools. The average company does not have those kinds of resources.
Carson: That's right. And those companies have product teams. The important thing to remember is that's not the security operations team trying to build their own tool; that's Facebook and Google investing in an internal product, with product managers and QA teams and everything that's needed. So that's very different, and you're right, they do, because they have to.
Mike: And people are, somehow or another, glued to what these big companies are doing. They think if this company is doing this, then we should try it too. And that's not always the right answer. So, I agree with you. Don't build your own tool sets unless you are Goliath. And if you are, then that makes a certain sense.
Good. But still, you mentioned automation. Let's talk a little bit more about automation. And while we're talking about automation, let's talk about things like Kubernetes. How does that come into all this?
Carson: Sure, Kubernetes is a really interesting and very cool technology. Our platform is actually a Kubernetes-orchestrated microservice architecture, so we know that environment very well, and we have lots of customers who have adopted it. The combination of real, true orchestration of services and microservices, combined with DevOps, and to go a little bit deeper into the automation world of DevOps, continuous deployment and continuous delivery: that's really where the rubber meets the road in terms of automation of code development and infrastructure development. Because the infrastructure now, with cloud environments, is just more code. It's software-defined. So you are pushing code out the same way that you would push code out for the application itself. It becomes one pipeline of automation tools where a developer will make a change and push it into a code repository; an automation tool like Jenkins will detect that the change has occurred, take that environment and build a test environment, run thousands, tens of thousands of tests on it, and then push that code forward into the production environment. And it all happens within minutes, and it happens automatically. And the same thing for changes to infrastructure.
So that level of automation requires that the security team integrates with that pipeline. And that's really where there's an opportunity for security to create a level of automation they've never had before. There are a lot of security tools, Halo is one, and there are lots of other tools that will directly integrate with that continuous deployment pipeline, so that security, as we've always said we wanted, becomes part of the process. That reality exists in many organizations today. It's not nearly as complicated as it sounds; it seems difficult because it's foreign to a lot of people.
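The pipeline flow Carson describes, change detected, test environment built, tests run, automatic promotion, can be sketched like this. The stage names and the stand-in test suite are illustrative; a real setup would be a Jenkins (or similar CI) configuration rather than Python.

```python
def run_tests(env):
    """Stand-in for the real automated test suite: unit and integration
    tests, plus security scanning steps once security is integrated."""
    return [True, True, True]

def pipeline(change):
    """Change detected -> test environment -> tests -> promotion."""
    env = {"change": change, "stage": "test"}  # build a test environment
    if all(run_tests(env)):
        env["stage"] = "production"            # automatic promotion
    return env
```

The point of the sketch is the shape: because the whole path from commit to production is code, a security check dropped into `run_tests` runs on every single change, with no human in the loop.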
Mike: It does sound complicated. As we're talking about all the different components, it almost sounds like we're now spending all of our time on security and not on the products that we're actually trying to build. But you're advocating that we implement the two together.
Carson: Absolutely. There's a big trend and a big movement in the security world called Shift Left. The idea is that you're shifting a lot of the security functions, if you will, think about a timeline from left to right, from development to deployment, as far toward the development side as you can.
There was a great study done probably four or five years ago that found that fixing a security flaw, a CVE, for example, that may have found its way into the application, is seven times less expensive and time consuming if you fix it before it's pushed into production, because you're catching it before it goes out the door.
And of course, there's a window to the right on the timeline, after the production deployment: from deployment to detection of the issue, there's a window of opportunity for someone to exploit the vulnerability. So the idea of shifting left and leveraging DevOps, making DevOps part of the solution, which most DevOps teams welcome, is very powerful. In my opinion, in the 10 years we've been working in this domain, the biggest opportunity security teams have is to make DevOps part of the solution and shift security to the left. It saves a lot of money and a lot of time, and it provides great visibility. It provides feedback for the dev teams, and they're getting trained in security all the time. But it all happens as part of the DevOps workflow.
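One concrete way shift-left shows up in practice is a gate in the pipeline that blocks deployment when the scanner reports findings above the team's risk tolerance. This sketch assumes a made-up finding format; real scanners each have their own output schema.

```python
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def blocking_findings(findings, max_allowed="medium"):
    """Return the findings that exceed the allowed severity."""
    limit = SEVERITY_RANK[max_allowed]
    return [f for f in findings if SEVERITY_RANK[f["severity"]] > limit]

def gate(findings):
    """Fail the build before deployment if anything should block it."""
    blockers = blocking_findings(findings)
    if blockers:
        ids = ", ".join(f["cve"] for f in blockers)
        raise SystemExit("build blocked: " + ids)
    return "deploy"
```

Wired into the continuous deployment pipeline as one more test stage, a gate like this catches the CVE on the left of the timeline, before the exploit window ever opens.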
Mike: From a management perspective, how do you go about creating this and embracing the concepts of Shift Left? Because it has to be done from a management perspective first, I think.
Carson: It does in some ways, but the interesting thing about DevOps is that many DevOps teams don't report into the CIO organization any longer. They report into the business units in many cases. So there has been a bit of democratization of this entire idea of developing and deploying apps. In many cases, a security team can engage a DevOps team that is security-friendly, for lack of a better term, which many are, and this is what we typically see with larger organizations. We've worked with organizations that have literally a thousand-plus DevOps teams. These are huge orgs. But you find one, and you decide together that you're going to try this. You're going to figure out what controls need to be in place: code scanning, vulnerability scanning, server configuration, all of the different pieces that have to be done. Let's see how much of that we can get into our continuous deployment pipeline as part of the unit testing.
Mike: You're not saying get away from doing pen tests separately as well, are you?
Carson: Definitely not. There's still a need to do those things, but many companies do them at the end, and they end up with a giant laundry list of problems. Frankly, it's frustrating for the development team, because now they have this laundry list of things to go back and fix after they've deployed. If you can find 90% of the issues before you even do the pen test, the pen test is going to find far fewer, and there's going to be less cleanup. It's checking your homework before you turn it in; that's the way we refer to it very often. It creates a lot of efficiency, prevents that "window of vulnerability" between deployment and the pen test that finds the problem, and it really reduces friction between the ops teams and security.
Mike: And that's important, because again, you want to use the security resources you have in place. You've been doing this a long time. With the things that you've seen in the past, you want to make sure you're doing it properly and not dropping pieces of it, as sometimes happens when things are moving very rapidly. Sometimes you're behind the ball instead of ahead of it.
Carson: That's right. And when you have a shift this massive, which is one of the reasons we founded the company, there are always opportunities to bring security forward in the wake of bringing everything else forward. The key thing there is to figure out how to harmonize, not fight it. Go with it, harmonize, figure out how it can make security better. That's really the big opportunity that security teams have over the next 10 years.
Mike: Makes sense. You also mentioned CVE hygiene. Until some of the recent hits, the average person had no idea what a CVE was. All of a sudden, they're much more important. Can we expect this to be a continuing situation? Where are we going?
Carson: It will be. As it has been for the last 15 or 20 years, since the National Vulnerability Database was created, it will continue to expand and grow.
There are still software components that drive these applications, and that's where CVEs live. A CVE, for those who may not be familiar with it, is a vulnerability record. The CVEs are maintained in the National Vulnerability Database; NIST and MITRE do this for the industry. Those vulnerabilities can be detected by tools like Halo, as an example, and there are many tools that can detect the presence of these vulnerabilities in these software components. And those CVEs are the vulnerabilities or exposures that attackers use to penetrate an environment.
Mike: And in order to get on the CVE list, of course, something has to be a known vulnerability. And you're right, these organizations find them. It could be something found by a customer; it could be found by the company itself. Bug bounties are really popular now. Explain that; a lot of people don't know what that means.
Carson: Sure, bug bounties essentially pay people to find vulnerabilities. There's a great company called Bugcrowd that does this as a service. You essentially publish a notification, "Attention all penetration testers, come test this environment", here's where to go, and we'll give you $500 or $1,000 or something for every bug you find. So you're literally crowdsourcing the penetration testing of your application, and it is very powerful. It is a great model. All the large companies do it; many small companies are doing it. It's a big help in terms of addressing the talent shortage that's out there.
And it's also become a badge of honor. There are a lot of people who, before they're hired by a security company, are asked how many CVEs they've been able to identify. So it's a major change. It's a shift that we've seen in the industry.
Mike: And a lot of this comes down to the use of Open Source. Why don't we discuss that for a couple of minutes?
Carson: Yeah, that's a great question and a very complicated topic. Struts is a good example of an Open Source package that was used heavily in web applications, and a really big vulnerability was found in it. Log4j is a more recent one. And OpenSSL, where Heartbleed was found, is actually one of the most heavily tested packages and therefore has the most CVEs of any software package in existence; I believe that's still the case. What's different about these Open Source packages is that there's not a company, a commercial enterprise, who is accountable for the vulnerabilities. Testing happens, but there's not a dedicated team that goes off and does that work.
Mike: And to that point, when you're dealing with an association, a software group, whatever it happens to be, something like Apache, you can't go back to Apache and say, "Hey, you caused this issue and therefore I want some kind of restitution."
This is something that has been given out openly. It's like having a patent and then opening it up for the world to use. It's well-tested software in many cases; however, anytime you use any kind of software, there could be issues and there could be backdoors. We have to assume some of that risk when we take on Open Source software.
Carson: Yeah, absolutely. Which is the case with commercial software as well: if you really dig into the EULAs, you're still on the hook, and there won't be any remuneration if you have a problem and you're compromised. However, the good news is that the Open Source community has put a lot of time and energy into vulnerability testing and publication of CVEs in the last 10 years or so, so it's improved a lot. There are resources you can look at to determine whether a particular software package has had vulnerabilities and how quickly the community has responded to them. Much like evaluating a vendor, you want to know how fast they deal with these CVEs. Do they have a patch available in 24 hours, or a week, or a month? That window is very critical. You can see similar things now with these Open Source packages, which is great. The challenge with Open Source packages in many cases is branching: it's very easy to create a compromised version and make it look very similar to the original.
And there's a new challenge around all this as well now, and that is Open Source Docker images. There are huge repositories on the internet of Open Source Docker images where someone has pre-built a container — a web server, a database, and an application stack — so a developer can just grab it and instantiate it within a minute or two, and they've got a development environment. That sounds wonderful, and in many cases it is wonderful.
But not knowing specifically who built it, or whether it has been tested — those things become very dangerous, because it's very easy for that downloaded image to suddenly get pushed forward into production. It turns into Typhoid Mary, because one image can be instantiated into thousands or even millions of individual instances, so you create a massive attackable surface area. There's a whole new discipline around Docker security and Docker Hub security: identifying where someone may have grabbed an unapproved, untested, or rogue image and used it in an application stack. It's a complex issue. It is improving, but it's something that requires real, dedicated, ongoing attention.
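One common control for the "who built this image?" problem is an allowlist check in CI. Below is a minimal sketch of that idea — the registry names are hypothetical, and a real pipeline would typically also pin images by digest:

```python
import re

# Hypothetical allowlist of vetted base images from an internal registry.
# Anything pulled from an unknown repository gets flagged for review.
APPROVED_IMAGES = {
    "registry.internal/base/python:3.9",
    "registry.internal/base/nginx:1.21",
}

def audit_dockerfile(text):
    """Return the images referenced by FROM lines that are not on the allowlist."""
    violations = []
    for line in text.splitlines():
        m = re.match(r"\s*FROM\s+(\S+)", line, re.IGNORECASE)
        if m and m.group(1) not in APPROVED_IMAGES:
            violations.append(m.group(1))
    return violations
```

Running this over every Dockerfile before build is one way to catch an unapproved image before it becomes the base of thousands of production instances.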
Mike: I saw something recently arguing that, given the CVE issues, we should really get away from using Open Source software. That's impossible. You can't do it, because if you look at even some of the basic elements we use for encryption and security algorithms, most of that is Open Source in one form or another. Even NIST standards like AES, the Advanced Encryption Standard, are open any way you look at it, and we will continue to use this stuff for years.
Carson: And if you go way back into the early security days, a crypto algorithm was published openly so it could be scrutinized. That was really important, and people just would not use closed algorithms — look at the Skipjack algorithm in the Clipper chip back in the nineties. When that came out, it turned out there was a backdoor in it.
So the idea of Open Source is good, and so is the idea that it can be scrutinized by lots of people who are not being paid by someone to look at it — meaning they don't have an allegiance, and their judgment isn't compromised by a paycheck. That's a good thing conceptually. The problem we really run into with Open Source is that, in many cases, people are not careful enough about what they're consuming.
Again, back to analogies: you've got to read the ingredients. You don't just buy anything off the shelf and eat it. You look at the ingredients; you understand who made it, what's in it, and what it's going to do to you. You have to do something similar for Open Source components, to understand whether something is going to cause you harm while you're trying to gain time and convenience from it.
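A small, practical version of "reading the ingredients" is checking that your dependency list actually names exact versions. This sketch assumes a pip-style requirements file; the flagging rule is deliberately simple:

```python
def unpinned_dependencies(requirements_text):
    """Return dependency lines that are not pinned to an exact version.

    An unpinned entry (e.g. 'flask' or 'flask>=2.0') means the component you
    actually install can silently change between builds — the opposite of
    reading the label before you consume it.
    """
    flagged = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if "==" not in line:
            flagged.append(line)
    return flagged
```

Pinning exact versions (and, ideally, hashes) doesn't eliminate risk, but it means you at least know which ingredients you shipped when a CVE lands.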
There have been a lot of improvements. It is still a bit of a wild-west area to think about. But I really believe that, with the improvements of the last 10 years, it comes down to being critical — looking at Open Source with a critical eye. Look before you leap.
Mike: And even when you start buying business applications, so many of those also include Open Source. For security applications, look at what was Bro and Suricata — Zeek is the renamed Bro. There are a lot of commercial business applications that, under the covers, are still Open Source.
Carson: That's exactly right. The companies that use Open Source — Oracle, the big shops — put a lot of dollars behind the Open Source movement. They put in resources and people, and they give people time to work on Open Source projects, and that helps. But you're exactly right: that doesn't mean all the risk goes away. There will be vulnerabilities, there will be flaws — that's part of the game. Identifying where those risks and exposures exist in an environment is what security is all about.
This is what security is here for: to enable enterprises to use these kinds of things safely. It's really a matter of how you think about it. And when you start to think about cloud and DevOps, it comes down to how to do it at speed.
Mike: As we conclude this episode, I want to recap some of the points we made. First, we must build security in as we're building products — a continuous process that never stops. Today we've reviewed the shared responsibility model, as well as how to secure both Open Source and APIs.
We've discussed Shift Left, bug bounties, and the pitfalls of Common Vulnerabilities and Exposures. If nothing else, I think we've shown the security realm is indeed vast.
Carson, any final points that you would like to make before we conclude?
Carson: If there's a single thing to point out to anyone who's starting the cloud journey, or maybe getting pulled into it, it's this: think different and look for the opportunities. Where it can be daunting, scary, and concerning, opportunities also abound. Look for those and open your mind to the possibilities, and I think most companies will find that cloud is a net benefit to their security program.
Mike: Excellent advice. Carson, where can people find you?
Carson: So, Fidelis and CloudPassage — we're still merging things together. Over the last 10 or 11 years, we've built a huge database of articles, data sheets, white papers, and cloud security blueprints. There's an enormous resource center on the website that we always like to invite people to come and dig into. There's information there that we hope people find educational more than salesy, if you will. So take a look there for sure. If anyone needs to get in touch, they can find me through Fidelis, and I'd be more than happy to engage with whatever I can help with.
Mike: Thanks for being on the program. I appreciate the warmth of the security that you and your products bring to our industry.
Carson: I think we've covered a lot of ground, Mike. It's been a really interesting conversation, and I appreciate you having me on the podcast.