Alex Stamos wants security by design to become an industry standard because, as he said, we keep making the same mistakes over and over again.
Stamos, now SentinelOne’s chief trust officer, has more than 20 years of experience in the security industry. He previously served as CSO at Yahoo and Facebook, and founded security consultancy Krebs Stamos Group in 2021 with former CISA director Chris Krebs after former President Donald Trump fired Krebs for challenging Trump’s baseless claims of fraud. widespread voting following the 2020 US presidential election. SentinelOne acquired Krebs Stamos Group last fall and hired both founders in the process.
Beyond this, Stamos is independently one of the most prominent voices in the information security space. More recently he made a big impression with a LinkedIn blog he published in January in which he discussed Microsoft’s prioritization of security revenue following a breach the tech giant suffered at the hands of a Russian state actor identified as Midnight Blizzard.
Last month, Stamos spoke with TechTarget Editorial to discuss security by design, a concept that refers to the practice of prioritizing security in product development above all else. The concept is a common point of discussion in security spaces. In May, for example, CISA announced that 68 organizations, including SentinelOne, had signed the cyber agency’s Secure By Design Pledge in which software publishers pledged to make measurable progress toward applying security-by-design principles in their organizations.
During the interview, Stamos also talked about Microsoft’s recent security issues and the risks of generative AI (GenAI).
Editor’s note: This interview has been edited for clarity and length.
Why is security by design important to you?
Alex Stamos: I’ve been doing this professionally for over 20 years and we keep making the same mistakes over and over again. We have new technologies, new waves of products. But every time there’s a new wave of products, we end up going through the same cycle of getting really excited about a new platform or a new design paradigm and then realizing that security wasn’t something we considered too deeply in the first place. Then there is a huge amount of research done by independent researchers, by people and companies, by academic researchers. And then the rounds and rounds of people promising to do better next time.
I think from my perspective it would be good to start codifying what we’ve learned in each of these security eras so that when we get to the next time, even if the technological details are different, the fundamental concepts are the same.
The phrase “secure by design” came up a lot at the RSA 2024 Conference. Microsoft used that terminology with its Safe Future Initiative. It came up a lot when companies talked about AI and CISA focused on it as well. How do you explain this moment that is having security by design?
Stamos: From Microsoft’s perspective, I see it more as a marketing cover for the fact that they really lost their way on security. They have made big decisions at the executive level to prioritize revenue over shipping products that are safe from the start. For them, the biggest problem is their addiction to security income, which is not reflected in that model in the way others use it. I’m leaving Microsoft aside from everyone else here because Microsoft has a real fundamental problem: it sells fundamentally insecure products and then charges you more to make them secure. That’s not security by design. It’s more about greed to get the best out of your product design decisions.
As far as security by design in general, I think one of the reasons you’re seeing it is because CISA has been effective in using it as general terminology to try to get companies, especially those that sell enterprise products, to eliminate all the fundamental aspects. types of vulnerabilities. I think one of the reasons you saw this so much is that CISA has been effective in including this in White House documents and promises. They’ve also been using security by design as an umbrella idea that includes what we used to call the secure development lifecycle or SDL.
How does SentinelOne interact with security by design principles?
Stamos: One thing we all learned from SolarWinds is that adversaries, from the highest level down to standard ransomware actors, have found that they are going after the supply chain, whether it’s the code supply chain or the Cloud. – is a great way to get a lot of benefits with relatively little effort. We have tens of millions of machines running our agents. We have tens of thousands of critical customers who trust us by installing their software and using us as their security products.
We know we’re a prime target for adversaries, so it’s important for us to publicly demonstrate that we’re doing everything we can to first prevent those attacks and eliminate entire classes of flaws in our products and then promise that if something bad happens, We will be honest and open about it. And those are all important parts of CISA’s security by design commitment. None of those things were completely new to us, but I liked how CISA brought it all together into a cohesive whole. And I think it’s important for us to demonstrate that as a security company, we have an obligation to help support CISA’s efforts to standardize security by design. That’s why we are participating.
Are you optimistic about companies implementing security by design when it comes to generative AI?
Stamos: I think generative AI and AI in general are a huge benefit to defenders. There is currently a significant advantage for defenders in using AI. And much of it will last, although the bad guys will catch up. One of the reasons we are so far ahead is that companies have been collectively spending billions of dollars over years and years of doing this research. AI has been something SentinelOne has used from the beginning, not GenAI, mainly classifiers. But the whole idea of SentinelOne was that instead of having signature-based detection, you train the AI based on behaviors and look for behaviors both on the endpoint and in the cloud.
Alex StamosTrusted Director, SentinelOne
It took years and years to do it. As a result, I think the defenders have the advantage, and that’s a good thing. Many of the uses of generative AI now have to do with improving the workforce and making people much more powerful and productive. That’s something we absolutely need in terms of security. The number of companies that have a multi-level SOC (security operations center) and all the staff that can handle all of their alerts and conduct their investigations internally and do it to the level necessary is incredibly small. Allowing fewer people to use AI to be much more efficient at their jobs and then eventually allowing AI to make its own defensive decisions, under the supervision of humans, is the way forward to improve things.
As for security issues caused by AI, this is an interesting challenge because we are still very early in understanding the adversarial mechanisms that can be used to manipulate generative AI systems. Anyone who tells you they can completely protect your AI installation is lying to you. Because from my perspective (again, being the old guy) this is like the late 90s/early 2000s, where if someone in 1999 said they could create a perfectly secure web application, maybe they wouldn’t know it, but they were lying. completely. . This is because three-quarters of the interesting defects in web applications have not yet been invented. That’s where we are with GenAI. Until we have a couple of decades of research and vulnerabilities, you can’t have much confidence.
If you are deploying GenAI, first, it is critical to do so in places where adversaries cannot potentially manipulate it. Many companies think of AI as an internal workforce issue. That’s great because if you already have inside information that is semi-reliable, it reduces the risk. If you put an AI in a place where bad guys can talk to it, I think it’s risky right now.
Second, we need to create a risk management framework that is humble in the face of the fact that we don’t understand how these systems work right now. We don’t understand how they can be manipulated. You have to be attentive to everything that happens and act quickly if you notice any type of new manipulation.
Do you think we’re moving toward senior executives, such as those without security experience, prioritizing the development of secure software over short-term profits for shareholders?
Stamos: Between SEC rules, lawsuits and major attacks, boards of directors have understood that they have a direct responsibility for security. The problem is that they still don’t know how to manage it. They are slowly moving towards a model where they have the structures and people necessary to truly understand and manage their security teams. I think for boards of directors, the critical things are: first, you have to have a technical risk committee that is separate from the audit committee. Auditing (people looking to see if money has been stolen and if the accounting makes sense) is totally different than helping to manage a team that understands all the adversary risk issues and deals with them.
And two, have a technologist on your board of directors. A couple of forums have started this, but they are moving slowly. Tech executives always want CEOs, CMOs, and people who really add value to the flashy aspect of making money. But it is critical to have at least one person who can sit there and watch the CISO give his 80-slide presentation and know whether he is being (misled) or not.
It is extremely rare that I come across a board where I feel they are able to effectively manage the CISO. And I think that’s something that boards really need to strive for: having the right technical skills on the board to absorb, manage and provide useful feedback to a security team.
Alexander Culafi is a senior information security news writer and podcast host for TechTarget Editorial.