Nate Lee on Shifting Left to Outsmart Threats

If you’re anything like me, you’re probably constantly thinking about the future of cloud security, the role of AI, and how to build a robust security culture within your teams. Thankfully for both of us, this week’s edition of Cloud Control features Nate Lee, CISO & Principal at Cloudsec.ai, who dives into it all.

Nate started out as the tech enthusiast in a non-tech world and eventually found himself leading the charge at Cloudsec.ai. His perspective on blending technology, culture, and process into a solid security strategy is a must-read. Why? Because it’s a blueprint for securing your AI systems, understanding common missteps in cloud security and how to avoid them, and harnessing AI to future-proof your security strategy. Nate covers these topics and more, offering actionable advice and insights that can change the way you handle cloud security.

Jump into the full interview to find out how to kick your security strategy up a notch, create a culture that puts security first, and keep up with the breakneck pace of cloud tech.

Question 1

Before we dive into the deep end of cloud security, let's start with your background. You've been in the industry from engineer to CISO, and now leading Cloudsec.ai . What sparked your passion for security, and what keeps you motivated and interested today?

Answer 1

I’m not originally from the Bay Area. Where I grew up, I was the tech person surrounded by people who looked at tech as a curious obsession of the socially awkward.

Before I’d ever heard terms like threat modeling or secure-by-design, thinking about how systems could be abused or broken was just part of how I approached technical problems. Once you have an idea of how they can be broken, the next logical step is to work out how to prevent that from happening.

This is going to sound a bit cliché, but from the perspective of the tech scene, when I moved to San Francisco, it was like Dorothy stepping out of the house and seeing everything in color for the first time. I was suddenly surrounded by people everywhere pushing the boundaries of how software could be built and run. There’s a culture of always questioning why things are the way they are that resonates with me to this day and is what led me to fully focus on the security side of technology. These days, I find it tremendously motivating to be in a position to work across multiple organizations with security leaders to develop and improve their programs.




Question 2

You've been everywhere from the engineering trenches to leading the charge as CISO at multiple organizations, big and small. When you're at the helm, how do you weave together tech, culture, and processes into one solid security strategy?

Answer 2

This question really dives into the core of what the role entails and why it’s so interesting. You can’t be solely a technologist and succeed at the organizational level. Conversely, it’s extremely difficult to do the job if you don’t have an understanding of the systems you’re working to secure.

Culture for me is always the starting point. It’s easily overlooked since it’s less tangible and quantifiable than the technical components and controls, but it’s what underlies a strong program. If the technical controls are the bricks in your wall, the security culture you cultivate is the mortar that holds it all together. It’s extremely difficult to build and operate effective technical controls and processes without the right mindset and approach to the problems at the cultural level.

Most people want to be secure and want to do the right thing. They’re not, however, experts in the field, so it’s critically important that the security team connects with them in a way that fosters open communication. You can’t make experts out of everyone, and technical controls are often not so comprehensive that a human error can’t lead to a breach.

This is where building a culture that enables people is invaluable. You want them to know enough to discern something that could go wrong, whether that’s a phishing email, a log entry or a new authorization flow. You then need them to feel comfortable enough with the security team that they want to reach out for help with questions because they know they won’t be judged for their lack of domain knowledge. The only way to have this happen consistently is if the team puts in the time to build relationships and a positive reputation across the company beforehand.

When working with teams, I like to emphasize empathy towards others as a fundamental way to improve security. Organizations have people at the front lines of many different attack vectors and if you don’t have a path for them to provide your team with insight on what they’re seeing, you’re cutting yourself off from a major feed of information.

Tech and process are obviously also critically important, but I find that most technical security teams already have the requisite skills to implement them. The key to a strong strategy here is being mindful of the tradeoffs made with each decision. It could be a project, a control, an additional process or any number of other decisions that we make every day. For example, when you prioritize project A over project B without really analyzing the business value and need, it’s easy to call it a success when you finish, since it probably makes something incrementally better. What needs to be considered is whether the decision led to the best possible outcome for the business. If you spent a bunch of time reviewing network ACLs, that’s great, and it ostensibly improved security, but if your biggest risks were around account takeovers and you don’t have MFA, was it really a success to have spent that time on firewall rules?

It’s easy to improve security in a vacuum - there’s the old adage that you can unplug a server to make it really secure. Context is everything when defining a strategy and you need to ensure your strategy and plans align with the business and its goals.



Question 3

We all love the thrill of pushing out new features and growing fast, especially in the SaaS game. How do you keep pushing the boundaries of innovation while making sure security guardrails are in place?

Answer 3

This is a great question and one for which I don’t think there’s a single answer. Each business is going to be different in terms of risk appetite, engineering flows, priorities, etc. Smaller companies are going to be more willing to take chances to get features out and gain traction before the money runs out. As you grow, however, you have more to lose from a breach and, hopefully, more resources to apply to the problem, so there’s a transitional period between those phases that’s an interesting problem to work through in its own right.

The “paved road” approach championed by the Netflix team is a great mindset to bring to the problem. Nobody wants to be insecure or build insecure software, but they have a job to do, and that tends to be measured by velocity and, by proxy, the ability to release features. If you can build tooling that meets engineers where they are and provides a simple way to solve a security problem, they’re generally happy to use it.

When the security team ends up as a chokepoint in the development process, you want to step back and look at how you can build functionality with a clear, easy path that prevents the problem from occurring in the first place. This could be tooling in the IDE, libraries or frameworks for logging or authorization, or anything else that offloads complex work to easy-to-use shared services with secure defaults.
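One concrete shape that paved-road tooling can take is a shared logging helper with secure defaults. The sketch below is illustrative, not any particular company’s library, and the redaction patterns are deliberately tiny: teams call `get_logger` instead of `logging.getLogger` and get secret redaction for free.

```python
import logging
import re

# Illustrative patterns only; a real deployment would maintain a vetted,
# much larger rule set (API keys, bearer tokens, connection strings, etc.).
_REDACT_PATTERNS = [
    re.compile(r"(?i)(password|token|secret)\s*[=:]\s*\S+"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
]


class RedactingFilter(logging.Filter):
    """Scrubs likely secrets from log messages before they are emitted."""

    def filter(self, record: logging.LogRecord) -> bool:
        msg = record.getMessage()
        for pat in _REDACT_PATTERNS:
            msg = pat.sub("[REDACTED]", msg)
        record.msg, record.args = msg, None
        return True


def get_logger(name: str) -> logging.Logger:
    """Paved-road entry point: teams call this instead of logging.getLogger
    and get redaction by default, with no extra work on their part."""
    logger = logging.getLogger(name)
    if not any(isinstance(f, RedactingFilter) for f in logger.filters):
        logger.addFilter(RedactingFilter())
    return logger
```

The point of the pattern is the default: the secure behavior comes from simply using the shared entry point, not from every engineer remembering to redact.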



Question 4

You're big on "Shifting Left" across the business landscape. Could you walk us through how this plays out in real life at Cloudsec.ai , especially in getting security and engineering to work collaboratively and towards a common goal?

Answer 4

This is an area where there’s often low-hanging fruit for security teams to make a huge impact, and it’s where we help a lot of clients address security issues in a way that keeps developer velocity high. The general principle is that the sooner you catch, or ideally prevent, problems, the cheaper they are to remediate.

This could mean scanners that check code for vulnerabilities and secrets inside the developer’s IDE so issues get fixed before they’re ever merged, automated dependency updates that make ongoing maintenance happen by default, or threat modeling as part of the initial design to account for issues before you start to build. In all of those cases, the cost of dealing with an issue found in production can be an order of magnitude greater than the cost of fixing it before release, so making security a core part of your development process really pays dividends.
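As a toy illustration of the first of those ideas, here is a minimal secret scanner of the kind that could run as a pre-commit hook. The patterns are deliberately small and hypothetical; real tools such as gitleaks or trufflehog maintain far larger rule sets plus entropy heuristics.

```python
import re
import sys
from pathlib import Path

# Illustrative patterns only, not a production rule set.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_assignment": re.compile(
        r"(?i)\b(api_key|secret|password)\s*=\s*['\"][^'\"]{8,}['\"]"
    ),
}


def scan_text(text: str):
    """Return (rule_name, line_number) pairs for every suspected secret."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pat in SECRET_PATTERNS.items():
            if pat.search(line):
                findings.append((name, lineno))
    return findings


def main(paths):
    """Exit non-zero when any file contains a suspected secret; a non-zero
    exit status is what lets a pre-commit hook block the commit."""
    failed = False
    for p in paths:
        for name, lineno in scan_text(Path(p).read_text(errors="ignore")):
            print(f"{p}:{lineno}: possible {name}")
            failed = True
    return 1 if failed else 0


if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```

Running this on staged files before each commit is exactly the "catch it before it’s merged" step the paragraph describes: the feedback arrives while the fix is still a one-line change.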

The key to success is ensuring the security team works closely with the engineering team and actively solicits feedback, especially negative feedback. Teams working on the controls, systems and tools that affect the development process should be developers themselves with an innate understanding of the process. This enables them to actively collaborate and build trust by being responsive to developer concerns in a way that’s difficult to do if you haven’t lived through it yourself.



Question 5

AI is reshaping every tech corner, including security. From your vantage point, how is AI transforming cloud security practices, and what's one AI application you find particularly game-changing?


Answer 5

We’ve only seen the smallest fraction of the impact that the coming decade will bring to both attackers and defenders. Initially, the impact will be increased offensive capabilities as attackers automate a lot of the research that goes into targeting individuals - creating whole campaigns with custom tailored, grammatically correct phishing emails and as we saw recently, video deep fakes of executives. This will put a huge emphasis on defenders to rethink how identities are managed and what trust means in the context of a world where you can’t necessarily believe what you see.


On the defender side, we’ll see much, much more competent event correlation and response capabilities from tools that leverage the power of LLMs. Right now the context windows are too small, the cost of inference is too high and the agents aren’t reliable enough. Once we start seeing advances like lean models trained specifically on logs and threat feeds for attack detection, agents that consistently do what you expect and models that can update vulnerable dependencies automatically, even across major versions with breaking changes, the tables will turn decisively in favor of the defenders. They’ll be able to feed the tools with knowledge of their systems to leverage traffic flows, vulnerabilities, regular patterns, etc., responding in real time.

It’s quite an exciting time.



Question 6

With AI integration comes the task of securing AI itself. In your experience, what unique challenges does AI present in cloud security, and how should companies gear up to tackle these?

Answer 6

Like everything else AI-related, this area is evolving faster than it’s humanly possible to keep up with. The OWASP team has done a great job raising awareness with the ML and LLM Top 10 lists, highlighting the most common vulnerabilities that every dev and security professional should be aware of.

I’m currently contributing to broader community efforts with the Cloud Security Alliance and OWASP, as well as developing a course for O’Reilly, in an effort to put more practical guidance in the hands of developers. Expect to see many more resources available in the coming months.

If you’re building LLM-based tools, think about how you’re providing the time and training to upskill your teams with the knowledge they’ll need to build this next generation of applications in a secure and predictable manner. Give them room to experiment and understand the nuances of working with the technology, keeping in mind that if your foundation isn’t built solidly, the cost of going back to fix it will be many times greater than the cost of doing it right in the first place.



Question 7

Shifting gears a bit, what's a common cloud security oversight you've noticed companies make time and again? And in the spirit of learning from those hiccups, how would you advise rectifying them?

Answer 7

Given that they’re common, there aren’t a lot of surprises here: I’d point to key management, public buckets and patching as the big oversights. They’ve been so common that there’s much better tooling now to prevent you from shooting yourself in the foot, but you still have to pay attention. Make sure you’re using scanners that look for secrets accidentally checked into code, that you know which cloud providers are in use, that you have public-bucket prevention and detection enabled, and that you use something like Renovate to help automate dependency updates.
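To make the public-bucket point concrete, here is a small sketch of the ACL-checking logic such a detector might apply to the response of S3’s GetBucketAcl call. It only covers ACL grants; a real check would also inspect bucket policies and the account-level Public Access Block settings, and the dict shape shown is the one the S3 API documents for ACL responses.

```python
# Grantee group URIs that expose a bucket to everyone, or to any
# authenticated AWS user, respectively.
PUBLIC_GRANTEE_URIS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}


def public_grants(acl: dict):
    """Given a bucket ACL in the shape returned by S3's GetBucketAcl
    (a dict with a "Grants" list), return the (uri, permission) pairs
    that expose the bucket beyond the owning account."""
    findings = []
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GRANTEE_URIS:
            findings.append((grantee["URI"], grant.get("Permission")))
    return findings
```

Run over every bucket on a schedule, a check like this turns "we hope nobody made a bucket public" into an alert you actually receive.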




Question 8

Looking down the road, how do you envision AI and cloud technologies evolving together? Are there any upcoming trends that security professionals should be preparing for now?

Answer 8

Right now, the hyperscalers are offering foundational models and will be moving up the stack, adding abstraction layers to make them easier to use and build with, much as they did with all the services you see offered on top of IaaS today. While existing tools and libraries have already lowered the barrier to entry for building AI based applications, this next generation of abstractions will put AI and the ability to build in the hands of even more people, many of them with minimal engineering experience.

For security professionals, we all need to be thinking about how we’re preparing to enable our businesses to take advantage of these new capabilities. The democratized power, flexibility and efficiency they’ll bring will be a massive competitive advantage, so will you lead the way in delivering that value, or be seen as a hurdle to get past?



Question 9

Nate, as we dive deeper into the integration of AI within cloud security, what strategic shifts do you think organizations need to make to fully leverage AI for enhancing their security posture?

Answer 9

One way or another, AI will be integrated into every tool that security teams use. The efficiency gains will be so great that its use will become table stakes for every business unit. It’s important to remember that leveraging AI to improve security posture isn’t just about using AI within the security domain, but about being a key driver in enabling the business to leverage AI in a secure manner. It’s entirely possible that, in the early days, the best use of your team’s time will be building tools and training that enable other teams to utilize AI securely, rather than thinking strictly about AI within the security domain.

Security teams need to understand how the models work and the implications of the different use cases and architectures so they can meaningfully contribute to the discussions weighing the tradeoffs of different design, training and integration options. Being able to influence these discussions at the design phase is the only way to be secure-by-design and, of course, is a great way to build more connections across the organization. It also gives you the deeper understanding necessary to build and evaluate the tools that fall directly within the security domain.




Question 10

Last but not least, fostering a security-first culture is vital. How do you see AI and cloud security innovations influencing or enhancing security culture within organizations?

Answer 10

This is a wide-open field limited only by the creativity of the implementing teams. We need to be thinking broadly about how we can use AI to help enable other teams, which extends the reach and influence of our own. There are so many amazing improvements waiting to be had, but you have to go get them. Just off the top of my head, a few ideas:

  • Chances are that the security team has more knowledge about the AI space than most other departments; use that to your advantage! Connect with other internal teams early in their AI journeys to gain insight into critical business processes and position yourself as an enabler. Collaborating to securely deliver productivity increases they’ll enjoy day to day builds the bridges necessary for a security-aware culture.

  • With the power of LLMs, you could build an interactive chatbot that gives employees instant answers about your security program and best practices. It could redirect to the right team member when it can’t help, making it more likely people will ask, since they know access to the right information is easy and friction-free.

  • You could use it to build a better, more focused and interactive natural language security awareness training program that beats out the stale early-2000s videos with illustrations of shadowy hackers in hoodies and matrix-like backgrounds that somehow still survive to this day.
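As a sketch of the chatbot idea, here is a deliberately tiny keyword-matched version with the fallback routing described above. The questions, answers and routing message are all made up for illustration; a real implementation would use an LLM with retrieval over the security team’s actual documentation.

```python
# Toy FAQ knowledge base: keyword tuples map to canned answers.
# All content here is hypothetical example data.
FAQ = {
    ("phishing", "suspicious email"): "Forward it to the security team and delete it; don't click any links.",
    ("mfa", "2fa"): "Enroll via the identity portal; hardware keys are preferred.",
    ("laptop", "lost", "stolen"): "Report it immediately so the device can be remotely wiped.",
}

# When no keyword matches, route to a human instead of guessing.
FALLBACK = "I'm not sure about that one; routing you to a member of the security team."


def answer(question: str) -> str:
    """Return the first matching canned answer, or the human-routing fallback."""
    q = question.lower()
    for keywords, response in FAQ.items():
        if any(k in q for k in keywords):
            return response
    return FALLBACK
```

The fallback path is the important design choice: the bot never invents an answer, so employees can trust it, and the security team still hears the questions it couldn’t handle.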
