Team Topologies sketches for Security

This post shares some observations about how the work of people in similar security roles can differ, and how those differences can be represented in the visual language of Team Topologies.

Agile Stationery is a business that supports the work of many gurus, coaches and experts. We build interesting products with our partners that enable problem solving, communication, and sometimes teaching. From this position, we get to see a lot of what different experts are doing with overlapping audiences. We are lucky to have worked with both co-authors of Team Topologies and several experts in the realm of Threat Modelling and cyber security.

It was while preparing for conferences on Team Topologies and Threat Modelling that it occurred to me that the differences between the separate threads of the cyber security world can be articulated in terms of Team Topologies. Further, the Threat Modelling branch of the security world seems to assume (or aspire to adopt) a radically different team topology.

So, in part to celebrate the new re-writable modelling shapes that were being packed this weekend, I grabbed some spare shapes and roughed up these three team topologies.

In each diagram the flow of change is from an idea on the far left (yellow label) to code running in production somewhere right of centre (blue label).

Old School Cyber Security

Most of the cyber security work I've experienced has been at banks, where the culture and working practices lag those in other industries in many respects, but where security is obviously taken seriously.

In this world, cyber security happened in one of two ways:

  • pre-release reviews of the design and codebase - a type of quality gate.
  • penetration tests conducted by white hat hackers after the change is in production

Either of these meant filling in forms, provisioning credentials and preparing long documents to hand the app over to the cyber security or pen testing team. It then meant waiting a very long time for a long document to come back. We would do little or nothing about the document, having generally believed everything in it to be mitigated already, and then we would release to production. The only real impact of the process seemed to be occasional late rework of dubious value, and some lingering frustration.

The other strategy allowed code into production, then later told us it was secure - or perhaps that it wasn't and we needed to do something about it. I've seen this at banks and in ecommerce. The most visible impact was a swollen backlog and increased stress for engineers. Engineers want to go back and look into security feedback, but often can't justify the time. Serious problems - like log4shell - always get escalated into an emergency, interrupting all other work, and get fixed immediately.

There was a lot wrong with this picture. It is well known by now that late feedback and handovers are both productivity killers, and releasing potentially insecure code into production is obviously less than ideal.


Here is the handover highlighted with a red label. The Sec function appears in purple on the right, working collaboratively with devs, and perhaps with platform, at or after the point where change has flowed into production. In visual terms, the collaboration is represented by the pink bit under the electric blue label.

Post-Accelerate Cyber Security

Accelerate treated us to a short chapter on security, pointing out that most of the relevant regulations specify tests that, it turns out, can be automated. Of course, the regulations are themselves years behind the curve - as most regulation is - and automation of those checks lags further behind still. In my own experience, these practices are still not adopted everywhere and, unfortunately, only rarely in the teams I have been working in. There have been teams in banks and in ecommerce within my network that work this way, and from what I read, the wider industry is now catching up with this way of doing things.

In Team Topologies terms, we have a new security function collaborating with the platform team to deliver infrastructure for the DevOps pipeline. This is the security team contributing their expertise to make sure automated vulnerability scanning and similar checks are in place, and that the records of changes are well formed.

Rework can still be introduced, as the focus is still on the middle part of the flow of change, at the point where commits are pushed to source control. By then, weeks may have passed since the initial idea was floated. It is also likely that some subtle issues and inherent threats are not picked up by automation. On the other hand, the automation works much earlier and more predictably for engineers and is far more helpful. Things like Dependabot and Renovate change the economics of software design as a result, but that is another story.
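
To make that concrete, here is a deliberately minimal sketch of the kind of check the security and platform functions might wire into a pipeline: a little script that fails the build when a pinned dependency appears on an advisory list. The lockfile name, package names and advisory data are all made up for illustration - in practice you would lean on Dependabot, Renovate or a proper scanner rather than hand-rolling something like this.

    # Toy CI step: fail the build if a pinned dependency is on an advisory list.
    # Purely illustrative - the lockfile name, package names and advisory data
    # below are hypothetical; real pipelines would use Dependabot, Renovate or
    # a dedicated vulnerability scanner instead.
    import sys
    from pathlib import Path

    # Hypothetical advisory list the security function might curate with platform.
    KNOWN_VULNERABLE = {
        ("examplelib", "1.2.3"),
        ("legacy-logger", "2.14.1"),
    }

    def parse_pins(lockfile: Path) -> list[tuple[str, str]]:
        """Read simple 'name==version' pins, one per line, ignoring comments."""
        pins = []
        for line in lockfile.read_text().splitlines():
            line = line.strip()
            if line and not line.startswith("#") and "==" in line:
                name, version = line.split("==", 1)
                pins.append((name.strip(), version.strip()))
        return pins

    def main() -> int:
        flagged = [pin for pin in parse_pins(Path("requirements.txt"))
                   if pin in KNOWN_VULNERABLE]
        for name, version in flagged:
            print(f"VULNERABLE: {name}=={version} is on the advisory list")
        return 1 if flagged else 0  # a non-zero exit code fails the CI job

    if __name__ == "__main__":
        sys.exit(main())

The point is not the script itself but where it sits: feedback arrives at push time, not weeks later in a hand-over document.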

The red handover point can perhaps be removed and the late-stage activities reduced, streamlined or eliminated. You would probably still run pen tests and bug bounties to seek out production vulnerabilities.


This diagram shows a second instance of the security function in purple making a brief contribution in the middle of the flow of change - the CI/CD or DevOps pipeline. As far as I can tell, this is usually delivered through the platform team.

Enter Stage Left: Threat Modelling

Shifting left to the moment commits are pushed is great. The field of Quality, however, has been talking about "testing the requirements" within my direct earshot since at least 2014. Back then the tools for that were already well established - specification by example and BDD, followed by Gherkin, Cucumber and the like for automation. You could get training in how to "test requirements" using a whiteboard and dry-erase marker before writing any code.

Threat modelling is security shifted left by the same amount as the quality field has shifted, and the team topology for it feels similar:


This topology still shows two instances of the security function, but one now works with Dev from the idea stage onwards. Contributions to CI/CD remain in place, but collaborative threat modelling is now happening on whiteboards (the white TM label).

Finally, this team topology allows for the elimination of rework by addressing issues in designs, not code. If your teams are spending time on explicit and well-structured design activities, perhaps involving Product, Brand and UX, then security can be invited along for the ride too. Techniques like OWASP Cornucopia and Elevation of Privilege are basically games of "spot the bogeyman", where familiar patterns of cyber security error are sought out and the implicit knowledge of the Product and Dev functions can be brought into play.
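
To show how the output of those sessions can stay actionable, here is a small, purely illustrative sketch of capturing a finding from a Cornucopia or Elevation of Privilege game as a structured record that can flow straight into the backlog. The fields and the example entry are my own assumptions, not a format prescribed by either game.

    # Toy threat register: capture whiteboard findings as structured items.
    # A sketch only - the fields and the example finding are assumptions,
    # not a standard format from Cornucopia or Elevation of Privilege.
    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class ThreatFinding:
        card: str        # the game card that prompted the discussion
        component: str   # the part of the design being discussed
        threat: str      # what could go wrong, in the team's own words
        mitigation: str  # the design change agreed at the whiteboard
        owner: str       # who takes it into the backlog

    # A hypothetical entry captured during a design-stage session.
    finding = ThreatFinding(
        card="Cornucopia - Authentication 5",
        component="password reset flow",
        threat="reset tokens are guessable and never expire",
        mitigation="use long random tokens with a 15-minute expiry",
        owner="dev team",
    )

    # Emit as JSON so it can be pushed into whatever backlog tool the team uses.
    print(json.dumps(asdict(finding), indent=2))

Recording findings this way keeps the fix attached to the design decision, which is exactly where this topology wants the rework to happen - before any code exists.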

That red handover on the right of my diagram? I'm not qualified to suggest removing it - perhaps there is some verification work that can still be usefully done - but I think it would be a very limited exercise if security has been involved since before the build began.

What does this all mean?

The difference in team topologies highlights a possible reason why threat modelling seems to be less well adopted. The organisational focus and industry buzz are on what certain groups of people are doing at certain points in the value chain, and there is plenty of value in maintaining attention there. Shifting left is recognised as valuable, but it has already happened once. The value of doing it again and adding security concerns to a different set of team interactions is a separate question, and one that many won't be ready to answer. That's OK, of course it is, but if security is a key part of your mission, or something you use to differentiate your services, then broadening your attention will prove worthwhile.


As you can tell, these are observations and thoughts from my perspective as an application engineer and director at Agile Stationery. I'm fortunate to be meeting partners from the worlds of Team Topologies and Threat Modelling over the next few weeks and hope to be able to reflect on this with them. The article will be updated as I learn more, so do come back for that. If your insider view differs from this "first draft of the truth", let me know in the comments.

Comments

Adam Shostack (leading expert in threat modeling and secure by design; training, consulting, expert witness; author of "Threat Modeling" and "Threats: What Every Engineer Should Learn from Star Wars"; Affiliate Professor, University of Washington), 5 months ago:

Very thought provoking!

Matthew Skelton (CEO at Conflux, co-author of Team Topologies), 5 months ago:

This is really timely because Adam Shostack and I are discussing ways of bringing together our two sets of expertise. In particular, the ideas in Secure by Design are highly relevant: https://www.manning.com/books/secure-by-design?query=secure
