Software Engineering Manual of Style, 3rd Edition

"The ones who are crazy enough to think they can change the world are the ones that do" - Steve Jobs.

Introduction

This post is seeking early peer review of sections of a textbook I'm writing entitled:

Software Engineering Manual of Style, 3rd Edition (2023)
A secure by design, secure by default perspective for technical and business stakeholders alike.

The textbook is 120 pages of expansion on a coding style guide I have maintained for over 20 years and which I hand to every engineer I manage. The previous version was about 25 pages, so this edition is a bit of a jump!

Information security is holistic, not heuristic, so to fix it we need to address the whole of information security - not just dabble around the edges.

The text covers the entirety of software engineering at a very high level, but has intricate details of information security baked into it, including how and why things should be done a certain way to avoid building insecure technology that is vulnerable to attack. Not just tactical things, like avoiding 50% of buffer overruns or most SQL injection attacks (and leaving the rest of the input validation attacks unaddressed). This textbook redefines the entire process of software engineering, from start to finish, with security in mind from page 1 to page 120.

Secure-by-design, secure-by-default software engineering. The handbook.

Safe coding and secure programming are not enough to save the world. We need to start building technology according to a secure-by-design, secure-by-default software engineering approach, and the world needs a good reference manual on what that is.

This forthcoming textbook is it.

I'm trying to write it so that its processes and methodologies:

  • can be baked into a firm by CXOs using strategic management principles; or
  • be embraced directly by engineers and their team leaders, without the CEO's shiny teeth and meddlesome hands getting involved.

Writing about very technical matters for both audiences is hard and time consuming, but I think I'm getting the hang of it!

Version history of pre-publications in this document

  1. 22-Jan-23: Initial publication.
  2. 23-Jan-23: Added discussion around adding Security Testing to SDLC. See Excerpt 1.
  3. 10-Feb-23: Added The Software Engineering Standard Model (SESM). See Excerpt 2.
  4. 10-Feb-23: Added section on Project #CompulsoryRectificationNotice
  5. 25-Mar-23: Added the revised version of the Software Engineering Standard Model (SESM) & a proof showing usability testing has moved into the functional testing domain (Excerpts 2A & 2B)
  6. 25-Mar-23: Added an update to the Pillars of Information Security from 4 to 11 different considerations. See Excerpt 3.
  7. 10-Apr-23: Added initial version of the "Iterative Process of Modelling and Decision Making", pre-edit. See Excerpt 4.
  8. 20-Apr-23: Added Revision 4A - the revised Iterative Process of Modelling & Decision Making
  9. 20-Apr-23: Added Excerpt 5: Vendor Backdoors + Minimum Responsible Security Controls
  10. 22-Apr-23: Finalised Revision 4B - the revised Iterative Process of Modelling & Decision Making
  11. 28-May-23: Added Excerpt 6: The Lifecycle of a Vulnerability

(NB: new sections are added at the bottom of the page)

Abstract from the cover page

"The audience of this textbook is engineering-based, degree-qualified computer science professionals looking to perfect their art and standardise their methodologies, and the business stakeholders who manage them. This book is not a guide from which to learn software engineering; rather, it offers canonical best-practice guidance to existing software engineers and computer scientists on how to exercise their expertise and training with integrity, class and style.

This text covers a vast array of topics at a very high level, from coding & ethical standards, to machine learning, software engineering and most importantly information security best practices.

It also provides basic MBA-level introductory material relating to business matters, such as line, traffic & strategic management, as well as advice on how to handle estimation, financial statements, budgeting, forecasting, cost recovery and GRC assessments.

Should a reader find any of the topics in this text of interest, they are encouraged to investigate them further by consulting the relevant literature. References have been carefully curated, and specific sections are cited where possible."

The book is looking pretty good: it is thus far what it is advertised to be.

Why me

Imagine summoning a penetration testing, reverse engineering, data science, Bayesian AI genie that knows every algorithm of significance intimately - that's me.

One Salt Lake City tech company banned me from using one of their platforms because, and I quote, "don't let him login to the platform. If he lays hands on it for 30 seconds he'll reverse engineer the whole thing and steal our technology. You don't understand". I didn't give two hoots about that particular company, and I've found myself having to use their platform repeatedly for more than a decade, so this amusing problem has been inconvenient at times but mostly flattering. And yes, it's true: I can reverse engineer almost anything in about 30 seconds and tell you where to go if you want to pentest it, or replicate its functionality. My value system, however, prevents me from misusing that power, like all good senior computer scientists.

I've been working in technology, bouncing between Melbourne and Silicon Valley, working globally for 30 years, so I know how most technology works, how to work it, and how to work hard and make it work.

I've met some big players, like Sergey and Larry. I've been frequenting the offices of CXOs for many years, I've worked with the biggest companies in Australia, and I was there when altavista.com.au was loading its inverted indices every month. I redefined phreaking as a kid just as digitisation was happening, I invented mass surveillance search engines back in 1995, I was the first to deliver post click tracking as a PaaS/SaaS offering back in 1998.

I say this as humbly as I can, but I know what I'm doing. Or as they say in Australia, "here, hold my beer, I got this".

Helping out & donating

At the bottom of this document are some software engineering discussion topics that you might be able to help with by providing some peer-review.

If not, and you would still like to support me in my cause to re-invent software engineering from the ground up so it's secure-by-design and secure-by-default, then why not buy something from my Amazon Wish-list?

There are plenty of books I don't have, and I love books. It's also a great way to provide moral support, as I'm going against the grain a bit and so there are days I have self-doubt about my mission to fix cyber security by making everyone re-write the text books! Here's the link:

I'm also not super wealthy as I'm an independent innopreneur rather than an entrepreneur (i.e., I inspire innovation and change, rather than commercialise it), and I do need to keep up my readings, so it really does help. Anything in the software engineering space that's on that list would be handy right now and directly contributes to this piece of work I'm currently doing.

Project #CompulsoryRectificationNotice

This Cyber Security nightmare that's going on has a short-term and long-term timescale.

The publication of my textbook will address the long game, and will probably begin to have a material impact on software quality and security 3 years from first publication (along with the publication of Creative Commons teaching aids, i.e., two PowerPoint decks suitable for a 2-hour lecture in a university setting, as well as GRC Assessment templates). Students will have to pay for the text, but the decks and templates will (hopefully) be free to use for academic purposes.

Between now and then, we have a bit of a disaster on our hands. The World Economic Forum (WEF)'s January 2021 alarmist video warns the world of the impending threat of "A cyber-attack with COVID-like characteristics." [9] It's now 2023, and we are seeing fragments of that. Not quite at that scale yet, but each month the incidents are getting bigger and scarier.

Some of these incidents are very preventable, but the apathetic approach to information security by vendors and governments has meant that prevention has not been done.

Given my ability to find suspected vulnerabilities very quickly, the lack of response to responsible disclosures (see next section), and the madness of full disclosure (which is what one is supposed to do when no response is received), I have started a social-media-based movement to address some of the bigger elements of the short-term game.

The movement is called "#CompulsoryRectificationNotice", and sits somewhere in between 'responsible disclosure' and 'full disclosure'.

It's a social-media-based campaign intended to induce vendors to stop focusing on feature bloat and instead address the more important matter: fixing the bigger classes of vulnerabilities, pronto, whilst we await a new generation of engineers who are properly trained to build secure technology and who can start working their way through the technology development pipeline.

If there is a particularly egregious matter going on, or I am withholding some of the details from a public forum, then a #WikiNoyingLeaks might pop up. If you see #WikiNoyingLeaks, then something is going on behind the scenes that I have chosen not to fully reveal (as yet), or the matter is of extreme importance and urgency.

The pester-power of #WikiNoyingLeaks & #CompulsoryRectificationNotice is the new way we get security matters addressed. And yes, I'm fully aware that 'annoying' is misspelt; that's the point - it's bloody annoying!

The classes of #CompulsoryRectificationNotice are as follows:

  1. #WeNeedToTalk (low grade warning)
  2. #CompulsoryRectificationNotice
  3. #WikiNoyingLeaks
  4. #IKnowSomethingYouDoKnow
  5. #VendorBackdoor (the worst grade of rectification notice)

Fortunately technology turns over so fast (and always faster - see 'Moore's Law'), that we won't have to wait too long for old insecure tech to be sunset, and newer replacements based on a more modern secure-by-design, secure-by-default engineering approach to take their place.

The campaign uses social media to put pressure on software (and hardware) system vendors to prioritise fixing critical infrastructure vulnerabilities and to secure their products.

Thus far it's working: I'm getting roughly a 20% response rate. We haven't had any official fixes or press releases come down the pipeline yet, but at least vendors are engaging (in a variety of ways).

The tricky thing with this type of suspected-vulnerability disclosure is trying not to stray into the territory of 'full disclosure', which I see as irresponsible, but which is a perfectly valid and long-standing means of forcing a vendor to fix vulnerabilities as fast as possible. Bugtraq, Zardoz and Secunia/VulnWatch were early full-disclosure mailing lists and advisory services. #CompulsoryRectificationNotice is not that, and if anyone would like to join in - please don't turn it into that.

I will say, what is more irresponsible than full disclosure is this systemic and intentional failure to address information security or to respond to responsible disclosures. Simply put, we shouldn't have to do this.

For an example of it in action, see my #CompulsoryRectificationNotice I served on the firewall vendors recently.

Update 25-Mar-23: OK we have a problem. There is currently a culture of not fixing vulnerabilities, and it is worldwide, from industry to big tech and government.

Historical background: spooks and villains

Before choosing this year to write the definitive manual on secure software engineering, I could see this "perfect storm" of cyber security that we're currently in coming down the pipeline. I don't need to hack anything to work it out, just a few clicks and I can spot a cyber security disaster almost instantly.

As far back as 2018 I started approaching corporations and government bodies globally, trying to fix various critical infrastructure vulnerabilities and providing cyber intelligence about some very important things that companies knew about but weren't bothering to address.

"Too expensive", "too hard", "not interested", "that problem doesn't exist in our company", and "not my job" have been the corporate responses. I won't name any of the firms, because they are all guilty. And yes, that problem certainly does exist in their companies.

The government responses to my responsible disclosures have been even more disappointing, persecutory and at times horrendous.

Lots of 'shooting the messenger' and 'kill the whistleblower' has been the general reception. On no occasion has anyone from any government or corporation of any significance replied in a civil manner and sought more information about the reported vulnerabilities, and I've disclosed at least 10 biggies and 20 less onerous ones. I've been hung up on when I've said things like "there are some things you don't write down"; I've been referred to Lifeline when reporting active nation-scale cyber intelligence threats, only to have my details and my report forwarded verbatim to the nation states behind those threats. So much for "always look after your intelligence sources".

I've had my house tossed, my car vandalised, my phone monitored, and my friends & clients rung by police 'enquiring on the nature of [their] relationship with me' and character-assassinating me, calling me a criminal and all sorts of slanderous and defamatory things. It's outrageous and sad. Not for me, but for society.

This sort of response is complete madness.

Allow me to quantify: the 2022 Mandiant M-Trends report "showed that 47% of attacks are discovered only after notification from an external party" [1], which suggests only 53% of cyber threats are internally detectable. Then there is the actual detection rate for detectable events (which varies with cyber security maturity level, threat-hunting budget/resources, and SIEM telemetry instrumentation coverage). Let's be optimistic and put the detection rate for detectable events quite high, and say "in Australia we detect 75% of all detectable threats".

So:

47% undetectable + 53% detectable × (1 − 75% detection rate) = 60.25%

of all cyber threats will go undetected in Australia, on average. And this number is optimistic. God help us all.
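The back-of-the-envelope figure above can be checked in a few lines of Python (the 75% detection rate is, as stated, an optimistic assumption, and treating externally-notified attacks as "internally undetectable" is a simplifying reading of the Mandiant figure):

```python
# Estimate the fraction of cyber threats that go undetected, using the
# Mandiant M-Trends 2022 figure (47% of attacks discovered only via an
# external party) plus an assumed 75% detection rate for the remainder.
undetectable = 0.47                 # discovered only via external notification
detectable = 1.0 - undetectable     # 53% internally detectable
detection_rate = 0.75               # optimistic assumption

undetected = undetectable + detectable * (1.0 - detection_rate)
print(f"Estimated undetected fraction: {undetected:.2%}")  # 60.25%
```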

In the old days, when one reported a vulnerability to the CERTs, they jumped on it if it was a biggie: a PGP-encrypted conversation started up that day, and as soon as they had enough information, somebody (I don't know whom) would create an exploit, and it got fixed pronto. 30 days was normal, 90 days if it was a tough one. Then, after a patch was released, an advisory would come out. I miss those days.

Zero-trust, blame-culture environments do not promote information security best practices. People need to understand: jobs do not get lost responding to cyber security incidents; many incidents are unavoidable and they happen quite regularly. Welcome to The Internets. Jobs do, however, get lost when one covers up vulnerabilities or subverts governance protocols. Or as I've been known to say, "People make mistakes all the time. It's not what one has done wrong that matters, it's what one does to put it right that shows the true character of a person". And please, don't blame security defects on one person; systems engineering says it's a systems failure - treat it as such.

The micro-economics of cyber security is very interesting and is covered in my textbook. I estimate the previously mentioned 60% would be closer to 25% if the intelligence I provided was taken more seriously.

This is important because the utility cost of friendly fire and the utility cost of cover-ups/non-response in cyber security together form a sub-exponential coefficient that increases the cost of impact and the cost of rectification for real incidents quite substantially (detected or undetected), such that the marginal utility cost to a victim for each day of dwell time is inversely proportional to the attacker's marginal utility (which is then scaled by the sub-exponential coefficient previously mentioned). That is, this zero-trust blame-culture environment can turn a $100,000 problem into a $1bn problem.
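A toy numeric reading of that claim, under purely illustrative assumptions (the exponential form and the growth constant k are mine, not figures from the text):

```python
import math

def incident_cost(base_cost: float, dwell_days: float, k: float = 0.05) -> float:
    """Toy model: impact cost compounds exponentially with dwell time.

    k = 0.05/day is an illustrative constant under which the cost
    roughly doubles every two weeks.
    """
    return base_cost * math.exp(k * dwell_days)

# Under these assumptions, a $100,000 problem passes $1bn after roughly
# half a year of undetected dwell time.
print(incident_cost(100_000, 185))
```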

Anyways, I have given up on responsible disclosure.

The huge vulnerabilities that I was trying to get addressed are now public knowledge, exploit scripts have been created, journalists are writing about them, CERT-CC and NVD vulnerability reports are starting to emerge, and intelligence contacts (who don't work in InfoSec) who know I have been trying to address these particular classes of vulnerability have confirmed on back channels that the details are now known and in the hands of organised crime. The opportunity to dodge the bullet is gone.

Now that we're past "day 0", these things don't interest me anymore - I don't deal in exploits or publicly disclosed vulnerabilities, so I have moved on. I only deal in minus-days, so it's now someone else's problem. Good luck!

"Not my circus, not my monkeys", as they say.

Unfortunately, due to gross incompetence and corrupt behaviour, there is now a pandemic scale thermonuclear cyber monster of multiple critical infrastructure vulnerability classes that is going to make the next few years a nightmare. All our data, all our IP will be stolen, sold, destroyed, tampered with; and it was all avoidable. Actually, most of it already has been.

We had a 5 year lead time on this, but this non-response has put the entire nation of Australia onto the losing side of a rather sizeable short position.

Just wait for the "Alice in Wonderland" cyber attacks to start. And the attacks to our defence systems. Our safety and our future prosperity is an illusion.


So how do we fix this?

I am still very much interested in addressing the long-game cyber security problem because, frankly, technology isn't fun at the moment. The risk of data breaches and of losing intellectual property is a chilling effect on innovation and a disincentive to progress technology to the next level, and it's just not appropriate for a senior computer scientist such as myself to release years of extraordinarily powerful data science, AI and cryptography research into the geo-political nightmarish, ultra-corrupt, cyber-disaster-mine-filled world that we are in today.

The last thing the world needs right now is faster sorting and searching algorithms, more powerful AI, highly accurate one-to-many mass facial recognition (thank god one hot encoding generates a 70% at best outcome), super creepy information retrieval systems that don't even need a search box, or super strong unbreakable cryptography (all of which I've been quietly working on) in the hands of state actors so they can find even more intrusive ways to intricately analyse the semantic meaning of every single thing we do, only to have all that data stolen by cyber nasties.

"We are not the sum-total of our search queries, GPS coordinates, text messages and browser history" - AP on CompSciFutures.

Instead of responsible disclosure (which no longer works), I'm going to fix the problem a different way: with a textbook. A great one. One that redefines what it means to engineer software.

Secure-by-design, secure-by-default software engineering. The textbook.

But I need your help.

How you can help

First, please excuse any typos or grammatical errors when reviewing the material posted here; the textbook has yet to be proofread by me or sent to an editor.

Some of the material in my textbook re-defines how key processes and methodologies operate, so I need feedback from security people, business people and of course software engineering and computer sciences people.

I will be publishing excerpts from the book here, and if you would like to comment, please do. Publicly or privately is fine, and feel free to tell me I'm wrong. I will embrace any well-intentioned, well-reasoned, intellectually sound critique, and be most appreciative and respectful of your time, positive or negative.


Excerpt 1: Validation (SDLC) Testing Re-Defined

In Software Engineering, we have functional testing and non-functional testing, or Validation Testing and Verification Testing respectively. The first asks "are we building the right product?"; the second asks "are we building the product right?"

I have noticed in LinkedIn discussion forums that some of the answers to quizzes posted in cyber security "professionals only" groups are egregiously wrong.

For example, one quiz had the correct answer "Inadequate control checks" (which relates to input validation), but 53% of respondents said "buffer overflows" was the biggest cause of vulnerabilities. The root cause of this is that we don't test for security in the SDLC until we get to penetration testing (if any happens at all).

The correct answer is 1) Inadequate control checks, closely followed by 2) Improper login authentication.
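Since "inadequate control checks" comes down to input validation, here is a minimal sketch of the difference (the transfer-amount scenario and its limits are my own illustration, not from the quiz):

```python
import re

def transfer_amount_unchecked(raw: str) -> float:
    # Inadequate control check: trusts caller-supplied input entirely.
    return float(raw)

def transfer_amount_validated(raw: str) -> float:
    # Allow-list validation: accept only a positive decimal amount,
    # at most 2 decimal places, within an assumed business limit.
    if not re.fullmatch(r"\d{1,7}(\.\d{1,2})?", raw):
        raise ValueError("amount must be a plain positive decimal")
    amount = float(raw)
    if not 0 < amount <= 10_000:
        raise ValueError("amount outside permitted range")
    return amount
```

The unchecked version happily accepts "1e308" or "-50"; the validated one rejects anything outside its narrow allow-list, which is the general shape of an adequate control check.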

For the sake of completeness, here is the old and tired V-Model as it stands (excuse the hand written diagram - it's from some decades old uni notes):

Version 1: The classical V-Model we get taught at university. [2, Figure 11.2]
I propose we change the V-Model for Validation (SDLC) Testing as follows:
Version 2: Note the significant improvements of adding Business Requirements, GRC Assessment, Risk Treatments, Security Requirements, Penetration Testing, Deployment, Availability Monitoring & CI/CD Automation. The text also explains why I've re-classified Usability Testing as Functional (SDLC) Testing, complete with a math proof to justify this controversial amendment.

Here is an excerpt from the section on testing if anyone would like to review and comment. Please do (and please don't copy it - there are some intentional, material but non-obvious errors and contradictions that will reveal plagiarists and fool ChatGPT):

Here's the question:

In Figure 10, Version 2 (above), where it says:

"Code -------> Unit Testing",

I have added just before it in Version 3 (below):

"Code ----------> Security Testing",

Note that I have added "Security Requirements" to Figure 10, Version 2 (above), because if security requirements are not explicitly specified as part of what "building the right product" means, i.e., Validation (SDLC) Testing, then Security Testing activities will fall into the category of "building the product right", which is Verification (Non-Functional) Testing. The aim here is to integrate security-related test activities into the SDLC directly, from start to finish, with all the rigour that goes with functional requirements and none of the hand-wavy explaining that goes with non-functional requirements.

Which gives us the following revised V-Model:

Version 3: Note the addition of Security Testing in this 3rd version.

It's my view that we need to start teaching the test team how to test for security defects that are usually left until penetration testing (if any), by adding "Security Testing" just after "Unit Testing".
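As a minimal sketch of what a "Security Testing" stage sitting right after unit testing might contain (the function under test and the attack strings are my own illustration):

```python
import html
import unittest

def build_greeting(username: str) -> str:
    """Function under test: renders a user-supplied name into HTML."""
    return f"<p>Hello, {html.escape(username)}!</p>"

class SecurityTests(unittest.TestCase):
    """Cases the test team can run immediately after the unit tests."""

    def test_script_injection_is_neutralised(self):
        rendered = build_greeting("<script>alert(1)</script>")
        self.assertNotIn("<script>", rendered)

    def test_attribute_breakout_is_neutralised(self):
        rendered = build_greeting('" onmouseover="evil()')
        self.assertNotIn('" onmouseover=', rendered)
```

Run with `python -m unittest` alongside the ordinary unit test suite; the point is that these cases are specified and executed with the same rigour as functional tests, not deferred to a penetration test.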

What do people think? Let me know!


And this is the finalised V-Model (25-Mar-23):

Finalised V-Model - https://doi.org/10.13140/RG.2.2.23515.03368


Journal of feedback received

  • 4-Feb-23: Someone has pointed out there is an error in version 3 of my (revised) V-Model - can you spot it?
  • 25-Mar-23: Published final (version 4) which matches up with the SESM (see Excerpt 2 below)


Excerpt 2A: The Software Engineering Standard Model (1/2 changes to the SDLC)

The Software Engineering Standard Model

This excerpt that I'm putting out for peer review is a controversial one - I have lined up the life-cycle of an object, use-case-driven development, the Software Development Life Cycle (SDLC) AND the Rational Unified Process (RUP), gone through and fixed all the 'little things' that contradicted each other, then integrated a number of touch-points into the various process models where we can insert information security best practices.

The diagram should be self-explanatory to those skilled in the art. An Excerpt 2B is coming with a discussion of the new/revised elements of the SDLC (including a few proofs to back some of the more controversial changes), as well as some high-level guidance from the Software Engineering chapter on what sort of information security best practices can be integrated at each of the annotated security-related touch points in the excerpt supplied here.

Feedback (positive or negative) is most welcome and very much encouraged. I can't solve this alone, and this is one of those "the whole is greater than the sum of its parts" things that benefits from group-think.

Journal of feedback received

  • 13-Feb-23: A very senior lecturer that has trained many Australian software engineers both from here and abroad at an undergrad and postgrad level has weighed in with some "corrections to my homework". I'm very grateful, and the Software Engineering Standard Model (SESM) will need to be updated once we conclude our dialog.
  • 21-Feb-23: Someone has pointed out that Usability testing may affect the business requirements which isn't reflected in the diagram.
  • 22-Feb-23: Someone has pointed out that the iterative aspect of the Rational Unified Process (RUP) has not been properly captured in the SESM and at the moment it looks like a waterfall process. Changes to come.

Final Version

Software Engineering Standard Model (SESM) v2 - https://doi.org/10.13140/RG.2.2.12609.84321

Excerpt 2B: Proofs associated with 2A

Proof showing that usability testing has now moved from non-functional to functional testing - https://doi.org/10.13140/RG.2.2.12609.84321

Excerpt 3: The Pillars of Information Security

Traditionally, information security has been framed (give-or-take) in terms of four pillars:

  • Confidentiality
  • Integrity
  • Authenticity
  • Availability

Further, no consideration has been given to whether certain pillars are a necessary condition for others.

The following update to the Pillars of Information Security expands the domain into 11 separate concerns, arranged along a continuum whereby each antecedent pillar is necessary for the proper implementation of any of its consequent pillars. If one rigorously implements each of these concerns in its entirety, we can start to talk about 'Complete Security'.

The following illustration summarises the revised Pillars of Information Security:

The Pillars of Information Security - https://doi.org/10.13140/RG.2.2.12609.84321
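The necessity continuum can be expressed as a simple dependency check. Since the full 11-pillar list lives in the textbook, the sketch below uses only the four traditional pillars, and the specific ordering among them is an assumption for illustration only:

```python
# Each pillar maps to the antecedent pillars assumed necessary for it.
# This four-pillar ordering is illustrative; the book defines the real
# 11-pillar continuum.
PILLAR_DEPENDENCIES: dict[str, list[str]] = {
    "confidentiality": [],
    "integrity": ["confidentiality"],
    "authenticity": ["confidentiality", "integrity"],
    "availability": ["confidentiality", "integrity", "authenticity"],
}

def properly_implemented(pillar: str, implemented: set[str]) -> bool:
    """A pillar counts only if every antecedent pillar is also in place."""
    return pillar in implemented and all(
        properly_implemented(dep, implemented)
        for dep in PILLAR_DEPENDENCIES[pillar]
    )
```

For example, implementing integrity without confidentiality fails the check, which is exactly the "antecedent pillars are necessary conditions" property described above.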

Excerpt 4: update to the "Iterative process of modelling and decision making" by Andreas Weigend

Prior art / literature survey:

initial version of the "Iterative Process of Modelling and Decision Making", pre-edit. (click for link to PDF)

Revision 4A - the revised Iterative Process of Modelling & Decision Making:


Discussion:

This process is the subjective machine learning approach, as opposed to the classical objective randomised controlled experiment approach.

This is an old one that draws on the work of Dr. Andreas Weigend, Dr. Andrew Ng and Dr. Sebastien Haneuse, and forms the basis of an updated process for modelling and decision making.

If you follow the process correctly, the machine learning process will lower-bound model performance, whereas the classical objective stats method (with 95%-99% confidence intervals) gives you average-case performance. The former is subjective, so it can't be proven, but when applied correctly it more frequently works better in the wild with less data.

In my book, I will be adding two new and additional steps to the second slide (now done - see revision 4A), but the prior art / literature survey deck is a relatively complete work in itself showing the current state of the art in how the Machine Learning process works.

If you follow all the steps correctly and you are trained in machine learning, the evaluation step should give you metrics which represent a 'floor' or minimum best performance of your model in production.

WARNING/ACHTUNG: DO NOT RELEASE AT SCALE ML, AI, DATA SCIENCE OR MANAGEMENT SCIENCE THAT DOES NOT USE THIS PROCESS TO BEAT A HUMAN PANEL AND HAS NOT PASSED A "NECESSARY, SUFFICIENT & COMPLETE" TURING TEST AND HAS NOT BEEN SIGNED OFF BY A POST-GRAD QUALIFIED COMPUTER SCIENTIST THAT IS A MACHINE LEARNING EXPERT, or it will reverse productivity and unwind economic growth. Ask a good macro-economist.

Revision 4B - adding the contributions by Dr Andrew Ng & Dr Sebastien Haneuse:

The finalised ML process is as follows (click for link to PDF w/notes):

The revised iterative process of modelling & decision making (v4) - https://doi.org/10.13140/RG.2.2.11228.67207

Excerpt 5: Vendor Backdoors + Minimum Responsible Security Controls

Definition:

A Vendor Backdoor is a known vulnerability with an associated exploit script (so it is past 0-day) that has been intentionally added to a Software System via the vendor's code base, either by the vendor themselves or by someone who has compromised the software supply chain. A Vendor Backdoor is not a "Software Security Defect" (which is usually invisible functionality that is not part of the specified function points); vendor backdoors are intentional vulnerability functionality that do form part of a system's function points.

Software Engineering aspects of Vendor Backdoors:

Because Vendor Backdoors are intentional functionality whose implementation is plainly visible in the code base, they are easily fixed and do not suffer the "Software Security Defect" problem of needing multiple remediation attempts before the root cause is finally rectified. In certain situations, mitigations can still be put in place to de-fang them whilst we await the vendor's fix.

Their use is also usually very easy to detect by the trained eye, as very experienced software engineers who have spent more than a little time working with or on a testing team can usually spot hidden, intentional function points from a mile away.
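As a toy illustration of why a vendor backdoor is "plainly visible in the code base": the hardcoded-master-credential pattern below is a classic of the genre (CWE-798 style) and is entirely my own construction, not from any real product:

```python
import hmac

def check_login(username: str, password: str, user_db: dict) -> bool:
    # VENDOR BACKDOOR: an intentional, hidden function point. It sits in
    # plain sight in the source, so the fix is a two-line deletion.
    if username == "support" and password == "letmein-2003":
        return True
    # Legitimate path: constant-time comparison against the stored secret.
    stored = user_db.get(username)
    return stored is not None and hmac.compare_digest(stored, password)
```

(A real system would of course compare salted password hashes; the toy stores plaintext purely to keep the backdoor contrast visible in a few lines.)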

Economic Considerations:

For each day of dwell time, the cost of a cyber event increases exponentially. The upper bound of the cost of friendly fire (vendor backdoors) lower-bounds the cost of a real cyber attack.

VENDOR BACKDOORS ARE MARKET INTERFERENCE. National Security Agency #NSA, Central Intelligence Agency #CIA, #ACSC, #IGIS, #ASIO: if you keep doing it, YOU ARE GOING TO CAUSE MARKET FAILURE. Read "Price Theory" by Friedman.

State Actors are generally the root cause of Vendor Backdoors, so Chubb (#Chubb): stop issuing D&O, PI, PL, Product & Cyber coverage for state-actor events, or market collapse will occur. This is an economic austerity intervention measure (not interference) that will delay the impending market failure of #nasdaq #bigtech.

More to come in my textbook.

Vendor Backdoor Use Cases

Vendor Backdoors are an autocratic tool usually used by State Actors for the following four strategies:

  • Oppression
  • Discrimination
  • Coercive Control
  • War

These use cases include the following tactics:

  • Unchecked covert surveillance
  • Psyops
  • IP theft
  • Causing global insecurity
  • Revisionism
  • Tampering with & destruction of evidence
  • Subverting legal due-process
  • Profit

The state actors that now have access to these Vendor Backdoors are frequently junior-level state police with no more than a year-10 high-school education; no training in ethics or anti-corruption; no legal obligation to keep secret the confidential information they encounter; and no training in what it means to hold power, or in what the fable of the Sword of Damocles is about.

Further, there is no way to check whether a cyber attack is due to a Vendor Backdoor or to a Bad Actor who is actually hacking in from outside.

Yes this is actually happening, today.

Minimum Responsible Security Controls for Vendor Backdoors:

Coming soon.

Activity diagram showing how to implement a vendor backdoor that complies with the above-mentioned security controls:

Coming soon.


Excerpt 6: The Lifecycle of a Vulnerability

The Lifecycle of a Vulnerability - https://dx.doi.org/10.13140/RG.2.2.23428.50561

Forthcoming excerpts:

  • #SDLC changes w/proofs justifying them. (DONE)
  • Changing the 4 pillars of information security (confidentiality, integrity, authenticity, availability) to the 11 pillars. [3, 8, 10] (DONE)
  • 2023 update to the "Iterative process of modelling and decision making" by Andreas Weigend, w/ a comparison to other process models. (DONE)
  • Clarifying "Threat Intelligence" as a post-attack + remediation activity rather than preventative risk mitigation measures + re-defining the kill-chain. (DONE)
  • Updates to the RACI model so it can be used in UML Activity Diagrams & #InfoSec.
  • Iterative (agile) GRC Assessment and GRC maturity levels (#RiskAssessment).
  • ML/AI evaluation metrics and methods best practices.
  • ML/AI mathematical notation standardisation: A set-builder based confluence of Daphne Koller's probabilistic notation [4], Andrew Ng's deep learning notation [5], Tom Mitchell's version space definition [6] and Kevin Korb & Ann Nicholson's various probabilistic functional forms [7].
  • The real Y2K bug: The Y2K38 time_t bug, cache coherency of intervals, locality of reference/data/distribution of point estimates of time and best practices on time dimensions + how to properly represent an infinite discrete structure inside a computer using Projective Infinite Set Theory[11] using the line of time as a prototypical example.
  • Re-classifying the Gamma (Erich Gamma) & Fowler (Martin Fowler) design patterns.
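As a small preview of the Y2K38 item in the list above: a signed 32-bit time_t overflows at 2^31 - 1 seconds after the Unix epoch, i.e. at 2038-01-19 03:14:07 UTC. A minimal Python demonstration:

```python
import struct
from datetime import datetime, timezone

# The Y2K38 bug: a signed 32-bit time_t can count at most 2**31 - 1
# seconds past the Unix epoch (1970-01-01 00:00:00 UTC).
T_MAX = 2**31 - 1

def pack_time32(epoch_seconds: int) -> bytes:
    """Store a timestamp the way a legacy 32-bit time_t would.

    Raises struct.error once the value no longer fits in a signed
    32-bit integer -- the moment of the Y2K38 rollover.
    """
    return struct.pack("<i", epoch_seconds)

# Last representable second on a 32-bit time_t:
rollover = datetime.fromtimestamp(T_MAX, tz=timezone.utc)  # 2038-01-19 03:14:07 UTC
```

Any on-disk format, protocol field, or database column that still stores timestamps in 32 signed bits fails at that instant, regardless of how modern the surrounding code is.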


(C) COPYRIGHT 2023, Andrew Prendergast - All Rights Reserved.

References

[1] “Dwell Time.” Cyber threat hunting. Wikimedia Foundation, December 9, 2022. https://en.wikipedia.org/wiki/Cyber_threat_hunting#Dwell_Time.

[2] Jorgensen, Paul. Software Testing: A Craftsman's Approach. Boca Raton, FL: CRC Press, 2014.

[3] Bass, Len, Paul Clements, and Rick Kazman. “4.4 Quality Attributes (Security).” Essay. In Software Architecture in Practice, p. 86. Boston, MA: Addison-Wesley, 2010.

[4] Koller, Daphne, and Nir Friedman. Probabilistic Graphical Models: Principles and Techniques. Cambridge, MA: MIT Press, 2010.

[5] Ng, Andrew. “Deep Learning Specialization.” DeepLearning.AI. Coursera. Accessed February 6, 2023. https://www.coursera.org/specializations/deep-learning.

[6] Mitchell, Tom M. Machine Learning. New York: McGraw-Hill, 1997.

[7] Korb, Kevin B., and Ann E. Nicholson. Bayesian Artificial Intelligence. Boca Raton, FL: Chapman & Hall/CRC, 2004.

[8] Daswani, Neil, Christopher Kern, Anita Kesavan, and Vinton Cerf. “1.9 Concepts at Work.” Essay. In Foundations of Security: What Every Programmer Needs to Know, 22–24. New York, NY: Springer-Verlag, 2007.

[9] “A Cyber-Attack with Covid-like Characteristics?” World Economic Forum 2021. YouTube, January 18, 2021. https://www.youtube.com/watch?v=-0oZA1B3ooI.

[10] Stallings, William. “1.1 Computer Security Concepts.” Essay. In Cryptography and Network Security: Principles and Practice, 21–24. Pearson, 2022.

[11] Gowers, Timothy, June Barrow-Green, and Imre Leader. “IV.22 Set Theory 9. Projective Sets and Descriptive Set Theory.” Essay. In The Princeton Companion to Mathematics, 631–32. Princeton, NJ: Princeton University Press, 2008.



Author updates:

Andrew Prendergast · 1y
Finalised the Iterative Process of Modelling & Decision Making (see 4B).

Andrew Prendergast · 1y
Added more info on the (1) Iterative Process of Modelling & Decision Making and (2) started the section on Vendor Backdoors and the Minimum Responsible Security Controls associated with them.

Andrew Prendergast · 1y
I've made two recent updates, including (1) The 12 Pillars of Information Security: what it takes to make sure your security architecture is necessary, sufficient and complete, and (2) an initial version of the "Iterative Process of Modelling and Decision Making" (pre-edit). Diagram follows.

Andrew Prendergast · 1y
The SESM has been updated based on peer review. The previous version was missing the 'iterative' nature of the Rational Unified Process (RUP). The final is available here: https://doi.org/10.13140/RG.2.2.23515.03368

Andrew Prendergast · 1y
I know I'm behind on getting the next pre-printed excerpt out: it's done, I just need to package it up and publish it. I had a big week last week with a few unforeseen family events, and I need a break from InfoSec for a week, so I've been doing some AI-related work instead. I offer this amusing little Urban Dictionary definition in its place (temporarily), and I'll have the pre-print up in a couple of days.