The Reality Problem Grows With the AI Executive Order
Chip Block
Vice President and Chief Solutions Architect at Evolver, a Converged Security Solutions Company and CEO/CTO of Kiwi Futures, LLC
The White House released an Executive Order on Artificial Intelligence (AI) on Monday that laid out the administration's plan for managing and regulating AI. The goals of providing transparency, security, equity, and privacy are admirable and well described in the order. There is one problem: reality.
Reality is an ongoing issue in the regulatory space when it comes to today's technology. The President's National Cybersecurity Strategy clearly states that a primary method for addressing the exploding cybersecurity crisis is increased regulation. Congress is pursuing the same approach, addressing major technology changes and cybersecurity threats with new laws set to direct, and punish, those that do not meet specific auditing and reporting requirements. The problem is that most of these regulations and orders are not really implementable given the current state of technology.
What the orders and regulations do not address is the automation and speed of our current technological world. Most of these regulations envision a technical economy made up of companies with programmers in back rooms writing code that goes through architects and designers, then through quality staff and testers, and eventually to configuration boards that review everything and declare the software ready for public release. Furthermore, this software is assumed to run on client networks that are monitored and protected. That was the early 2000s. It doesn't work that way anymore.
Code is autogenerated from millions of open source and other repositories in a matter of minutes. Why did Microsoft pay $7.5B for GitHub? Access to code. They followed up with Copilot to autogenerate code from these massive repositories. Google, Amazon, Facebook, and almost all other development shops follow this development methodology. Software is not developed by a set of programmers in a back room; it is created in minutes by automated means. It is then automatically pushed out to devices around the world, also in a matter of minutes. According to 42matters.com, 39% of all iPhone apps are updated per week.
Similarly, there are no networks anymore. Applications on laptops connect to cloud "as a service" functions, which send data to phones connected to cars and even refrigerators. There is no boundary. Now combine this with the software development pace described earlier, and we live in a world where everything changes constantly and spreads instantly.
Enter AI
Where AI will mostly be felt is in the acceleration of what I described above. Where 80 percent of code may be automatically generated today, 95 percent or higher will be autogenerated within the next few years. Anybody who has played with the ChatGPT Python code generator can see this: you tell it in English what software you want to build, and it writes the code for you. The major leap that is not far off is autogeneration of code at the endpoint. In other words, my laptop generates code to create new functions, defeat security attacks, and capture new data and usage patterns, without a traditional release mechanism at all.
As admirable as the objectives of the AI Executive Order are, overlaying them on the world I described above is impossible. When the order says that items created by AI have to be labeled, does that mean that all autogenerated code has to be marked? If an AI application uses an initial dataset that has no personal data but then trains on data gathered from users, does it have to dynamically label itself? If this data is used to create new functions, do those have to be marked in some way?
This continues a trend of regulations that have trouble with reality. The addition of AI makes these already difficult requirements almost impossible. They include:
- Software Bill of Materials (SBOM) – this is a hot topic but has already hit the reality challenge. If an application has 10,000 files that are primarily open source, created by thousands of people around the world, most with no attribution as to who wrote them, at what level does bill-of-materials reporting have value? As mentioned earlier, the update rate of apps on iPhones is 39% per week. Double that in the AI world, and new SBOMs for all of these apps will be coming out every single day, if not every hour.
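To make the churn concrete, here is a minimal sketch in Python of a CycloneDX-style SBOM skeleton and the regeneration arithmetic the update rates above imply. The component names and the app count are invented for illustration; real SBOMs for a large application can list thousands of components.

```python
import json

# A minimal CycloneDX-style SBOM skeleton (hypothetical component names;
# a real SBOM for a large app can contain thousands of these entries).
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {"type": "library", "name": "example-json-parser", "version": "1.3.0"},
        {"type": "library", "name": "example-http-client", "version": "0.9.2"},
    ],
}

# If ~39% of apps update each week, roughly that fraction of SBOMs
# must be regenerated weekly just to stay current.
apps = 10_000  # hypothetical portfolio size
weekly_update_rate = 0.39
sboms_per_week = int(apps * weekly_update_rate)

print(json.dumps(sbom, indent=2))
print(f"SBOMs to regenerate per week for {apps} apps: {sboms_per_week}")
```

The point of the sketch is not the format itself but the volume: every update invalidates the previous SBOM, so the reporting burden scales with release velocity, not with the number of applications.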
- Reporting within 48/72 hours – anybody who has worked in incident response knows this was always a bit of a pipe dream. Within the first few days of an attack, everyone is just trying to figure out what is going on, much less the extent and impact of the attack. Now, in the AI age, even identifying where the attack has hit and where it is coming from is going to be extremely complicated.
- Vulnerability Management (VM) – the overall VM approach is based on the traditional software development process. Each application should have a list of known vulnerabilities that are tracked in detail, and CISA publishes its own list for everyone to fix. This process already does not work. A company with 500 software applications, ranging from tiny open-source network-monitoring tools to major enterprise resource platforms, often has close to 1,000 vulnerabilities a week to address. Now enter AI: with new software coming out every day, the pace of new vulnerabilities combined with the pace of patching and updating makes tracking and reporting on all vulnerabilities impossible.
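One way organizations already cope with this flood is triage rather than exhaustive tracking: surface only what is actively exploited or high severity. Here is a hedged Python sketch of that idea; the CVE identifiers and scores are invented, and `triage` and `cvss_floor` are names of my own choosing, not part of any standard tooling.

```python
# Hypothetical vulnerability records; in practice these would come from
# scanner output cross-referenced against CISA's Known Exploited
# Vulnerabilities (KEV) catalog.
vulns = [
    {"cve": "CVE-2024-0001", "cvss": 9.8, "in_kev": True},
    {"cve": "CVE-2024-0002", "cvss": 5.4, "in_kev": False},
    {"cve": "CVE-2024-0003", "cvss": 7.5, "in_kev": True},
    {"cve": "CVE-2024-0004", "cvss": 8.1, "in_kev": False},
]

def triage(records, cvss_floor=7.0):
    """Keep only actively exploited or high-severity findings, with
    exploited ones first -- a loss-reduction filter, not full tracking."""
    urgent = [v for v in records if v["in_kev"] or v["cvss"] >= cvss_floor]
    # Sort KEV entries first, then by descending CVSS score.
    return sorted(urgent, key=lambda v: (not v["in_kev"], -v["cvss"]))

for v in triage(vulns):
    print(v["cve"], v["cvss"], "KEV" if v["in_kev"] else "")
```

Even a filter like this only reduces the queue; it does not solve the reporting problem the regulations assume is solvable, which is the point of the bullet above.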
- Privacy Management – there has been an explosion of privacy laws passed by federal and state governments over the past few years. The implication is that the systems sharing information have inherent knowledge of the data being shared and consumed: for example, that a mapping app like Google Maps knows the place someone is going is a hospital, and that this is therefore protected information. These systems were not designed this way; in fact, the rationale the Googles and Facebooks of the world use for why they are not responsible for data is that their systems are agnostic. In the AI world of autogenerated code and function creation, this issue is going to multiply exponentially. Data collected by apps will be used to train the apps, which will create more functionality, which will collect more data. Personal data is going to be in this mix, and segmenting it with current systems will be almost impossible.
- CMMC – foundational to assessments such as CMMC is the premise that the network, systems, users, and applications can be defined and evaluated. As described above, this is already a challenge. Now enter an AI world where software-defined networks, applications, users, and data change daily. The assessment becomes a snapshot through a very tiny window that does not really reflect the overall security posture at all.
Paper for Paper's Sake
As with many regulations, what will happen is that companies and organizations will come up with workarounds so they can say they are meeting current requirements. This becomes more of a paper drill than actually fulfilling the detailed objective of the regulation or order. It ends up as a whole line of business in itself, creating forms, assessors, auditors, and so on to review and approve the process. In the end, however, the results are minor because of the reality challenges mentioned above.
Thinking Differently
As the phrase goes, pointing out issues without solutions is just complaining, so I have to at least attempt to address the challenges mentioned above. Most importantly, we need to start thinking about this issue differently. Instead of large actions measured in months, small actions measured in days would move toward a pace aligned with the AI paradigm. Shifting risk calculations and actions toward reducing loss and impact, rather than trying to defeat every attack, is also better suited to the fast, dynamic world AI is creating. And finally, the true protection lies in how data is managed, independently of the networks, devices, and applications being used. I know these concepts are fuzzy and not fully defined, so they too may have reality issues. What is clear, however, is that we need to start thinking differently.