June 23, 2021
Kannan Subbiah
FCA | CISA | CGEIT | CCISO | GRC Consulting | Independent Director | Enterprise & Solution Architecture | Former Sr. VP & CTO of MF Utilities | BU Soft Tech | itTrident
Drift is a change in distribution over time. It can be measured for model inputs, outputs, and actuals. Drift can occur because your models have grown stale, because bad data is flowing into your model, or even because of adversarial inputs. Now that we know what drift is, how can we keep track of it? Essentially, tracking drift in your models amounts to keeping tabs on what has changed between your reference distribution (for example, the data your model was trained on) and your current production distribution. Models are not static. They are highly dependent on the data they are trained on. Especially in hyper-growth businesses where data is constantly evolving, accounting for drift is important to ensure your models stay relevant. Change in the input to the model is almost inevitable, and your model can’t always handle this change gracefully. Some models are resilient to minor changes in input distributions; however, as these distributions stray far from what the model saw in training, performance on the task at hand will suffer. This kind of drift is known as feature drift or data drift. It would be amazing if the only things that could change were the inputs to your model, but unfortunately, that’s not the case.
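To make that concrete, here is a minimal sketch (not from the article) of one common way to quantify feature drift: the Population Stability Index (PSI), computed between a training-time reference sample and a production sample. The data, the bin count, and the 0.2 threshold are illustrative assumptions.

```python
import numpy as np

def population_stability_index(reference, current, bins=10):
    """PSI between a reference (training) sample and a current
    (production) sample of a single numeric feature."""
    # Bin edges are derived from the reference distribution.
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_counts, _ = np.histogram(reference, bins=edges)
    cur_counts, _ = np.histogram(current, bins=edges)

    # Convert counts to proportions; a small epsilon avoids log(0).
    eps = 1e-6
    ref_pct = ref_counts / ref_counts.sum() + eps
    cur_pct = cur_counts / cur_counts.sum() + eps

    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

# Hypothetical usage: production data has shifted relative to training.
rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, 10_000)  # reference distribution
prod_feature = rng.normal(0.5, 1.2, 10_000)   # drifted production data
print(f"PSI = {population_stability_index(train_feature, prod_feature):.3f}")
```

A common rule of thumb treats PSI above roughly 0.2 as meaningful drift, but the right threshold depends on the feature and the business.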
To mount a proper defense, you must understand what digital assets are exposed, where attackers will most likely target a network, and what protections are required. So increasing attack surface visibility and building a strong representation of attack vulnerabilities are critical. The types of vulnerabilities to look for include older and less secure computers or servers, unpatched systems, outdated applications, and exposed IoT devices. Predictive modeling can help create a realistic depiction of possible events and their risks, further strengthening defense and proactive measures. Once you understand the risks, you can model what will happen before, during, and after an event or breach. What kind of financial loss can you expect? What will be the reputational damage of the event? Will you lose business intelligence, trade secrets, or more? “The successful [attack surface mapping] strategies are pretty straightforward: Know what you are protecting (accurate asset inventory); monitor for vulnerabilities in those assets; and use threat intelligence to know how attackers are going after those assets with those vulnerabilities,” says John Pescatore, SANS director of emerging security trends.
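As one hypothetical illustration of the predictive-modeling step described above (not taken from the article), the sketch below uses a simple Monte Carlo simulation to turn a breach probability and a loss-severity assumption into an expected annual financial loss; every number in it is invented for illustration.

```python
import numpy as np

# Hypothetical inputs -- in practice these would come from threat
# intelligence and your asset inventory, not from guesses.
ANNUAL_BREACH_PROBABILITY = 0.15   # chance of at least one breach per year
LOSS_MEDIAN = 250_000              # median direct loss per breach (USD)
LOSS_SIGMA = 1.0                   # spread of the lognormal severity model

def simulate_annual_loss(trials=100_000, seed=0):
    """Monte Carlo estimate of annual breach loss under the assumptions above."""
    rng = np.random.default_rng(seed)
    breached = rng.random(trials) < ANNUAL_BREACH_PROBABILITY
    # Lognormal is a common (though simplified) loss-severity model.
    losses = rng.lognormal(np.log(LOSS_MEDIAN), LOSS_SIGMA, trials)
    annual = np.where(breached, losses, 0.0)
    return annual.mean(), np.percentile(annual, 95)

expected, p95 = simulate_annual_loss()
print(f"Expected annual loss: ${expected:,.0f}; 95th percentile: ${p95:,.0f}")
```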
There’s the technology of building the blockchain, and then there’s building the network and the business around that. So there are multiple legs to the stool, and the technology is actually the easiest piece. That’s just establishing architecturally how you want to embody that network: how many nodes, how many channels, how your data is going to be structured, and how information is going to move across the blockchain. But the more interesting and challenging exercise, as is true with any network, is participation. I think it was Marc Andreessen who famously said, “People are on Facebook because people are on Facebook.” You have to drive participation, so you have to consider how to bring participants to this network, how organizations can be engaged, and what’s going to make it compelling for them. What’s the value proposition? What are they going to get out of it? How do you monetize it, and how do you operate it? And you can’t figure that out on the fly. So we went out to bring the top-of-the-food-chain organizations in various industries on board, so they can help establish the momentum for the network to take off.
The big three frameworks are the Lockheed Martin Cyber Kill Chain®, the Diamond Model, and MITRE ATT&CK. If there’s a fourth, I would add VERIS, which is the framework that Verizon uses for their annual Data Breach Investigations Report. I often get asked which framework is the best, and my favorite answer as an analyst is always, “It depends on what you’re trying to accomplish.” The Diamond Model offers an amazing way for analysts to cluster activity together. It’s very simple and covers the four parts of an intrusion event. For example, if we see an adversary today using a specific malware family plus a specific domain pattern, and then we see that combination next week, the Diamond Model can help us realize those look similar. The Kill Chain framework is great for communicating how far an incident has gotten. We just saw reconnaissance or an initial phish, but did the adversary take any actions on objectives? MITRE ATT&CK is really useful if you’re trying to track down to the TTP level. What are the behaviors an adversary is using? You can also combine these different frameworks.
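As an illustrative aside (not from the interview), the sketch below shows one way the Diamond Model’s four vertices can be captured in code so that events sharing a malware family and a domain pattern, like the example above, cluster together. All of the event data is made up.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass(frozen=True)
class DiamondEvent:
    """The four vertices of a Diamond Model intrusion event."""
    adversary: str       # who (often unknown at first)
    capability: str      # e.g., malware family
    infrastructure: str  # e.g., domain pattern or C2 host
    victim: str          # targeted organization or sector

# Hypothetical observed events.
events = [
    DiamondEvent("unknown", "QuietRAT", "*.cdn-update[.]net", "finance"),
    DiamondEvent("unknown", "QuietRAT", "*.cdn-update[.]net", "energy"),
    DiamondEvent("unknown", "BrightLoader", "bulk IP scanning", "retail"),
]

# Cluster on shared capability + infrastructure, as an analyst does when
# "that combination" reappears a week later.
clusters = defaultdict(list)
for e in events:
    clusters[(e.capability, e.infrastructure)].append(e)

for key, members in clusters.items():
    print(key, "->", len(members), "event(s)")
```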
The microservices architecture not only makes the whole application much more decoupled and cohesive, it also makes teams more agile, able to make frequent deployments without interrupting or depending on others. Communication among services is most commonly done over HTTP (HyperText Transfer Protocol). The request and response format (XML or JSON) is known as the API contract, and that’s what binds services together to form the complete behaviour of the application. In the example above, we are talking about an application that serves both Web and Mobile users, and that allows external services to integrate using the REST API endpoints it exposes. Each of these use cases has its own endpoints exposed behind an individual Load Balancer that routes incoming requests to the best available resources. Each internal service contains a Web Server that handles all incoming requests and forwards them to the right service, an Application Server that hosts all the business logic of the microservice, and a quasi-persistent layer: a local replica of the database based on spatial and/or temporal locality of data.
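To make the API-contract idea concrete, here is a minimal sketch of a hypothetical “orders” microservice in Flask; the route, field names, and status values are assumptions for illustration, not details from the article.

```python
# A hypothetical "orders" microservice exposing one REST endpoint.
# The JSON request/response shapes below are the API contract that
# other services depend on.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.post("/api/v1/orders")
def create_order():
    payload = request.get_json(force=True)
    # Contract: the request must carry 'item_id' and 'quantity'.
    if "item_id" not in payload or "quantity" not in payload:
        return jsonify({"error": "item_id and quantity are required"}), 400
    # Business logic would live in the application layer; stubbed here.
    order = {"order_id": 42, "item_id": payload["item_id"],
             "quantity": payload["quantity"], "status": "CREATED"}
    # Contract: the response returns the created order with a status field.
    return jsonify(order), 201

if __name__ == "__main__":
    app.run(port=8080)
```

Any consumer, whether the web front end, the mobile app, or an external integrator, only needs to honor this JSON contract; the load balancer and internal routing stay invisible to it.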
Autonomous systems have complex interactions with the real world. This raises many questions about the validation of autonomous systems: How do we trace back decision making and judge it afterwards? How do we supervise learning, adaptation, and especially correct behaviors, specifically when critical corner cases are observed? Another challenge is how to define reliability in the event of failure. With artificial intelligence and machine learning, we need to satisfy algorithmic transparency. For instance, what are the rules, inside a neural network that is no longer algorithmically tangible, that determine how an autonomous system might react to several hazards at the same time? Classic traceability and regression testing will certainly not work. Rather, future verification and validation methods and tools will include more intelligence based on big data exploitation, business intelligence, and their own learning, so that they learn about and improve software quality in a dynamic way.