August 21, 2022
Kannan Subbiah
FCA | CISA | CGEIT | CCISO | GRC Consulting | Independent Director | Enterprise & Solution Architecture | Former Sr. VP & CTO of MF Utilities | BU Soft Tech | itTrident
Traditional approaches to fraud prevention and response no longer measure up. First, they’re reactive rather than proactive, focused on damage that has already taken place rather than anticipating, and potentially preventing, future threats. The limitations of this approach play out in commercial off-the-shelf tools that organizations can’t easily adapt to new developments in the fraud landscape. Even the most cutting-edge AI solutions may be limited in detecting new types of fraud schemes, having been trained only on known categories. Second, today’s siloed operations impede progress. Cybersecurity teams and fraud teams, the two groups on the front lines of the fight, too often work with different tools, workflows, and intelligence sources. These silos extend across the various stages of the fraud-fighting lifecycle: threat hunting, monitoring, analysis, investigation, response, and more. Individual tools address only discrete parts of the process, rather than the full continuum, leaving much to fall through the gaps. When one team notices something suspicious, the full organization might not know about the threat and act on it until it’s too late.
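One common way around the known-categories limitation is to pair supervised classifiers with unsupervised anomaly detection, which flags transactions that simply look unlike anything seen before. A minimal sketch, assuming scikit-learn is available; the feature names and values are invented for illustration, not from the article:

```python
# Unsupervised anomaly detection: flags unusual transactions without
# needing labeled examples of each fraud category.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical transaction features: amount, hour of day, merchant risk score.
normal = rng.normal(loc=[50, 14, 0.2], scale=[20, 4, 0.1], size=(1000, 3))
suspicious = np.array([[5000, 3, 0.9]])  # large, off-hours, risky merchant

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# predict() returns -1 for anomalies, 1 for inliers.
print(model.predict(suspicious))  # expected: [-1]
```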
One of the biggest challenges in AI, bias can stem from several sources. The data used to train AI models might reflect real societal inequalities, or the AI developers themselves might hold conscious or unconscious biases about gender, race, age, and more that wind up in ML algorithms. Discriminatory decisions can ensue, such as when Amazon’s recruiting software penalized applications that included the word “women,” or when a health care risk-prediction algorithm exhibited a racial bias that affected 200 million hospital patients. To combat AI bias, AI-powered enterprises are incorporating bias-detecting features into AI programming, investing in bias research, and making efforts to ensure that both the training data used for AI and the teams that develop it are diverse. Gartner predicts that by 2023, “all personnel hired for AI development and training work will have to demonstrate expertise in responsible AI.” Continually monitoring, analyzing, and improving ML algorithms using a human-in-the-loop (HITL) approach, where humans and machines work together rather than separately, can also help reduce AI bias.
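As a concrete example of a bias-detecting feature, a common first check is demographic parity: comparing a model’s rate of favorable outcomes across groups. A minimal sketch; the data, group labels, and threshold are hypothetical, not from the article:

```python
# Demographic parity check: compare the rate of favorable model outcomes
# across demographic groups; a large gap is a signal to investigate.
import numpy as np

# Hypothetical model outputs: 1 = favorable decision (e.g., loan approved).
predictions = np.array([1, 0, 1, 1, 1, 1, 0, 0, 0, 0])
groups      = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rates = {g: predictions[groups == g].mean() for g in np.unique(groups)}
gap = max(rates.values()) - min(rates.values())

print(rates)            # -> {'A': 0.8, 'B': 0.2}
print(f"parity gap: {gap:.2f}")
if gap > 0.2:           # illustrative threshold, tune per use case
    print("Warning: possible disparate impact; route to human review (HITL).")
```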
Scalability refers to a system’s ability to perform and operate as the number of users or requests increases. It is achievable with horizontal or vertical scaling of the machines, or by attaching AutoScalingGroup capabilities. Here are three areas to consider when architecting scalability into your system (see the sketch after this list):
- Traffic pattern: Understand the system’s traffic pattern. Spawning as many machines as possible is not cost-efficient, because most would sit underutilized. Here are three sample patterns:
  - Diurnal: Traffic increases in the morning and decreases in the evening for a particular region.
  - Global/regional: Heavy usage of the application in a particular region.
  - Thundering herd: Many users request resources, but only a few machines are available to serve the burst of traffic. This could occur during peak times or in densely populated areas.
- Elasticity: The ability to quickly spawn additional machines to handle a burst of traffic and gracefully shrink the fleet when demand subsides.
- Latency: The system’s ability to serve a request as quickly as possible.
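To make elasticity concrete, here is a minimal sketch of target-tracking scale-out/scale-in logic, similar in spirit to what an AutoScalingGroup policy computes. The metric, target value, and capacity bounds are illustrative assumptions, not from the article:

```python
# Target-tracking autoscaling sketch: size the fleet so the observed
# per-fleet metric (e.g., average CPU %) converges toward a target value.
import math

def desired_capacity(current_capacity: int, observed_metric: float,
                     target_metric: float, min_cap: int = 2,
                     max_cap: int = 20) -> int:
    """Return the instance count that would bring the metric to target."""
    raw = current_capacity * (observed_metric / target_metric)
    return max(min_cap, min(max_cap, math.ceil(raw)))

# Diurnal morning ramp: CPU climbs to 85% on a 4-node fleet (target 50%).
print(desired_capacity(4, 85.0, 50.0))   # -> 7 (scale out)

# Evening lull: CPU falls to 20%, so the fleet shrinks gracefully.
print(desired_capacity(7, 20.0, 50.0))   # -> 3 (scale in)
```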
A few weeks later, Yann LeCun, the chief scientist at Meta’s artificial intelligence (AI) lab and winner of the 2018 Turing Award, released a paper titled “A Path Towards Autonomous Machine Intelligence.” In it he shares an architecture that moves beyond questions of consciousness and sentience to propose a pathway to programming an AI with the ability to reason and plan like humans, what researchers call artificial general intelligence, or AGI. I think we will come to regard LeCun’s paper with the same reverence that we reserve today for Alan Turing’s 1936 paper that described the architecture for the modern digital computer. Here’s why. ... LeCun’s first breakthrough is imagining a way past the limitations of today’s specialized AIs with his concept of a “world model.” This is made possible in part by the invention of a hierarchical architecture for predictive models that learn to represent the world at multiple levels of abstraction and over multiple time scales. With this world model, we can predict possible future states by simulating action sequences. In the paper, he notes, “This may enable reasoning by analogy, by applying the model configured for one situation to another situation.”
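To give a feel for what “predicting possible future states by simulating action sequences” means, here is a toy sketch of model-based planning. It illustrates only the general idea, not LeCun’s proposed architecture; the dynamics model, cost function, and action set are invented for illustration:

```python
# Toy model-based planning: use a predictive world model to simulate
# candidate action sequences, then pick the sequence with the lowest
# predicted cost. Here the "world" is a 1-D position and the goal is 10.
from itertools import product

def world_model(state: float, action: float) -> float:
    """Stand-in for a learned predictive model: next state given an action."""
    return state + action

def cost(state: float, goal: float = 10.0) -> float:
    return abs(goal - state)

def plan(state: float, actions=(-1.0, 0.0, 1.0), horizon: int = 3):
    """Exhaustively simulate every action sequence over the horizon."""
    best_seq, best_cost = None, float("inf")
    for seq in product(actions, repeat=horizon):
        s = state
        for a in seq:                 # imagined rollout, no real-world steps
            s = world_model(s, a)
        if cost(s) < best_cost:
            best_seq, best_cost = seq, cost(s)
    return best_seq, best_cost

print(plan(7.0))   # -> ((1.0, 1.0, 1.0), 0.0): step toward the goal
```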
One key takeaway from all this: consolidating application descriptors enables efficiencies via modularization and reuse of tested and proven elements. This way the DevOps team can respond quickly to dev team needs in a way that is scalable and repeatable. Some potential anti-patterns include:
- Developers throw their application environment change requests over the fence to the DevOps team via the ticketing system, causing the relationship to worsen. Leaders should implement safeguards to detect this scenario in advance and then consider the appropriate response. An infrastructure control plane, in many cases, can provide the capabilities to discover and subsume the underlying IaC files and detect any code drift between environments; automating this process can alleviate much of the friction between developers and DevOps teams (a drift-check sketch follows this list).
- Developers take things into their own hands, resulting in an increased number of changes in local IaC files and an associated loss of control. Mistakes happen, things stop working, and finger-pointing ensues.
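As one way to automate drift detection, Terraform’s plan command reports pending changes via its exit code (-detailed-exitcode returns 0 for no changes, 1 for an error, 2 when drift or pending changes exist). A minimal sketch, assuming Terraform is installed and the working directory is already initialized; the directory layout is hypothetical:

```python
# Drift-check sketch: run `terraform plan -detailed-exitcode` and interpret
# the exit code. 0 = in sync, 1 = error, 2 = drift/pending changes.
import subprocess
import sys

def check_drift(workdir: str) -> bool:
    """Return True if the live environment has drifted from the IaC files."""
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-input=false"],
        cwd=workdir, capture_output=True, text=True,
    )
    if result.returncode == 1:
        sys.exit(f"terraform plan failed:\n{result.stderr}")
    return result.returncode == 2

if check_drift("./environments/prod"):   # hypothetical directory layout
    print("Drift detected: open a review instead of letting changes pile up.")
else:
    print("Environment matches IaC definitions.")
```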
DevOps is changing fundamentally as a result of AI and ML. The change is most notable in security, which acknowledges the need for complete protection that is intelligent by design (DevSecOps). Many of us believe that shortening the software development life cycle is the next critical step in ensuring the secure delivery of integrated systems via Continuous Integration and Continuous Delivery (CI/CD). DevOps is a business-driven method for delivering software, and AI is a technology that can be integrated into the system for improved functioning; they are mutually dependent. With AI, DevOps teams can test, code, release, and monitor software more effectively. AI can also enhance automation, swiftly locate and fix problems, and improve teamwork. AI has the potential to increase DevOps productivity significantly, improving performance by enabling rapid development and operation cycles and an engaging user experience. Machine learning technologies can also simplify data collection from the many components of a DevOps system.
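As a small illustration of ML helping teams “swiftly locate problems,” here is a rolling z-score anomaly detector over a stream of deployment metrics; the metric values and threshold are invented for illustration:

```python
# Rolling z-score anomaly detection: flag metric samples that deviate
# sharply from the recent baseline (e.g., error rate after a release).
from collections import deque
import statistics

def detect_anomalies(samples, window: int = 5, threshold: float = 3.0):
    """Yield (index, value) for samples far outside the rolling baseline."""
    history = deque(maxlen=window)
    for i, value in enumerate(samples):
        if len(history) == window:
            mean = statistics.mean(history)
            stdev = statistics.stdev(history) or 1e-9  # avoid divide-by-zero
            if abs(value - mean) / stdev > threshold:
                yield i, value
        history.append(value)

# Hypothetical per-minute error rates; the spike follows a bad deploy.
error_rates = [0.9, 1.1, 1.0, 1.2, 0.8, 1.0, 1.1, 9.5, 1.0]
print(list(detect_anomalies(error_rates)))   # -> [(7, 9.5)]
```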