September 01, 2024
Kannan Subbiah
FCA | CISA | CGEIT | CCISO | GRC Consulting | Independent Director | Enterprise & Solution Architecture | Former Sr. VP & CTO of MF Utilities | BU Soft Tech | itTrident
Since cyber risk can’t be eliminated, the question that must be answered is: can cyber risk at least be managed in a cost-effective manner? The answer is an emphatic yes! ... Identify the sources of cyber risk. These sources can be broken down into various categories; more specifically, there are internal and external threats, as well as potential vulnerabilities, that together form the basis of cyber risk. Identifying these threats and vulnerabilities is not only a logical place to start managing an organization’s cyber risk, it will also help frame an approach for addressing it. Estimate the likelihood (i.e., probability) that your organization will experience a cyber breach. Any single point estimate of the probability of a breach is just that: an estimate of one possibility from a probability distribution. Rather than estimating a single probability, a range of probabilities can be considered. Estimate the maximum cost to the organization if a breach occurs. Here again, a point estimate of the maximum cost of a cyber attack captures only one possible outcome, so a range of costs can be considered instead.
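Treating both the likelihood and the cost as ranges lends itself to a simple Monte Carlo treatment. The sketch below is not from the article, and the probability and cost ranges in it are purely illustrative assumptions; it only shows how a loss distribution (expected and tail loss) can replace a single point estimate.

```python
# Minimal sketch (illustrative assumptions, not the article's method): sample breach
# probability and breach cost from ranges to build an annual loss distribution.
import random

def simulate_annual_loss(prob_range=(0.05, 0.25),          # assumed breach probability range
                         cost_range=(250_000, 4_000_000),   # assumed maximum-cost range (USD)
                         trials=100_000):
    """Monte Carlo sample of annual cyber loss under uncertain probability and cost."""
    losses = []
    for _ in range(trials):
        p = random.uniform(*prob_range)       # draw one probability from the range
        cost = random.uniform(*cost_range)    # draw one cost from the range
        losses.append(cost if random.random() < p else 0.0)
    losses.sort()
    return {
        "expected_loss": sum(losses) / trials,
        "p95_loss": losses[int(0.95 * trials)],  # tail figure useful when budgeting controls
    }

if __name__ == "__main__":
    print(simulate_annual_loss())
```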
AI prosthetics technology is advancing on several fronts. Researchers at the UK's University of Southampton and Switzerland's EPFL University have, for instance, developed a sensor that allows prosthetic limbs to sense wetness and temperature changes. "This capability helps users adjust their grip on slippery objects, such as wet glasses, enhancing manual dexterity and making the prosthetic feel more like a natural part of their body," Torrang says. Multi-texture surface recognition is another area of important research. Advanced AI algorithms, such as neural networks, can be used to process data from liquid metal sensors embedded in prosthetic hands. "These sensors can distinguish between different textures, enabling users to feel various surfaces," Torrang says. "For example, researchers have developed a system that can accurately detect and differentiate between ten different textures, helping users perform tasks that require precise touch." Natural sensory feedback research is also attracting attention. AI can be used to provide natural sensory feedback through biomimetic stimulation, which mimics the natural signals of the nervous system.
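To make the multi-texture idea concrete, here is a minimal sketch of a small neural network classifying ten texture classes from sensor readings. It is not the researchers' actual pipeline; the sensor data is synthetic and the feature count is an assumption, but it shows the general shape of such a classifier.

```python
# Illustrative only: a small neural network distinguishing 10 textures from
# simulated sensor channels. Synthetic data; not the published system.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_textures, samples_per_texture, n_features = 10, 200, 32  # 32 simulated sensor channels

# Each texture gets its own characteristic sensor signature plus measurement noise.
signatures = rng.normal(size=(n_textures, n_features))
X = np.vstack([sig + 0.3 * rng.normal(size=(samples_per_texture, n_features)) for sig in signatures])
y = np.repeat(np.arange(n_textures), samples_per_texture)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(f"texture classification accuracy: {clf.score(X_test, y_test):.2f}")
```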
As software grows, code can become overly complicated and difficult to understand, making modifications and extensions challenging. Refactoring simplifies and clarifies the code, enhancing its readability and maintainability. Signs of poor performance: if software performance degrades or fails to meet efficiency benchmarks, refactoring can optimize it and improve its speed and responsiveness. Migration to newer technologies and libraries: when migrating legacy systems to newer technologies and libraries, code refactoring ensures smooth integration and compatibility, preventing potential issues down the line. Frequent bugs: frequent bugs and system crashes often indicate a messy codebase that requires cleanup; if your team spends more time tracking down bugs than developing new features, code refactoring can improve stability and reliability. Onboarding a team of new developers: onboarding new developers is another instance where refactoring is beneficial, as standardizing the code base ensures new team members can understand and work with it more effectively.
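A small before/after example (invented here, not from the article) shows the kind of change refactoring makes: behaviour stays identical while naming, structure, and testability improve.

```python
# Before: one tangled function mixing iteration, a business rule, and totalling.
def process(d):
    t = 0
    for x in d:
        if x[2] == "paid" and x[1] > 0:
            t = t + x[1] * (1 - x[3])
    return t

# After: descriptive names and a single-purpose helper make the rule easy to read and test.
def net_amount(order):
    """Amount of one order after its discount is applied."""
    _, amount, _, discount = order
    return amount * (1 - discount)

def total_paid_revenue(orders):
    """Sum of net amounts for orders that are paid and have a positive amount."""
    return sum(net_amount(o) for o in orders if o[2] == "paid" and o[1] > 0)

orders = [("A1", 100.0, "paid", 0.1), ("A2", 50.0, "open", 0.0)]
assert process(orders) == total_paid_revenue(orders) == 90.0  # behaviour unchanged
```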
The most immediate chance to make a difference lies within your existing dataset. Take the initiative to compare your supplier and customer master data with reliable external sources, such as government databases, regulatory lists, and other trusted entities, to pinpoint discrepancies and omissions. Consider this approach a form of “data governance as a service”: a shortcut to data quality where comparison with authoritative data sources confirms that fields are the right length, in the right format, and, even more important, accurate. This task may require significant effort (unless automated master data validation and enrichment is employed), but it can provide an immediate ROI. Each corrected error and updated entry contributes to greater compliance, lower risk and enhanced operational efficiency within the organization. However, many companies lack a consistent process for cleaning data, and even among those with a process in place, the scope and frequency of data cleansing is often insufficient. The best data quality comes from continuous automated cleansing and enrichment.
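As a minimal sketch of what automated master data validation can look like, the snippet below checks supplier records for field length, format, and presence in a reference list. The field rules and the stand-in reference set are hypothetical, not a real government feed.

```python
# Minimal sketch of automated master-data validation; the VAT rule and the
# reference set below are hypothetical stand-ins for an authoritative source.
import re

REFERENCE_TAX_IDS = {"DE811907980", "FR40303265045"}   # stand-in for an external registry
VAT_PATTERN = re.compile(r"^[A-Z]{2}[0-9A-Z]{8,12}$")  # assumed length/format rule

def validate_supplier(record):
    """Return a list of data-quality issues for one supplier master record."""
    issues = []
    vat = record.get("vat_id", "").strip().upper()
    if not VAT_PATTERN.match(vat):
        issues.append("vat_id has wrong length or format")
    elif vat not in REFERENCE_TAX_IDS:
        issues.append("vat_id not found in reference source")
    if not record.get("name"):
        issues.append("name is missing")
    return issues

suppliers = [
    {"name": "Acme GmbH", "vat_id": "DE811907980"},
    {"name": "", "vat_id": "de81190"},
]
for s in suppliers:
    print(s.get("name") or "<missing name>", "->", validate_supplier(s) or "ok")
```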
Assume-breach accepts that breaches are inevitable, shifting the focus from preventing all breaches to minimizing the impact of a breach through security measures, protocols and tools that are designed with the assumption that an attacker may have already compromised parts of the network. Paired with the assume-breach mindset, these security measures, protocols and tools focus on protecting data, detecting unusual behavior and responding quickly to potential threats. Just as cars are equipped with seatbelts and airbags to reduce the fallout of a crash, assume-breach encourages organizations to put proactive measures in place to reduce the impact and damage when the worst occurs. ... In the event a cyber attack does occur, having a well-tested and resilient plan in place is key to minimizing the impact. As the entire organization participates in these practices and trainings, leaders can focus on implementing assume-breach security measures, protocols and tools. These measures should include enhancing real-time visibility, identifying vulnerabilities, blocking known ransomware points and strategic asset segmentation.
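As a toy illustration of “detecting unusual behavior and responding quickly” (invented here, not from the article), the sketch below flags logins from locations a user has never used before so a responder can investigate; real deployments would use richer signals and thresholds.

```python
# Illustrative only: flag logins from never-before-seen locations.
from collections import defaultdict

known_locations = defaultdict(set)   # user -> locations seen before

def check_login(user, location, alert):
    """Alert on a login from a new location for a known user, then record it."""
    if known_locations[user] and location not in known_locations[user]:
        alert(f"unusual login for {user!r} from {location!r}")
    known_locations[user].add(location)

events = [("alice", "Berlin"), ("alice", "Berlin"), ("alice", "Lagos"), ("bob", "Madrid")]
for user, location in events:
    check_login(user, location, alert=print)
```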
The first reason is that, due to the cost of GPUs, generative AI has broken the near-zero marginal cost model that SaaS has enjoyed. Today, anything bundling generative AI commands a high seat price simply to make the product economically viable. This detachment from underlying value is consequential for many products that can’t price optimally to maximize revenue. In practice, some products are constrained by a pricing floor (e.g., it is impossible to discount 50% to 10x the volume), and some features can’t be launched because the upsell doesn’t pay for the inference cost ... The second reason is that the user experience with remote models could be better: generative AI enables useful new features, but they often come at the expense of a worse experience. Applications that didn’t depend on an internet connection (e.g., photo editors) now require it. Remote inference introduces additional friction, such as latency. Local models remove the dependency on an internet connection. The third reason has to do with how models handle user data. This plays out in two dimensions. First, serious concerns have been raised about sharing growing amounts of private information with AI systems.
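The pricing-floor argument is easy to see as back-of-the-envelope arithmetic. Every number in the sketch below is an invented assumption; the point is only the shape of the calculation.

```python
# Hypothetical numbers throughout: does an AI upsell cover its own inference cost?
monthly_upsell_per_seat = 10.00          # assumed AI add-on price (USD/seat/month)
requests_per_seat_per_month = 600        # assumed usage
tokens_per_request = 2_000               # assumed prompt + completion size
cost_per_million_tokens = 5.00           # assumed hosted-model inference price (USD)

inference_cost_per_seat = (
    requests_per_seat_per_month * tokens_per_request / 1_000_000 * cost_per_million_tokens
)
margin = monthly_upsell_per_seat - inference_cost_per_seat
print(f"inference cost/seat: ${inference_cost_per_seat:.2f}, margin on upsell: ${margin:.2f}")
# Heavier usage or larger contexts push the margin negative, which is why some features
# never ship and why local models (near-zero marginal inference cost) look attractive.
```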