November 05, 2024
Kannan Subbiah
FCA | CISA | CGEIT | CCISO | GRC Consulting | Independent Director | Enterprise & Solution Architecture | Former Sr. VP & CTO of MF Utilities | BU Soft Tech | itTrident
Currently, the All-India Institute of Medical Sciences (AIIMS) Delhi is the only public healthcare institution exploring AI-driven solutions. AIIMS, in collaboration with the Ministry of Electronics & Information Technology and the Centre for Development of Advanced Computing (C-DAC) Pune, launched the iOncology.ai platform to support oncologists in making informed cancer treatment decisions. The platform uses deep learning models to detect early-stage ovarian cancer, and available data shows it has already improved patient outcomes while reducing healthcare costs. It is one of the few key AI-driven initiatives in India. Although AI adoption in the healthcare provider segment is relatively high at 68%, a large share of deployments remains in the proof-of-concept (PoC) phase. What could transform India’s healthcare with Generative AI? What could help bring care to those who need it most? ... India has tremendous potential in machine intelligence, especially as we develop our own Gen AI capabilities. In healthcare, however, the pace of progress is hindered by financial constraints and a shortage of specialists in the field. Concerns over data breaches and cybersecurity incidents add to this caution.
To help organizations develop stronger defenses against AI-based attacks, the Top 10 for LLM Applications & Generative AI group within the Open Worldwide Application Security Project (OWASP) released a trio of guidance documents for security organizations on Oct. 31. To its previously released AI cybersecurity and governance checklist, the group added a guide for preparing for deepfake events, a framework for creating AI security centers of excellence, and a curated database of AI security solutions. ... The trajectory of deepfakes is easy to predict: even if they are not good enough to fool most people today, they will be in the future, says Eyal Benishti, founder and CEO of Ironscales. That means human training will likely only go so far. AI videos are getting eerily realistic, and a fully digital twin of another person controlled in real time by an attacker, a true "sock puppet," is likely not far behind. "Companies want to try and figure out how they get ready for deepfakes," he says. "They are realizing that this type of communication cannot be fully trusted moving forward, which ... will take people some time to realize and adjust." In the future, once the telltale artifacts are gone, better defenses will be necessary, Exabeam's Kirkwood says.
The Cyber Resilience Act was a shock that awakened many people from their comfort zone: How dare the “technical” representatives of the European Union question the security of open-source software? The answer is very simple: because we never told them, and they assumed it was because no one was concerned about security. ... The CRA requires software with automatic updates to roll out security updates automatically by default, while allowing users to opt out. Companies must conduct a cyber risk assessment before a product is released and throughout 10 years or its expected lifecycle, and they must notify the EU cybersecurity agency ENISA of any incidents within 24 hours of becoming aware of them, as well as take measures to resolve them. In addition, software products must carry the CE marking to show that they meet a minimum level of cybersecurity checks. Open-source stewards will have to care about the security of their products but will not be asked to follow these rules. In exchange, they will have to improve how they communicate and share security best practices, many of which are already in place but have not always been shared. So the first action was to create a project to standardize them for the entire open-source software industry.
Attackers aren’t just using machine-learning security tools to test whether their messages can get past spam filters. They’re also using machine learning to create those emails in the first place, says Adam Malone, a former EY partner. “They’re advertising the sale of these services on criminal forums. They’re using them to generate better phishing emails. To generate fake personas to drive fraud campaigns.” These services are specifically advertised as using machine learning, and it’s probably not just marketing. “The proof is in the pudding,” Malone says. “They’re definitely better.” ... Criminals are also using machine learning to get better at guessing passwords. “We’ve seen evidence of that based on the frequency and success rates of password guessing engines,” Malone says. Criminals are building better dictionaries to crack stolen hashes. They’re also using machine learning to identify security controls, “so they can make fewer attempts and guess better passwords and increase the chances that they’ll successfully gain access to a system.” ... The most frightening use of artificial intelligence is the deepfake tools that can generate video or audio that is hard to distinguish from a real human. “Being able to simulate someone’s voice or face is very useful against humans,” says Montenegro.
If ‘Shift Left’ is all about integrating processes closer to the source code, ‘Shift Right’ offers a complementary approach by tackling challenges that arise after deployment. Some decisions simply can’t be made early in the development process. For example, which cloud instances should you use? How many replicas of a service are necessary? What CPU and memory allocations are appropriate for specific workloads? These are classic ‘Shift Right’ concerns that have traditionally been managed through observability and system-generated recommendations. Consider this common scenario: when deploying a workload to Kubernetes, DevOps engineers often guess the memory and CPU requests, specifying these in YAML configuration files before anything is deployed. But without extensive testing, how can an engineer know the optimal settings? Most teams don’t have the resources to thoroughly test every workload, so they make educated guesses. Later, once the workload has been running in production and actual usage data is available, engineers revisit the configurations. They adjust settings to eliminate waste or boost performance, depending on what’s needed. It’s exhausting work and, let’s be honest, not much fun.
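To make that ‘Shift Right’ feedback loop concrete, here is a minimal Python sketch of the rightsizing step described above: deriving resource requests from usage observed in production rather than from up-front guesses. The percentile, the headroom factor, and the sample values are illustrative assumptions, not the API of any specific tool.

```python
# Rightsizing sketch: recommend Kubernetes resource requests from observed
# production usage. The percentile, headroom factor, and sample values are
# illustrative assumptions; a real tool would pull samples from an
# observability backend such as a metrics store.

def recommend_request(samples, percentile=0.95, headroom=1.2):
    """Return a recommended request: a high percentile of observed usage
    plus a safety margin, so the workload is neither starved nor wasteful."""
    if not samples:
        raise ValueError("no usage samples collected yet")
    ordered = sorted(samples)
    idx = min(int(percentile * len(ordered)), len(ordered) - 1)
    return ordered[idx] * headroom

# Hypothetical per-minute usage samples gathered after the workload has
# been running in production for a while.
cpu_millicores = [180, 220, 250, 240, 300, 260, 210]
memory_mib = [410, 455, 430, 470, 500, 480, 460]

print(f"suggested CPU request:    {recommend_request(cpu_millicores):.0f}m")
print(f"suggested memory request: {recommend_request(memory_mib):.0f}Mi")
```

Values like these would then be written back into the workload’s resources.requests block in the YAML manifest, closing the loop between observed behavior and configuration that the author describes.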
“Capacity growth will be driven increasingly by the even larger scale of those newly opened data centers, with generative AI technology being a prime reason for that increased scale,” Synergy Research writes. Not surprisingly, the companies with the broadest data center footprint are Amazon, Microsoft, and Google, which together account for 60% of all hyperscale data center capacity. And the announcements from the Big 3 are coming fast and furious. ... “In effect, industry cloud platforms turn a cloud platform into a business platform, enabling an existing technology innovation tool to also serve as a business innovation tool,” says Gartner analyst Gregor Petri. “They do so not as predefined, one-off, vertical SaaS solutions, but rather as modular, composable platforms supported by a catalog of industry-specific packaged business capabilities.” ... There are many reasons for rising cloud bills beyond simple price hikes. Linthicum says organizations that simply “lifted and shifted” legacy applications to the public cloud, rather than refactoring or rewriting them for cloud optimization, ended up with higher costs. Many organizations overprovisioned and neglected to track cloud resource utilization. On top of that, organizations are constantly expanding their cloud footprint.