March 23, 2024

The tech tightrope: safeguarding privacy in an AI-powered world

The only way to truly secure our privacy is to proactively deploy the most secure and novel technological measures at our disposal: measures that put a strong emphasis on privacy and data encryption while still giving breakthrough technologies such as generative AI models and cloud computing tools the access to large pools of data they need to reach their full potential. Protecting data at rest (i.e., in storage) or in transit (i.e., moving through or across networks) is now standard practice: the data is encrypted, which is generally enough to keep it safe from unwanted access. The overwhelming challenge is how to also secure data while it is in use. ... One major issue with Confidential Computing is that it cannot scale to cover the magnitude of use cases necessary to handle every possible AI model and cloud instance. Because a TEE must be created and defined for each specific use case, the time, effort, and cost involved in protecting data are restrictive. The bigger issue with Confidential Computing, though, is that it is not foolproof: the data inside the TEE must still be decrypted before it can be processed, opening the potential for attack vectors, including quantum ones, to exploit vulnerabilities in the environment.
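The at-rest/in-use gap described above can be sketched in a few lines. This is a toy illustration only: the XOR keystream below is a stand-in for a real symmetric cipher such as AES, and the key and data are invented. The point it shows is that a program must hold plaintext in memory to compute on the data, which is exactly the exposure a TEE tries to shield.

```python
# Toy illustration (NOT a real cipher): data protected at rest must still be
# decrypted before a program can compute on it -- the gap Confidential
# Computing tries to close.
import hashlib
from itertools import count

def keystream(key: bytes):
    """Derive an endless byte stream from the key (toy construction)."""
    for block in count():
        yield from hashlib.sha256(key + block.to_bytes(8, "big")).digest()

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Encrypts and decrypts (XOR with the same keystream is its own inverse)."""
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

key = b"demo-key"
plaintext = b"salary:120000"
at_rest = xor_cipher(key, plaintext)   # safe while stored or moving over a network
assert at_rest != plaintext

# To *use* the data (e.g., parse the salary) it must be decrypted first,
# exposing plaintext in memory -- the moment a TEE is meant to protect.
in_use = xor_cipher(key, at_rest)
print(int(in_use.split(b":")[1]))
```

With real systems the in-use plaintext lives inside a hardware-isolated enclave rather than ordinary process memory, but the structural need to decrypt before computing is the same.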


Ethical Considerations in AI Development

As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to guarantee better conditions for the development and use of this innovative technology. Parliament's priority is to ensure that AI systems used in the EU are secure, transparent, traceable, non-discriminatory, and environmentally friendly. AI systems must be overseen by people, rather than by automation alone, to avoid harmful outcomes. The European Parliament also wants to establish a uniform and technologically neutral definition of AI that can be applied to future AI systems. "It is a pioneering law in the world," highlighted von der Leyen, celebrating that AI can now be developed within a legal framework that can be "trusted." The institutions of the European Union have agreed on an artificial intelligence law that permits or prohibits uses of the technology depending on the risk they pose to people, and that seeks to strengthen European industry against giants such as China and the United States. The pact was reached after intense negotiations in which one of the sensitive points was the use that law enforcement agencies will be able to make of biometric identification cameras to safeguard national security and to prevent crimes such as terrorism or attacks on infrastructure.


FBI and CISA warn government systems against increased DDoS attacks

The advisory groups typical DoS and DDoS attacks into three technique types: volume-based, protocol-based, and application layer-based. While volume-based attacks aim to cause request fatigue in the targeted systems, rendering them unable to handle legitimate requests, protocol-based attacks identify and target weak protocol implementations in a system, causing it to malfunction. The novel loop DoS attack reported this week, which targets network systems that use weak user datagram protocol (UDP)-based communications to transmit data packets, is an example of a protocol-based DoS attack. This new technique is a rare form of DoS attack that can potentially generate a huge volume of malicious traffic. Application layer-based attacks exploit vulnerabilities in specific applications or services running on the target system. After exploiting weaknesses in the application, attackers find ways to over-consume the processing power of the target system, causing it to malfunction. Interestingly, the loop DoS attack can also be placed in the application layer DoS category, as it primarily attacks a communication flaw at the application layer that results from its dependency on the UDP transport protocol.


The Future of AI: Hybrid Edge Deployments Are Indispensable

Deploying AI models locally eliminates dependence on external network connections and remote servers, minimizing the risk of downtime caused by maintenance, outages, or connectivity issues. This resilience is particularly critical in healthcare and other sensitive industries where uninterrupted service is essential. Edge deployments also deliver low latency: the speed of light is a fundamental limiting factor, so accessing distant cloud infrastructure can add significant delay, whereas increasingly powerful edge hardware can process data that is physically nearby. Another benefit is the ability to harness specialized hardware tailored to the workload, optimizing performance and efficiency while bypassing network latency and bandwidth limitations, as well as configuration constraints imposed by cloud providers. Lastly, edge deployments allow large shared assets to be centralized within a secure environment, which simplifies storage management and access control, enhancing data security and governance.
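The speed-of-light point can be made concrete with back-of-envelope arithmetic. The distances below are illustrative assumptions, and real networks add routing and queueing overhead on top of this physical floor, so actual latencies are higher.

```python
# Minimum possible round-trip time imposed by distance alone, ignoring all
# network overhead. Distances are hypothetical examples.
C_KM_PER_MS = 299_792.458 / 1000   # speed of light, km per millisecond

def rtt_floor_ms(distance_km: float) -> float:
    """Lower bound on round-trip time over a given one-way distance."""
    return 2 * distance_km / C_KM_PER_MS

cloud_km = 2000   # assumed distance to a remote cloud region
edge_km = 1       # assumed on-premises edge node

print(f"cloud >= {rtt_floor_ms(cloud_km):.2f} ms")   # roughly 13 ms before any processing
print(f"edge  >= {rtt_floor_ms(edge_km):.4f} ms")    # effectively negligible
```

Even in this best case, a 2,000 km cloud region costs over 13 ms per round trip before the first byte is processed, which is why latency-sensitive inference favors hardware near the data.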


OpenTelemetry promises run-time "profiling" as it guns for graduation

This means engineers will be able "to correlate resource exhaustion or poor user experience across their services with not just the specific service or pod being impacted, but the function or line of code most responsible for it." In other words, they won't just know when something falls down, but why; that is something commercial offerings can provide but the project has lacked. OpenTelemetry governance committee member Daniel Gomez Blanco, principal software engineer at Skyscanner, added that the advances in profiling raise new challenges, such as how to represent user sessions, how they are tied to resource attributes, and how to propagate context from the client side to the back end and back again. As a result, the project has formed a new special interest group to tackle these challenges. Honeycomb.io director of open source Austin Parker said: "We're right along the glide path in order to continue to grow as a mature project." As for the graduation process, he said, the security audits will continue over the summer along with work on best practices, audits, and remediation, and should complete in the fall: "We'll publish results along these lines, and fixes, and then we're gonna have a really cool party in Salt Lake City probably."
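To illustrate what function-level attribution buys you, here is a minimal sketch using Python's standard-library profiler. This is not the OpenTelemetry profiling API (which is still being specified); cProfile simply stands in for the richer, continuous profiling signal the project is working toward, and the function names are invented.

```python
# Sketch: a profile attributes time to the function responsible, not just the
# service that is slow. cProfile is a stand-in for a continuous profiler.
import cProfile
import io
import pstats

def cheap_step():
    return sum(range(1_000))

def expensive_step():                 # the "line of code most responsible"
    return sum(i * i for i in range(200_000))

def handle_request():
    cheap_step()
    expensive_step()

profiler = cProfile.Profile()
profiler.enable()
handle_request()
profiler.disable()

# Render the stats sorted by cumulative time; the hot function appears by name.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats()
report = out.getvalue()
print("expensive_step" in report)
```

Correlating such profiles with traces, as the quote describes, means tying each sample to the active request context so that a slow span can be drilled down to the function consuming the time.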


Fake data breaches: Countering the damage

Fake data breaches can hurt an organization’s security reputation, even if it quickly debunks the fake breach. Whether real or fake, news of a potential breach can create panic among employees, customers, and other stakeholders. For publicly traded companies, the consequences can be even more damaging as such rumors can degrade a company’s stock value. Fake breaches also have direct financial consequences. Investigating a fake breach consumes time, money, and security personnel. Time spent on such investigations can mean time away from mitigating real and critical security threats, especially for SMBs with limited resources. Some cybercriminals might deliberately create panic and confusion about a fake breach to distract security teams from a different, real attack they might be trying to launch. Fake data breaches can help them gauge the response time and protocols an organization may have in place. These insights can be valuable for future, more severe attacks. In this sense, a fake data breach may well be a “dry run” and an indicator of an upcoming cyber-attack.

Read more here ...
Tejasvi Addagada

Empowering Digital Transformation through Data Strategy & AI Innovation | Data & Privacy Leader | Speaker & Author

8 months ago

Thanks for sharing, Kannan Subbiah. It's crucial for an organization to act on events that can be potential threats; however, knowledge of the business model and past know-how can make it easier to knock off fake breaches.
