March 23, 2024

The tech tightrope: safeguarding privacy in an AI-powered world

The only way to truly secure our privacy is to proactively deploy the most secure and novel technological measures at our disposal: ones that place a strong emphasis on privacy and data encryption while still giving breakthrough technologies such as generative AI models and cloud computing tools the access to large pools of data they need to reach their full potential. Protecting data at rest (i.e., in storage) and in transit (i.e., moving through or across networks) is now standard practice: the data is encrypted, which is generally enough to keep it safe from unwanted access. The overwhelming challenge is how to also secure data while it is in use. ... One major issue with Confidential Computing is that it cannot scale to cover the sheer range of use cases needed to handle every possible AI model and cloud instance. Because a trusted execution environment (TEE) must be created and defined for each specific use case, the time, effort, and cost involved in protecting data are prohibitive. The bigger issue with Confidential Computing, though, is that it is not foolproof: the data in the TEE must still be decrypted for it to be processed, opening the potential for attack vectors to exploit vulnerabilities in the environment.
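To make the at-rest/in-transit versus in-use distinction concrete, here is a minimal Python sketch using the widely available cryptography package. It is an illustration, not a Confidential Computing implementation, and the record contents are invented: note how the data must be decrypted back to plaintext before any computation can happen, which is exactly the "in use" gap that TEEs try to close.

```python
from cryptography.fernet import Fernet

# Encryption at rest: data in storage stays ciphertext.
key = Fernet.generate_key()          # in practice, held in a KMS or HSM
fernet = Fernet(key)
record = b"patient_id=4711;diagnosis=..."
ciphertext = fernet.encrypt(record)  # safe to write to disk or send over the wire

# The gap: to actually process the data, we must decrypt it first,
# so it exists as plaintext in memory while in use. A TEE shields that
# memory from the host, but inside the enclave the data is still clear.
plaintext = fernet.decrypt(ciphertext)
result = plaintext.upper()           # stand-in for any real computation
```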


Ethical Considerations in AI Development

As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to guarantee better conditions for the development and use of this innovative technology. Parliament's priority is to ensure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly. AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes. The European Parliament also wants to establish a uniform, technology-neutral definition of AI that can be applied to future AI systems. "It is a pioneering law in the world," highlighted von der Leyen, who celebrated that AI can now be developed within a legal framework that can be "trusted." The institutions of the European Union have agreed on an artificial intelligence law that allows or prohibits uses of the technology depending on the risk they pose to people, and that seeks to boost European industry against giants such as China and the United States. The pact was reached after intense negotiations in which one of the sensitive points was the use that law enforcement agencies will be able to make of biometric identification cameras to safeguard national security and prevent crimes such as terrorism, or to protect infrastructure.


FBI and CISA warn government systems against increased DDoS attacks

The advisory groups typical DoS and DDoS attacks into three technique types: volume-based, protocol-based, and application layer-based. While volume-based attacks aim to cause request fatigue in targeted systems, rendering them unable to handle legitimate requests, protocol-based attacks identify and target weak protocol implementations in a system, causing it to malfunction. A novel loop DoS attack reported this week, which targets network systems that use weak user datagram protocol (UDP)-based communications to transmit data packets, is an example of a protocol-based DoS attack. This new technique is a rare instance of a DoS attack that can potentially result in a huge volume of malicious traffic. Application layer-based attacks exploit vulnerabilities in specific applications or services running on the target system. After exploiting weaknesses in the application, attackers find ways to over-consume the target system's processing power, causing it to malfunction. Interestingly, the loop DoS attack can also be placed in the application layer DoS category, as it primarily attacks a communication flaw in the application layer that results from its dependency on the UDP transport protocol.
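Here is a hedged sketch of the mechanism behind such a UDP loop, using two throwaway services on localhost (the ports and payloads are invented for illustration). Because UDP has no handshake, each side trusts the datagram's source address and answers it, so two services that reply to every packet, including each other's error replies, ping-pong traffic indefinitely; the loop is bounded here so the demo terminates.

```python
import socket

# Two naive UDP "services" that answer every datagram with an error reply.
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 9001))
b.bind(("127.0.0.1", 9002))

# A single (potentially spoofed) packet is enough to seed the loop.
a.sendto(b"malformed request", ("127.0.0.1", 9002))

for hop in range(6):                      # bounded here; a real loop never stops
    sock = b if hop % 2 == 0 else a       # replies alternate between the services
    data, addr = sock.recvfrom(1024)
    print(f"hop {hop}: port {sock.getsockname()[1]} got {data!r}")
    sock.sendto(b"error: " + data, addr)  # replying to the source sustains the loop

a.close()
b.close()
```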


The Future of AI: Hybrid Edge Deployments Are Indispensable

Deploying AI models locally eliminates dependence on external network connections and remote servers, minimizing the risk of downtime caused by maintenance, outages, or connectivity issues. This resilience is particularly important in healthcare and other sensitive industries where uninterrupted service is critical. Edge deployments also deliver low latency: the speed of light is a fundamental limiting factor, so accessing distant cloud infrastructure can add significant delay, while increasingly powerful edge hardware makes it possible to process data close to where it is generated. Another benefit is the ability to harness specialized hardware tailored to an application's needs, optimizing performance and efficiency while bypassing the network latency, bandwidth limitations, and configuration constraints imposed by cloud providers. Lastly, edge deployments allow large shared assets to be centralized within a secure environment, which in turn simplifies storage management and access control, enhancing data security and governance.
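As a concrete illustration of local inference at the edge, here is a minimal sketch using ONNX Runtime; the model file name and input shape are placeholders for whatever model has actually been exported. The point is that the request never leaves the device, so latency is bounded by local compute rather than a network round trip.

```python
import numpy as np
import onnxruntime as ort

# Load a locally stored model once at startup; no remote endpoint involved.
# "model.onnx" and the (1, 3, 224, 224) image shape are hypothetical.
session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name

def infer(frame: np.ndarray) -> np.ndarray:
    # Runs entirely on the edge device: no network, no cloud dependency.
    return session.run(None, {input_name: frame})[0]

prediction = infer(np.random.rand(1, 3, 224, 224).astype(np.float32))
print(prediction.shape)
```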


OpenTelemetry promises run-time "profiling" as it guns for graduation

This means engineers will be able "to correlate resource exhaustion or poor user experience across their services with not just the specific service or pod being impacted, but the function or line of code most responsible for it." In other words, they won't just know when something falls down, but why; something commercial offerings can provide but the project has lacked. OpenTelemetry governance committee member Daniel Gomez Blanco, principal software engineer at Skyscanner, added that the advances in profiling raise new challenges, such as how to represent user sessions, how they are tied into resource attributes, and how to propagate context from the client side to the back end and back again. As a result, the project has formed a new special interest group to tackle these challenges. Honeycomb.io director of open source Austin Parker said: "We're right along the glide path in order to continue to grow as a mature project." As for the graduation process, he said, the security audits will continue over the summer along with work on best practices, audits, and remediation. They should complete in the fall: "We'll publish results along these lines, and fixes, and then we're gonna have a really cool party in Salt Lake City probably."
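The profiling signal itself is still being specified, but the correlation it promises rests on the trace context OpenTelemetry already emits. Here is a minimal sketch with the existing Python SDK (the service and span names are invented): a profile correlated with this span could point past "checkout is slow" to the exact function burning CPU inside it.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

# Standard tracing setup; spans are printed to stdout for the demo.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")

with tracer.start_as_current_span("process_order") as span:
    span.set_attribute("order.items", 3)
    # A correlated profile would attribute CPU time within this span
    # to specific functions or lines, not just to the span as a whole.
    total = sum(i * i for i in range(1_000_000))  # stand-in hot path
```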


Fake data breaches: Countering the damage

Fake data breaches can hurt an organization's security reputation, even if it quickly debunks the fake breach. Whether real or fake, news of a potential breach can create panic among employees, customers, and other stakeholders. For publicly traded companies, the consequences can be even more damaging, as such rumors can degrade a company's stock value. Fake breaches also have direct financial consequences: investigating one ties up time, money, and security personnel. Time spent on such investigations can mean time away from mitigating real and critical security threats, especially for SMBs with limited resources. Some cybercriminals might deliberately create panic and confusion about a fake breach to distract security teams from a different, real attack they are trying to launch. Fake data breaches can also help them gauge the response time and protocols an organization has in place. These insights can be valuable for future, more severe attacks. In this sense, a fake data breach may well be a "dry run" and an indicator of an upcoming cyber-attack.

Read more here ...
Tejasvi Addagada

Empowering Digital Transformation through Data Strategy & AI Innovation | Data & Privacy Leader | Speaker & Author

11 months ago

Thanks for sharing, Kannan Subbiah. It's crucial for an organization to act on events that could be potential threats; however, knowledge of the business model and past know-how can make it easier to knock down fake breaches.
