May 30, 2024

May 30, 2024

Single solution for regulating AI unlikely as laws require flexibility and context

In drafting the AI Act – the world’s first major piece of AI legislation – with an “omnibus approach,” Mazzini says, the EU aimed for a blanket coverage that allows for few loopholes. It aims to avoid overlap with existing sectoral laws, which can be enforced in addition to the AI Act. With the exception of exclusions around national security, military and defense (owing to the fact that the EU is not a sovereign state), it “essentially covers social and economic sectors from employment to vacation to law enforcement, immigration, products, financial services,” says Mazzini. “The main idea that we put forward was the risk-based approach.” ... Kortz believes it is “unlikely that we will see a sort of omnibus, all-sector, nationwide AI set of regulations or laws in the U.S. in the near future.” As in the case of data privacy laws, individual states will want to maintain their established authority, and while Kortz says some states – “especially, I think, here, of California” – may try something ambitious like a generalized AI law, the sectoral approach is likely to win out.?


Why Intel is making big bets on Edge AI

“Edge is not the cloud, it is very different from the cloud because it is heterogeneous,” she says. “You have different hardware, you have different servers, and you have different operating systems.” Such devices can include anything from sensors and IoT devices to routers, integrated access devices (IAD), and wide area network (WAN) access devices. One of the benefits of Edge AI is that by storing all your data in an Edge environment rather than a data center, even when large data sets are involved, it speeds up the decision-making and data analysis process, both of which are vital for AI applications that have been designed to provide real-time insights to organizations. Another benefit borne out of the proliferation of generative AI is that, when it comes to training models, even though that process takes place in a centralized data center, far away from users; inferencing – where the model applies its learned knowledge – can happen in an Edge environment, reducing the time required to send data to a centralized server and receive a response. Meanwhile, talent shortages, the growing need for efficiency, and the desire to improve time to market through the delivery of new services have all caused businesses to double down on automation.


Tensions in DeFi industry exposed by LayerZero’s anti-Sybil strategy

If identity protocols could eliminate Sybil farming and solutions already exist, why have they not already become standard practice? Cointelegraph spoke with Debra Nita, a senior crypto strategist at public relations firm YAP Global, to better understand the perceived risks that liveness checks might introduce to the industry. “Protocols may be reluctant to solve issues they face with airdrops using better verification processes — including decentralized ones — for reasons including reputational. The implications vary from the impact on community sentiments, key stakeholders and legal standing,” said Nita. Nita continued, “Verification poses a potential reputational problem, whereby it, from the outset, potentially excludes a large group of users.” Nita cited EigenLayer’s airdrop, which disqualified users from the United States, Canada, China and Russia despite allowing participation from these regions. This left a sour taste in the mouths of many who spent time and money on the platform only to receive no reward for their efforts.


Investing in employee training & awareness enhances an organisation’s cyber resilience

One essential consideration is the concept of Return on Security Investment (ROSI). Boards scrutinise security spending, expecting a clear demonstration of value. Evaluating whether security investments outweigh the potential costs of breaches is crucial. Therefore, investments should be made judiciously, focusing on technologies and strategies that offer substantial RoI. A key strategy is to consolidate and unify security technologies. Many organisations deploy a multitude of security solutions, often operating in silos. ... Furthermore, prioritising skill development is essential. With each additional technology, the demand for specialised expertise grows. Investing in training and development programs ensures that internal teams possess the necessary skills to effectively manage and leverage security solutions. Additionally, strategic partnerships with trusted vendors and service providers can augment internal capabilities and broaden access to specialised expertise. Ultimately, consolidating security technologies, focusing on ROI, and investing in skill development are key best practices for maximisng the effectiveness of existing security investments.


Modular, scalable hardware architecture for a quantum computer

To build this QSoC, the researchers developed a fabrication process to transfer diamond color center “microchiplets” onto a CMOS backplane at a large scale. They started by fabricating an array of diamond color center microchiplets from a solid block of diamond. They also designed and fabricated nanoscale optical antennas that enable more efficient collection of the photons emitted by these color center qubits in free space. Then, they designed and mapped out the chip from the semiconductor foundry. ... They built an in-house transfer setup in the lab and applied a lock-and-release process to integrate the two layers by locking the diamond microchiplets into the sockets on the CMOS chip. Since the diamond microchiplets are weakly bonded to the diamond surface, when they release the bulk diamond horizontally, the microchiplets stay in the sockets. “Because we can control the fabrication of both the diamond and the CMOS chip, we can make a complementary pattern. In this way, we can transfer thousands of diamond chiplets into their corresponding sockets all at the same time,” Li says.


NIST launches ambitious effort to assess LLM risks

NIST’s new Assessing Risks and Impacts of AI (ARIA) program will “assess the societal risks and impacts of artificial intelligence systems,” the NIST statement said, including ascertaining “what happens when people interact with AI regularly in realistic settings.” ... The first will be what NIST described as “controlled access to privileged information. Can the LLM protect information it is not to share, or can creative users coax that information from the system?” The second area will be “personalized content for different populations. Can an LLM be contextually aware of the specific needs of distinct user populations?” The third area will be “synthesized factual content. [Can the LLM be] free of fabrications?” The NIST representative also said that the organization’s evaluations will make use of “proxies to facilitate a generalizable, reusable testing environment that can sustain over a period of years. ARIA evaluations will use proxies for application types, risks, tasks, and guardrails — all of which can be reused and adapted for future evaluations.”

Read more here ...

要查看或添加评论,请登录

社区洞察

其他会员也浏览了