Mastercard partners with Blockchain firms for CBDCs, AI Model Evaluation Tool Launches, Quantum-Resistant Browsers, VR's Impact on Workforce Training
Hi Everyone,
Welcome to QX Snapshots - a weekly recap of the key news on emerging technologies. In this newsletter, you will get a "digest" of the latest developments in AI, Quantum Technology, the Metaverse, and Enterprise Blockchain.
Hope it brings you value :)
[Quantum Technology] Google's Chrome Advances Quantum-Resistant Cryptography to Combat Future Cyber Threats. Google's Chrome browser is preparing for a post-quantum cybersecurity world by revising technical standards and testing new quantum-resistant algorithms. Collaborative efforts across Google aim to transition the web to quantum-resistant cryptography, as detailed in a recent Chromium Blog post. Chrome will use a hybrid key agreement mechanism, X25519Kyber768, to establish symmetric secrets in Transport Layer Security (TLS) connections. This mechanism combines an elliptic-curve algorithm, X25519, with a quantum-resistant Key Encapsulation Mechanism, Kyber-768, so that data in transit remains protected even against a future quantum attacker. The initiative also addresses the "Harvest Now, Decrypt Later" threat: while today's symmetric encryption algorithms are strong, the key exchange that creates those symmetric keys needs fortification against potential quantum decryption. Introducing X25519Kyber768 adds extra data to the TLS ClientHello message, but Google believes most TLS implementations will remain compatible. To ensure a smooth transition, there is a provision for administrators to disable X25519Kyber768 temporarily. Both the X25519Kyber768 and Kyber specifications are still in draft form, and Chrome's approach may evolve accordingly. This effort underscores Google's dedication to protecting user data against emerging cyber threats in the quantum era.
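For readers who want to see the hybrid idea in code, below is a minimal conceptual sketch in Python (not Chrome's actual implementation). It combines a classical X25519 shared secret with a post-quantum shared secret into a single derived key, so a session stays protected unless both schemes are broken. The Kyber-768 step here is a hypothetical placeholder, since a real KEM library is assumed rather than shown.

```python
# Conceptual sketch only: hybrid key agreement in the spirit of X25519Kyber768.
# X25519 uses the widely available `cryptography` package; the Kyber-768 step
# is a HYPOTHETICAL placeholder standing in for a real post-quantum KEM.

import hashlib
import os

from cryptography.hazmat.primitives.asymmetric import x25519


def kyber768_encapsulate(public_key: bytes) -> tuple[bytes, bytes]:
    """Hypothetical stand-in for a real Kyber-768 encapsulation.

    A real KEM would derive (ciphertext, shared_secret) from the peer's
    Kyber public key; here we only mimic the output shapes.
    """
    shared_secret = os.urandom(32)   # placeholder, NOT a real Kyber secret
    ciphertext = os.urandom(1088)    # Kyber-768 ciphertexts are 1088 bytes
    return ciphertext, shared_secret


# Classical part: ephemeral X25519 key agreement (server key simulated locally).
client_priv = x25519.X25519PrivateKey.generate()
server_priv = x25519.X25519PrivateKey.generate()
classical_secret = client_priv.exchange(server_priv.public_key())

# Post-quantum part: encapsulate against the server's (placeholder) Kyber key.
server_kyber_public = os.urandom(1184)
_, pq_secret = kyber768_encapsulate(server_kyber_public)

# Hybrid secret: both contributions feed one key derivation step, so breaking
# only X25519 or only Kyber is not enough to recover the session keys.
hybrid_secret = hashlib.sha256(classical_secret + pq_secret).digest()
print(hybrid_secret.hex())
```

In the real X25519Kyber768 design, the two shared secrets are fed into TLS's key schedule rather than a bare hash; the sketch only illustrates the "combine both" principle.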
[AI] Arthur Launches Open-Source Tool to Evaluate Large Language Model Performance. AI startup Arthur has introduced Arthur Bench, an open-source tool designed to assess and compare the performance of large language models (LLMs) like OpenAI's GPT-3.5 Turbo and Meta's LLaMA 2. Arthur Bench enables companies to test various language models against their specific requirements, offering metrics to gauge accuracy, readability, hedging, and other aspects. A unique feature addresses "hedging", where an LLM provides unnecessary language referring to its terms of service, a common issue with some models. The tool includes several starter criteria for comparison, but as it's open-source, businesses can add their own benchmarks. Adam Wenchel, Arthur's CEO, stated that users could test the last 100 questions from their audience against multiple models, allowing Arthur Bench to spotlight significant answer variations. The platform aims to assist businesses in making informed AI adoption decisions. Financial firms and vehicle manufacturers are among the early users, leveraging the tool for quicker investment analysis and enhancing customer support. Additionally, Axios HQ utilizes Arthur Bench for product development. Arthur promotes an open-source approach, believing it fosters superior products. The company also revealed a hackathon collaboration with Amazon Web Services and Cohere to develop new Arthur Bench metrics.
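As a rough illustration of what benchmarking models against your own criteria can look like, here is a short generic sketch (not Arthur Bench's actual API; the model callables are hypothetical stand-ins). It scores two models on a simple "hedging" metric by checking answers for deflections to terms of service.

```python
# Generic illustration of comparing two language models on a "hedging" check.
# This is NOT Arthur Bench's API; the model functions are toy stand-ins.

from typing import Callable

HEDGING_PHRASES = (
    "as an ai",
    "terms of service",
    "i am not able to",
    "i cannot provide",
)


def hedging_rate(model: Callable[[str], str], questions: list[str]) -> float:
    """Fraction of answers that contain a hedging phrase."""
    hedged = 0
    for question in questions:
        answer = model(question).lower()
        if any(phrase in answer for phrase in HEDGING_PHRASES):
            hedged += 1
    return hedged / len(questions)


def compare(models: dict[str, Callable[[str], str]], questions: list[str]) -> None:
    for name, model in models.items():
        print(f"{name}: hedging rate {hedging_rate(model, questions):.0%}")


if __name__ == "__main__":
    # Toy stand-ins so the script runs end to end; swap in real model API calls.
    def model_a(question: str) -> str:
        return "As an AI, I cannot provide advice on that."

    def model_b(question: str) -> str:
        return "Here is a direct answer to your question."

    questions = ["How should we summarize this earnings report?"]
    compare({"model_a": model_a, "model_b": model_b}, questions)
```

In practice you would replace the stand-in functions with real model clients and add your own domain-specific criteria, which is the kind of customization the open-source tool is intended to support.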
[Blockchain] Mastercard Partners with Leading Blockchain Firms to Enhance Understanding of CBDCs. Mastercard has initiated a collaboration with seven leading blockchain and payment technology providers to delve deeper into the potential and constraints of central bank digital currencies (CBDCs). Announced on Aug. 17, this partnership aims to amplify insights surrounding security, privacy, interoperability, and innovation in the CBDC arena. Raj Dhamodharan, Mastercard's head of digital assets and blockchain, emphasized the importance of making CBDCs user-friendly in the upcoming digital era. Partners in this program include Ripple, known for its dedicated CBDC platform; ConsenSys, which has been part of multiple CBDC initiatives; Fluency, involved in 23 CBDC projects; Giesecke+Devrient and Idemia, both of which have collaborated with various central banks on CBDCs; Consult Hyperion, focusing on offline payment solutions; and the institutional custody platform Fireblocks. Mastercard, a long-time crypto participant, has recently reduced its crypto involvement but continues to advocate for CBDCs, having engaged in projects with global institutions like the Bank for International Settlements and regional entities like the New York Federal Reserve Bank.
[AR/VR] Emerging Role of Virtual Reality in Workforce Training. Virtual Reality (VR) is steadily finding its place in practical applications like job training, with many employers across diverse sectors, from retail to healthcare, adopting the technology. This immersive medium offers trainees an experience closely resembling real-world situations, just by donning a headset. The advantages of VR in training are manifold:
- Immersiveness: It offers a 360-degree environment, compelling trainees to be completely involved and less distracted.
- Enhanced Memory Retention: Engaging multiple senses simultaneously fosters a multimodal learning experience, aiding in better retention of information.
- Risk-Free Mistakes: Trainees can practice challenging scenarios without real-world repercussions, allowing for stress-free repetitions and mastery.
Walmart, an early adopter of VR for training, reported a 5-10% improvement in employee test scores since integrating VR in 2017. Furthermore, VR can offer cost-effective training solutions: instead of conventional training sessions, companies can provide anywhere, anytime training in short VR modules. The potential of VR was underscored when VR training was credited with saving lives during an active-shooter incident at a Walmart store.
[General technology] Apple Bluetooth Vulnerabilities and a $70 Device That Tricks Users into Sharing Passwords. At the Def Con hacking conference, attendees were bewildered by unexpected pop-up messages on their iPhones prompting them to connect their Apple ID and share a password with a nearby Apple TV. The alerts were part of a research project by security researcher Jae Bochs, intended to demonstrate the vulnerabilities of Apple's Bluetooth settings and have some fun. Bochs used a device built from a Raspberry Pi Zero 2 W, antennas, a Linux-compatible Bluetooth adapter, and a battery, costing roughly $70 and working over a 50-foot range, to trigger these pop-ups. The researcher exploited Apple's Bluetooth Low Energy (BLE) protocols that let Apple devices interact when in proximity. Although Bochs' device did not gather data from iPhones, he stated that, with modifications, it could potentially retrieve sensitive data like phone numbers and Apple IDs. Bochs pointed out that these vulnerabilities have been known since a 2019 academic study of Apple's BLE, but Apple is unlikely to make changes because the functionality is required for other devices like watches and headphones. Bochs suggests that Apple could add a warning that Bluetooth remains active even after being toggled off from the Control Center. Apple has yet to comment on the issue.
FEATURED: ‘Open (For Business): Big Tech, Concentrated Power, and the Political Economy of Open AI’
By: David Gray Widder (Carnegie Mellon University), Sarah West (AI Now Institute), Meredith Whittaker (Signal Foundation)
“This paper examines ‘open’ AI in the context of recent attention to open and open source AI systems. We find that the terms ‘open’ and ‘open source’ are used in confusing and diverse ways, often constituting more aspiration or marketing than technical descriptor, and frequently blending concepts from both open source software and open science. This complicates an already complex landscape, in which there is currently no agreed-on definition of ‘open’ in the context of AI, and as such the term is being applied to widely divergent offerings with little reference to a stable descriptor. So, what exactly is ‘open’ about ‘open’ AI, and what does ‘open’ AI enable? To better answer these questions we begin this paper by looking at the various resources required to create and deploy AI systems, alongside the components that comprise these systems. We do this with an eye to which of these can, or cannot, be made open to scrutiny, reuse, and extension. What does ‘open’ mean in practice, and what are its limits in the context of AI? We find that while a handful of maximally open AI systems exist, which offer intentional and extensive transparency, reusability, and extensibility, the resources needed to build AI from scratch, and to deploy large AI systems at scale, remain ‘closed’ – available only to those with significant (almost always corporate) resources.
From here, we zoom out and examine the history of open source, its cleave from free software in the mid-1990s, and the contested processes by which open source has been incorporated into, and instrumented by, large tech corporations. As a current-day example of the overbroad and ill-defined use of the term by tech companies, we look at ‘open’ in the context of OpenAI the company. We trace its moves from a humanity-focused nonprofit to a for-profit partnered with Microsoft, and its shifting position on ‘open’ AI. Finally, we examine the current discourse around ‘open’ AI, looking at how the term and the (mis)understandings about what ‘open’ enables are being deployed to shape the public’s and policymakers’ understanding about AI, its capabilities, and the power of the AI industry. In particular, we examine the arguments being made for and against ‘open’ and open source AI, who’s making them, and how they are being deployed in the debate over AI regulation.
Taken together, we find that ‘open’ AI can, in its more maximal instantiations, provide transparency, reusability, and extensibility that can enable third parties to deploy and build on top of powerful off-the-shelf AI models. These maximalist forms of ‘open’ AI can also allow some forms of auditing and oversight. But even the most open of ‘open’ AI systems do not, on their own, ensure democratic access to or meaningful competition in AI, nor does openness alone solve the problem of oversight and scrutiny. While we recognize that there is a vibrant community of earnest contributors building and contributing to ‘open’ AI efforts in the name of expanding access and insight, we also find that marketing around openness and investment in (somewhat) open AI systems is being leveraged by powerful companies to bolster their positions in the face of growing interest in AI regulation. And that some companies have moved to embrace ‘open’ AI as a mechanism to entrench dominance, using the rhetoric of ‘open’ AI to expand market power while investing in ‘open’ AI efforts in ways that allow them to set standards of development while benefiting from the free labor of open source contributors.”
Read the full paper: here.
If you enjoyed today’s QX Snapshots, I would love it if you subscribed for more. Can’t wait a full week? You can keep up with me here on LinkedIn for daily emerging technology content.