Mastercard & RBA's CBDC Move, JPMorgan's Tokenized Turn, UK's Tokenization Blueprint, GPT-4 vs. LLaVA, and SoC Innovations Dominate Tech Patents!

Hi Everyone,

Welcome to QX Snapshots - a weekly recap of the key news on emerging technologies. In this newsletter, you will get a "digest" of the latest info on AI, Quantum Technology, Metaverse, and Enterprise Blockchain.

Hope it brings you value :)


[Blockchain] Mastercard and RBA Unveil CBDC Solution for Web3 Commerce while JPMorgan Announces Tokenized Collateral Network. Mastercard, in collaboration with the Reserve Bank of Australia (RBA) and the Digital Finance Cooperative Research Centre, showcased an interoperable central bank digital currency (CBDC) solution for trusted Web3 commerce. The technology allows CBDCs to be "wrapped" and transacted on various blockchains while ensuring that consumers can participate securely. The demonstration used a pilot CBDC to purchase an NFT on the Ethereum blockchain, with all participants, including wallets and smart contracts, pre-approved. The initiative is part of Mastercard's Multi Token Network strategy, which emphasizes blockchain-based payment efficiency, and showcases the potential of CBDCs and NFTs to transform commerce without sacrificing security. Meanwhile, JPMorgan has introduced the Tokenized Collateral Network (TCN), a blockchain-based tokenization platform that transforms traditional assets into digital tokens, enabling quicker and more secure on-chain settlements. BlackRock was among the first clients, using TCN to convert shares of a money market fund into digital tokens that were transferred to Barclays as collateral for an over-the-counter derivatives trade. The platform improves transaction speed and security and allows almost instantaneous collateral movement, underscoring the growing integration of blockchain technology into traditional finance. If you missed it late last week, the European Securities and Markets Authority (ESMA) released its second consultative paper on the Markets in Crypto-Assets (MiCA) mandates. The 307-page document addresses five key areas, including sustainability indicators for blockchains and requirements for crypto-asset service providers. A final report will be submitted to the European Commission by June 2024, following another consultation in Q1 2024.
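To make the "pre-approved participants" idea concrete, here is a minimal, hypothetical Python sketch of how a wrapped-CBDC token could gate transfers with an allowlist. It illustrates the general pattern only, not Mastercard's or the RBA's actual implementation, and every name in it is invented.

```python
class WrappedCBDC:
    """Toy model of a wrapped CBDC: every wallet or smart contract that can
    hold or move the token must first be allowlisted by the issuer, mirroring
    the pre-approved-participants control described in the pilot."""

    def __init__(self, issuer):
        self.issuer = issuer
        self.allowlist = {issuer}
        self.balances = {issuer: 0}

    def approve(self, caller, address):
        # Only the issuer may add new participants (wallets or contracts).
        assert caller == self.issuer, "only the issuer can approve participants"
        self.allowlist.add(address)
        self.balances.setdefault(address, 0)

    def mint(self, caller, to, amount):
        assert caller == self.issuer, "only the issuer can mint"
        assert to in self.allowlist, "recipient not pre-approved"
        self.balances[to] += amount

    def transfer(self, sender, recipient, amount):
        # Both sides of every transfer must be allowlisted.
        assert sender in self.allowlist and recipient in self.allowlist, \
            "both parties must be allowlisted"
        assert self.balances.get(sender, 0) >= amount, "insufficient balance"
        self.balances[sender] -= amount
        self.balances[recipient] += amount


# Hypothetical usage: approve the buyer's wallet and the NFT marketplace
# contract before any pilot CBDC can move between them.
cbdc = WrappedCBDC(issuer="rba_pilot_issuer")
cbdc.approve("rba_pilot_issuer", "buyer_wallet")
cbdc.approve("rba_pilot_issuer", "nft_marketplace_contract")
cbdc.mint("rba_pilot_issuer", "buyer_wallet", 100)
cbdc.transfer("buyer_wallet", "nft_marketplace_contract", 100)
```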


[AI] Google Cloud Bolsters AI IP Indemnity; EY and IBM Innovate in HR Tech; LLaVA Emerges as GPT-4's New Image-Recognition Competitor. Google Cloud has introduced a two-pronged intellectual property indemnity for its generative AI offerings, ensuring customers are covered against potential copyright challenges. First, Google provides an indemnity for the training data used in its generative AI models, protecting users against third-party claims of copyright infringement arising from Google's use of that data. Second, an indemnity covers the generated output of these models, i.e. content created by customers using Google services. Provided users follow responsible AI practices, Google will take responsibility for the associated legal risks, reinforcing its commitment to user safety and trust in generative AI. Also, EY, in collaboration with IBM, has introduced EY.ai Workforce, a new solution that incorporates artificial intelligence (AI) to optimize HR tasks and processes. Leveraging IBM's watsonx Orchestrate for AI and automation, the system is designed to help organizations streamline HR operations. Features include guiding users through routine HR tasks, such as drafting job descriptions and generating payroll reports, via a natural language interface. The partnership aims to enhance workplace productivity, with an emphasis on human-centric technology, and further cements the long-standing alliance between EY and IBM focused on merging business innovation with AI and hybrid cloud technology. Meanwhile, LLaVA, the new open-source Large Language and Vision Assistant by Microsoft, challenges GPT-4's image recognition. Merging a vision encoder with the Vicuna language model, it offers comprehensive visual and language understanding and sets new accuracy benchmarks on Science QA. With advanced chat capabilities, it parallels multimodal GPT-4. LLaVA has variants for healthcare and interactive visual demonstrations, and has begun exploring the use of GPT-4-generated data for instruction tuning.
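For readers curious about how a LLaVA-style model combines vision and language, here is a minimal PyTorch sketch of the core idea: features from a (frozen) vision encoder are projected into the language model's embedding space and prepended to the text tokens. Dimensions and names are illustrative assumptions, not LLaVA's released code.

```python
import torch
import torch.nn as nn

class VisionLanguageConnector(nn.Module):
    """Illustrative sketch of a LLaVA-style connector: a learned projection
    maps patch features from a vision encoder into the language model's
    token-embedding space, so an image becomes a sequence of pseudo-tokens
    that a Vicuna-like LLM can attend to alongside the text prompt."""

    def __init__(self, vision_dim=1024, llm_dim=4096):
        super().__init__()
        self.proj = nn.Linear(vision_dim, llm_dim)

    def forward(self, patch_features, text_embeddings):
        # patch_features:  (batch, num_patches, vision_dim), e.g. from a CLIP ViT
        # text_embeddings: (batch, num_text_tokens, llm_dim)
        image_tokens = self.proj(patch_features)
        # Joint sequence fed into the language model's transformer layers.
        return torch.cat([image_tokens, text_embeddings], dim=1)


# Toy usage with random tensors standing in for real encoder outputs.
connector = VisionLanguageConnector()
fake_patches = torch.randn(1, 256, 1024)   # 256 image-patch features
fake_text = torch.randn(1, 32, 4096)       # 32 text-token embeddings
joint_sequence = connector(fake_patches, fake_text)
print(joint_sequence.shape)                # torch.Size([1, 288, 4096])
```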


[Quantum Technology] Quantum Leap in Computing with Record-Breaking Performance and Precision. A team of Chinese researchers has unveiled Jiuzhang 3.0, a record-breaking light-based quantum computer. With 255 detected photons, it is significantly faster than its predecessor, Jiuzhang 2.0, and can solve complex Gaussian boson sampling problems in a microsecond. Imagine you have a very big box of colorful marbles: if you wanted to know all the different ways those marbles could be arranged, you would have a tricky puzzle. "Complex Gaussian boson sampling problems" is a fancy name for a type of puzzle that is extremely hard for regular computers to solve; in comparison, the world's quickest supercomputer would require over 20 billion years for the same task. Despite this breakthrough, experts emphasize the differences between quantum computer types and caution against direct comparisons. Last year, Toronto-based Xanadu claimed its quantum computer Borealis could perform a similar task in 36 microseconds. Meanwhile, scientists from QuEra Computing, Harvard University, and MIT have made a historic advance in quantum computing by demonstrating two-qubit entangling operations with an unprecedented 99.5% fidelity on 60 neutral-atom qubits simultaneously. This breakthrough surpasses the previous best of 97.5% fidelity and is vital for large-scale quantum algorithms and simulations. Key innovations include optimal control for precision, atomic dark states to reduce errors, and enhanced atom cooling. Achieving this high fidelity exceeds the essential requirements for quantum error correction, marking a significant stride towards realizing the full potential of quantum computers.
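To give a flavour of why Gaussian boson sampling is so hard for classical machines: the probability of each measurement outcome involves a matrix quantity called the hafnian, a sum over all perfect matchings of a matrix's entries. The brute-force Python sketch below is purely illustrative (nobody simulates these experiments this way) and shows how quickly that sum blows up.

```python
import numpy as np

def hafnian(A):
    """Brute-force hafnian of a symmetric matrix of even size 2n: a sum over
    all perfect matchings of the index set. The number of matchings grows as
    (2n - 1)!! = 1 * 3 * 5 * ..., which is the combinatorial explosion that
    makes classically simulating Gaussian boson sampling so expensive."""
    m = A.shape[0]
    if m == 0:
        return 1.0
    total = 0.0
    for j in range(1, m):
        # Pair index 0 with index j, then recurse on the remaining indices.
        rest = [k for k in range(1, m) if k != j]
        total += A[0, j] * hafnian(A[np.ix_(rest, rest)])
    return total

# Tiny example: a 4x4 symmetric matrix has only 3 perfect matchings, but a
# matrix sized for hundreds of detected photons has astronomically many,
# far beyond what any classical computer could ever enumerate.
A = np.arange(16, dtype=float).reshape(4, 4)
A = (A + A.T) / 2  # symmetrize
print(hafnian(A))
```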


[Metaverse] EU JURI and UAE Navigate Metaverse Governance while Glasgow unveils 'Museums in the Metaverse'. The University of Glasgow launched a £5.6 million "Museums in the Metaverse" project, merging history and extended reality (XR) technologies like virtual and augmented reality. Funded by the UK Government’s Innovation Accelerator programme, the platform will have two parts: one letting users virtually access museums and experiences, and another for virtual curators to combine 3D objects and environments for storytelling. Partnering with National Museums Scotland and others, the project aims to enhance traditional museum experiences and make heritage content creation more affordable. It addresses challenges like cost and seeks to display unseen collections, allowing users worldwide to virtually interact with historical artifacts. Also announced this week, the European Parliament's JURI Committee released a draft report discussing legal challenges in virtual worlds, especially metaverse environments. This follows an earlier draft by the IMCO Committee on policy implications for the virtual single market. Meanwhile, the UAE AI Office released a whitepaper ‘Responsible Metaverse Self-governance Framework’ advocating international standards for metaverse operations. Led by Minister Omar Sultan Al Olama and in partnership with Dubai's Department of Economy and Tourism, the report emphasizes the need for global collaboration to ensure the metaverse's ethical and safe expansion across sectors.


[General technology] Surge in Tech Patents with SoC Innovations Leading the Charge while Atlassian Grapples with Critical Security Flaw. The technology sector has seen over 4.1 million patents filed in the past three years, primarily in SoC (System on Chip) computing, according to GlobalData. SoC designs streamline electronic devices, making them more efficient and compact, and disruptive emerging technologies in the space include network-on-a-chip and in-memory computing. Intel is a top patent filer in SoC, with patents focusing on dynamic accelerator selection. GlobalData's analysis highlights 190+ companies, including Intel, Apple, and Dell Technologies, deeply involved in SoC computing innovations, with their patents varying in application diversity and geographic reach. Meanwhile, hackers are exploiting a critical zero-day vulnerability in Atlassian's software, Microsoft warns. The flaw in Atlassian Confluence Data Center and Server allows the creation of unauthorized administrator accounts. Microsoft detected the vulnerability, CVE-2023-22515, being exploited three weeks before Atlassian's public disclosure. Atlassian said it is collaborating with Microsoft and has acknowledged reports from a few customers, without detailing the extent of exploitation. Emphasizing that only on-premises instances are affected, Atlassian has released a patch and urges users to update promptly.



FEATURED: ‘FedSyn: Synthetic Data Generation using Federated Learning’

By: Monik Raj Behera, Sudhir Upadhyay, Suresh Shetty, Sudha Priyadarshini, Palka Patel, Ker Farn Lee - JP Morgan team

“As Deep Learning algorithms continue to evolve and become more sophisticated, they require massive datasets for model training and efficacy of models. Some of those data requirements can be met with the help of existing datasets within the organizations. Current Machine Learning practices can be leveraged to generate synthetic data from an existing dataset.

Further, it is well established that diversity in generated synthetic data relies on (and is perhaps limited by) statistical properties of available dataset within a single organization or entity. The more diverse an existing dataset is, the more expressive and generic synthetic data can be. However, given the scarcity of underlying data, it is challenging to collate big data in one organization. The diverse, non-overlapping dataset across distinct organizations provides an opportunity for them to contribute their limited distinct data to a larger pool that can be leveraged to further synthesize. Unfortunately, this raises data privacy concerns that some institutions may not be comfortable with. This paper proposes a novel approach to generate synthetic data - FedSyn.

FedSyn is a collaborative, privacy preserving approach to generate synthetic data among multiple participants in a federated and collaborative network. FedSyn creates a synthetic data generation model, which can generate synthetic data consisting of statistical distribution of almost all the participants in the network. FedSyn does not require access to the data of an individual participant, hence protecting the privacy of participant’s data. The proposed technique in this paper leverages federated machine learning and generative adversarial network (GAN) as neural network architecture for synthetic data generation. The proposed method can be extended to many machine learning problem classes in finance, health, governance, technology and many more.”

Read the full article: here.
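As rough intuition for the approach the authors describe, here is a minimal, hypothetical Python sketch of the federated-averaging step: each participant trains its own GAN generator locally, and only model weights (never raw data) are averaged into a shared generator. The architecture, dimensions, and aggregation shown are illustrative assumptions, not the paper's implementation.

```python
import copy
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy GAN generator; in a FedSyn-style setup each participant trains its
    own copy on local data that never leaves the organization."""

    def __init__(self, noise_dim=32, data_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 64),
            nn.ReLU(),
            nn.Linear(64, data_dim),
        )

    def forward(self, z):
        return self.net(z)


def federated_average(local_generators):
    """FedAvg-style aggregation: average the parameters of locally trained
    generators so the shared model reflects every participant's data
    distribution without any raw data being exchanged."""
    global_gen = copy.deepcopy(local_generators[0])
    state = global_gen.state_dict()
    for key in state:
        state[key] = torch.stack(
            [g.state_dict()[key].float() for g in local_generators]
        ).mean(dim=0)
    global_gen.load_state_dict(state)
    return global_gen


# Hypothetical round: three organizations train locally, share only weights,
# and the aggregated generator is then used to draw synthetic samples.
local_generators = [Generator() for _ in range(3)]
shared_generator = federated_average(local_generators)
synthetic_batch = shared_generator(torch.randn(8, 32))
print(synthetic_batch.shape)  # torch.Size([8, 16])
```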


If you enjoyed today’s QX Snapshots, I would love it if you subscribed for more. Can’t wait a full week? You can keep up with me here on LinkedIn for daily emerging technology content.
