April #TechBytes - 2nd part

Here is a roundup of some of the tech news you may have missed recently, selected by us at Etiqa!

Ours is a journey inside the latest advanced technologies, news about the most exciting startups and scale-ups from around the world, and updates on specialized, high-risk industries.


Italian Government's bold move: a one-billion-euro commitment to Artificial Intelligence

In a significant move towards fostering innovation, the Italian government has pledged to invest up to one billion euros in artificial intelligence (AI) initiatives. The commitment comes as a part of the AI bill, which outlines various measures to advance technology in Italy.

Initially proposed by Prime Minister Giorgia Meloni, the government's commitment to investing one billion euros in AI projects has now been formalized in the legislation. The bill authorizes the expenditure of funds, administered through Cdp Venture Capital, to support startups and small to medium-sized innovative enterprises across key sectors, including AI, cybersecurity, quantum computing, telecommunications, 5G, mobile edge computing, and web3 architecture.

The allocated funds aim to cover a wide range of stages, from seed funding to scaleup, and support the creation of national champions within the technology sector. Additionally, the legislation emphasizes the establishment of technology transfer hubs and acceleration programs to further drive innovation.

The governance structure outlined in the bill divides responsibilities between the Agency for Digital Italy (Agid) and the National Cybersecurity Agency (Acn), with Agid overseeing AI development, application in public entities, and certification of AI algorithms, while Acn focuses on inspection and enforcement activities.

This landmark legislation not only demonstrates the government's commitment to advancing AI technologies but also aims to position Italy as a leader in innovation on the global stage.


Microsoft unveils Phi-3: a new lightweight Generative AI Model

Microsoft has unveiled Phi-3, a new family of generative AI models designed to bridge the gap between cloud-based AI and edge devices. At just 3.8 billion parameters for its smallest member, Phi-3 represents a significant advancement in AI technology, aiming to democratize access to artificial intelligence beyond traditional computing infrastructures.

The efficacy of AI models hinges not only on their sheer quantity of parameters but also on their quality and fine-tuning. While larger language models (LLMs) such as GPT-3 boast hundreds of billions of parameters, researchers have begun to question the necessity of such immense scale. Enter Phi-3: a family of compact language models engineered to deliver common-sense reasoning and natural language understanding with unparalleled efficiency.

The inaugural member of the Phi-3 family, Phi-3 Mini, features a modest 3.8 billion parameters—significantly smaller than its gargantuan counterparts. Despite its reduced size, Phi-3 Mini rivals the capabilities of larger LLMs like GPT-3.5, albeit with lower operational costs and enhanced compatibility with resource-constrained devices such as smartphones and laptops.
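To make the "compact" claim concrete, here is a back-of-the-envelope estimate of the memory needed just to hold 3.8 billion weights at common precisions. The byte-per-parameter figures are standard numeric formats, not official Microsoft numbers, so treat this as a rough sketch of why such a model can fit on a laptop or phone:

```python
# Rough weight-memory estimate for a 3.8B-parameter model,
# illustrating why a Phi-3 Mini-sized model suits edge devices.
# Byte counts per parameter are standard precisions, used here
# as illustrative assumptions.

PARAMS = 3.8e9  # Phi-3 Mini parameter count

def weight_memory_gb(params: float, bytes_per_param: float) -> float:
    """Approximate memory needed to hold the weights alone, in GB."""
    return params * bytes_per_param / 1e9

fp16_gb = weight_memory_gb(PARAMS, 2.0)   # 16-bit floats
int4_gb = weight_memory_gb(PARAMS, 0.5)   # 4-bit quantization

print(f"fp16: ~{fp16_gb:.1f} GB, 4-bit: ~{int4_gb:.1f} GB")
# → fp16: ~7.6 GB, 4-bit: ~1.9 GB
```

Around 2 GB for quantized weights is well within the memory of a modern smartphone, whereas a hundreds-of-billions-parameter model at the same precision would still require tens of GB.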

Microsoft's Phi-3 represents a paradigm shift in AI development, eschewing the conventional "bigger is better" mentality in favor of a more nuanced approach. Rather than inundating the model with vast amounts of internet-sourced data, developers adopted a more targeted strategy inspired by childhood storytelling. By training Phi-3 on simplified language structures reminiscent of children's stories, Microsoft engineers achieved remarkable efficiency and effectiveness in AI reasoning.

Phi-3 Mini sets out to revolutionize AI accessibility, offering strong problem-solving capabilities and advanced reasoning skills in a compact, localized format. Its ability to operate autonomously without reliance on cloud infrastructure expands the reach of AI to previously underserved regions and scenarios.

Microsoft has made Phi-3 available through various platforms, including HuggingFace, Ollama, and Azure, facilitating easy deployment and local execution of the model.


OpenAI Ambassador Maldonado: ChatGPT to become a basic public service, like web browsing

Abran Maldonado, OpenAI's ambassador, discusses the transformative impact of ChatGPT and the future of generative AI during his visit to Italy for AI Week 2024 in Rimini.

Maldonado emphasizes the importance of thorough evaluations before adopting generative AI technologies, urging companies to test these tools in controlled environments to assess their specific impacts on business, productivity, and potential risks.

Regarding the development of generative AI, Maldonado highlights its continuous evolution and the need for adaptability, suggesting that there won't be a definitive moment when the technology becomes fully consolidated due to its ever-changing nature.

He underscores the significance of partnerships in OpenAI's future, emphasizing collaborations with stakeholders like Nvidia and Facebook to broaden access to data and knowledge, fostering a more interconnected ecosystem.

Maldonado discusses the recent decision to allow the use of ChatGPT without an account, aiming to lower access barriers and promote greater adoption of conversational AI technologies, positioning ChatGPT as a basic public service akin to web browsing.

In terms of educating youth on ChatGPT's optimal use, Maldonado suggests adopting an approach similar to teaching programming, focusing on prompt engineering and computational thinking to enhance communication skills and problem-solving abilities in various contexts.
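The idea of treating prompt engineering like programming can be sketched as a tiny template helper: the prompt's structure (role, task, constraints) is made explicit, the way function arguments are. The helper and its fields are hypothetical, invented here for illustration rather than taken from any OpenAI API:

```python
# A toy prompt template illustrating "prompt engineering as
# programming": the structure is explicit and filled in like
# function arguments. Hypothetical helper, for illustration only.

def build_prompt(role: str, task: str, constraints: list[str]) -> str:
    """Assemble a structured prompt from explicit components."""
    lines = [
        f"You are {role}.",
        f"Task: {task}",
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = build_prompt(
    role="a patient math tutor",
    task="explain fractions to a 10-year-old",
    constraints=["use everyday examples", "keep it under 100 words"],
)
print(prompt)
```

Decomposing a request this way is essentially the computational thinking Maldonado describes: students learn to state the role, goal, and constraints separately instead of writing one vague sentence.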


Nvidia acquires AI workload management startup Run:ai for $700M

Nvidia has acquired Run:ai, an Israeli company specializing in AI workload management, for $700 million. Run:ai's innovative approach to managing and optimizing AI hardware infrastructure has garnered significant attention, particularly among Fortune 500 companies.

The integration of Run:ai's products into Nvidia's DGX Cloud AI platform will provide enterprise customers with enhanced capabilities for training AI models. This move reflects Nvidia's commitment to offering cutting-edge solutions to meet the growing demand for efficient AI computing resources.

Run:ai's co-founders, Omri Geller and Ronen Dar, developed a platform that facilitates the parallel execution of AI models across various hardware configurations, both on-premises and in the cloud. Their expertise and technology have positioned Run:ai as a leader in the field of AI workload management.

Nvidia's acquisition of Run:ai underscores the importance of efficient AI infrastructure management in today's complex computing landscape. By integrating Run:ai's capabilities into its portfolio, Nvidia aims to provide customers with greater flexibility and efficiency in deploying and managing AI workloads.
