Read Before Monday #47

Another interesting week! Let's review recent advancements across diverse fields, which underscore both the transformative potential and the inherent challenges of integrating advanced technologies into society. Starting in healthcare, FIBRESOLVE's FDA authorisation for AI-driven diagnostics exemplifies the critical interplay between interdisciplinary expertise and the need for careful oversight to mitigate concerns over opacity and data privacy, while addressing resistance within the radiology community. In parallel, work chronicled by Dr. Shailee Jain in UCSF Magazine reveals ambitious progress toward a "silicon brain" - an artificial neural network that, by synthesising diverse data from fMRI and single-neuron recordings, aims to decode thoughts, restore speech, and create personalised brain models for revolutionary neurological treatments.

Meanwhile, a Reuters Institute study exposes a complex global landscape in which the anticipated impacts of generative AI on news media meet low public trust, particularly in the UK, highlighting broader societal caution toward these transformative tools. This blend of optimism and scepticism is echoed both in Virginia Postrel's reflections on mid-20th-century futuristic visions that once promised a world of technological marvels but eventually gave way to concerns over pollution and societal limitations, and in the concept of microarchitectural weird machines presented in Communications of the ACM, where computing through hardware side effects challenges conventional detection methods and opens new avenues for obfuscation.

___

The integration of AI into medical diagnostics, exemplified by FIBRESOLVE's FDA authorisation, signifies a pivotal advancement in healthcare technology. This development underscores the importance of interdisciplinary expertise in bridging the gap between medicine and technology. However, the persistence of concerns regarding AI's opacity and the necessity for human oversight highlights the need for strategic implementation. Addressing resistance within the radiology community and ensuring data privacy are critical factors for successful adoption. As AI continues to evolve, its role in enhancing patient care and clinical efficiency will depend on thoughtful integration and ongoing evaluation.

  • My take: It's fascinating to see AI/GenAI making waves in healthcare, especially with tools like these leading the charge. However, the term "black box" keeps popping up in my mind, reminding me of those mystery novels where the detective knows the outcome but not the 'how' or 'why'. In medicine, not knowing the 'how' can be a tough pill to swallow - see what I did there? :) And let's not forget the human element; even the smartest AI can't replace the nuanced judgment of a radiologist. As we embrace these innovations, balancing enthusiasm with caution seems like the best prescription.

___

In a Winter 2025 article from UCSF Magazine, Dr. Shailee Jain discusses efforts to create a "silicon brain" - an artificial neural network designed to replicate human brain activity. This technology aims to decode thoughts, restore speech, and develop personalised brain models. By integrating diverse data sources, including fMRI and single neuron recordings, with advanced AI, researchers hope to revolutionise treatments for neurological disorders and enhance brain-computer interfaces.

  • My take: The ambition to construct a "silicon brain" - an artificial neural network mirroring human cognition - is undeniably captivating, and I'm all in for it! The potential to decode thoughts or restore speech could revolutionise medicine and our understanding of the mind. However, let's not forget that these silicon constructs, no matter how advanced they look and sound, lack consciousness, intent, or genuine understanding. They are systems processing data and executing algorithms - math - amplifying human-designed functions without awareness or conscience. I think the real challenge lies not in fearing these creations as rivals but in ensuring their responsible development and deployment. If we focus on transparent and accountable practices, we can use these tools to enhance human potential, viewing them as extensions of our capabilities rather than competitors for consciousness. Embracing this perspective could let us explore applications in education, medicine, and beyond, ensuring that technology serves humanity rather than threatening it. Think about that...

___

A recent study by the Reuters Institute for the Study of Journalism examined public perceptions of Generative AI in news across six countries. The findings reveal a complex landscape of awareness, usage, and trust. While a majority anticipate significant impacts of generative AI on various sectors, including news media, trust in institutions to use AI responsibly remains low. Notably, only 12% of respondents in the UK trust news media to use generative AI responsibly.

  • My take: Integrating GenAI into newsrooms offers significant opportunities for efficiency and innovation, for sure; however, it also raises concerns about trust and responsible use. The public's scepticism isn't about the technology itself, I think, but about how it's used. News orgs must prioritise transparency and responsibility in their GenAI applications to bridge this trust gap. After all, GenAI should enhance journalism, not undermine its integrity. Despite widespread awareness of tools like ChatGPT, actual application for news consumption remains minimal, with a mere 5% using GenAI to access the latest news - and maybe that will change with scheduled tasks from OpenAI and search. But concerns about the accuracy and reliability of GenAI-generated content will persist, especially in critical areas like politics and international affairs, raising questions about the potential for misinformation and the erosion of journalistic integrity. Bottom line: it's essential to recognise that GenAI is not a sentient entity capable of intent. It functions as a tool, amplifying the directives and biases of its human creators - as always! The real challenge lies in how we choose to implement and oversee this technology. Rather than viewing GenAI as an existential threat to journalism, the focus should be on establishing responsible frameworks, ensuring transparency, and maintaining human oversight. I sense you can feel where I'm going with these topics now :)

___

In her article "The World of Tomorrow," Virginia Postrel reflects on the mid-20th century's optimistic visions of the future, epitomised by events like the 1939 New York World's Fair and Disneyland's Tomorrowland. These venues showcased a future brimming with technological marvels and societal advancements, instilling a sense of hope and excitement. However, as time progressed, this enthusiasm waned, giving way to concerns about pollution, overcrowding, and the limitations of progress. Postrel delves into the cultural shift that transformed the future from a glamorous ideal to a source of apprehension, exploring the factors that led to this change in perception.

  • My take: Reflecting on the mid-20th century's starry-eyed visions of the future, it's clear that our collective imagination was once captivated by the promise of flying cars, utopian cities, and boundless technological wonders. The 1939 New York World's Fair and Disneyland's Tomorrowland weren't just attractions; they were embodiments of an era's unbridled optimism. Fast forward to today, and that shimmering image of "The World of Tomorrow" has been clouded by environmental concerns, social challenges, and a realisation that progress isn't always linear. It's a clear reminder that while it's essential to dream big, we must also remain grounded, ensuring that our pursuit of advancement doesn't overshadow the very values and realities that define our humanity.

___

In the article "Computing with Time: Microarchitectural Weird Machines," published in Communications of the ACM in November 2024, authors Thomas S. Benjamin et al. introduce the concept of microarchitectural weird machines (μWMs). These are code constructions that perform computations through side effects and conflicts between microarchitectural components like branch predictors and caches. The outcomes of these computations are observed as timing variations during instruction execution. The authors demonstrate how μWMs can serve as potent obfuscation engines, enabling computations that remain undetectable by conventional anti-obfuscation tools, including emulators, debuggers, and both static and dynamic analysis techniques.
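The core trick - encoding a bit in whether a memory line is "warm" in the cache, then reading it back as a timing difference - can be sketched with a toy simulation. To be clear, everything below (the ToyCache class, the made-up latencies, the addresses) is my own illustration and not code from the paper; real μWMs operate on actual CPU caches and branch predictors, where the latencies come from the hardware itself:

```python
# Toy model of a timing-based "weird machine" gate. A bit is stored as
# the presence (fast access) or absence (slow access) of an address in a
# simulated cache, and read back by comparing access latency to a
# threshold. Real uWMs apply this principle to genuine CPU components.

class ToyCache:
    HIT_LATENCY = 10     # pretend "cycles" for a cached access
    MISS_LATENCY = 200   # pretend "cycles" for a memory access

    def __init__(self):
        self.lines = set()

    def access(self, addr):
        """Return the latency of touching addr; caching it is a side effect."""
        latency = self.HIT_LATENCY if addr in self.lines else self.MISS_LATENCY
        self.lines.add(addr)
        return latency

    def flush(self, addr):
        self.lines.discard(addr)

def write_bit(cache, addr, bit):
    """Encode a bit: 1 warms the line, 0 flushes it."""
    if bit:
        cache.access(addr)
    else:
        cache.flush(addr)

def read_bit(cache, addr, threshold=100):
    """Decode: a fast access means the line was warm, i.e. bit == 1.
    Note the read is destructive - probing a cold line warms it."""
    return 1 if cache.access(addr) < threshold else 0

def timing_and(cache, addr_a, addr_b, addr_out):
    """AND gate whose inputs and output live only in cache state."""
    cache.flush(addr_out)
    if read_bit(cache, addr_a) and read_bit(cache, addr_b):
        cache.access(addr_out)  # warm the output only if both inputs were warm
    return read_bit(cache, addr_out)

if __name__ == "__main__":
    for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        cache = ToyCache()  # fresh microarchitectural state per evaluation
        write_bit(cache, 0xA0, a)
        write_bit(cache, 0xB0, b)
        print(f"{a} AND {b} = {timing_and(cache, 0xA0, 0xB0, 0xC0)}")
```

What makes the real thing so hard to analyse is visible even in the toy: the computed values never appear in any variable an analyst would inspect - they exist only as cache state and latency differences.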

  • My take: This exploration of microarchitectural weird machines (μWMs) opens, for me, a fascinating yet concerning chapter in the realm of cybersecurity. The ingenuity of leveraging CPU microarchitectural side effects to perform covert computations showcases the depth of complexity inherent in modern processors. However, it also underscores a significant vulnerability: the very features designed to enhance performance can be repurposed to obfuscate malicious activities. This duality presents a formidable challenge for us security professionals. As we continue to push the boundaries of computational efficiency and capability - hello, DeepSeek - it becomes important to anticipate and mitigate the unintended avenues these advancements may open for exploitation. This study serves as a clear reminder that in the intricate dance between innovation and security, vigilance must remain paramount.

___

This Week in GenAI

We're on episode 30 of #TWIGAI, and this week we covered the news from OpenAI (Deep Research and Search), Google's Gemini models, and what Open Source AI is.

In other news
