DeepMind predicting gene expression with AI
Welcome to the 38th edition of the AI and Global Grand Challenges newsletter, where we explore how AI is tackling the largest problems facing the world.
The aim: To inspire AI action by builders, regulators, leaders, researchers and those interested in the field.
If you would like to support our continued work from £1, then click here!
---
Packed inside
Make sure to subscribe to our site as well: www.nural.cc
__________________________________
Key recent developments
---
Google DeepMind predicting gene expression with AI
What: The human genome is made up of roughly 3 billion “letters”. Approximately 2% of these form genes that perform various essential functions in cells. The remaining 98% is called “non-coding” DNA and contains less well understood instructions about when and where genes should be expressed in the body. How this regulation works is a major unsolved problem. New research by DeepMind claims substantially improved prediction of gene expression from DNA sequences through the use of a new deep learning architecture.
Key Takeaways: Developments in this area will have critical downstream applications in human genetics. For example, complex genetic diseases are often associated with variants in the non-coding DNA that potentially alter gene expression.
Code: deepmind-research
---
Facebook is researching AI systems that see, hear, and remember everything you do
What: Facebook has launched an ambitious research project called Ego4D, creating a large collection of first-person (“egocentric”) videos of people completing everyday tasks that will ultimately be used to train augmented reality systems. Most current image datasets consist of third-person views, so vision systems struggle to understand first-person footage. The eventual technology could be included in wearable cameras and home assistant robots, helping to find lost items, monitor social interactions, and provide real-time assistance with learning or everyday tasks.
Key Takeaways: This new development might exacerbate the trust issues that Facebook is currently facing. A key recommendation for AI development is that an ethical risk assessment should be built in at project inception, but that appears not to be the case for Ego4D.
---
How AI can fight human trafficking
What: Traffic Jam is a new AI system “to find missing persons, stop human trafficking and fight organized crime.” It combs through online ads in speciality “hot spots” (commercial adult services) and searches for “vulnerability indicators.” These can include images of subjects who look like children and indications of drug use. The intention is to aid law enforcement.
Key Takeaways: This provides an example of the legal and ethical dilemmas in this area. Efforts to help victims of trafficking are laudable. However, there are similarities with another company, Clearview, which secretly scraped 10 billion images of faces from the web. Subsequent use by law enforcement and private companies has provoked a storm of controversy.
---
LinkedIn’s approach to building transparent and explainable AI systems
What: LinkedIn provides details about its recently launched Responsible AI program. This involves building products and programs that empower individuals regardless of their background or social status, and ensuring that systems are transparent. Predictive machine learning models are widely used at LinkedIn. The company developed a customer-facing model explainer system that provides understandable interpretations and the rationale behind predictions.
Key Takeaways: Facebook has recently faced criticism during testimony to a Senate sub-committee by a whistle-blower. A key allegation is that changes to the Facebook newsfeed recommender system in 2018 increased, rather than reduced, divisive content, but this was not addressed. However, the LinkedIn Responsible AI program provides an example that a corporation can take active steps to operate transparently, when motivated.
---
Microsoft and NVIDIA claim the world’s largest and most powerful AI language model
What: Microsoft and NVIDIA have produced what they claim is the largest and most powerful transformer language model trained to date, with 530 billion parameters. The system advances the state of the art in AI for many natural language tasks. By comparison, the well-known GPT-3 system consists of 175 billion parameters.
Key Takeaway: AI models continue to increase in size and continue to achieve ever better results. But this comes at a huge financial and environmental cost (in terms of carbon emissions). There are concerns that this further strengthens Big Tech, locking out smaller companies and stifling scientific research. Consequently, work on other avenues of improvement, such as improved data quality and technical enhancements to models, is a growing area of research.
__________________________________
AI Ethics
Explores the proposition that “difficult” new regulations around the use of high-risk AI could spur AI innovation in the EU.
The non-binding resolution also calls for a moratorium on the deployment of predictive policing software.
Proposes unbiasing (biased) human beings; data for good instead of data for bias; and educating citizens
Cynthia Rudin wins prize for her work on “interpretable” AI in sensitive areas such as social justice and medical diagnosis.
Other interesting reads
This pre-print article describes extending DeepMind AlphaFold predictions from single proteins to multi-chain protein complexes.
A list of global startups applying AI to climate change challenges.
Google is applying its new Multitask Unified Model AI to improve Google Search, and implementing the new T5 language model.
Google employs “contrastive learning” to generate additional, robust training data without requiring additional labelled images.
A new AI suite streamlines carbon footprint analysis across the supply chain and highlights risks arising from climate change.
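The contrastive-learning item above refers to a general family of self-supervised techniques. As a minimal illustration of the core idea, here is a hedged numpy sketch of a standard InfoNCE-style contrastive loss (the general form used by methods such as SimCLR), not Google's actual implementation: two augmented views of the same image are pulled together in embedding space, while views of different images act as negatives.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Contrastive (InfoNCE-style) loss: z1[i] and z2[i] are embeddings of
    two augmented views of image i; all other pairings act as negatives."""
    # Normalise so similarity is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # pairwise similarities
    # Cross-entropy where the matching view (the diagonal) is the "label".
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Embeddings where the two views of each image agree...
z = np.eye(4)
aligned = info_nce_loss(z, z)
# ...versus views paired with the wrong images.
mismatched = info_nce_loss(z, np.roll(z, 1, axis=0))
print(aligned < mismatched)  # True
```

A lower loss for correctly paired views is exactly what gradient descent exploits: the encoder is pushed to map augmented views of one image close together, yielding robust features without any human labels.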
__________________________________
Cool companies found this week
Duality - technology to apply AI to sensitive data without ever decrypting it; has received $30 million in Series B funding.
Gretel - offers privacy-by-design tools and has raised $50 million in Series B funding.
Neural Magic - provides AI software designed to run on Internet of Things devices and has gained $30 million in Series A investment.
__________________________________
A flying robot that can skateboard – with killer high heels
__________________________________
AI/ML must knows
Foundation Models - any model trained on broad data at scale that can be fine-tuned for a wide range of downstream tasks. Examples include BERT and GPT-3. (See also Transfer Learning)
Few-shot learning - supervised learning using only a small dataset to master the task.
Transfer Learning - reusing part or all of a model designed for one task on a new task, with the aim of reducing training time and improving performance.
Generative adversarial network - generative models that create new data instances resembling the training data. They can be used to generate fake images.
Deep Learning - a form of machine learning based on artificial neural networks.
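The Transfer Learning entry above can be illustrated with a toy numpy sketch. This is a made-up example with two hypothetical linear "tasks" rather than a real pre-trained network: weights learned on a data-rich task are reused as the starting point for a related task with very little data, so only a handful of fine-tuning steps are needed.

```python
import numpy as np

def train_linear(X, y, w_init, lr=0.3, steps=100):
    """Fit weights by full-batch gradient descent on mean squared error."""
    w = np.array(w_init, dtype=float)
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# "Pre-training" task: plenty of data, true weights [3, -2].
X_a = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
y_a = X_a @ np.array([3.0, -2.0])
w_pretrained = train_linear(X_a, y_a, np.zeros(2))

# Related downstream task: only 3 examples, true weights [3.2, -1.8].
X_b = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
true_b = np.array([3.2, -1.8])
y_b = X_b @ true_b

# Fine-tune from the pre-trained weights vs. train from scratch,
# allowing only 5 gradient steps on the small dataset.
w_scratch = train_linear(X_b, y_b, np.zeros(2), steps=5)
w_transfer = train_linear(X_b, y_b, w_pretrained, steps=5)

print(np.linalg.norm(w_transfer - true_b) < np.linalg.norm(w_scratch - true_b))  # True
```

Because the pre-trained weights already sit close to the downstream optimum, fine-tuning converges in far fewer steps than training from scratch - the same intuition behind fine-tuning foundation models, and behind few-shot learning when the small dataset is tiny.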
Thanks for reading and I'll see you next week!
If you are enjoying this content and would like to support the work, then you can get a plan here from £1/month!
___________________________________
Graham Lane and Marcel Hedman
This newsletter is an extension of the work done by Nural Research, a group which explores AI use to inspire collaboration between those researching AI/ML algorithms and those implementing them. Check out the website for more information: www.nural.cc
Feel free to send comments, feedback and, most importantly, things you would like to see as part of this newsletter by getting in touch here.