What's new in innovation?
The first two weeks of September have been full of announcements, meetings, and innovations, so I prepared a short roundup of the most interesting things that happened in the meantime. What will you find here?
1." Customized VR Chipsets" for Future Devices.
2. Meta's unrealized VR ambitions.
3. How does quantum levitation work?
4. Stable Diffusion brings native AI art to your PC.
5. The future of the nanotechnology industry: the smallest industry in the world.
6. Using artificial intelligence, Atelier TAT presents a surreal and futuristic architecture that emerges from the earth.
7. Meet "dune", by Duffy London, the table that examines new possibilities for 3D printing.
8. What is reinforcement learning? How artificial intelligence teaches itself.
9. Interesting opportunities.
Enjoy! Good morning.
1." Customized VR Chipsets" for Future Devices.
Two of the biggest names in XR — headset maker Meta and chip maker Qualcomm — have announced a "broad, multi-year strategic agreement" to collaborate on "custom VR chipsets" for future devices.
Qualcomm, a leading supplier of smartphone processors, was an early leader in the XR space, pushing versions of its mobile Snapdragon processors as ideal for use in both AR and VR headsets - a push that now sees the company's chips in the vast majority of standalone headsets available on the market today.
Meta has used Qualcomm processors in all its standalone headsets to date - Go, Quest and Quest 2 - and is expected to do the same in its upcoming Project Cambria headset.
But there's likely another key reason for this partnership—it brings together two allies against a common threat: Apple.
Although Apple hasn't officially announced any XR products yet, all signs point to a long history of R&D and a desire to dominate the space. For Meta, which wants to control its own XR destiny, that's a problem. Mark Zuckerberg has seen this potential since at least 2015, having moved early to acquire Oculus in an effort to get ahead of companies like Apple and Google in the nascent XR field.
Apple has long built custom processors for its smartphones, which has given the company an edge over competitors that use commodity chips. In recent years, Apple has also gradually begun to phase out third-party processors in favor of its own chips in its computer products, which signals the maturation of the company's microprocessor design and manufacturing capabilities.
For Meta, the partnership with Qualcomm shores up a strategic vulnerability by giving the company a committed ally that can produce highly specialized chips for XR devices.
For Qualcomm, the partnership with Meta is an effort to ensure Apple doesn't easily dominate the XR market and eliminate the company's opportunity to sell chips to a wide range of non-Apple XR device makers.
Ultimately the partnership is a maneuver in the fight for early ground in a market that the companies expect will one day be worth trillions of dollars.
@ cnbc
2. Meta's unrealized VR ambitions.
On the one hand, Tyler Yee, a spokesman for Meta, said the company does not discuss details of how its roadmap has developed and would not comment on specific plans for custom chips in Quest products. On the other hand, Yee shared a statement about the company's "general approach to custom silicon," saying that Meta doesn't believe in a "one-size-fits-all approach" to the technology powering its future devices.
"There could be situations where we use off-the-shelf silicon or work with industry partners on customizations while exploring our new silicon solutions. There could also be scenarios where we use both partner solutions and custom solutions in the same product." "It's all about doing what it takes to create the best metaverse experiences possible."
Meta is betting big on the metaverse, but several of its projects have hit roadblocks.
There are other signs that Meta has scaled back its VR/AR ambitions.
The company currently uses Android to power the Quest, but has reportedly been working on its own operating system for its virtual reality and augmented reality devices. According to a report from The Information, it has suspended work on a specific project called XROS, although the company responded to that report by saying it is "still working on a highly specialized operating system for our devices." Still, the "microkernel-based operating system" that Meta CEO Mark Zuckerberg said was in the works in 2021 has yet to appear.
The background to all of this is a company under a lot of pressure. Meta's revenue fell for the first time (thanks in part to Apple's changes to how apps are allowed to track users), and Zuckerberg explicitly stated plans to turn up the heat on employees while admitting, "I think some of you might decide that this place isn't for you, and that self-selection is fine with me." At the same time, he is making a massive bet on the metaverse - the company spends and loses billions of dollars a year on the project, which includes AR and VR headsets.
This is a high-stakes game, and Meta will probably want to play its cards as close to the chest as possible. But for now, the hardware customers use to access Zuckerberg's metaverse (if they do so at all, instead of just playing Beat Saber) will remain powered by someone else's chips.
@ theverge
3. How does quantum levitation work?
With the right material at the right temperature and a magnetic track, physics really does allow for perpetual motion without energy loss.
In our conventional world, if you apply a voltage to any set of charged particles, it will cause them to move and create a current, but the resistance of the material they pass through opposes that movement.
However, under certain low-temperature conditions in certain specific materials, the resistance can drop to zero, creating a "lossless" medium for electricity to flow through: a superconductor.
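To see why "lossless" really means lossless, here is a quick back-of-the-envelope sketch of the standard textbook argument: in a closed loop of wire with inductance L and resistance R, an induced current decays exponentially, and the decay time blows up as the resistance approaches zero.

% Current decay in a closed loop with inductance L and resistance R.
% Normal conductor: \tau is tiny, the current dies out almost instantly.
% Superconductor: R = 0, so \tau diverges and the current persists indefinitely.
\[
  L\,\frac{dI}{dt} + R\,I = 0
  \;\Longrightarrow\;
  I(t) = I_0\, e^{-t/\tau},
  \qquad \tau = \frac{L}{R} \xrightarrow{\;R \to 0\;} \infty .
\]

With R = 0 the current simply never decays, which is exactly the persistent, perpetual flow described above.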
By leveraging the properties of certain superconducting materials with impurities in them, a properly configured magnetic array can lead to quantum levitation, just like you see here!
The idea of floating off the ground has lived mainly in science fiction and the human imagination since time immemorial. While we don't have our hoverboards yet, we do have the very real phenomenon of quantum levitation, which is almost as good.
Under the right circumstances, a specially prepared material can be cooled to low temperatures and placed over a properly configured magnet, and it will float there indefinitely. If we build a magnetic track, it will hover above or below it and remain in perpetual motion.
But shouldn't perpetual motion be an impossibility in physics? It is true that the law of conservation of energy cannot be violated, but it is possible to greatly reduce the resistance forces in any physical system. In the case of superconductivity, a special set of quantum effects really allows the resistance to drop to zero, allowing for all sorts of strange phenomena, including the one you see here:
What does all this mean? This levitation is actually real and has been achieved here on Earth. We could never do this without the quantum effects that enable superconductivity, but with them, it's just a question of designing the right experimental setup.
It also gives us a great sci-fi dream for the future. Imagine roads made of properly configured magnetic tracks. Imagine backpacks, vehicles, or even shoes with the right kind of superconductors at room temperature. And imagine traveling at constant speed without using a drop of fuel until it's time to slow down...
@ bigthink
4. Stable Diffusion brings native AI art to your PC.
Diffusion is the dispersion of a substance (such as molecules) down a concentration gradient, from high concentration to low concentration. It is a spontaneous movement of particles, caused by their constant, random motion, which results from their kinetic energy. The outcome is a gradual mixing of the material until a uniform concentration is reached throughout the volume available to it.
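Diffusion models in AI borrow this name: during training, a "forward diffusion" process gradually adds random noise to an image until only static remains, and a neural network learns to run the process in reverse. A minimal sketch of the standard forward step, as formulated in the general diffusion-model literature (the article itself doesn't spell out the math):

% Forward (noising) step of a denoising diffusion model: each step mixes the
% previous image x_{t-1} with a little Gaussian noise, controlled by a small
% variance schedule \beta_t.
\[
  q(x_t \mid x_{t-1}) = \mathcal{N}\!\bigl(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t \mathbf{I}\bigr),
  \qquad t = 1,\dots,T .
\]

Image generation then runs the learned reverse process: start from pure noise and repeatedly denoise, guided by the text prompt, until a clean image emerges.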
AI-generated artwork is incredibly popular right now, and you can now create photorealistic images directly on your PC, without using external services like Midjourney or DALL-E 2.
Stability AI is a technology startup developing the "Stable Diffusion" AI model, a complex algorithm trained on images from the Internet. Following a test version available to researchers, the company has officially released Stable Diffusion, which can be used to create images from text prompts. Unlike Midjourney and other models/generators, Stable Diffusion aims to create photorealistic images (a photorealistic visualization is one that actually looks like a photograph, expressing texture, shine, transparency, materials, decoration, and other elements with great precision).
This has already led to controversy over deepfake-style content; the model can also imitate the particular style of a given artist.
Stable Diffusion is unique because it can run on a typical graphics card, rather than using remote (and expensive) servers to generate images. Stability AI recommends NVIDIA graphics cards right now, but full support for AMD and Apple Silicon is in the works.
Stable Diffusion has a 'Safety Classifier' mode that tries to block the creation of offensive images, but since the model is open source, it can be turned off when running on your own computer. Web-based generators block prompts that mention certain words or phrases, to prevent the creation of images that could be used to deceive or harm others. For better or worse, Stable Diffusion can create more types of images than most other services.
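For readers who want to try this at home, here is a minimal sketch using the open-source Hugging Face diffusers library, one common way to run the released model locally. The article doesn't prescribe a specific toolchain, and the model ID, prompt, and generation settings below are illustrative assumptions:

# Minimal local text-to-image sketch with Stable Diffusion via the Hugging Face
# `diffusers` library (pip install diffusers transformers torch).
# Assumes an NVIDIA GPU with enough VRAM; the model ID is the publicly released
# CompVis checkpoint and may require accepting its license on huggingface.co.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,      # half precision so the model fits on consumer GPUs
)
pipe = pipe.to("cuda")              # move the model onto the graphics card

# The pipeline ships with the safety checker mentioned above enabled by default.
prompt = "a photorealistic sand dune at sunset, volumetric light"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("dune.png")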
Given Stable Diffusion's open source licensing model, and its impressive generation capabilities, it is likely that most AI generators will adopt the new model.
@ howtogeek
5. The future of the nanotechnology industry: the smallest industry in the world.
With limitless applications, nanotechnology is fast becoming a focus for the world's leading scientists.
Nanotechnology refers to the scientific study, research, and re-engineering of the properties of atoms and molecules. There is a great deal of controversy surrounding this science, as it aims to reshape the fundamentals of matter. As with any emerging field, there are disadvantages and advantages, and because of its nearly unlimited range of applications, nanotechnology will affect our daily lives in a profound way that we have only just begun to see.
Introduced to the world in 1959 by the physicist Richard Feynman, nanotechnology was conceptualized as synthesis through the reconstruction of atoms and molecules.
During the last decade and a half, nanotech has been one of the fastest growing industries in the world, developing significantly every year with new applications. We have seen incredible innovation in energy, robotics, agriculture, health, computing, military intelligence, and manufacturing. These are just a small sampling of the sectors where nanotech has been a major driver of advancement.
Future applications
While it could be argued that the future of nanotech is happening now, we've barely scratched the surface. For example, the confluence of nanotechnology with self-learning artificial intelligence has long been theorized for its potential benefits in predicting, solving, and managing environmental crises and space exploration through the analysis of universal patterns and behaviors. Although still a long way off, applications that make climate concerns a thing of the past or develop new climate systems on habitable planets are quite plausible.
The nanotech market is expected to reach $33.63 billion by 2030, up from its current value of $1.76 billion. Nanotech is on track to be one of the fastest growing sciences today, not only because of its percentage growth but also because of its ongoing collaboration with industries across the board and its growing share of budgets. Future applications of nanotechnology are truly limitless, and living in this age of exploration is genuinely exciting.
@ entrepreneur.com
6. Using artificial intelligence, Atelier TAT presents a surreal and futuristic architecture that emerges from the earth.
Using the artificial intelligence program Midjourney, TAT Atelier Architecture imagines a series of surreal organic structures that emerge from the earth and become livable spaces of the future.
The series titled Rethink Earth Architecture is part of an ongoing experiment by the Indian design studio exploring the limits and potential of artificial intelligence and generative design in the field of architecture. Reinventing how architecture can be envisioned and conceived in the future with the help of artificial intelligence tools, the artworks reveal strange earthen structures composed of raw mushrooms and tree bark within desolate backdrops of vast natural landscapes.
The architecture created by AI is rising from the ground
With the recent sweeping trend of using AI in creative design, tools such as Midjourney and DALL·E are now increasingly being adopted by architects to visualize the spaces of the future. In the AI-generated experimental project Rethink Earth Architecture, TAT Atelier Architects explore how Midjourney can assist architects in pushing their concepts beyond existing boundaries. Following the input of a series of text prompts and further iterations, the program envisions this collection of surreal organic structures. Phrases such as 'earth house architecture' and 'unreal engine' produce a futuristic architecture that unites with the earth, emerges from living organisms such as mushrooms or natural landscapes such as stone mountains, and becomes livable space.
The designers propose that artificial intelligence will simplify, assist, and augment the planning process in architecture, from clarifying client requirements and developing a design language, to ensuring a building's compliance with regulations such as building codes and zoning data, and beyond. The tools have the power to sort through endless data and create countless design variations.
Project information:
Name: Rethink Earth Architecture
Designer: TAT Atelier Architects (tat.atelier)
Team: Ar. Jerin Jabir, Ar. Byrun Shabeer, Ar. Muhammed Basil
@ designboom
7. Meet "dune", by Duffy London, the table that examines new possibilities for 3D printing.
Pushing the boundaries of what is possible, innovative design studio Duffy London exclusively unveils its newest sculptural piece of furniture, the 'Dune' coffee table, 3D printed using black quartz sand.
Following on from the hand-sculpted and CNC-machined marble table designs of Civilization and Monument Valley, Dune marks the studio's debut in large-scale 3D-printed sand furniture, created in collaboration with German manufacturer Sandhelden. Like many Duffy London designs, the result is a thought-provoking visual statement piece that truly captures the essence of the desert with its incredible fluidity and movement.
The new premium material
Inspired by the wave-like patterns of sweeping sand dunes, the table comes to life in a rugged black finish or natural sand texture, combined with a thick glass top to lend a sense of grandeur to the final piece. For designer and founder Christopher Duffy, discovering this new premium material was inspiring because "the very touch and feel, not to mention the surprising weight of this medium is similar in some of its qualities to marble."
Using sand for a free-flowing texture pattern and minimizing the carbon footprint
Using a 3D-printed sand approach allows total flexibility, so a consistent, free-flowing design can be created, all while using a premium-quality material that could not be more in line with the table's original inspiration.
3D Printed Dune Coffee Table by Duffy London:
@ designboom
8. What is reinforcement learning? How artificial intelligence teaches itself.
What will we discuss?
Machine learning (ML) may be considered the core subset of artificial intelligence (AI), and reinforcement learning may be the essential subset of ML that people imagine when they think of AI.
Reinforcement learning is the process by which a machine learning algorithm, robot, etc. can be programmed to respond to complex, real-time, real-world environments to optimally reach a desired goal or outcome. Think of the challenge posed by self-driving cars.
The algorithms involved can also "learn" from this process of absorbing and responding to new circumstances, improving as they go.
Other forms of ML may be "trained" by massive sets of "training data," often allowing the algorithm to classify or aggregate data—or otherwise identify patterns—based on the contexts and outcomes on which it was trained. Machine learning algorithms start with training data and create models that capture some of the patterns and lessons embedded in the data.
Reinforcement learning is part of the training process that often occurs after deployment when the model is working. The new data captured from the environment is used to adapt the model to the current world.
Reinforcement learning is achieved through a feedback loop based on "rewards" and "punishments". The scientist or user creates a list of successful and unsuccessful results, which the AI then uses to adjust the model. This may adjust some of the weights in the model, or even re-evaluate some or all of the training data in light of the new reward or punishment.
For example, an autonomous car might have a set of simple predetermined rewards and punishments. The algorithm receives a reward if it arrives on time and does not make sudden speed changes such as panic braking or rapid acceleration. If the car hits the curb, gets caught in a bad traffic jam, or brakes unexpectedly, the algorithm will be penalized. The model can be trained with special attention to the process that led to the bad results.
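To make the reward-and-punishment loop concrete, here is a minimal tabular Q-learning sketch, the standard textbook formulation rather than what any particular self-driving system actually runs. The environment interface (reset/step/actions) and the reward values are assumptions for illustration:

# Toy tabular Q-learning: a generic reinforcement learning feedback loop.
# `env` is a stand-in for any environment exposing reset(), step(action), and a
# list of actions; rewards are the "carrot or stick" (e.g. +1 for a smooth,
# on-time arrival, -1 for harsh braking or clipping a curb).
import random
from collections import defaultdict

def q_learning(env, episodes=1000, alpha=0.1, gamma=0.99, epsilon=0.1):
    q = defaultdict(float)                      # q[(state, action)] -> estimated return

    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # Explore occasionally; otherwise take the best-known action.
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: q[(state, a)])

            next_state, reward, done = env.step(action)

            # Core update: nudge the estimate toward reward + discounted future value.
            best_next = 0.0 if done else max(q[(next_state, a)] for a in env.actions)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q

Each pass through the loop is exactly the reward-and-punishment adjustment described above, applied here to a table of values rather than to a deep network's weights.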
In some cases, the reinforcement happens during and after deployment in the real world. In other cases, the model is refined in a simulation that generates synthetic events that may reward or penalize the algorithm. These simulations are especially useful with systems like autonomous vehicles that are expensive and dangerous to test in actual deployment.
In many cases, reinforcement learning is just an extension of the main learning algorithm. It iterates through the same process again and again after the model is put to use. The steps are similar, and the rewards and punishments become part of an extended set of training data.
What is the history of reinforcement learning?
Reinforcement learning is one of the first types of algorithms that scientists developed to help computers learn how to solve problems on their own. The adaptive approach that relies on rewards and punishments is a flexible and powerful solution that can leverage the indefatigable ability of computers to try and retry the same tasks.
Mathematician and computing pioneer Alan Turing contemplated and reported on a “child-machine” experiment using punishments and rewards in a paper published in 1950.
In the early 1950s, scientists like Marvin Minsky, Belmont Farley, and Wesley Clark created models that adapted themselves to their input data until they provided the correct response. Minsky called his approach SNARCs, which stood for “Stochastic Neural-Analog Reinforcement Calculators.” The name suggested that they used reinforcement learning to refine the statistical model. Farley and Clark built some of the first neural networks, connecting individual simulated neurons into networks that converged upon an answer.
One of the most influential approaches came from Donald Michie in the early 1960s. He proposed a very simple approach to learning to play tic-tac-toe that was also easily understood by non-programmers. He compiled a list of the possible positions of Xs and Os that constituted the state of the game. Then he assigned one matchbox to each possible position. Inside the matchbox, he would put a set of colored beads, with each color representing one of the possible moves.
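That matchbox scheme, later known as MENACE, translates almost directly into code. The sketch below is a simplified paraphrase of the idea; the initial bead counts and the reward and penalty sizes are arbitrary illustrative choices, not Michie's exact values:

# Simplified sketch of Michie's matchbox learner: one "matchbox" of colored
# beads per board position, each bead color standing for a legal move.
# Moves are drawn in proportion to bead counts; after the game, beads are
# added for a win and removed for a loss, so good moves become more likely.
import random
from collections import defaultdict

class MatchboxLearner:
    def __init__(self, initial_beads=4):
        self.initial_beads = initial_beads
        self.boxes = defaultdict(dict)   # boxes[position] -> {move: bead_count}
        self.history = []                # (position, move) pairs played this game

    def choose_move(self, position, legal_moves):
        box = self.boxes[position]
        for move in legal_moves:
            box.setdefault(move, self.initial_beads)
        moves, weights = zip(*box.items())
        move = random.choices(moves, weights=weights, k=1)[0]
        self.history.append((position, move))
        return move

    def learn(self, won):
        # Reinforcement step: reward every move of a winning game,
        # penalize every move of a losing game (never dropping below one bead).
        delta = 3 if won else -1
        for position, move in self.history:
            self.boxes[position][move] = max(1, self.boxes[position][move] + delta)
        self.history.clear()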
What are some useful open-source options for reinforcement learning?
There are a number of different packages and frameworks designed to help artificial intelligence scientists continue to train their models and reinforce important behaviors. These are generally distributed as open-source packages that make it simpler for companies and scientists to adopt them.
How do major providers handle reinforcement learning?
The major AI cloud platform providers also support reinforcement learning.
Amazon offers a variety of platforms for exploring artificial intelligence and building models, and all offer some options for using reinforcement learning to guide the process. SageMaker RL, RoboMaker, and DeepRacer are just three of the major machine learning options, and all support a variety of open-source options for adding reinforcement learning feedback, like Coach, Ray RLlib, or OpenAI Gym.
Google's Vertex AI, its unified machine learning platform, offers options like Vizier to find the best configuration settings, aka hyperparameters, to help the model converge quickly. This can be especially helpful for training a model with many inputs, because the complexity of covering all the options grows quickly. The company has also been enhancing some of its hardware options for faster training, like tensor processing units (TPUs), to support more distributed reinforcement algorithms.
How do AI startups handle reinforcement learning?
Many of the startups delivering artificial intelligence solutions have engineered their algorithms to support reinforcement learning later in the process. This approach is very common in many of the solutions that support autonomous robots and vehicles.
Wayve, for instance, is creating guidance systems for autonomous cars using a pure machine learning approach. Its system, AV2, is constantly reinforcing its model creation as new data about the world becomes available.
Startups like Waymo, Pony AI, AEye, Cruise Automation, and Argo are a few with significant funding that are building software and sensor systems that depend upon models of the natural world to guide autonomous vehicles. These vendors are deploying various forms of reinforcement learning to improve these models over time.
Other companies deploy route planning algorithms for domains that need to respond to changing, real-time information. Teale is building drill guidance systems for the extraction of oil, gas, or water from the ground. Pickle Robot and Dorabot are creating robots that can unpack boxes stacked in haphazard ways in large trucks.
Many pharmaceutical companies are marrying reinforcement learning with drug development to help doctors home in on treatments for a variety of diseases. Companies like Insilico, Phenomic, and ProteinQure are refining reinforcement learning algorithms to incorporate feedback from doctors and patients in their search for potentially useful drugs and proteins. The process could both unlock new potential drugs and lead to individualized treatments.
Other companies are exploring specific domains. Signal AI is a media monitoring company that helps other companies track their reputations by creating a knowledge graph of the world and constantly refining it in real time. PerimeterX enhances web security by constantly watching for threats with an evolving model.
Is there anything that reinforcement learning can’t do?
Ultimately, reinforcement learning is just like regular machine learning, except it collects some of its data at a later time. The algorithms are designed to adapt to new information, but they still process all the data in some form or other. So, reinforcement learning algorithms have the same philosophical limitations as regular machine learning algorithms.
These limitations are already well known to machine learning scientists. Data must be carefully gathered to represent all possible combinations or variations. If there are gaps or biases in the data, the algorithms will build models that conform to them. Gathering the data is often much more complicated than running the algorithms.
Delaying some of the data can have mixed effects. Occasionally the delay introduced by reinforcement helps the human guide the model to be more accurate, but sometimes the human interaction just introduces more randomness to the process. Humans are not always consistent, and this can confuse the modeling algorithm. If one human inputs one choice on one day and another human inputs the opposite later, they will cancel each other out and the learning will be limited.
There is also an air of mystery to the entire process. While AI scientists have grown more adept at providing explanations for how and why a model is making a decision, these explanations are still not always fulfilling or insightful. The algorithms churn to produce a result, and that result can be an inscrutable collection of numbers, weights, and thresholds.
Reinforcement learning also requires extensive exploration and experimentation. Scientists often work through numerous possible architectures for a model, with different numbers of layers and configurations for the various artificial neurons. Finding the best one is often as much an art as a science.
In all, reinforcement learning suffers from the same limitations as regular machine learning. It’s an ideal option for domains that are evolving and where some data is unavailable at the start. But after that, success or failure depends upon the underlying algorithms themselves.
@ venturebeat
9. Interesting opportunities.
Check out a variety of topics including, but not limited to:
- Tracking and modeling of people/objects
- Action and gesture recognition
- New sensors / sensor fusion
For more information and to apply by September 20: https://ow.ly/buGg50KrvFT
After the launch conference of the National Program for Artificial Intelligence, which took place in July, we meet again on 9/15.
The meeting schedule:
10:00-10:15 Presentation of the main points of the National Program for Artificial Intelligence - Tom Dan, Deputy Director General of the Ministry of Innovation, Science and Technology
10:15-10:45 From Dalí to DALL-E: responsibility and ownership in artificial intelligence products - Prof. Shlomit Yanisky-Ravid, head of the master's program in law, high-tech and technology at the Faculty of Law, Ono Academic College
10:45-11:00 Regulation of artificial intelligence - Attorney Danny Horin, ombudsman of the Ministry of Innovation, Science and Technology
11:00-11:30 Responsible AI: bridging professional gaps - Prof. Avigdor Gal, Technion; Dr. Karni Chagal-Feferkorn, University of Ottawa; Shlomi Hod, Boston University
11:30-11:50 Promoting Artificial Intelligence in the Ministry of Education - Dr. Amir Gefen, Director of the Artificial Intelligence Laboratory, R&D Division, Ministry of Education
11:50-12:00 Preliminary Policy Recommendations for the Regulation of Artificial Intelligence - Adv. Yosef Gadalihu, Technology and Regulatory Policy Referent, Consulting and Legislation (Economic), Ministry of Justice
The meeting will take place on Zoom; a link to register for the meeting is attached: https://app.activetrail.com/Members/PreviewCampaign.aspx?CampaignId=4888883
Israel Innovation Authority is looking for experts in various fields
It's time to take part in the group of professional reviewers that helps move the Israeli high-tech and innovation industry forward!
Want to review investment requests from different companies and startups? It's an opportunity to research, evaluate, and advise in very diverse fields. Want to be exposed to developments that are about to make a difference? Do you think you can make decisions and advise as an expert? We are looking for exactly you! The Innovation Authority (the largest investment fund in Israel...) is looking for experts in a variety of technological fields.
All the details in the link: https://bit.ly/3qc7xya