7-Eleven last-mile delivery robot tests, machine learning making autonomous cars safer, and thoughts-into-text AI
7-Eleven has been testing last-mile delivery robots in stealth
7-Eleven has been testing #selfdriving #robots in Hollywood to deliver food and snacks. The convenience store chain has partnered with Serve Robotics, an Uber-backed food-tech startup.
In the current tests, orders are placed through an app, and a robot delivery costs $2.99. The robots, Snack-E and Nomsky, can carry about 50 pounds and are outfitted with cupholders. The #ai-powered robots are programmed to slow down on rough terrain so that beverages arrive spill-free.
Upon delivery, a code unlocks the robot so the contents can be retrieved. The robots are designed for short-distance deliveries of 1 to 3 miles, and most of Serve's robots complete a delivery in about 15 minutes.
These last-mile delivery robots have the potential to ease the labour challenges many businesses face and to reduce costs by making deliveries cheaper. Serve has completed over 20,000 deliveries so far and says it "has attained exceptional on-time delivery, fulfillment, and customer satisfaction metrics." The plan is for this test to eventually spread to other locations.
Nuclear research center might help self-driving cars "see" the road better
The European Organization for Nuclear Research (CERN) is partnering with Zenseact (formerly Zenuity), Volvo Cars' software subsidiary, to improve decision-making in #selfdrivingcars and thereby enhance road safety. CERN, known for its Large Hadron Collider (LHC), generates vast amounts of data that require decisions within microseconds, which it handles using Field-Programmable Gate Arrays (FPGAs).
FPGAs are reprogrammable chips capable of executing complex decision-making algorithms in microseconds. CERN uses them for fast machine learning in its particle physics experiments, and the same approach can serve autonomous driving.
For the past three years, CERN researchers have been collaborating with Zenseact on #computervision for #autonomousdriving software. The aim is to enhance decision-making in autonomous cars using the #deeplearning techniques developed for the LHC. The research has shown that the algorithms can be made to run faster and more efficiently on resource-constrained, on-device hardware. According to Christoffer Petersson, research lead at Zenseact, machine learning can drive faster decision-making in autonomous vehicles.
The partnership has highlighted the importance of collaboration and cooperation in science and technology. The work between Zenseact and CERN focuses on using FPGAs for advanced machine learning in both autonomous driving and particle physics experiments, with the FPGAs providing the speed and decision-making capability these applications demand.
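To make a neural network fit on an FPGA, its weights and activations are typically converted from floating point to low-bit fixed-point arithmetic. The following is a minimal sketch of that idea, not Zenseact's or CERN's actual code; the function name and the 8-bit format chosen here are illustrative assumptions.

```python
# Illustrative sketch of post-training fixed-point quantization, the kind of
# compression used to fit neural networks onto FPGAs. Not production code.
import numpy as np

def quantize_fixed_point(x, total_bits=8, int_bits=3):
    """Round x to signed fixed-point with `total_bits` bits,
    `int_bits` of which sit left of the binary point (assumed format)."""
    frac_bits = total_bits - int_bits
    scale = 2.0 ** frac_bits
    lo = -(2.0 ** (int_bits - 1))              # most negative representable value
    hi = 2.0 ** (int_bits - 1) - 1.0 / scale   # most positive representable value
    return np.clip(np.round(x * scale) / scale, lo, hi)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(16, 8))  # float weights of a small dense layer
x = rng.normal(size=(1, 16))             # one input feature vector

y_float = x @ W                                               # full precision
y_fixed = quantize_fixed_point(x) @ quantize_fixed_point(W)   # FPGA-style arithmetic

print("max abs error:", np.max(np.abs(y_float - y_fixed)))
```

In practice, open-source tools such as hls4ml, developed in the particle physics community, automate this kind of conversion from a trained model to FPGA firmware; the trade-off between bit width and accuracy is the central design choice.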
AI can turn thoughts into text at record speed
Speech implants use tiny electrode arrays inserted into the brain to measure neural activity, with the goal of transforming thoughts into text or sound. They're invaluable for people who lose their ability to speak due to paralysis, disease, or other injuries. The downside, however, is that they're very slow, cutting words per minute to roughly a tenth of natural speech, a delay too long to support everyday conversation.
A team led by Drs. Krishna Shenoy and Jaimie Henderson at Stanford University is changing this by using #ai to enable faster speech. Published on the preprint server bioRxiv, their study helped a 67-year-old woman restore her ability to communicate with the outside world using brain implants at record-breaking speed.
Known as "T12," the woman gradually lost her speech to amyotrophic lateral sclerosis (ALS), or Lou Gehrig's disease, which progressively robs the brain of its ability to control the body's muscles. T12 could still vocalize sounds when trying to speak, but the words came out unintelligible.
With her implant, T12's attempts at speech are now decoded in real time as text on a screen and spoken aloud with a computerized voice, including phrases like "it's just tough" or "I enjoy them coming." The words came fast and furious at 62 per minute, over three times the speed of previous records.
Experts have called this a "big breakthrough" that sets impressive new performance benchmarks. However, the study hasn't yet been peer-reviewed, and the results are limited to a single participant.
The underlying technology isn't limited to ALS. The boost in speech recognition stems from a marriage between recurrent neural networks (RNNs), a machine learning architecture previously shown to be effective at decoding neural signals, and language models, which steer the decoder toward plausible sentences. With further testing, the setup could pave the way for people with severe paralysis, stroke, or locked-in syndrome to casually chat with their loved ones using just their thoughts.
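To make the two-stage idea concrete, here is a minimal sketch, not the Stanford team's actual pipeline: an RNN maps windows of electrode features to per-timestep phoneme probabilities, and a language model then rescores candidate transcriptions. The layer sizes, phoneme set, candidate sentences, and all scores below are illustrative assumptions.

```python
# Sketch of RNN phoneme decoding plus language-model rescoring. All sizes,
# names, and scores are hypothetical; a real system trains on neural data
# and uses beam search with a large n-gram or neural language model.
import torch
import torch.nn as nn

PHONEMES = ["h", "eh", "l", "ow", "w", "er", "d", "_"]  # "_" = silence

class NeuralDecoder(nn.Module):
    def __init__(self, n_channels=128, hidden=256, n_phonemes=len(PHONEMES)):
        super().__init__()
        self.rnn = nn.GRU(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_phonemes)

    def forward(self, x):                    # x: (batch, time, channels)
        h, _ = self.rnn(x)
        return self.head(h).log_softmax(-1)  # per-timestep phoneme log-probs

decoder = NeuralDecoder()
features = torch.randn(1, 50, 128)           # 50 time bins of electrode features
phoneme_logprobs = decoder(features)         # shape: (1, 50, 8)
greedy = [PHONEMES[i] for i in phoneme_logprobs[0].argmax(-1)]
print("greedy phoneme path:", greedy[:10])

# Language-model rescoring: pick the candidate whose acoustic (decoder)
# score plus LM score is highest, so implausible word sequences lose
# even when they sound similar.
lm_logprob = {"hello world": -2.1, "yellow whirled": -9.7}   # fake LM scores
acoustic = {"hello world": -30.5, "yellow whirled": -29.8}   # fake decoder scores
best = max(lm_logprob, key=lambda s: acoustic[s] + lm_logprob[s])
print(best)  # the LM breaks the near-tie toward the more plausible sentence
```

The design point is that the RNN alone only captures what the neural signals sound like; the language model supplies the prior over what people actually say, and combining the two scores is what drives the reported accuracy gains.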