Machine Learning News | Nov 27, 2022
Asif Razzaq
AI Research Editor | CEO @ Marktechpost | 1 Million Monthly Readers and 56k+ ML Subreddit
Featured Post
Researchers present a systematic study of how adversarial attacks on state-of-the-art object detection frameworks transfer between frameworks. Using standard detection datasets, they train patterns that suppress the objectness scores produced by a range of commonly used detectors and detector ensembles. Through extensive experiments, they evaluate how well adversarially trained patches perform in both white-box and black-box settings, and how well attacks transfer across datasets, object classes, and detector models. Finally, they present a detailed study of physical-world attacks using printed posters and wearable clothing, measuring their effectiveness with several metrics.
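The core idea of the attack can be illustrated with a short sketch. The PyTorch snippet below is a minimal illustration, not the authors' released code: it assumes the detector is an nn.Module whose forward pass returns per-anchor objectness logits, and it optimizes a patch to drive those scores toward zero.

```python
import torch

def train_patch(detector, loader, patch_size=(3, 64, 64), steps=1000, lr=0.01):
    """Optimize a patch that suppresses a detector's objectness scores."""
    patch = torch.rand(patch_size, requires_grad=True)   # random init in [0, 1]
    opt = torch.optim.Adam([patch], lr=lr)
    detector.eval()
    data = iter(loader)
    for _ in range(steps):
        try:
            images = next(data)              # batch of images, shape (B, 3, H, W)
        except StopIteration:
            data = iter(loader)
            images = next(data)
        # Paste the patch onto a fixed corner of every image; the real
        # attack places it over detected people with random transforms.
        c, h, w = patch_size
        patched = images.clone()
        patched[:, :, :h, :w] = patch
        logits = detector(patched)           # assumed: per-anchor objectness logits
        # Push all objectness scores toward zero so no box survives
        # the detector's confidence threshold.
        loss = torch.sigmoid(logits).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            patch.clamp_(0, 1)               # keep the patch a valid image
    return patch.detach()
```

In practice the objective also includes terms for printability and smoothness so the optimized pattern survives the transfer to fabric or paper.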
Scientists have made a real-life "invisibility cloak" that fools artificial intelligence (AI) cameras and stops them from recognizing people: a sweater whose adversarial pattern "breaks" person detectors and renders its wearer effectively invisible to AI cameras.
According to the research group, the stylish sweater is also "a great way to stay warm this winter. It has a modern cut, a waterproof microfleece lining, and anti-AI patterns that will help you hide from object detectors."
In their demonstration, the researchers were able to fool the YOLOv2 detector using a pattern trained on the COCO dataset with a carefully constructed objective.
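One simple way such an attack could be scored is by counting how many detections survive a confidence threshold before and after the patch is applied. The sketch below is hypothetical: `detect` stands in for any detector inference call (for example, a YOLOv2 wrapper) and is not part of the paper's code.

```python
import torch

def detection_drop(detect, images, patch, conf_thresh=0.5):
    """Count detections above `conf_thresh` on clean vs. patched images.

    `detect(image, conf_thresh)` is a hypothetical stand-in for a
    detector's inference call returning a list of boxes.
    """
    c, h, w = patch.shape
    patched = images.clone()
    patched[:, :, :h, :w] = patch            # same placement as during training
    clean = sum(len(detect(img, conf_thresh)) for img in images)
    attacked = sum(len(detect(img, conf_thresh)) for img in patched)
    return clean, attacked                   # fewer surviving boxes => stronger attack
```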