May 04, 2021

Why Is There A Shortage Of MLOps Engineers?

MLOps and DevOps engineers require different skill sets. Firstly, developing machine learning models does not require a software engineering background, as the focus is mainly on proof of concept and prototyping. Secondly, MLOps is more experimental in nature than DevOps: it calls for tracking different experiments, feature engineering steps, model parameters, metrics, etc. Thirdly, MLOps testing is not limited to unit tests; various other checks need to be considered, including data checks, model drift and analysis of model performance. Deploying machine learning models is easier said than done, as it involves various steps, including data processing, feature engineering, model training, model registry and model deployment. Lastly, MLOps engineers are expected to track how data distributions shift over time, to ensure the production environment stays consistent with the data the model was trained on. Last year, AI/ML research hit the doldrums in the wake of the pandemic; tech giants like Google slowed down hiring AI researchers and ML engineers, and Uber laid off its AI research and engineering team.
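As a toy illustration of the distribution-tracking point above (this is not any specific MLOps tool; the drift metric and alert threshold are illustrative assumptions), a minimal drift check might compare production data against the training baseline:

```python
import random
import statistics

def drift_score(train, prod):
    """Standardized mean shift between training and production samples.
    A crude drift signal: |mean difference| in units of the training std."""
    mu_t, sd_t = statistics.mean(train), statistics.stdev(train)
    mu_p = statistics.mean(prod)
    return abs(mu_p - mu_t) / sd_t

random.seed(0)
train = [random.gauss(0.0, 1.0) for _ in range(5000)]    # training baseline
stable = [random.gauss(0.0, 1.0) for _ in range(5000)]   # production, no drift
shifted = [random.gauss(0.8, 1.0) for _ in range(5000)]  # production, drifted

THRESHOLD = 0.3  # illustrative alert threshold; tuned per feature in practice
assert drift_score(train, stable) < THRESHOLD
assert drift_score(train, shifted) > THRESHOLD
```

Real pipelines would run such checks per feature on a schedule and use stronger statistics (e.g. population stability index or a KS test), but the shape of the task is the same.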


AI security risk assessment using Counterfit

The tool comes preloaded with published attack algorithms that can be used to bootstrap red team operations to evade and steal AI models. Since attacking AI systems also involves elements of traditional exploitation, security professionals can use the target interface and built-in cmd2 scripting engine to hook into Counterfit from existing offensive tools. Additionally, the target interface allows granular control over network traffic. We recommend using Counterfit alongside the Adversarial ML Threat Matrix, an ATT&CK-style framework released by MITRE and Microsoft to help security analysts orient to threats against AI systems. ... The tool can help scan AI models using published attack algorithms. Security professionals can use the defaults, set random parameters, or customize them for broad vulnerability coverage of an AI model. Organizations with multiple models in their AI system can use Counterfit’s built-in automation to scan at scale. Optionally, Counterfit enables organizations to scan AI systems with relevant attacks any number of times to create baselines. Running these scans regularly as vulnerabilities are addressed also helps to measure ongoing progress toward securing AI systems.
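Counterfit’s own commands are not reproduced here; as a self-contained toy sketch of what an evasion scan does (the model and the random-noise attack below are illustrative assumptions, not Counterfit code), the idea is to perturb inputs within a budget and look for a label flip:

```python
import random

def model(x):
    """Toy stand-in for an AI model: a fixed linear rule over two features."""
    return 1 if 0.8 * x[0] + 0.6 * x[1] > 0.5 else 0

def random_noise_attack(x, budget, trials=200, seed=1):
    """Crudest evasion attack: try random perturbations within an L-inf
    budget, looking for any point the model labels differently from x."""
    rng = random.Random(seed)
    base = model(x)
    for _ in range(trials):
        xp = [xi + rng.uniform(-budget, budget) for xi in x]
        if model(xp) != base:
            return xp  # adversarial example found
    return None  # model held up against this attack at this budget

x = [0.5, 0.2]           # scores 0.52, so classified as 1
assert model(x) == 1
adv = random_noise_attack(x, budget=0.2)
assert adv is not None and model(adv) == 0
```

A scanner like Counterfit automates this loop across many published attacks and many models, recording which ones succeed to form the baseline the passage describes.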


New Attacks Slaughter All Spectre Defenses

The findings are going to obliterate a pile of work done by those who’ve been working hard to fix Spectre, the team says. “Since Spectre was discovered, the world’s most talented computer scientists from industry and academia have worked on software patches and hardware defenses, confident they’ve been able to protect the most vulnerable points in the speculative execution process without slowing down computing speeds too much. They will have to go back to the drawing board,” according to UVA’s writeup. The new lines of attack demolish current defenses because those defenses only protect the processor in a later stage of speculative execution. The team was led by UVA Engineering Assistant Professor of Computer Science Ashish Venkat, who picked apart Intel’s suggested defense against Spectre, called LFENCE. That defense tucks sensitive code into a waiting area until the security checks are executed, and only then is the sensitive code allowed to execute, he explained. “But it turns out the walls of this waiting area have ears, which our attack exploits. We show how an attacker can smuggle secrets through the micro-op cache by using it as a covert channel.”


Drake: Model-based design in the age of robotics and machine learning

The Drake developers have a philosophy of rigorous test-driven development. The governing equations for multibody physics are well known, but there are often bugs in a complex engine like this. If you scan the codebase, you will find unit tests that contain comparisons with closed-form solutions for nontrivial mechanics problems like a tumbling satellite, countless checks on energy conservation, and many other checks that help the rest of the team focus on manipulation with the confidence that the multibody models are implemented correctly. Importantly, this dynamics engine is not only for simulation. It is also built for optimization and for control. The exact same equations used for simulation can be used to compute forward or inverse kinematics and Jacobians. They can also be used for more complex queries like the gradient of an object’s center of mass. We provide smooth gradients for optimization whenever they are available (even through contact). Drake also supports symbolic computation, which is very useful for structured optimization and for use cases like automatically extracting the famous “lumped parameters” for parameter estimation directly from the physics engine.
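Drake’s own API is not shown here; as a minimal, self-contained sketch of the idea that the same kinematic equations serve both simulation queries and Jacobian computation (the planar two-link arm and all function names below are illustrative, not pydrake calls), note how the analytic Jacobian can be unit-tested against finite differences, mirroring the test-driven style the passage describes:

```python
import math

def forward_kinematics(q, l1=1.0, l2=1.0):
    """End-effector (x, y) of a planar two-link arm with joint angles q."""
    q1, q2 = q
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

def jacobian(q, l1=1.0, l2=1.0):
    """Analytic Jacobian d(x, y)/d(q1, q2) of the same equations."""
    q1, q2 = q
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def numeric_jacobian(q, h=1e-6):
    """Central-difference Jacobian, used as a correctness check."""
    J = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        qp = list(q); qp[j] += h
        qm = list(q); qm[j] -= h
        fp, fm = forward_kinematics(qp), forward_kinematics(qm)
        for i in range(2):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

q = (0.3, 0.7)
Ja, Jn = jacobian(q), numeric_jacobian(q)
assert all(abs(Ja[i][j] - Jn[i][j]) < 1e-6 for i in range(2) for j in range(2))
```

Drake extends this idea far beyond kinematics, providing autodiff and symbolic scalar types so the same multibody equations yield gradients for optimization and control.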


How to lead a digital transformation — ethically

Not all ethical imperatives related to digital transformation are as debatable as the suggestion that it should be people-first; some are much more black and white, like the fact that you have to start somewhere to get anywhere. Luckily, “somewhere” doesn’t have to be from scratch. Governance, risk and compliance (GRC) standards can be used to create a highly structured framework that’s mostly closed to interpretation and provides a solid foundation for building out and adopting digital solutions. The utility of GRC models applies equally to startups and multinationals and offers more than just a playbook; thoughtful application of GRC standards can also help with leadership evaluation, progress reports and risk analysis. Think of it like using bowling bumpers — they won’t guarantee you roll a strike, but they’ll definitely keep the ball out of the gutter. Of course, a given company might not know how to create a GRC-based framework (just like most of us would be at a loss if tasked with building a set of bowling bumpers). This is why many turn to frameworks and tools like IBM OpenPages, COBIT and ITIL for prefab foundations.


Use longitudinal learning to reduce risky user behavior

Longitudinal learning is a teaching method that is gaining traction within academia and, increasingly, corporate training. This continuing education approach involves administering shorter assessments of specific content (such as whether to click on a URL embedded within an email sent by an unknown user) repeatedly over time. Through a consistent assessment process, security concepts and information are reinforced so that knowledge is retained and accumulated gradually. Studies on longitudinal learning in healthcare showed that testing medical students in combination with explaining the information is the most effective way to drive long-term retention of information. Consistent, repetitive lessons are critical to help employees overcome the cognitive biases that cybercriminals count on to execute their attacks. The human mind is stingy; that is to say, the brain processes so much information daily that it is constantly trying to take shortcuts to save energy and enable multi-tasking. Cybercriminals know this, which is why impersonation attacks, phishing, and rnalicious URLs are so effective. Did you catch the typo in the last sentence? If not, look at the word “malicious” again.
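The “repeated assessments over time” idea can be sketched as a simple spaced-repetition schedule (a Leitner-style scheme; the box counts and review intervals below are illustrative assumptions, not from any cited study):

```python
def leitner_step(box, correct, max_box=3):
    """One Leitner-style update: a correct answer promotes an item to a
    less frequently reviewed box; a miss sends it back to box 0."""
    return min(box + 1, max_box) if correct else 0

REVIEW_EVERY = {0: 1, 1: 3, 2: 7, 3: 30}  # days between assessments per box

# A phishing-URL question answered correctly twice, then missed once.
box = 0
box = leitner_step(box, correct=True)   # promoted: reviewed every 3 days
box = leitner_step(box, correct=True)   # promoted: reviewed every 7 days
box = leitner_step(box, correct=False)  # miss: back to daily review
assert box == 0 and REVIEW_EVERY[box] == 1
```

The point of the scheme is the same as the passage’s: material an employee keeps getting wrong resurfaces often, so retention accumulates instead of decaying after a single annual training.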

Read more here ...
