Artificial Intelligence, Real Skills: Velocity as a New Approach to Hiring AI Talent
An AI in Training

The state of AI hiring in the U.S. is, at least based on current LinkedIn job listings, rather tight and concentrated. As of March 21, 2025, there are only about 2,400 open AI-related positions. However, the jobs that are available pay quite well, typically ranging from $150,000 to $200,000, with some roles offering over $325,000.

One of the more amusing aspects of these postings is their continued insistence on arbitrary experience requirements. For example, job listings commonly request candidates with “5+ years of experience in LLMs” or “5 years of experience in TensorFlow.” Considering that TensorFlow 2 was only released in 2019, unless they are exclusively targeting Google Brain alumni or early adopters, it would be quite difficult to find candidates who meet that requirement. These experience requirements often feel more like arbitrary gatekeeping than meaningful qualifications.

Beyond the specifics of years-of-experience requirements, the broader hiring approach in AI (and tech in general) remains counterproductive. It places too much emphasis on traditional markers like degrees and rigid experience levels, rather than actual capability. Personally, I am entirely self-taught in cybersecurity, programming, and AI, and I have done perfectly well for myself. I went from never having worked with the cloud to running full LLM-MLOps pipelines in production on GCP within a few months over the summer of 2024. In fact, the best engineer I ever hired had no degree at all—he taught himself Application Security, Kubernetes, Cloud Engineering, and more. He outperformed candidates with prestigious academic credentials because he had the ability to learn quickly, adapt, and solve real-world problems.

This raises a fundamental question: Does hiring based on "years of experience" or "specific degrees" actually result in better employees? Obviously not. Given the rapid evolution of AI tools and frameworks, hiring should be based on adaptability, critical thinking, and a candidate's ability to leverage modern tools effectively. The reality is that a strong engineer can learn Kubernetes, Helm, and DevOps while on the job, leveraging LLMs and other AI-assisted learning tools, all while still delivering production-ready solutions on schedule.

We need actual technical tests (not LeetCode) where candidates can prove their skills in Jupyter Notebooks and DevOps to show they can meet business needs and deliver production-ready systems. This is also a far fairer test because anyone can compete. Let candidates take the test first, and then if they pass, you know you should interview them. Resumes tell me where you have been (or what you want me to know); they don’t show me what you COULD do.

What I envision for hiring is a velocity challenge: have candidates build a simple full-stack application to cloud-native standards, incorporating some aspect of ML. For example, a medical reference app for EMS personnel: given a list of 30 medications in a spreadsheet, can they design a full-stack web application, containerize it, deploy it to the cloud, and explain their deployment strategy? For the ML component, I would look for implementations of simple decision trees (rules-based), embeddings with semantic search, or even the “lazy” method of incorporating an LLM API call to answer the question. I care less about the specific algorithm implementation and more about a detailed understanding of their engineering choices, considerations, and tradeoffs. I’d say this is a fair and reasonable MLOps test because it covers both ML techniques and the DevOps needed to package and deploy those techniques into a useful application.
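To make the retrieval side of such a challenge concrete, here is a minimal sketch of the kind of search component a candidate might start from. It uses bag-of-words term-frequency vectors with cosine similarity as a stand-in for learned embeddings, and the medication entries are hypothetical illustrations, not real clinical guidance; a stronger submission would swap in a proper embedding model and a real dataset.

```python
import math
from collections import Counter

# Hypothetical medication reference entries -- illustrative only, not clinical guidance.
MEDICATIONS = {
    "aspirin": "antiplatelet used for chest pain and suspected cardiac events",
    "albuterol": "bronchodilator used for asthma and wheezing respiratory distress",
    "epinephrine": "used for anaphylaxis severe allergic reaction and cardiac arrest",
    "naloxone": "opioid antagonist used to reverse opioid overdose respiratory depression",
}

def vectorize(text: str) -> Counter:
    """Bag-of-words term-frequency vector (a toy stand-in for learned embeddings)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def search(query: str) -> str:
    """Return the medication whose description best matches the query."""
    query_vec = vectorize(query)
    return max(MEDICATIONS, key=lambda m: cosine(query_vec, vectorize(MEDICATIONS[m])))
```

In an interview debrief, the interesting conversation is not the code itself but the tradeoffs it surfaces: why cosine over exact keyword match, when bag-of-words breaks down (synonyms, misspellings), and what changes when this runs behind a containerized API in production.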

With LLM-based coding tools from Cursor to Claude to Grok, the key here is velocity. Velocity is a vector, not a scalar: it's not just about building things fast, but about building the right things fast to meet the actual business requirements. This is why we need true velocity tests for engineering, especially in AI but across the software engineering industry as a whole, as AI takes on more and more of the actual "writing of code" and humans make the true engineering decisions.


More articles by Ari B.