Rejected by an LLM for a Job?
John Kruebbe
AI/ML & LLM/RAG Expert | Founder Diassu Software | Inventor Diassu Safe Products
Can you imagine the nightmare of having all the right skills and being totally qualified for the job, but never getting the call? What if it was just because your profile does not list Python, C++, or whatever the job poster was looking for? What if you took those skills off your resume to make it more concise, because you are looking for a management job and no longer want to focus that much on programming? But what if you need a job doing anything you are qualified for? I hope you see where this is going.
I think this is highly possible nowadays, because there are so many people unemployed that recruiters are not able to look at all the resumes. I spoke with at least two recruiters who said the job I was selected to interview for had 3,000 applicants. I believe them, too.
So we programmers, wanting to help, build a RAG LLM model to run all the resumes through and let it select the people to screen. Seems like a reasonable approach, right? Well, think again.
What if the AI LLM suggests to the manager or recruiter, as shown above, that you be rejected, even though you have all the skills? What if every company uses this model because it is, say, part of a popular hiring platform that all the big companies use?
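To make the scenario concrete, here is a minimal sketch of the kind of screener I have in mind. Everything in it is hypothetical: call_llm stands in for whatever chat completion API the hiring software actually wraps, and the prompt wording is my own.

```python
# Hypothetical sketch of an LLM-based resume screener (not any vendor's real product).
# call_llm() is a placeholder for whatever chat completion API the tool is built on.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your chat API here")

def screen_applicants(resumes: dict[str, str], job_requirements: str) -> dict[str, str]:
    """Ask the model for an INTERVIEW / REJECT verdict on every applicant."""
    verdicts = {}
    for name, resume_text in resumes.items():
        prompt = (
            f"Job requirements:\n{job_requirements}\n\n"
            f"Applicant material:\n{resume_text}\n\n"
            "Reply with INTERVIEW or REJECT and list which requirements are met."
        )
        verdicts[name] = call_llm(prompt)
    return verdicts

# With 3,000 applicants, nobody re-reads these verdicts one by one;
# whatever the model returns effectively becomes the decision.
```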
This is called bias, or a lack of knowledge in the model, and I experienced it recently while putting a summary through ChatGPT 4.0, BERT and Claude. They all did the same thing. What could be going on here?
Enter RAG (Retrieval-Augmented Generation). Say the company has a RAG function that actually goes out and searches the web, and one of your worst enemies wrote "John lacks technical skills" on Glassdoor, for example.
The RAG was looking at my LinkedIn summary. It was not looking at my resume or the job postings where I listed C++ and Java skills. This is obviously a problem with the RAG setup, but it brings to mind how easily AI can cut you out if the programmers have not gotten it right yet.
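Here is a toy illustration of that retrieval-scope problem. The documents and the keyword "search" are made up; the only point is that the model can only weigh the sources that were actually indexed.

```python
# Hypothetical illustration of retrieval scope: the model never sees documents
# that were not indexed, no matter how relevant they are.

documents = {
    "linkedin_summary": "Engineering leader focused on strategy and team building.",
    "resume": "15 years of C++ and Java; built trading systems and embedded firmware.",
    "glassdoor_review": "John lacks technical skills.",  # the hostile web result
}

def retrieve(indexed: list[str]) -> list[str]:
    # Trivial stand-in for a vector search: return everything that was indexed.
    return [documents[name] for name in indexed]

def has_skill_evidence(skill: str, indexed: list[str]) -> bool:
    return any(skill.lower() in doc.lower() for doc in retrieve(indexed))

print(has_skill_evidence("C++", ["linkedin_summary"]))            # False -> looks like a reject
print(has_skill_evidence("C++", ["linkedin_summary", "resume"]))  # True  -> worth an interview
```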
I asked the model to explain how it reached its conclusion step by step, and the trace showed that I simply did not have C++ and Java listed in the summary on my LinkedIn.
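If you want to run the same check, a follow-up prompt along these lines (my own wording, not a built-in feature of any of these tools) is usually enough to surface where the evidence came from:

```python
# Illustrative follow-up prompt for making the model show which documents
# its verdict was actually based on; the wording is an assumption on my part.

explain_prompt = """
You recommended rejecting this candidate against the C++/Java requirement.
Explain your reasoning step by step:
1. List every document or passage you used as evidence.
2. Quote the exact text behind each claim about missing skills.
3. For each skill you found no evidence of, say where you looked.
"""

# verdict_trace = call_llm(explain_prompt)  # call_llm as in the earlier sketch
```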
I am not saying that everyone should run out and put all their skills there. Keep on reading this TL;DR.
Even worse, what if it just means that I did not detail my technical skills, and the recruiter has to call me to ask what they are? But she or he has 500 of the 3,000 candidates to call.
So, obviously, some automation that actually works would greatly help this situation, and an LLM doing that job would be the right approach to help the recruiters.
Also, I do not blame the folks who created the model, or even the scenario, because they were just testing a beta model at this point. But it brings to mind a scenario that really could cause a group of people harm if we do not get it right. If I really could not find a job because of this, then who should be at fault? Should there be a government body that I could complain to?
So now I am unemployable. I go broke and become homeless. Now who is to blame? Is it really all on me? It should not be. I think that the model should be required, in a case like this, to go out and see what percentage of the details about me are negative, using a detailed, carefully constructed prompt. If it were, say, 80% negative, then maybe that could be a valid reject. Or would it?
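As a sketch of what I mean (classify_sentiment is a placeholder for another LLM call or a sentiment model, and the 80% threshold is just my example):

```python
# Hypothetical sketch of the "what fraction of the retrieved evidence is negative" check.

def classify_sentiment(passage: str) -> str:
    # Placeholder: in practice another LLM call or a sentiment model,
    # returning "positive", "negative", or "neutral".
    raise NotImplementedError("plug in a sentiment model or LLM call")

def negative_fraction(passages: list[str]) -> float:
    labels = [classify_sentiment(p) for p in passages]
    return labels.count("negative") / len(labels) if labels else 0.0

def reject_allowed(passages: list[str], threshold: float = 0.8) -> bool:
    # Only let the model's REJECT stand if, say, 80% of what was found is negative.
    return negative_fraction(passages) >= threshold
```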
What if I knew that I had the 80% negative score? What if I got 100 people offshore to post positive comments about me and bias the model back toward 10% negative and 90% positive? What is the ground truth now?
Since these things are so very complex, I call for a non-governmental body, say an AI steering body, created by PhDs in philosophy, computer science, law and ethics. Experienced LLM folks like me should be involved. We should make suggestions on what to do in these scenarios.
I would create an LLM model that is responsible for its own bias. I think this will be possible now that LLMs can do meta-prompting. We need to teach the model how to judge its own bias, report how much bias it has, and send that report to the user. There also needs to be an Explain button put back on the Bing Chat and OpenAI user interfaces so that you can know how the AI reached its conclusion. There needs to be a mitigation process for each model to get the information correct, and it should not fall to the team at OpenAI or Google or Microsoft to fix it. It does need to be enabled by them, though.
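Here is a rough sketch of that meta-prompting idea; the audit prompt and the 0-100 scale are my own assumptions, not anything these vendors ship today.

```python
# Rough sketch: after the screening verdict, a second pass asks the model to
# audit its own evidence and produce a bias report addressed to the user.

def call_llm(prompt: str) -> str:  # placeholder, as in the earlier sketches
    raise NotImplementedError("plug in your chat API here")

def bias_report(verdict: str, evidence: list[str]) -> str:
    audit_prompt = (
        "You produced the following hiring verdict:\n"
        f"{verdict}\n\n"
        "Evidence you were given:\n" + "\n".join(evidence) + "\n\n"
        "Audit your own reasoning:\n"
        "1. Which relevant sources (resume, work history, references) were missing?\n"
        "2. Rate 0-100 how much the verdict could change if they were included.\n"
        "3. Summarize this as a short report for the candidate and the recruiter."
    )
    return call_llm(audit_prompt)
```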
My questions are:
1. Why did you take that Explain button off of the Bing interface, or why was it never there in the first place?
2. What ELSE do you think we need to do?
3. Could you point me to some governing bodies that already fit this criterion? (I am about to ask ChatGPT 4.0, but I would still like to hear from you because I want to talk more.)