Why Relying on Data Can Cause More Harm Than Good
Jacob Morgan
5x Best-Selling Author, Futurist, & Keynote Speaker. Founder of Future Of Work Leaders (Global CHRO Community). Focused on Leadership, The Future of Work, & Employee Experience
Cathy O’Neil is a mathematician who has worked as a professor, hedge-fund analyst and data scientist. Cathy founded ORCAA, an algorithmic auditing company, and is the author of Weapons of Math Destruction.
Click here to listen to this week’s podcast.
Cathy says she was always a math nerd. She loves the beauty of mathematics – its cleanliness – and says it is almost an art. One of her favorite things about math is that it is the same no matter what country you go to. She also had an interest in the business world, which led her from academia to work as a hedge fund quantitative analyst.
Big Data is both a technical and a marketing term. The technical meaning depends on the technology you are using. Big data used to mean more data than you could fit on your computer – now it means more data than you can process in a simple way, data that needs to be transformed into another form before it can be used.
As a marketing term, ‘big data’ is misleading. What it really represents is the belief that data collected for one purpose can be reused for another. “It is a technology that allows us to collect seemingly innocuous data and use it for another purpose.”
One profession in which O’Neil has looked at the use of big data and algorithms in detail – and which she discusses in her book – is teaching, specifically teacher evaluations. She says teacher evaluation algorithms were originally designed to eliminate the achievement gap between ‘rich kids and poor kids’. Eventually, a new system was devised, called the ‘value-added teacher model’.
This new system was intended to improve on the previous way of assessing teachers, which looked solely at a teacher’s students’ final test scores.
The ‘value-added score’ system holds teachers accountable for the difference between their students’ final scores and what those students were expected to achieve.
O’Neil says that this method ‘sounds good’ and seems to ‘make sense’. However, with only 25 (or so) students in one teacher’s classroom, there is not enough data. Additionally, both the expected and the actual scores carry a lot of uncertainty. So the final number ‘ends up not much better than a random number’ – not credible enough to base important decisions on, such as terminating a teacher’s job.
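O’Neil’s small-sample point can be illustrated with a quick simulation. The sketch below is hypothetical – the noise and teacher-effect magnitudes are made-up assumptions, not numbers from her book – but it shows why averaging 25 noisy per-student gaps produces a ‘value-added score’ that barely agrees with itself from one year to the next.

```python
import random

random.seed(0)

N_STUDENTS = 25   # students per classroom (as in the article)
NOISE_SD = 20     # assumed per-student score noise, an illustrative value
TEACHER_SD = 3    # assumed spread of true teacher effects, also illustrative

def value_added_score(true_effect):
    # Each student's "actual minus expected" gap is the teacher's true
    # effect plus noise from everything the model cannot see.
    gaps = [true_effect + random.gauss(0, NOISE_SD) for _ in range(N_STUDENTS)]
    return sum(gaps) / N_STUDENTS

# Score the same 1,000 simulated teachers in two separate "years".
teachers = [random.gauss(0, TEACHER_SD) for _ in range(1000)]
year1 = [value_added_score(t) for t in teachers]
year2 = [value_added_score(t) for t in teachers]

def corr(xs, ys):
    # Pearson correlation, computed by hand to stay dependency-free.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

# A low year-to-year correlation means the score is mostly noise.
print(round(corr(year1, year2), 2))
```

With these (assumed) parameters the noise in a classroom average is about the same size as the real differences between teachers, so a teacher’s score in one year is a weak predictor of their score the next – echoing O’Neil’s ‘not much better than a random number’.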
One of O’Neil’s main points in today’s discussion is that every algorithm is subjective. Whether it is used to evaluate teachers or to hire and fire employees in a financial organization, people should know that they have the right to ask to have the algorithm explained to them. The 14th Amendment provides ‘due process’ to ask why they were terminated, not promoted, etc. – rather than just being pointed to an algorithm’s result.
Jacob Morgan is a best-selling author, speaker, and futurist. His new book, The Employee Experience Advantage (Wiley), analyzes over 250 global organizations to understand how to create a place where people genuinely want to show up to work. Subscribe to his newsletter, visit TheFutureOrganization, or become a member of the new Facebook Community The Future If… and join the discussion.