Yes, that’s right, some algorithms are biased; we have to work on improving the data we feed them


The story began on Twitter: David Heinemeier Hansson (DHH), a respected entrepreneur and the creator of Ruby on Rails with more than 358,000 followers, shared a thread about how his wife, who had applied for the same Apple Card as him, was given a credit limit 20 times lower than his, even though they file joint tax returns and she has a better credit history than he does.

The pointedly worded tweet soon went viral as people with similar experiences shared their stories, and DHH said the company had ignored his appeals. The thread has since triggered an investigation by the New York State Department of Financial Services into possible gender discrimination, while the card's issuer, Goldman Sachs, has announced it will reassess how it calculates credit limits.

Coming from an experienced developer, DHH’s complaint, which blamed Apple exclusively for the problem, is curious: a machine learning system reflects the data we feed it, and even in developed countries (and more so in other parts of the world), there are still strong gender biases, particularly in financial environments. Obviously, this needs to be corrected, and in most of those developed countries this is what’s happening — we could argue about whether this is happening quickly enough — but that, I fear, does not address the issue of data already used to feed algorithms.

Biased machine learning algorithms do not necessarily indicate any kind of intentionality or discrimination on the part of the company that markets a product or service. Discrimination is the result of human intervention, and therefore must be corrected, typically after consumers complain. Believing that algorithms are biased by nature is as mistaken as believing they are infallible: in reality, they are simply mathematical methods that process data. If the data is biased, those biases are reflected in the decisions the algorithms make, and will remain so until the data is corrected or the bias is mathematically discounted, for example by reweighting the sample. This is a topic I have been discussing in my classes for years, and it is not so hard to understand or apply from a purely statistical standpoint.
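To make the statistical point concrete, here is a minimal sketch of one common correction: giving each record a weight inversely proportional to its group's frequency, so that an under-represented group is not drowned out during training. The data, group labels and function name are purely illustrative, not taken from any real credit model.

```python
from collections import Counter

def reweight(samples, group_key):
    """Assign each sample a weight inversely proportional to its group's
    frequency, so every group contributes the same total mass."""
    counts = Counter(s[group_key] for s in samples)
    n_groups = len(counts)
    total = len(samples)
    return [total / (n_groups * counts[s[group_key]]) for s in samples]

# Toy biased sample: three records from group "a", one from group "b".
data = [{"group": "a"}, {"group": "a"}, {"group": "a"}, {"group": "b"}]
weights = reweight(data, "group")
# Each "a" record now weighs 4/(2*3) ≈ 0.67, the "b" record 4/(2*1) = 2.0,
# so both groups carry equal weight in any sample_weight-aware training step.
```

Most mainstream libraries accept such weights directly (for instance, via a `sample_weight` argument at fit time), which is exactly the kind of mathematical discounting described above.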

Machine learning is one of the most interesting and promising technologies for achieving automation far more sophisticated, powerful and flexible than explicitly programmed variables, conditionals and loops. Applying machine learning means accessing much more intelligent, powerful and flexible rules, which are corrected as the sample data evolves, reflect variations driven by many factors, and have the capacity to improve over time. There is even talk of using history itself as a huge database to feed algorithms, something that will require professionals of many different kinds, combining technical and humanistic disciplines.

Accusing algorithms of malice, or Apple of chauvinism, is, at this stage of the game, as naive as assuming they are possessed of some mystical power. Machine learning will continue to generate biased algorithms because of the data we feed them, and those biases will have to be monitored and eventually corrected. Protesting makes sense, and companies must take action. But not all discrimination is the result of explicit intention or prejudice: sometimes it stems from implicit biases that are not easy to visualize and that are rooted in the mathematics.

Machine learning will soon be an integral part of our lives, and the Machine Learning as a Service (MLaaS) market will grow rapidly in the coming years. What's more, companies that opt for automation will likely beat those that do not. When you use machine learning, assume that biases you have not taken into account may appear, and be ready to respond appropriately when they do. If you automate, be sensitive to the problems and biases that automation can produce. We will have to learn to live with algorithms, and that means both the companies that use them and the customers who experience them must understand that they are not perfect, and sometimes plain wrong. Let's learn to recognize this without resorting to conspiracy theories or overreacting.
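What "being ready to respond" can look like in practice is continuous monitoring: routinely comparing the outcomes a system produces across groups and flagging large gaps for human review. The sketch below compares mean outcomes between two hypothetical groups of applicants; the figures and the 0.8 threshold (a rule of thumb loosely inspired by the "four-fifths" convention used in US employment law) are made up for illustration.

```python
def disparity_ratio(outcomes_by_group):
    """Return the min/max ratio of mean outcomes across groups.
    1.0 means perfectly equal averages; values near 0 mean large disparity."""
    means = {g: sum(v) / len(v) for g, v in outcomes_by_group.items()}
    return min(means.values()) / max(means.values())

# Hypothetical credit limits granted to two groups of applicants.
limits = {
    "group_a": [20000, 18000, 22000],
    "group_b": [1000, 1500, 1200],
}

ratio = disparity_ratio(limits)
if ratio < 0.8:  # illustrative threshold for flagging a gap
    print(f"possible bias: disparity ratio {ratio:.2f}")
```

A check like this does not prove discrimination, of course; it simply surfaces the gaps that deserve the kind of scrutiny the Apple Card is now receiving.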


(En español, aquí)

