What Do Machines Really Learn?

Some musings on critical thinking about "machine learning": whatever the machinery "learns" about people, it learns only from strings of bits that have been supplied to it. "Unsupervised learning" isn’t learning, and if we think more critically, we might notice that it's highly supervised.

Another thing we ought to notice: the machinery doesn’t learn about people. It produces data about strings of bits that represent people and their machine-readable behaviours — not their deep, true, real intentions, thoughts, emotions, or social relationships.

You could say that when machine learning arrives at a result that’s consistent with training data, that result is consistent with "what people think". But which people? Under what conditions? Are they like a cohesive, compassionate family? Or more like an incoherent, angry mob?

Whatever else happens, the "success" or "failure" of machine learning algorithms is judged by people. Decisions about algorithms and outcomes are based on the values, norms, and ethics of the people running the businesses that commission the algorithms. Pause and reflect on that. Perform the fundamental act of critical thinking: ask "how might we be fooling ourselves?"

Here's one way. Once again: media — all tools and technologies that we create or use to effect change — are agnostic to our intentions. They extend and amplify what we are, what our organisations are, and what our world is, and they do so in all directions at once, good and evil, moral and immoral.

Another crucial element of critical thinking is to focus on what's missing. Machine learning is literally limited to bits of ourselves. Walk around in your neighbourhood. Joke with the woman at the samosa shop. See the dad with the stroller. Smile back at the cyclist. Reflect on the weather and climate change. None of that gets into a training data set.

Instead of thinking of AI simply as a lens on data, it is crucial to think of it also as a mirror. AI and machine learning can help us to see what we're like. But we don’t see ourselves if we only look at data through AI. We must look at AI, and look for our own reflections in it.


Charmaine Short

Test Manager | Blockchain | FinTech | Rising Women in Crypto Power List 2023 | DLT Payment Systems | Web3 Writer

5y

I agree that unsupervised machine learning is not actual learning and may not provide much usable value to humans. However, we could think of unsupervised learning as a child doing homework based on what they learned from the teacher in class. The machine takes the new unlabelled data and applies the algorithms it has already learned to identify any usable patterns or clusters in the data, producing an output and a confidence level, e.g. 75% certain that Michael is a boy's name. The next step is reinforcement learning (similar to a teacher marking the homework) to verify whether the machine's answers were right or wrong. This tunes the algorithm to make more accurate decisions. Depending on the purpose of the machine, we could include emotions and social conversation topics such as the weather in the training data, for example if the machine is meant to help reduce loneliness.
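The homework analogy can be sketched in miniature: a clustering pass over unlabelled numbers, plus a rough confidence score for each point's assignment. Everything here is an illustrative toy under assumed names (the `kmeans_1d` routine and the distance-ratio `confidence` heuristic are made up for this sketch, not any real library's API).

```python
# Toy 1-D k-means: the "machine" groups unlabelled numbers into k clusters
# and reports a crude confidence for each assignment. Illustrative only.

def kmeans_1d(points, k, iters=20):
    # Initialise centroids to the first k distinct values.
    centroids = sorted(set(points))[:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centroid.
            i = min(range(k), key=lambda j: abs(p - centroids[j]))
            clusters[i].append(p)
        # Move each centroid to the mean of its cluster (keep it if empty).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

def confidence(p, centroids):
    # Crude heuristic: compare nearest vs. second-nearest centroid distance.
    d = sorted(abs(p - c) for c in centroids)
    return 1.0 if d[1] == 0 else 1 - d[0] / (d[0] + d[1])

data = [1.0, 1.2, 0.9, 9.8, 10.1, 10.3]
cents = sorted(kmeans_1d(data, 2))
print(cents)                          # two cluster centres, near 1.03 and 10.07
print(confidence(1.0, cents))         # high: 1.0 sits close to one centre
```

Note that no label ever enters the process; the "75% certain" style of output is just a distance ratio dressed up as certainty, which is rather the essay's point.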

Shane MacLaughlin

Managing Director at Atlas Computers Ltd

5y

With respect to what gets into the data, this comes down to what you teach it to look for in the training data. That could include a smile or other emotional indicator, but only if the AI is looking for smiles. It can also include something like inadvertent prejudice, as described in this article: https://sloanreview.mit.edu/article/the-risk-of-machine-learning-bias-and-how-to-prevent-it/ More than just about any other application, AI needs really good testing, and while most AI training data is partitioned to include independent test data, top-end human testing skills really pay for themselves here.

Sumon Dey

Software Engineer | Apple

5y

Thanks for the article, Michael Bolton. At present, with all the development we have in this field, we are able to build only Weak AI, and we are far off from Artificial General Intelligence (AGI). Weak AI doesn't have the ability to perform the full range of human cognitive abilities, including emotion. So the data and model outputs are quite biased. Maybe, in future, emotional intelligence will take care of this bias.

Martin Zedeler

QA | Test Management | build the right software the right way

5y

Hi Michael - I like how simply you put it. I have had this discussion with several AI people and they don't get me. I am going to try to build on your musings the next time the topic comes up.
