Is the Increased Focus on STEM Misguided?

Recently, I read an account of concerns among Australian university professors about foreign students who did not speak English to any meaningful degree, yet managed to graduate from university studies, even master's programs, by having artificial intelligence complete assignments for them. The story went on to say that even though professors could clearly see what was going on, they were often under pressure to let the students pass. The likely explanation is that these foreign students largely cover the universities' operating costs.

The other day, I tried having ChatGPT write a short essay for me on a rather specialized topic in philosophy. I then asked a friend of mine, who is a philosophy professor, to review the essay and tell me what grade he would give the author if the author were a first-year philosophy student. The answer came back quickly: he would get a five.

This professor was under no pressure; the AI-generated essay was simply good enough to earn a passing grade, even though it wasn't particularly impressive in itself. The prompt I gave the model was, after all, quite brief. With a more detailed prompt and requirements for structure, such as citations, I could have obtained a considerably more refined result, and my friend might have given the AI a better grade for the assignment.

Another friend of mine, also a university professor, recently told me that what has changed with the advent of artificial intelligence is that the assignments he receives are often significantly better written than before.

The question these professors and other educators naturally face is how to handle assignments that students have AI do for them.

Is there a difference between a student using AI to complete an assignment and, for instance, stealing it or having someone else do it? If so, what is that difference?

To use artificial intelligence effectively, it is necessary to have a good grasp of the subject matter and to be able to articulate one's own thoughts reasonably clearly. The outcome depends on how well-crafted and clear the prompt is, and on the user's ability to evaluate the result and continue interacting with the model to improve and refine it. Isn't this ability simply something that needs to be fostered in students?

There has been considerable discussion lately about assessment in primary schools, particularly standardized assessments. Certainly, it is key that standardized assessment is carried out: it is essentially the only way schools can compare themselves with others, assess their own position, and take corrective action if they fall behind. For we must not forget that the ultimate purpose of primary school is to give all children the most equal opportunities possible, regardless of economic status, class, or parental background.

But now that artificial intelligence has made its full entrance into the school system, it is crucial that when standardized assessments are implemented, proficiency in its use becomes one of the key factors measured. It is also important to keep in mind that proficiency in using artificial intelligence is a direct result of language comprehension and linguistic ability. Perhaps, instead of increasing the emphasis on science and technology education (STEM subjects), as has been much discussed recently, it would be more appropriate to make language learning, literature, and philosophy the core of the curriculum? Language comprehension, vocabulary, and clear thinking are, after all, fundamental when it comes to utilizing artificial intelligence. Could not someone who has best mastered these skills then have the AI handle the modeling and calculations? This question is worth pondering.

More articles by Thorsteinn Siglaugsson

  • Rethinking LLM Integration: Not Software, but a Skilled Employee

  • Finding Constraints with the Help of AI

  • What is a Direct and Immediate Cause?

  • Are We Building a New Tower of Babel?
