AI & the Human Mind
Sienna Jackson, MBA, MS
Sustainability, Impact Measurement and Management | CEO & Founder | Board Member
I'm very late to the "think pieces about artificial intelligence" game. That is more or less a deliberate choice. A few weeks ago, my LinkedIn feed was blowing up with viral posts about ChatGPT, AI, and its many uses and (potential) abuses. I don't work in AI/ML, and I have no particular expertise in this area. I'd much rather sit back and listen to the experts analyze the technology and the potential implications of its use.
However, I do feel that layman observers can also think critically about AI. It is, at the end of the day, just another ingenious tool, like computers and the Internet. Any tool can be misused or abused if we're not mindful about how we use it.
I'm concerned about the protection of creators' rights and intellectual property when it comes to AI-generated artwork, for instance. When I was a member of the LA Music Leaders Roundtable, we advocated for the rights of creatives and artists, working closely with Rep. Judy Chu's Creative Rights Caucus.
Looking back, I don't think any of us could have imagined artists filing a class action suit against Stability AI, DeviantArt, and Midjourney. (The suit claims that the Stable Diffusion tool used by those platforms was trained on billions of copyrighted images scraped from the internet and contained within the LAION-5B dataset, which were downloaded and used by the companies “without compensation or consent from the artists.”)
So therein lies an example of AI being misused (by humans, mind you): to allegedly exploit the creative work and intellectual products of human minds and present them as "AI-generated."
Another misuse of AI that will not serve us is plagiarism and academic dishonesty. As a current UCLA student, I'm on plenty of listservs for the school's various departments and offices, and last week an email went out to the graduate and undergraduate student body from our dean of students entitled: "Expectations Regarding ChatGPT and Other AI Tools in Academic Work."
You can guess as to what those expectations might be. I was having a conversation just the other day with someone, and we discussed the harms that AI might do to student learning outcomes:
If a student can eschew the tedium of writing an essay by simply inputting a prompt to ChatGPT and passing off the output as their own original work, that student is not going to develop and refine fundamental skills (critical thinking, rhetoric, research, &c.).
These skills are what I call "heritage knowledge": the fundamental intellectual skill sets that make up our cognitive capacity as humans, namely, how much information we can retain and what tasks our minds are capable of performing. The brain is an organ of the body; if it isn't exercised and stimulated, it will atrophy, and its cognitive potential will diminish.
If we begin to habitually outsource these tasks to AI (critical thinking, problem solving, devising new ideas or rhetoric, coming to a conclusion, comparative analysis, and so on), I fear that our own cognitive ability to accomplish these tasks independently will deteriorate.
The research that I've seen on this topic is more concerned with AI's impact on instructor-learner interactions, and it will take some time for the literature to catch up to the implementation of this new technology. (Here's an interesting study of "the Google Effect" on memory.) The pandemic (namely, the shift to online learning) effectively erased two decades of U.S. progress on math and reading, and critical thinking in students of all ages (K-12 and college) has been steadily declining (see also this "State of Critical Thinking" study from 2021).
Put it this way: If you've read my article up to this point and resonated with what I've said, and I then told you this entire article was generated by feeding prompts into ChatGPT, would you still feel that this article was "mine"? My thinking, my analysis, my thoughts? Probably not.
I could have jumped on the "think pieces about artificial intelligence" bandwagon much earlier if I had used AI as a time-saving measure. I could have generated a passable op-ed and published it to LinkedIn in a matter of minutes.
But instead, I did things the slow way: I read other people's analyses of the issue, played with the tool myself to see how it worked, looked up some peer-reviewed academic sources to ground my conclusions, and then sat down at my desk and wrote everything that you've read up to this point, including these words right here.
It took an hour rather than a minute, and the results are far from perfect. Yet, ultimately, I have benefited far more from forcing my meat brain to do the work than if I had foisted the task off on a clever tool to do it for me.
So, do I think AI is going to end humanity? Of course not. Do I hope that we humans still find value and reward in doing things ourselves, and that we don't sacrifice our personal development in the name of efficiency? Absolutely.
--