University of Minnesota vs. A PhD Candidate Over, You Guessed It ... ChatGPT Usage
As ChatGPT scales in society (along with other AI platforms), we are beginning to have a bit of a reckoning with what it all means. One story that went semi-viral a few months ago involved a teenager in Orlando who died by suicide, apparently after a bot based on Daenerys from Game of Thrones encouraged him to do so. Eek. (Although, again, there needs to be some discussion of parenting and community within that story.) Then, a few days ago, we learned that the Las Vegas Cybertruck bomber, Matthew L., had used ChatGPT to plan routes and charging stations and even, purportedly, to figure out some of what he needed. Alright.
Clearly this thing is here to stay, and if you believe Sam Altman and company, it might even get more advanced in the next year or three. Now we have a new case out of Minnesota, that of Haishan (sp?) Yang:
I actually went to the University of Minnesota from 2012 to 2014, and worked there too. It’s a pretty off-task place, honestly; I once had a professor there tell me that they “love international students” because, and I quote, “it’s easy money.” Nice.
It looks like Yang was studying policy issues that impact the U.S. population — try “not making enough money” as one — and lost his visa when he got expelled. He previously earned a Masters and a PhD from European universities, and has decamped to Africa since this situation.
Hannah Neprash was the professor who booted him, and she is the one he is suing. Yang claims she may have altered some things to prove that he was using ChatGPT — and at this point, I think we also need to admit that if you put the same input into GPT two or three times, you might get two or three different outputs. (It’s like Google in this regard.) So can you even “prove” someone used it, aside from running some “AI-driven plagiarism” detection tool?
Bryan Dowd was Yang’s advisor, and he is standing behind him. If you watch the above video, Dowd has apparently said in legal filings that “this was not the first attempt to expel Dr. Yang.”
Now, from my two years at Minnesota, I can tell you that it’s very minority-heavy, especially in the grad programs. I cannot see a general bias against Asians there, although it could occur in specific departments, sure. Maybe Yang was over-educated (PS: one of his degrees was actually from Utah State, so I was off on “European universities” above) and rubbed some people, like Ms. Neprash, the wrong way. Maybe she tried to fit a narrative about GPT usage to get rid of him.
But then we’d need to wonder: why would a guy with basically five degrees already need to use ChatGPT? And if he’s studying population policy, there are literally entire libraries of information about all that. Why do you even need the Internet, really?
This seems like someone trying to back-fill a narrative to get rid of someone they didn’t like. Corporate bosses do that literally hourly, but we’re supposed to believe it doesn’t occur in academia (even though it absolutely does; human beings are flawed creatures).
There is a bigger picture to a story like this. It’s almost similar to some of the bullshit MLB arguments in the late 1990s that since “everyone” was using steroids, how can you punish anyone? Clearly a ton of people use these tools for their schoolwork, and I can tell you from some projects I have been on that plenty of parents use them for their kids’ schoolwork too. If it’s so pervasive, where are the rules? How do we regulate it? And can academia agree on the guardrails, or not?
Where do you come down so far?
Comments

It's just me · 1 month ago
The interesting thing is, academia is cutting from the bottom up. If they were top-heavy before, just wait. It'll be strictly the friends-and-family plan.

Senior Talent Acquisition Specialist at ESS Inc. · 1 month ago
"And if he’s studying population policy, there are literally entire libraries of information about all that. Why do you even need the Internet, really?" Because some of these AI models have already shown that one of their primary benefits may be the synthesis of research. No human being can read EVERY study on subject X. AI can, and it can potentially spot the connections within a field of research, and between different fields, that humans would miss simply because the mass of data available isn't penetrable to us. We've generated too much information; one of the primary and immediate benefits of AI is its lack of limitation in this regard. As an easy-to-use intelligent 'card catalogue' it can already shine and add MASSIVE value to academic research. Sabine Hossenfelder had a recent video on this: https://www.youtube.com/watch?v=Qgrl3JSWWDE