My AI Writing Compendium
I decided to make a collated list of the major AI articles I have written; 10 so far. If someone really wants to know what I think about AI, they shouldn't have to wade through all the other postings: views on politics, snarky commentary on the news, essays on free will, etc. One is 6 years old and the rest are from the last year or so. Here they are:
My first major article on AI, from 2018. Have a look and see how my predictions and thoughts have panned out. You can see the beginnings of my unusual type of writing, which combines science, technology, philosophy, and introspection on thinking. There is apparently a fairly narrow readership for such things. It contains one of my favorite lines: "Deep learning is an electronic eye, not an electronic brain."
Article on ChatGPT and the software profession. It probably seemed more original at the time, but I think it still has some things of interest. I predict that things like LLM-driven IDEs will help us write code. GitHub Copilot already existed, unbeknownst to me, so I wasn't really ahead of the curve.
This article is on the effect LLMs will have on the future of writing. I think there are some insights here worth reading about how both writing and reading will become more dynamic, interactive processes. I really like this article, but it didn't take off with my technology followers. I suspect people will look back in 10 years and see it as prescient.
A short, simple article with a common-sense argument from AI history for why LLMs might not lead to AGI.
This article was probably the start of my growing pessimism about how useful generative AI will be in industry, at least in the short term. I suppose I have come to be seen as an AI critic or pessimist, though I don't really consider myself to be that. More of a realist. Much of this is based on what I see with my clients, which tend to be large corporations, many of them fairly glacial in changing technology paradigms. I suspect I should have more optimism for smaller companies and startups coming up with ideas I haven't thought of.
This article came about as a result of a couple of months of nightly interaction with ChatGPT (version 4), where I tried to identify what it is good at and what seem to be its inherent limitations. I came up with 12 problems that people are pretty good at answering but that all LLMs seem to fail. To this day, none has scored more than 3 or 4 out of 12. I believe these problems identify core limitations of transformers and hint at what needs to come next. I hope problems like these can help AI developers think about what changes and innovations are needed to move forward.
In this article I attempt to imagine what software development might look like in the future as LLMs become an essential part of it and capitalists do their thing to bring market principles to the sharing of software, development output, and skills. It becomes a bit speculative or vague in places, but it is still a decent article. The key idea is that we can greatly advance productivity once we realize that massive amounts of resources are spent solving the same problems at different companies. Eliminating that inefficiency is something that technology, capitalism, and markets should be able to deliver.
One article where I try to make the case against the current approach and ideology of AI ethics, in particular the application of censorship to LLMs. Some of my political leanings (radically moderate, libertarian) are apparent, but one can't really talk about ethics free of political leanings. I think this is an important article that is definitely out of consensus in Progressive-dominated tech, which is why you should probably read it.
This one discusses LLMs as manifolds in a high-dimensional space that model a probability distribution, and why that implies LLM performance will soon saturate. There is some indication that this is already happening. It seems to be my most popular article and one that generated a ton of discussion.
This one is sort of my magnum opus (i.e., long) article on the limitations of LLMs and the difficulty we are likely to have working them into industry. The key idea is that today's AI, such as LLMs, is limited to the pattern-recognition mode of thinking. I demonstrate how typical human tasks, even simple conversations, involve a complex process of mode switching, and that pattern recognition is only one mode of many. At the end I suggest some ways forward. One idea I put forward is essentially the same as RAG, though I was simply ignorant at the time that such things already existed. (I'm particularly good at predicting what already exists.)
Like the title says, this is about how we need to think differently about LLMs. We are often mistaken in how we apply them, and we likewise complain unjustifiably about their shortcomings when we use them for the wrong things. I argue that we are making a mistake by treating them as normal technological components to rush into production to beat the competition. Instead we need to step back, think, and appreciate the need for a new golden age of user-experience design.
More articles to come