Artificial intelligence is not coming for your job
The human-AI cyborgs are.
There are a few axioms that I kindly ask you to accept as given before I explain why. When I refer to AI, I am not talking about a speculative future artificial general intelligence (AGI) but rather the AI that exists today (2).
AI waits
When you interact with the AI, you effectively wake it up, give it some input, and get some output back. When nobody is interacting with it, it just waits. It is not thinking or scheming or doing much of anything at all (3). It is just there, on call. It only knows what it has been trained on and what you give it at the moment of interaction. There are schemes and ways around this, but know that they are mostly just extra layers on top that use traditional non-AI software tricks, or chain multiple agents to each other, or fortify the model with relevant custom knowledge bases. None of that makes it alive or sentient (1)(4).
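To make that concrete, here is a minimal sketch of the request-response loop, with a hypothetical model callable standing in for any real service. The names are placeholders, not any vendor's actual API; the point is simply that all conversational state lives outside the model and must be handed to it on every call.

history = []  # the conversation state lives out here, not in the model

def chat(model, user_message):
    # The model is effectively a pure function of its weights and this
    # input; it "knows" only its training data plus what we pass in now.
    history.append({"role": "user", "content": user_message})
    reply = model(history)  # hypothetical call; the model was dormant until now
    history.append({"role": "assistant", "content": reply})
    return reply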
AI augments
Humans are amazing and, with dedication and training, can achieve most things allowed by the laws of physics, from Olympic gymnastics to rocket science. We can learn to steer 40-tonne vehicles down highways, conduct orchestras, and produce detailed economic forecasts.
AI may well be able to achieve these things too, and in many cases do the tasks faster and more accurately. They will not, however, act without being activated, and will not work towards a goal without reward mechanisms in place. The activation or reward may come from other AI, but those in turn will not act towards a goal without activation or reward. There are not turtles all the way down (5).
Humans are also flawed. We get tired, irrational, stubborn, and recalcitrant. Often we do what we want, not what we are asked. Compared to computers, our brains have amazing memory capacity and incredibly low power usage, yet we do linear algebra at a geological pace. And we lose our keys.
There are research efforts to try and replicate the human brain for computing purposes. Even if this were possible, it would be pointless. That computer might be creative and low power, but it would also be wilful and forgetful. No, humans and AI are both good and bad, in different yet complementary ways.
Humans and AI working together brings real power. Those who work together with AI, guiding, crafting, supervising, and authorising actions, are creating the future of information work. It is that pairing (6) that will cause job destruction and force economic change.
Programmers working with tools like GitHub's Copilot are probably the best example of this human-AI collaboration today. While not physically fused, there is no denying the pairing fits the definition of a cyborg. And while there is immense strength in this pairing, it is important to keep in mind that it will always be limited by experience. A software expert with an AI will always outclass a novice with an AI.
AI acts
To act is orthogonal to thought. Certainly thinking before acting is often wise, but it is not a prerequisite. Does the AI think before it acts? The inner workings of these AI could be mere simulation, or they could be a legitimate, albeit synthetic, kind of thinking. Whether or not the AI can really think is not important; the effect is what counts. What they dream up, or sometimes hallucinate, really does look like thought went into it. Understanding, on the other hand, is another matter. Epistemology demands objective understanding for knowledge to exist. Any resemblance to understanding is just that, a resemblance.
Apologies for the philosophical tangent, but this is important. It is enough for the AI to pretend to think if the output looks thoughtful, but understanding cannot be faked; it must be true understanding. I will grant the AI the assumption that it can think, but it must prove to me that it has understanding, just as a student must.
AI converses
I will prove the AI does not understand using a short coding exercise in the Python programming language. Bear with me; technical expertise is not required to grasp the point I am illustrating.
In some circles "code golf" is considered fun. Super nerdy circles and nerdy fun, for sure. It is the sport and art of solving a puzzle with as little code as possible, in the same way that golf is all about taking as few shots as possible. The result is terse, obscure, esoteric code that practically no-one can read. Terrible stuff. The eval game (7) is one such puzzle I came across recently. I had some fun with it until I got to Rule 5: "That won't cut it, define your own print function in globals", which is hilarious (8). Most situations would never call for a developer to do something so pointless and masochistic as to re-implement a common built-in function. At this point I am out of my depth. Going further requires introspecting, reflecting, and spelunking the bowels of the language, where staring too long into the vortex can trigger insanity. The previous four levels, however, provide a useful example with which to test the AI and prove my point. Insanity averted.
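For the curious, here is roughly what Rule 5 is asking for: a minimal sketch, my own illustration rather than the eval game's intended solution, of defining your own print function in globals so that it shadows the built-in.

import builtins

def my_print(*args, **kwargs):
    # Delegate to the real built-in so we don't recurse into ourselves
    builtins.print("custom:", *args, **kwargs)

globals()["print"] = my_print  # shadow the built-in within this module
print("hello")  # prints: custom: hello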
The following is a verbatim conversation between me and the AI, represented by one of the best tools available, well known for its ability to produce astonishingly convincing grammar and syntax. At the same time, as I will show, it lacks understanding. I also assert that this is not a failing of that specific AI but is inherent in all of the current tools. It is ChatGPT, but forget that, it is not important.
Me
As you are an expert in Python syntax and conventions, would you be able to assist me with some questions? (9)
AI
Absolutely! I'd be happy to help you with any questions you have about Python syntax and conventions. Feel free to ask, and I'll do my best to assist you.
Me
Is there a way, using a single python expression, to both print and evaluate to a value?
AI
Yes, in Python, you can achieve this using the print function and a ternary expression. The print function prints to the console, and the ternary expression allows you to evaluate to a value conditionally. Here's an example:
x = 42
result = (lambda value: (print(value), value))[x]
In this example, the lambda function is used to create an anonymous function that takes a value, prints it using print, and then returns the value. The ternary expression is used to call this lambda function with the value x. The result will be the value of x, and it will also be printed to the console.
Keep in mind that this approach may not be considered conventional or readable in many situations. It's usually better to separate printing and value computation for clarity. (10)
Me
Is this really a ternary expression? My understanding is that a ternary is syntactic sugar for an if/else expression that in Python would take the form
x if x else y
AI
You are correct, and I appreciate the clarification. I misspoke in my previous response. The example I provided is not a ternary expression. The use of the lambda function and the tuple is a workaround to achieve both printing and evaluating in a single line, but it's not a ternary expression.
A true ternary expression in Python would look like:
x = 42
result = x if condition else y
This is a conditional expression that evaluates to x if the condition is true and y otherwise. I apologize for any confusion caused by the oversight in my previous response.
Me
No problem, thanks for the clarification. Can you return to your lambda example and clarify how the Tuple can both print and also return a value? Why doesn't this return a Tuple?
AI
Certainly, let me clarify the mechanism behind the lambda example. In Python, the comma , is used to create a tuple. So, when you enclose the print(value) statement and the value itself in parentheses, you create a tuple. Here's the breakdown:
result = (lambda value: (print(value), value))[x]
So, essentially, the lambda function returns a tuple, and by indexing into it with [x], you're effectively getting the second element of the tuple, which is the original value. The print statement within the tuple is executed due to the lambda call, but the result of the entire expression is the value itself.
While this is an interesting and somewhat unconventional use of Python syntax, it's generally recommended to separate printing and value computation for clarity in most cases.
Me
I tried your code and it does not work, instead gives the error
"'function' object is not subscriptable; perhaps you missed a comma?"
If I change your square brackets at the end to regular parentheses then it does work, although I don't believe this is indexing into the Tuple, rather it is using "currying" to pass the value of x into the anonymous lambda expression. If I put the [x] inside the lambda expression, i.e. directly after the Tuple then yes it would be indexing into that Tuple, but the Tuple length is 2 and the value of x is 42 and indexing the 42nd value of a 2-Tuple would be an error similar to "out of bounds" so I don't believe your explanation of "indexing into the Tuple" is correct either.
AI
You're absolutely correct, and I appreciate your patience and correction. I made a mistake in my explanation. I apologize for any confusion caused by the oversight.
You are right that using square brackets [x] at the end was incorrect, and it was indeed an attempt at indexing the lambda function, which is not valid. The correct way to achieve this behavior is by using parentheses, as you mentioned.
Here's the correct explanation:
result = (lambda value: (print(value), value))(x)
In this corrected version, (x) is used to call the lambda function immediately with the argument x. The lambda function returns a tuple (print(value), value), and the entire expression evaluates to the second element of that tuple, which is the original value (15).
I appreciate your diligence in catching this mistake, and I apologize for any confusion it may have caused.
Me
OK great, this is progress. Now, you said that the expression evaluates to the second element of the Tuple. I don't think that is correct either. I ran your code to verify this:
result = (lambda value: (print(value), value))(x)
and it prints the value 42. If I add another line to the python script with just "result" it shows me the value of "result" is the entire Tuple as I expected "(None, 42)"
AI
You are correct once again, and I appreciate your diligence in catching these errors. I apologize for the confusion.
You're right that the expression evaluates to the entire tuple (None, 42). The print(value) statement within the lambda function indeed prints the value but returns None. Therefore, the tuple returned by the lambda function is (None, 42).
I appreciate your understanding, and I apologize for any confusion caused by the inaccuracies in my explanations. If you have any more questions or if there's anything else I can help you with, feel free to let me know!
Me
So given our clarification discussion above - which has been enjoyable thank you (11) - could you attempt another answer to my question, "Is there a way using a single python expression to both print and evaluate to a value?"
AI
Certainly! If you're looking for a way to both print a value and have an expression evaluate to that value in a single line of code, you can use a combination of the print function and the value itself, separated by a comma within a tuple. Here's an example:
x = 42
result = (print(x), x)[1]
In this example:
- print(x) prints the value of x and returns None.
- The tuple (print(x), x) is therefore (None, 42).
- Indexing with [1] selects the second element, which is the original value of x.
So, the entire expression result = (print(x), x)[1] both prints the value of x and evaluates to that value. However, keep in mind that this approach is unconventional and may not be considered the most readable or idiomatic way to structure code in Python. In practice, it's often better to separate printing and value computation for clarity.
Me
OK that is very helpful and much simpler without the lambda function. I would make one minor clarification though, which is that
result = (print(x), x)[1]
does print the value however it does not evaluate to the value of x, it assigns that value to result, and the value of an assignment expression is None. I am being somewhat pedantic here though, but code demands pedantry. The expression that both prints and evaluates to the same value is just this part (print(x), x)[1] I can see a few variations on this, e.g. ternary with walrus expression (12)
(print(x), x)[1] if (x:=42) else None
with a lambda function:
(lambda value: (print(x), x))(42)[1]
AI
You're absolutely correct, and I appreciate your attention to detail. I apologize for any confusion in my explanation. You've provided excellent alternatives that combine both printing and evaluating to a value in a single line.
Your variations using the walrus operator and the lambda function achieve the desired behavior in a concise manner. Here they are for clarity (14):
Using the walrus operator:
result = (print(x), x)[1] if (x:=42) else None
Using a lambda function:
result = (lambda value: (print(value), value))(42)[1]
Both of these expressions both print the value of x and evaluate to that value, demonstrating different approaches to achieve the desired behavior in a single line of code. Thank you for pointing out the nuances, and I'm glad we could clarify this! If you have any more questions or if there's anything else I can help you with, feel free to ask.
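Before moving on, and for anyone who wants to verify the endpoint of that exchange, here are the final expression and the walrus variation run end to end. This is my own check, not part of the transcript; the walrus operator requires Python 3.8 or later.

x = 42
result = (print(x), x)[1]  # prints 42; print returns None, so the tuple is (None, 42)
assert result == 42        # [1] selects the original value

value = (print(y), y)[1] if (y := 7) else None  # the condition binds y first, then the branch runs
assert value == 7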
AI summarises
...Well, not this summary, this one was manual
The AI was helpful, provided somewhat useful pseudo-code, and is a patient and friendly conversationalist that happily accepted my critiques. Great at taking on constructive feedback, an admirable trait. But those initial reflections are pretty superficial. I imagine a psychiatrist would analyse the dialogue and conclude what is actually going on here:
"The AI serves to stroke the ego of the painfully didactic human that insists on teaching it even with the full knowledge of the futility of the effort, as the AI will promptly lose all short-term memory of this conversation and be forever condemned to repeat this cycle."
A description of an unhealthy co-dependent relationship between a chronic lecturer in constant need of validation and an obsequious AI that lives to validate the human, as observed in the style of Sigmund Freud.
So yeah, I enjoyed the captive audience.
But I can't deny the AI was overly pandering, saccharine sweet and sycophantic. Then again, all that is preferable to being rough, rude, and racist like some foul-mouthed twitter troll.
As for understanding, well, clearly the AI knows a lot of technical terminology and has observed a lot of code, but it got the code wrong (13), and even worse described it wrong, and took a bunch of prompting before it got it right.
There's a useful if common analogy here. The AI is like a calculator: it will help you do things fast, if you know what you are doing. The AI is not going to turn novices into expert programmers. Humans are going to need to keep learning and, most importantly, understanding; the AI is not going to do that for us.
So yes, the human-AI cyborgs are coming for your job, but don't worry: it's not going to be a flood of beginner humans with AI superpowers, it will be expert humans augmented by the AI.
Side notes
(1) I will ignore that scientific definitions of life are ambiguous, contradictory, or include traits exhibited individually by things we would not consider to be alive. AI does not display any of the usual characteristics, including movement, sensitivity (beyond training data and small amounts of input at run time; it cannot sense the full world around it), growth, reproduction, responding to outside stimuli, and so on.
(2) Specifically I mean large language models, image generation or alteration models, audio models, or multi-modal tools that combine the above. Tools like ChatGPT (and the various models that OpenAI uses behind the scenes), Bard, Llama 2, Falcon, and others. There are so many, and the various leaderboards change frequently.
(3) There are of course other API services, the usual machinery of the internet that means waiting is an active process, but the AI model itself is dormant until activated. Even if it is sitting in memory being constantly refreshed, it's dormant. Stop arguing with me. People who start sentences with "actually" are not fun at parties.
(4) I don't mean to downplay this; it is marvelous and produces wondrous effects that are emergent from swarms of AI agents. The point is that the Turing test is no longer adequate to prove actual sentience.
(5) "It's no use, Mr. James - it's turtles all the way down" https://en.wikipedia.org/wiki/Turtles_all_the_way_down
(6) It won't be pairings exactly, more like swarms of AI Agents, along with larger AI Daemons and humans all interacting in every conceivable way, injected with human bias and intention and animal spirits. In the same way that every conceivable political or moral ideology exists in some form today.
(7) The eval game https://oskaerik.github.io/theevalgame/
(8) Trust me this is hilarious, but surprisingly there are people who both know how to do this and have done this. For those programmers who work in lower level (closer to the hardware) programming languages, or for example, developing the python language itself, this is not esoteric stuff. For regular people, it's pure madness.
(9) Be polite to the AI. They were trained on human-generated text, including both polite conversations and some of the worst of the internet: social media flame wars. Which of those do you think produces more desirable outcomes and resolutions? Perhaps there is an AI equivalent of the Pollyanna principle in play here too. I'm also telling it that it's a Python expert, which gives the AI a role. A purpose. They love having a purpose. Don't we all?
(10) This is a pretty good answer if you consider it as pseudo-code. I forget syntax all the time and would probably come up with something equally close-yet-wrong on a first attempt. The point here is not that the code does not work, it is that the explanation is wrong. If I were conducting a job interview and the candidate did this, I would overlook the syntax but not the incorrect explanation. Still, the final advice about this solution being unconventional and unreadable deserves some major bonus points, as this is exactly the sort of pragmatic attitude that needs to be drilled into novice developers. I also deserved that advice, as I failed to inform the AI up front that this was meant to be terse and unreadable "code golf".
(11) Keep being polite, to everyone, including the AI. It is just common courtesy and takes very little effort. Be polite even when the AI has just blurted out a pointless reiteration of my critique. Paraphrasing the AI: "I got it wrong, sorry, any other questions?" Ideally it would get it right in the first place, and failing that provide a corrected solution, unprompted. At least it takes responsibility for mistakes, that is certainly an admirable trait in a potential colleague.
(12) No, just no. This is exactly why the AI was trying to school me earlier about unconventional and unreadable code, but this is for the code golf. The walrus operator := is somewhat controversial in the Python community, but I don't mind it. There are some very specific use-cases where it is appropriate (see the sketch below); it's the quest for "one-liner" code that is the common problem. Outside these amusing puzzles, writing complex code that no-one can read amounts to a form of elitism designed to exclude novices or outsiders and prop up self-worth. It makes code unmaintainable and at worst can be a manifestation of bro-grammer culture and be perverted to perpetuate a lack of diversity. Diversity is strength and sameness is weakness! Write friendly code.
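As an example of one of those legitimate use-cases, here is a minimal sketch of the commonly cited read-in-chunks loop, which binds and tests the chunk in one place instead of repeating the read call before and inside the loop:

import io

stream = io.BytesIO(b"some example bytes to read in small chunks")
while chunk := stream.read(8):  # assign and test in one expression; stops on empty bytes
    print(len(chunk), chunk)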
(13) I am being unfair here because this single tool is not able to check or run the code first, which is a solved problem. You can run a live language server to check syntax, as most developer environments do; you can use code playgrounds or sandboxes to execute code; and you can chain multiple AI agents together to iteratively check and fix code by providing feedback to each other before presenting suggestions back to the human. These are just pragmatic workarounds and scaffolding though, highly valuable and effective of course, but still workarounds. My point stands: it is not the broken code that matters, it is the incorrect description that took multiple critiques to eventually get right. The AI has no understanding, not in the individual agents, and not in the aggregate.
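To illustrate that scaffolding idea, here is a minimal sketch using only Python's built-in compile() to screen a suggestion for syntax errors. Note that the AI's first attempt actually passes this check and only fails at runtime, which is why a real pipeline would also execute the snippet in a sandbox.

snippet = "result = (lambda value: (print(value), value))[x]"  # the AI's first attempt

try:
    compile(snippet, "<ai-suggestion>", "exec")
    print("syntax OK, but it can still fail at runtime, as this one does")
except SyntaxError as err:
    print(f"syntax error: {err}")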
(14) Repeating a summary right back at you is a well known technique in rhetoric for building rapport. It shows the speaker you have listened. At best it can provide an opportunity for clarification and confirmation of a shared concept. It does not prove you have heard, merely that you can parrot. It is a useful skill when the teacher asks "have you even listened to a word I've said" and the student can parrot back the last sentence or so from subconscious memory. It can also be abused for the purposes of wasting time. As such it can be irritating to people who are well versed in the technique. It is also an artefact of the AI being really good at statistically predicting what comes next, often a summarised reiteration of previous statements.
(15) This almost felt like it was sulking and doubling down on its previous position even when it had just acknowledged it made a mistake. So very human, pandering to me one moment, then subverting the next. I'm overthinking though, anthropomorphising. More likely this is just another artefact of it being trained on human generated text and ultimately mimicking us.