Co-Intelligence, 'therefore I am': a sense-making meditation on my experience with Gen AI so far.
Image source: midjourney.com/jobs/af791e18-89a0-454b-8358-a18ad2909406

"If you're good at giving instructions to people, you'll have a good time giving instructions to AI." — Ethan Mollick in BCG's flagship Thinkers & Ideas podcast by Martin Reeves in a recent episode. Highly recommend the listen, and looking forward to picking up Ethan's book on the subject soon.


Hearing these words instantly struck a 'jazzy chord' with me. It's been twelve months or so of daily interactions with GPT-4 and other 'prompt-based' models across endless use cases. But beyond following some intriguing papers on 'prompt hacks' that try to get more out of these models, I've not been particularly interested in the latest 'lists', 'compilations' and even full-fledged 'apps' being churned out non-stop by 'prompt engineering gurus'. And don't get me wrong, that's not because I think they're all some kind of deliberate 'con' — in fact, much of it is a well-meaning attempt to make these insights more accessible and actionable. But without quite realising it, I now think I've been taking a 'co-intelligence' approach in the way I interact with these models.

As context windows keep growing mind-bogglingly long, it has become increasingly easy to break things down and work through them as if 'live-collaborating' with a human—quick, succinct, almost 'Socratic' exchanges that culminate in a level of shared understanding, which in turn enables more comprehensive outputs at a depth and quality that still feels kind of 'wow' every time.

And even when this 'quick context chat' leads to an outcome that isn't quite right, feedback exchanges that mimic the same pattern seem much more powerful. Indeed, sometimes moving from simply "Good draft, but let's improve by x, y, z" to instead asking "In what ways do you think your draft doesn't quite capture the context and requirements we've talked about so far?" and/or "I have the impression you're probably not sure about some things but tried your best anyway to put something together based on reasonable assumptions—which is great!—but let's hear some of your doubts; maybe that'll help produce something much better. So, is there anything you're not sure about?" can be surprisingly effective. After all, would you have done much better if someone had given you the context you just shared? In some cases, probably yes. But which cases, and how often?
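To make that pattern concrete, here is a minimal sketch of what this 'surface your doubts first' loop could look like in code. It assumes the OpenAI Python client with an API key in your environment; the model name, prompts and variable names are illustrative, not a recipe.

```python
# A minimal sketch of a "surface your doubts first" feedback loop.
# Assumes the OpenAI Python client (pip install openai) and an API key
# in the OPENAI_API_KEY environment variable. Model name and prompts
# are illustrative only.
from openai import OpenAI

client = OpenAI()

# Running message history: the shared context built up through the
# quick, Socratic exchanges described above.
messages = [
    {"role": "system", "content": "You are a thoughtful co-writer."},
    {"role": "user", "content": "Draft a one-paragraph summary of our project brief."},
]

draft = client.chat.completions.create(model="gpt-4", messages=messages)
messages.append({"role": "assistant", "content": draft.choices[0].message.content})

# Instead of "good draft, but fix x, y, z", ask the model to surface
# its own doubts and unstated assumptions first.
messages.append({
    "role": "user",
    "content": (
        "In what ways do you think your draft doesn't quite capture the "
        "context we've discussed? List any assumptions you weren't sure about."
    ),
})
doubts = client.chat.completions.create(model="gpt-4", messages=messages)
messages.append({"role": "assistant", "content": doubts.choices[0].message.content})

# Only then ask for a revision informed by those surfaced doubts.
messages.append({
    "role": "user",
    "content": "Good. Now revise the draft with those doubts resolved or flagged.",
})
revision = client.chat.completions.create(model="gpt-4", messages=messages)
print(revision.choices[0].message.content)
```

The specific API matters less than the shape of the exchange: draft, elicit doubts, then revise, much as you would iterate with a human colleague.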


This makes me wonder whether a 'kinder', more 'equal' and 'humane' treatment of these models, in the way we think about 'what', 'how' and 'why' we use them, generates better outputs simply because, in the training data, exchanges where these values were present tended to lead to better outcomes. I guess there's a parallel here with the so-called 'law of the instrument': "If the only tool you have is a hammer, it is tempting to treat everything as if it were a nail" (Maslow's version, in The Psychology of Science). In other words, if kindness is your only tool, you're more likely to go further.

Either way, 'co-intelligence' resonates. It makes sense. And here's what's even more interesting to me: becoming good at giving instructions to AI has helped me become better at giving instructions to people. You can quote me on that.

Anyway, this is not a post sharing examples of 'how' I've been seeing this happen. This is just some sense-making meditation on 'why'.

On the one hand, this dialectic / overdetermined experience is not surprising: after all, it's true of every technology. And I would agree that, in a Foucauldian sense, this kind of 'co-intelligentality' ('mindset', 'philosophy') is itself a technology. What we shape, shapes us. But isn't the very 'unsurprisingness' of it, in a way, kind of 'surprising'? This is the curve ball I see when discussing how AI is accelerating, and will likely continue to accelerate, the disruption of 'activities' today executed by humans. This disruption seems so much more fundamental than AI simply 'taking over' labour—or being able to formulate and execute totally novel processes/entities on its own. It's about how we relate to our own individual and collective intelligence: whether it's how we structure and use these technologies to influence and shape our own private thoughts, or how, as groups, we ultimately (already) use these models as 'stakeholders' in collective debate and decision-making. Will we see live 'AI mediators' for political debates? Will our personal AI assistants ('Her' vibes, anyone?) ask for space to call us out when we're being reckless?

We are not remotely ready—and for me that's not even a question about the serious alignment issue we face. It's about the unforeseen implications that co-intelligence, already in this form, has for our intelligence. It's about, the next time you start an interaction with one of these models, taking a hesitant step back to ask yourself 'what would I do differently if this were an exchange with another person?', whilst at the same time carving out moments, when thinking through something with others, to wonder: 'what would I do differently if they were not human?'

These are pivotal times. The very ideas of consciousness and 'thinking' are materially open to questioning in a way they've never been before. Perhaps in our age 'I think, therefore I am' will be more meaningfully actionable if shortened to just 'therefore I am'. My hope is that we can learn from these experiences to expand these frames of 'co-intelligence'. If rationalism and positivism place reason above all, perhaps it is from these existing structures that an antithesis of co-intelligences, shared with the 'Other' across natures and cultures, will emerge. And perhaps the ultimate failure of the resulting synthesis will help us move further away from this legacy of modernity. But until then, my many concerns about how AI is developing co-exist with some genuine excitement about what this technology does and can do for us. Now more than ever, it's up to us to shape and be shaped.
