Thoughts on a midjourney designer

About a week ago, my good friend Hardik Pandya, head of design at a leading edtech company, advertised a position (on X/Twitter) for a designer who would need to be comfortable using Midjourney for design work on a new product.

This (quite obviously) got a lot of eyeballs and opinions – basically, it did what it was supposed to do.

https://x.com/hvpandya/status/1768544166802604473?s=20

At its root, this validates something I have been saying for a while: generative AI (and I am not generalizing to all of AI here) is going to enhance the productivity of specialists. Sure, some mid-level people who are either not competent enough or not adapting fast enough might be affected. But the general rule of thumb is this – if you keep adapting yourself to the latest tools, you should be fine. That is something you should have been doing all along anyway.

Let us take the example of designers, in the specific context of the tweet.

A good designer’s competency is his way of framing a UI screen or graphic. The way the colour palette is chosen. The way copy enhances the visual. The gradients. The user interactions. The way the users’ attention and flow are led. Once the designer has this nailed (or has the capability to iterate until they nail it), it’s a question of using the right tool for the job.

A good designer has in any case probably evolved throughout their career – from Photoshop to Sketch to Figma. This is based on my exposure to the toolchain for the limited set of use-cases I have been involved in; I am sure others can quote other toolchain evolutions.

The latest evolution is the designer describing all of the above through a good prompt. Most likely, the designer would get an 80% accurate outcome (or some similarly high percentage), and then iterate on a tool of their choice to get to the final outcome.

The better the designer is at crafting the prompt, the quicker (and in fewer iterations) they get to a high-fidelity outcome close to the final. By the time they are extremely efficient, they are probably close to 95% accurate through their prompting and just have to add finishing touches outside of genAI.

Do you see the parallel with extremely competent, high-productivity PM <> designer combos?

If not, I will give you an example. At Travenues (an early-stage startup with a small team), I used to sit down with Das (Abhisek Das), our designer with whom I had worked in the past, to work on a screen or graphic. I would keep rattling off my product thoughts, and Das’s hands flew on Figma. It was magic watching the design come alive in front of me as I iterated, gave feedback, gave more product thoughts, and it just evolved.

With genAI, the expertise shifts one step further to the right. If the designer learns how to translate my PM thoughts into a design prompt, imagine the rate of productivity improvement.

Also, do notice that the designer cannot be replaced by the PM. The designer knows how to prompt far better, because he has the outcome in his mind and is getting the machine to churn it out as closely as possible to what he has in mind.

The designer’s competency in this grows as he does it more and more (and as the models become better and better).

Of the several fields being impacted by genAI, this is one where I can see first-hand (in my mind, and in real life recently) how it can practically increase productivity significantly. (The other one is of course code generation, which I am not that close to, but where I see very similar parallels.)
