Innovate, don't just iterate

Companies think they are “innovative,” but in reality, most just aren’t.

Companies seem to fail at innovation for several reasons:

  • Focus on short-term revenue: Revenue is important; without it, a company can’t exist. However, when short-term revenue is prioritized above all else, innovation, which can take multiple quarters or even years, often gets sidelined.
  • Fear of failure: Innovation isn’t magic. Most new ideas won’t work out. That’s part of the innovation process, but it creates a false sense of “failure” in many people. Instead, people should see these events as learning moments on the long road to innovation.
  • It’s just too hard: As an organization grows, everything becomes harder to do. Innovation requires stamina, perseverance, and guts, which many do not have the appetite for.
  • Performance reviews: Knowing how you’re doing at work and setting goals is good. However, frequent reviews encourage controlling outcomes in order to get that “Exceptional” rating. Putting the rating first tends to produce incremental rather than truly innovative product ideas.

I’ve experienced all of the above in my 20+ years as a product designer. Even so, I’ve had the opportunity to work on some of the most innovative projects at companies large and small. Innovation is possible at any company, but it takes more hard work, discipline, and commitment than you might expect.

Evaluating Ideas: A Critical Step in the Innovation Process

Innovation emerges from generating numerous ideas rather than a single "eureka" moment. However, generating ideas alone is not sufficient. Without a systematic approach to evaluating these ideas, innovation remains elusive. Throughout my career, I've honed a process for evaluating ideas using specific criteria, a practice I refer to as "Eval-Crit."

Eval-Crit goes as follows:

  1. Determine an overarching theme or goal for the ideas
  2. Develop an initial set of key criteria to evaluate your ideas with your team
  3. Generate ideas as a team
  4. Evaluate and score ideas based on the theme and criteria previously set
  5. Adjust evaluation criteria (if needed) based on learnings from evaluating your ideas

Best practice: Five criteria is a good number; I like to stay between three and seven. More criteria are harder to keep in mind, and needing them is likely a signal that you are not clear enough on which factors matter.
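The scoring step of Eval-Crit (steps 4 and 5 above) is easy to sketch in code. Here is a minimal, hypothetical example: the criteria names, idea names, and scores are all made up for illustration, and each score is the 1-to-5 rating the team agreed on in discussion.

```python
# Minimal Eval-Crit scoring sketch (hypothetical criteria, ideas, and scores).
# Each idea gets a 1-5 score per criterion; totals give a rough ranking.

criteria = ["feasibility", "user value", "novelty"]

# Scores agreed on in the team discussion (1 = low/bad, 5 = high/good).
scores = {
    "Idea A": {"feasibility": 4, "user value": 3, "novelty": 2},
    "Idea B": {"feasibility": 2, "user value": 5, "novelty": 5},
    "Idea C": {"feasibility": 5, "user value": 2, "novelty": 1},
}

def total(idea: str) -> int:
    """Sum an idea's scores across all criteria."""
    return sum(scores[idea][c] for c in criteria)

# Rank ideas by total score, highest first.
ranking = sorted(scores, key=total, reverse=True)
print(ranking)  # Idea B (12) ranks ahead of Idea A (9) and Idea C (8)
```

The point is not the arithmetic, which is trivial, but that a shared scoring table makes the ranking, and any disagreements about it, explicit and discussable.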

Setting Effective Criteria

Determining appropriate criteria is important. Good criteria are specific and concise, yet not overly detailed or complex; they cannot be too vague or general either. It takes experience and skill to develop good criteria quickly, and having someone with that experience and intuition is a huge accelerator to success.

For most projects, you will want to include these two foundational criteria:

  • Criteria 1: How much effort, cost, time, or expertise will the idea require? (feasibility)
  • Criteria 2: How valuable is the idea to the user? (user value)

Now, add one to three additional criteria that are core to your goals.

Here’s a simple example of the criteria used to evaluate ideas for potential user applications of a company’s AI imaging technologies:

  1. Development effort required
  2. Amount of our technology used
  3. User value
  4. How viral or social it may be
  5. Novelty

I use a 1-to-5 score for each criterion; I find a 10-point scale to be overkill. I’ve experimented with weighting certain criteria more than others, but it tends to add unnecessary complexity, especially if you’ve created the right set of criteria.

This is likely obvious to anyone who regularly runs research surveys, but it’s worth mentioning: make sure that a score of 1 is low (bad) and 5 is high (good) for every criterion. That way, when you calculate the total score for each concept, the scores add up rather than cancel each other out. For example, for a criterion such as “amount of engineering effort,” a score of 1 should mean the most effort and a score of 5 the least.
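Keeping every criterion pointed the same direction can be done with a tiny reverse-scoring step before totaling. This helper is a hypothetical sketch, assuming the article's 1-to-5 scale: it flips a "cost" rating such as engineering effort so that 5 always means "best."

```python
# Reverse-score a "cost" criterion so 5 = best (least effort) on a 1-5 scale.
# Hypothetical helper; not from the article.
def reverse_score(raw: int, scale_max: int = 5) -> int:
    """Flip a 1..scale_max rating: 1 becomes scale_max, scale_max becomes 1."""
    return scale_max + 1 - raw

# An idea rated 4 on raw "engineering effort" (a lot of work)
# contributes only 2 to its total once reversed.
print(reverse_score(4))  # 2
print(reverse_score(1))  # 5
```

Applying this once, when the scores are recorded, means the totals can always be summed naively.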

Discussing and Ranking Ideas with Your Team

The team discussion around determining scores for each idea is invaluable. New information will surface, and differences of opinion or definition will become clear and be more readily acknowledged, if not completely resolved. The discussion does a great deal to increase the team’s alignment and shared knowledge.

Best practice: Reduce confusion and bias by presenting ideas in a consistent, concise format. I recommend the poster format I described in an earlier post. If ideas differ in format, it’s hard to evaluate them consistently; for example, an idea with a slick video may score higher than an equally good idea that is just a napkin sketch or research paper. A consistent format ensures ideas are rated comparably.

Don’t get too focused on exact scores; this is a directional tool, not an exact calculation. If any idea seems out of rank, it’s fine to investigate and re-evaluate it. Interesting new learnings often emerge when ideas don’t land where you expected. The ideas with the highest scores move to the next phase of the innovation process: prototyping. Stay tuned for my next article on that topic.

I’ve found this practice of evaluating ideas with my Eval-Crit method highly effective at producing better, more data-driven decisions about new ideas and projects. It keeps teams aligned and provides transparency into how ideas are rated and decisions are made. For those of you who wish to inject more innovation into your day-to-day role, or want to help your company shift away from a “play it safe” mentality, give it a go.

Kaz Raad

Co-Founder at MANYFOLD

7 months ago

Good stuff. Long term vision seems to fall off a lot these days... This system allows our fast-paced and ADD cultures to think of the vision as a series of episodes, actually preparing us for the long haul. Is it ok if I start using "Eval-Crit" for everything? :) The "Poster Format" seems like a no-brainer; but you'd be amazed how often people skip/ignore this valuable tool/step!

Kay Kim

Head of Design, Cloud UX at Huawei Cloud

7 months ago

Thanks for sharing this. Very helpful to understand how one specific domain thinks and works. I found that this “evidence guided product development” by Itamar from Lenny’s podcast is super interesting. Especially around GIST framework, metrics trees, and Confidence meter to help cross-functional teams to create innovative products. https://www.lennysnewsletter.com/p/becoming-evidence-guided-itamar-gilad I believe amazing product development teams (pm, ux, eng) have clearly defined common understanding of purpose, strategy, decision making process to build-to-learn if you are focusing on a disruptive innovation area (according to innovator’s dilemma by Clayton Christensen). Sustaining innovation requires different mindset, process, culture, performance evaluation, etc. Again, thanks for sharing your thoughts!

Karl Channell

Principal Product Designer @ Duolingo

7 months ago

I've been researching different formats like this to test out. This route sounds promising. Thanks for sharing !

Burak Aksar, PhD

Founder @ Spiky.ai (Techstars’22) | Top 50 GTM Company | We help scale winning behaviors across your revenue teams | ex-IBM AI Research

7 months ago

Andrew The "Eval-Crit" method sounds like a practical tool! It's particularly interesting how this method can adapt to different company sizes and stages, as the obstacles to innovation can vary significantly between a startup and a large corporation. In your experience, have you found that certain types of companies or industries are more receptive to this structured approach? Moreover, how does the "Eval-Crit" method ensure that ideas' creativity and originality aren't lost in the evaluation process?
