YOU ARE A HYPOCRITE!

Under the commitment they signed with the Science Based Targets initiative (SBTi), any organisation with a Science Based Target must unequivocally value scientific evidence over all other metrics. All policies must be fully evidence based, using methodical, scientific evaluation with credible evidence.

[Oi! Why are you pulling that face?]

One of the problems highly innovative solutions face, especially those generating data in real time, is that you get a lot of evidence! They generate evidence at the rate of machines, not at the rate of humans. For every second you spend reading this article, 3.85 petabytes (a petabyte is 1,024 terabytes) of data are generated to support the creation of information. Every. Single. Second.

Automedi recently "lost" a bid for phase 2 funding from the Greater Manchester Combined Authority (GMCA), despite what we felt was an extremely successful phase 1 demonstration project.

This is not the first time this has happened. Just last week we were hit by an incompetent evaluation in Leeds, where an evaluator clearly didn't read our bid. Here we are again with the GMCA process, despite our evidence showing:

  • We saved almost 6 tonnes of CO2, more than the weight of the plastic we collected, through our circular microeconomy clusters
  • We proved we could generate faster financial throughput than our monthly outgoings (for monthly outgoings of £9,950, we generated £24,000 of downstream wholesale stock in only 5 weeks), showing the model is commercially viable as long as we can find buyers for the wholesale items (phase 2)
  • We created a job for an under-represented employee
  • We developed a range of new products
  • We proved we can recycle into end consumer products (not intermediate grinds) at a rate of 600 times the existing UK circularity rate from the waste recycling industry
  • We compared the performance of the service in the GMCA region against the rest of the UK and identified extended local markets
  • We took that knowledge and pivoted, demonstrating the results drive the business, which was a key feature of the phase 2 questions
  • We created platforms for evidence for everyone from individuals all the way up to the region and even country
  • Lots, and lots and lots more!

Each of these not only provides the evidence but confirms, and even exceeds, the claims we make [we have a presentation on it if you're interested, but we have real-time summary data and you know where to find it].

Recapping the SBTi Policy Commitments

The core of the SBTi compliance requirement is the use of evidence! Evidence trumps everything else. It is expected to permeate every facet of the decisions made in your organisation: about suppliers (they are your scope 3), about internal policy choices, and about the way public and corporate funds are spent.

Yet the more evidence you have, the more space it takes, and with the best will in the world, therein lies the dilemma. Even if you simplify each piece of evidence down to one ten-word sentence, with 400 pieces of evidence that is 4,000 words. There is nothing you can do about it.
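To put that arithmetic in one place, here is a minimal sketch. The 400-piece and ten-word figures come from the paragraph above; the 500-word per-question limit is my illustrative assumption, not a number from the bid documents.

```python
# Back-of-envelope: even maximally compressed evidence blows through a
# typical bid word count.
pieces_of_evidence = 400      # evidence items, from the paragraph above
words_per_summary = 10        # one ten-word sentence per item
word_count_limit = 500        # an assumed per-question bid limit

total_words = pieces_of_evidence * words_per_summary
print(total_words)                      # 4000
print(total_words / word_count_limit)   # 8.0 -- eight times over the limit
```

However tightly you summarise, the volume of evidence, not the quality of the prose, determines whether you fit in the box.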

You cannot fit all your evidence into a word count yet you can always fit poor or no evidence into one.

Much proper scientific evidence doesn't have, nor originate in, words. Words are clumsy, ambiguous and utterly anti-scientific. They resist formality and logical structure and rely on interpretation. They have proven their ability to be used as weapons: as ways to disrupt, threaten and destroy democracy, and even to call for, and give cover to, genocide.

Yet public procurement, the function of a public body that demands evidence for the efficacy of its suppliers and that governs the security of public money, demands you use and create words. Ambiguous monstrosities that infect and disease everything they touch, to "present" evidence to people who, at best, partially understand them and understand little of anything else. It acts worse than rabidly overzealous lawyers in its hypocritical pursuit of evidence, while attempting to enforce an autocratic prevention of its disclosure.

Innovation bids are particularly problematic! They attempt to evaluate often highly technical, highly innovative models while expressly forbidding the submission and evaluation of all the articles of robust evidence. Scientific research papers, videos and datasets all doom you to failure by their inclusion, as they are ignored in appendices. The more scientific, the less favourable. The bid processes whisper their authoritarian demand that you must ambiguate the evidence into meaninglessness.

Reversion to Type

The GMCA Foundation Economy phase 1 application process was actually really good! I can't sing its praises enough! It didn't attempt to shoehorn innovation into a box, and it implemented something I have been talking about for 15 years! Innovation cannot be evaluated using the toolbox of bureaucracy, because the nature of true, transformative innovation challenges the shape of the box, how many dimensions the box has, or even the need for a box at all! [Innovation grant applications being the hindrance of innovation is a topic I've written about previously]

We are not new to bids! I have been bidding for work since 2005 in the third sector and 2014 in the private sector. We've won various bids and lost many more. Those for staid "best practice" are easy to create. They align with policy because policy codifies best practice. However, this is precisely where the hole appears, since no best practice exists in innovative spaces. Innovators are creating the best practice, and often the field itself. Even in their transitional stages, best practices are challenged daily by innovation. In highly innovative spaces, there is no best practice at all.

The problem is not the writing of the bid, an accusation we often see from naive procurement consultants with little or no experience in innovation, but the existence of a demand for weak or irrelevant evaluation criteria that take primacy over evidence. A demand that you:

"must show how you will move the deckchairs as the Titanic sinks"

Sure enough, phase 1's puncture of the failing system leaked into phase 2's sh*tshow! It reverted to type and went in diametrically the opposite direction to where it should have.

We were told a few weeks ago that we were unsuccessful. Sticking out in the feedback was:

"Each application was scored separately by 3 separate officers. This included one “specialist” with policy expertise aligned either to your sector, or challenge area, as well as 2X “generalist” markers, providing a variety of perspectives."

With no other explanation.

And I didn't need one. As soon as I read the words "policy expertise" I knew exactly what had happened! I expected we scored in the bottom 25% to 30% of applications, despite the fact that 15% of phase 1 awardees self-excluded. I knew then that the word count was the likely culprit, that the policy person was wholly underqualified to evaluate innovation bids at all, and that the evaluators, either non-scientific or anti-scientific, sought relevance to cover their ignorance of robust evaluation. Anyone out of their depth finds solace in irrelevance as an attempt to stay relevant. It's a corporate politics manoeuvre that gives rise to the Peter Principle, which only exists because we allowed it to dodge sanction by rewarding it instead.

After a non-transparent chase across two organisations to get the feedback, and a few weeks of consistent tantrum, sure enough: I found out the word count got us! That was the excuse given for ending the livelihoods of 4 GM residents at the same time.

Policy is the Antithesis of Innovation

The trouble with policy is that it can only ever create "words" in retrospect. There are almost no policy experts with scientific expertise, and even fewer with engineering skill. At best, local and national governments rely on think-tanks to drive policy, often with little or no robust comparison, as they interview using methods designed for surveys, not for rigour, precision or accuracy.

Yet how is it possible to create evidence-based policy? Safety criteria are exactly that! Their demand for scientific evidence does not give way to a word count for readability. They are built from extensive testing, academic involvement and case law, where the first two deploy scientific and analytical scrutiny and the last draws on the lessons of the past. Where bidding procedures are shaped by case law, it has come after lessons were forced upon public services through criminal negligence and corporate manslaughter claims, triggering councils to act or see their leaders face prison. All because they didn't evaluate the evidence available. The remedy is to set a process that is fair, truly robust, and advantages those with the greatest quality and quantity of evidence.

As soon as you set a word count in a bid, you immediately and unequivocally know it values prose over propriety and prudence. A word count does not actually work to create a fair evaluation: poor bidders are never disadvantaged by it, only good bidders can be. Yet bidders are claimed to have a choice.

Did we exceed the word count? By miles!

Was that a conscious decision? Yes!

Why would we be this "dumb"? Because the SBTi demands that councils value robust, scientific evidence.

The GMCA has set Science Based Targets and signed a commitment to the SBTi, so it should be evaluating actual evidence on merit, not on irrelevancies like word count, which is an implicit demerit tool: not called a "penalty", but a penalty in everything but name!

Phase 1 applicants were informed that phase 2 bids would follow a similar procedure, and that we could get feedback on our bids before the closing date (a great touch in phase 1 that did not appear at all in phase 2). That would have let evaluations proceed without the wordy dysfunction, and without an unnecessary advantage that only favours those who can pay for bid writers rather than those who can deliver the evidence of their work. This never happened for us.

Did they evaluate the evidence...?

We didn't know... [until last night]

It took us WEEKS to get the feedback! In the meantime, we have completely restructured our business to cut costs and move resources. 4 people have lost their jobs in Greater Manchester as we move our operations to a different part of the UK, and then on to the USA later this year and into next year. GM has lost us, and permanently so. Indeed, it's why it continuously loses its innovation to the USA and London: its words don't match its actions.

How word counts fail the Planet and People alike

There are different ways in which policy provision hinders the adoption of innovation and its solutions, which in turn hinders the creation of sustainable jobs and the use of public funds for public good. This is especially the case in corporate environments, where governance, risk and compliance (GRC) functions make decisions that are fundamentally against the best interests of the organisation, and almost always do so without properly evaluating how those policies affect a desired outcome, positively or negatively. They are intrinsically about preventing harm to the status quo and primary value chains, not preventing harm to people or planet, and in many if not most cases of transformation for good, they actively facilitate that harm.

They also generate an inordinate amount of waste! Badly driven policies work against the organisation's targets, as they actively prevent the realisation of the tools of transformation needed to get there and waste all the public money spent up to that moment.

They also impede the necessary adaptations within the organisation needed to deliver an outcome. The more complex the system and the shorter the time, the greater the level of change needed in a faster timeline, and the worse the policy problem gets. We know this from the stagnant response to the climate crisis to date, and from how far behind Greater Manchester is on its carbon budget targets.

Even outside the climate sector, you need only see the salary paid to staff who spend 2 days trying to book their annual leave to see how much waste exists in those spaces. It is not only the cost of the annual leave to the organisation, which the staff member is rightfully entitled to by law; it is also the waste of two days of salary to get their manager and HR to sign the approval sheet. An example of policy generating waste on top of wasted money, and even emissions to do it.

Yet policy enforces word counts in many different ways. Evaluation processes can choose to:

  1. disqualify the entire answer
  2. stop reading at the word count, or
  3. just regard it as a guideline

The first two are standard procurement practice and can be deployed when the requirement is specific and known. The last is the most appropriate for innovation, scientific results or high-evidence submissions, especially under SBTi commitments, since a word count should never prevent an organisation meeting or presenting a Science Based Target.

You wouldn't limit the IPCC report, nor a scientific research paper, by word count, because that forces the authors and researchers to cut out key pieces of evidence just to meet the size of the box, triggering unnecessary peer-review rounds and delays in publication. It wastes an inordinate amount of time and money.

[Image: red marks where the word count ended; amber marks what we actually submitted, for the first scored question only]

Where an organisation has set Science Based Targets, setting word counts is a violation of the commitments it has made to the SBTi, as it rates the count over the science: it values words over evidence.

Perhaps more importantly for green innovation, especially for a public authority that has committed to Net Zero by 2038, it's a liability to the planet. You can always write rubbish within a word count, but you cannot always present all the evidence within one. If the scientific evidence is longer, you must cut it out, hampering your proposal and application.

Not only did we decide to submit the full answers, given the GMCA's Science Based Target commitments, we also submitted appendices for the rest of the evidence, not knowing if they'd be read, as we were never told, nor given a platform to ask.

The GMCA chose to stop reading at 550 words (an allowance of 10%) when we wrote over 2,000 for some questions, then evaluated us, stating:

“This meant that much of the context you provided later in your answers was not considered in your scores for these questions, which significantly impacted your scores.

Markers felt (including based on the above) that you could have demonstrated more progress towards your phase 1 objectives, and whilst there was good reflection on challenges faced in phase 1 more information could have been provided on how these challenges would have been addressed and overcome in phase 2.”

For those wanting to learn about the bidding process and the quality of feedback you get, there is an important trap here. Given that only around 25% of our answer was actually read, the feedback suggesting we give more information has two key problems, each of which on its own invalidates the feedback and makes it useless.

  1. Given they haven't read the rest of the answer, they cannot demonstrate that we didn't write what they claim we didn't write (since they read neither the rest of the answer nor the appendix, how do they know we didn't write it?).
  2. Worse, they ask for more information which even if we hadn't already put it in, would extend the answer and they still wouldn't read it because it would have continued to further breach the word count. Once the camel's back is broken, it doesn't matter how much straw you put on it.

This makes the feedback completely useless! Objectively, not only is it likely incorrect (having not read 75% of the answer, they cannot demonstrate we didn't write it), it also recommends something that can only ever result in a larger violation of the word count. It would not have changed our scores at all!

This is an important thing for those new to bidding to understand. The process is fundamentally anti-merit. If you are ever left wondering what more you could have written, then this has happened to you.

Facilitating bad innovation and wasting public money

Anything that disqualifies all or part of an answer based on word counts, or on the narrative presentation of findings alone, is not an evidence-based evaluation. If 10 bidders submit 10 bids, and the top 3 highly innovative ones have lots of evidence while the bottom 3 have none, the top 3 are limited in their competitiveness by how much they can put into a bid in a way the bottom 3 are not. As soon as they have to limit the evidence they put in, they are placed at an unfair disadvantage relative to the other bidders.

Is this the bidder's fault? Well, that depends. In ordinary procurement, the answer is yes: the criteria for the bid are limited and specific, there is a raft of legislation and best practice presumed by the panel, and the answer should converge with those too.

However, in innovation there is no limited context for the question. By its nature, it's a divergent process.

Even worse, when the policy position of the contracting organisation is to have Science Based Targets, the burden for failure also shifts to them, as the science and the volume of evidence are now fettered as factors in evaluation. If all other bidders meet the word count but submit limited, non-credible evidence lacking scientific rigour, and the only party not to is the one with volumes of evidence, then the word count works against the submission of scientific evidence. Always! Indeed, it gets worse the better at it you are, despite the fact the planet needs us to get better at the science, not worse!

Ordinarily, for a public body, this wouldn't matter. But having set science based targets, the GMCA "policy person" has violated the overarching Science Based Target commitments the GMCA signed.

The SBTi has a complaint procedure for organisations that do this, so we will be submitting one to them.

Summary

Word counts are the most unfair way to evaluate innovation. The more transformative an innovation, the more evidence you must provide to build the different lines of trust, and the more likely you are to breach word counts. But providing less evidence does not, and can never, demonstrate that a transformative innovation has delivered each of its KPIs. So highly innovative projects are stuck in a catch-22 they can't get out of. This makes it impossible for the GMCA, or any innovation fund, to ever evaluate transformative innovators or deliver properly against their Science Based Targets, even though they are committed to doing so.

Not only that, but feedback from bids that use the word count as a disqualifier, or fetter answers by it, is ultimately useless, because it makes statements that are not demonstrably true. The feedback is neither objective nor useful, and indeed it misses the chance to give you feedback you could actually work with, had the rest of the answers needed improvement.

This evaluation showed our regional public sector decision-makers are nowhere near ready to meet the needs of their own 2038 Net Zero target, nor to truly commit to the application of science required by their signatory commitment to the SBTi. In essence, it has revealed its lie, and that of its makers.

Ethar A.

Founder at ReallyRecycle.com | "The only founder standing for true sustainability" | Circular Economy | CleanTech | Deep Generalist | Involuntary Activist | Voice Recognition Wrangler

7 months ago

One for the procurement teams I guess Neil Hind, Rob Knott FCIPS
