Hype no more: real transformation requires more than software monkeys. Pay them bananas and invest in enabling your builders instead.
Credits to Banksy and all those who copied his art shamelessly (in his service)


The selection paradox: get the 80/20 rule right before you match.

Technology never disappointed humanity. As creators and innovators we have the power to craft powerful tools and make good use of them - or not. Our ability to leverage and grow technological capability defines the path of innovation and shapes the art of the digital possible. Once crafted, a powerful tool in the right hands is an asset, whilst a tool in the hands of a fool is worthless at best or a costly failure at worst. The important point is: value-add is driven by the ability to apply a tool smartly, not by the tool's capability itself. Yet many companies seem to spend more effort on carefully selecting the "right tools" via capability-driven technology analysis than on carefully thinking about the framework (governance) and skills (human capabilities) a specific tool requires to prosper. This is a pity, as software selection focuses on less than 20% of the future total cost of ownership: more than 80% is defined by the framework and peoplepower (internal/external) needed to scale (identify, qualify, prioritize, define, build, run and maintain) use cases on that software. So why is everybody fussing about license cost and the fancy latest features when the real question at that point in time should be: which new/distinct capabilities will the tool bring (vs. the existing landscape, within the respective domain architecture), and what will it take (human skills) to leverage these capabilities at scale? The latter is highly complex, as skills across the entire "idea2solution" lifecycle (= use case pipeline) need to be covered - from business/demand to IT/supply. My key point is: the focus should be on matching human skills to software. Ability beats capability. You can buy a missing software capability at any time, whilst you most probably can't do the same with human ability.
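The 80/20 split can be illustrated with a back-of-the-envelope calculation. All figures below are made up purely to mirror the argument (they are not vendor data), but they show how quickly the license fee shrinks relative to enablement and peoplepower:

```python
# Illustrative TCO sketch with invented numbers: the selection process
# optimizes the license, but framework and peoplepower dominate the total.
def total_cost_of_ownership(license_cost, enablement_cost,
                            people_cost_per_year, years):
    """Return (total cost, share of total that the license represents)."""
    running_cost = enablement_cost + people_cost_per_year * years
    total = license_cost + running_cost
    return total, license_cost / total

total, license_share = total_cost_of_ownership(
    license_cost=200_000,          # what the selection process haggles over
    enablement_cost=300_000,       # governance, training, change management
    people_cost_per_year=250_000,  # internal/external builders scaling use cases
    years=3,
)
print(f"Total: {total:,} | license share: {license_share:.0%}")
# The license ends up well under 20% of the three-year total.
```

With these (hypothetical) inputs the license is 16% of a 1.25M total - exactly the "less than 20%" territory the argument describes.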

Features don't pay the bill, smart application does.

In a high-tech world only one thing is certain: new features commoditize at warp speed. What was unimaginable yesterday is a high-cost feature today and will be pay-per-use or even freeware tomorrow. So why aren't we moving at warp speed? Because it is not the art of the possible but the speed of adoption that defines the pace. There is no use in having the latest features at hand if there is no plan to put them to good use. Commoditization of software features is a given, thanks to fierce software market competition. What you really have to care about is the commoditization of deployment. This is an energy-intensive process, as it requires constant enablement and change management. But it's worthwhile, because in a two-speed world where the art of the possible is always the "north star" ahead, what you can manage is your speed of pursuit. Don't pay a premium for fancy features if you did not invest in the "transmission to reality" engine beforehand.

Within most domains it's really not about the software!

A simple truth is: most software vendors want to sell wholesale, not custom. As turnover grows with usage, one has to make it simple to deploy at scale. A key lever here is increasing the potential pool of skilled builders (aka developers). "Low code" software made that an art by breaking building complexity down to a level where even skilled business users can now "build" their own use cases (no disrespect here, but real coding is for coders). That being said, let's change perspective: isn't "low code" just marketing slang for "little differentiation"? What it really means is: anybody can use it at low (unit/marginal) cost and high scale, yet it will not bring any distinctive capability to your company that you could capitalize on, as there is virtually no entry barrier. The only thing it can do for your margin is tackle the cost side with efficiency - given you deploy it right (yes, here competition can be smarter). In the game of marginal cost, scale of deployment is queen and king. Which again highlights the importance of a delivery framework and the peoplepower around the software. So unless you are tackling a high-value domain with differentiating software, please consider "cost per capability" carefully. Why pay a high premium for fancy features if there is no clear incremental business case attached? It's amazing how software platforms integrate commodity features just to pose as the Swiss Army knife of their segment (which I call the "breadth over depth" strategy). The marketing story is always nice ("we are the panacea for all your problems"), but whoever has tried the Swiss Army knife screwdriver for real craftsmanship knows it's just no match for a hand-picked toolbox. From a software capability supply perspective that means: if you bought hammers, train your crew on hammering and put their skills to good use first (which does not imply seeing every use case as a nail).
Only then add grippers, upskill your team and ensure both tools are used in sync to capture synergies (hammer cases + gripper cases + cases requiring both). Don't invest in tools if you have no crew to train, unless you buy for an exhibition (and get paid for views, not implementation). And please: always acknowledge that the complexity lies in smart application, not the tools.

Transformation requires more than software monkeys.

All roads lead to Rome, but there is a huge difference between paving a new path and filling potholes on existing ones. I fear bad marketing is trying to blur the lines here. Let's take two well-hyped technology capabilities (not technologies) as examples: Robotic Task Automation (wrongly labelled Robotic Process Automation - RPA) and Process Mining (the x-ray or digital twin of all digital processes). The steep career of RPA platforms started with the low-code promise "business users can automate their manual tasks without touching underlying processes or applications" and - without even keeping that initial promise (because it's not that simple in real life) - is now hyped into the universe of "hyperautomation", where it proclaims "one (smart) co-worker for every employee". Whoever got high on their own supply here should have considered the flipside of the argument, as there is a huge difference between changing the status quo and putting efficiency lipstick on a pig. More precisely:

  • if anybody can do it with 3 days of training, there is no competitive advantage (as explained) and there are more ways to win the race for efficiency
  • if neither processes nor applications ("backend systems") are changed, it's fixing potholes on a bumpy road, not real transformation
  • if you fill the potholes with different code (on a different platform), it may look efficient at first sight but implies significant trade-offs in the mid term (additional technical debt)
  • the more fixing you do, the higher the lock-in effect (more code to manage/migrate) that prevents you from real transformation (throwing good money after bad was never intelligent)
  • the lock-in is costly, as every change in underlying processes or applications requires making up the pig again (inefficiency multiplier: changes × systems)
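The multiplier in the last bullet compounds quickly. A minimal sketch with hypothetical numbers (the function and its inputs are invented for illustration, not taken from any RPA platform):

```python
# Hypothetical sketch of the "changes × systems" inefficiency multiplier:
# every change in an underlying system forces rework on every bot that
# touches it, so maintenance events scale multiplicatively, not linearly.
def bot_rework_events(changes_per_system_per_year,
                      systems_touched_per_bot,
                      number_of_bots):
    # Each bot must be re-fixed once per change in each system it touches.
    return changes_per_system_per_year * systems_touched_per_bot * number_of_bots

# Even a modest fleet generates a lot of "making up the pig again":
events = bot_rework_events(changes_per_system_per_year=4,
                           systems_touched_per_bot=3,
                           number_of_bots=50)
print(events)  # 600 rework events per year
```

Four release cycles a year, three systems per bot, fifty bots: 600 maintenance touchpoints annually, before a single new use case is built.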

Yes, there might be good reasons for fixing potholes with RPA, because it avoids traffic jams at critical process junctions and thus has a crystal-clear business case (better short-term fixing than getting stuck). The pure existence of RPA proves that the toolbox was missing a tactical tool before (no need for surgery for every flesh wound). Still, real transformation towards "superfluid" traffic flows (processes) requires some serious traffic and road planning based on real-life data. A transformative technology does not statically prescribe an answer, nor does it narrow the solution space down to binary decisions. Even with the latest "AI" (buzzword for machine learning) infusion, your virtual workers (software bots) will not have any context awareness to systematically change the way "things get done". Like a trained monkey, they will just perform the narrow tasks they have been trained on and don't care about the bigger picture (e.g. what happens left or right of them in the process). A virtual monkey army can sustain high-volume transactions like clockwork but will be your worst enemy when you try to challenge the status quo, re-think traffic flows and plan new roads. In contrast, Process Mining takes a transformative (and thus more sustainable) approach: it creates data-driven transparency and insights, makes real-time process flow information actionable and enables human interaction where it is required the most - data- and AI-enabled traffic management and decision making focused not on potholes but on the end-to-end flow, including all detours, dead ends, payloads and outcomes. So why bother every employee with a trained software monkey when your focus should be on sustainable traffic planning and management? If everybody needs a bot, you've got serious infrastructure issues, dude! Before founding a zoo, check if you are up for monkey business.

Why is nobody calling the virtual monkey rights group?

Over the last years we have witnessed simple bot-shoring (virtual labor arbitrage) evolve into a multi-billion-dollar software industry. The hype around manual task virtualization has taken many software vendors by surprise, catapulted new players to an astonishing market capitalization and lately triggered an M&A spree. The initial shock alone unleashed an expectation tsunami that brought along many new "experts" surfing the low-code automation waves hitting traditional IT wisdom. When the water went down to reveal what was left after the science fiction got washed away (and the fundamentals of IT wisdom still stood), the marketing machinery and the "evangelists" (I love that term because it refers to firm belief rather than critical thinking) tried to cover up with a smoke screen of hyperbolic buzzwords. Red-top reporters disguised as "analysts", labelling skewed opinion polls as "research" (lacking scientific standards), contributed too, as good money can be earned with predictions in line with hype extrapolation (just choose your sign, for it or against it, but never challenge the fundamentals). The unfortunate truth is: buzz around platform capability (technology) is not helping the monkeys but fostering monkey business. If we called an ape an ape, it would be much easier to focus on the massive effort required to systematically engage a monkey army so that it adds value to business operations. Instead we make them dance on a stage of thin-air aspirations, dress them up as "knowledge workers" and sell virtual zoos as virtual factories. Time to call the virtual monkey activists or the virtual workers' council, as this treatment lacks respect for the supreme skills of monkeys in their own domain!

Invest money where specialization is key

Don't get me wrong: I do believe in low code as an efficient alternative and a valid addition to the software toolbox. I know from practice that a virtual software bot army can support efficient operations if deployed and managed well. I have been a fan since the beginning and have supported highly successful scaling initiatives. And well ... yes, in my youth I was rather on the side of belief than of wisdom (#throwthefirststone #itsalifetimejourney). But if there is one thing I have learned so far, it's that the "t" in tool does not per se guarantee success in transformation, and the "s" in software should provoke some standard questions:

  • Is it value-adding (differentiating) or an efficiency driver (non-differentiating)?
  • Is it strategic/transformational (challenging the status quo) or tactical/preservative (cementing the status quo, but with some benefit)?
  • Where does smart application call for specialization, and where for commoditization?

The last question is tricky, as "blunt instruments" (as a friend would call them) like RPA are counterintuitive: complexity is shifted away from DevOps (commoditization of the factory/supply side) to the business (specialization of the requirements definition/demand side). This is caused by "lot size of one" deployments: every monkey is different (as different as every pothole that needs fixing) but built and deployed in (hopefully) the same standard way. To avoid getting lost in the woods, look at the market: if you can buy "design to build to run" ability "as a service", you are most likely looking at the commodity piece of the "idea2solution" lifecycle. Replicate these abilities in-house if you feel you have to, but don't underestimate the service model and peoplepower required to sustainably deliver at scale. And why invest where you can spot-buy a commodity service for delivery, when the solution impact is defined in the "idea" (discovery) stage? There is no free lunch, and everything in business has a cost. Considering the full "idea2solution" lifecycle:

  • Cost of use case discovery (ideate, quantify, prioritize)
  • Cost of use case delivery (qualify requirements, design, build, test)
  • Cost of service (support, maintain, phase-out)

As you can see: low code only takes complexity out of "build" ... not the rest. Dumb application still comes at full cost.
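The three cost buckets above can be turned into a toy per-use-case model. All unit costs below are invented for illustration; the only structural assumption taken from the argument is that low code discounts the "build" stage and nothing else:

```python
# Hedged sketch with made-up unit costs across the idea2solution lifecycle.
# Low code compresses only the "build" slice; discovery, requirements,
# testing and service still come at full price.
lifecycle_cost = {
    "discovery": 10_000,    # ideate, quantify, prioritize
    "requirements": 8_000,  # qualify requirements, design
    "build": 12_000,        # the only stage low code discounts
    "test": 5_000,
    "service": 15_000,      # support, maintain, phase-out
}

def cost_per_use_case(costs, build_discount=0.0):
    c = dict(costs)
    c["build"] *= (1 - build_discount)  # low code cuts only this stage
    return sum(c.values())

full = cost_per_use_case(lifecycle_cost)                          # classic build
low_code = cost_per_use_case(lifecycle_cost, build_discount=0.7)  # 70% cheaper build
print(f"{full:,} vs {low_code:,.0f} -> {low_code / full:.0%} of full cost")
```

Even with a generous 70% discount on build, the use case still costs over 80% of the classic price in this sketch - which is the whole point: dumb application pays nearly full fare.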

So before throwing money at fancy software capabilities or investing in commodity abilities: carefully consider mid-term benefits vs. mid-term (full) cost per unit. If you pay more than bananas to software monkeys, you might one day lack the funding for real builders and their toolbox.

Disclaimer: this article is my personal professional opinion. It was written with a smile and should be read with one.
