Why should you verticalize your data scientists?
Cover image source: The Verge


Building a sustainable data science team, in a non-GAFA environment, is not just a matter of increasing wages. Nor of a cool workspace with smoothies on tap. It’s a bit more complicated than that, and you don’t fight the data science blues with furry headphones.

Sustainable data science in industry is not science. It’s a trade-off, a dynamic trade-off between business requirements, model creation and code delivery. It probably has to do with providing a development path to your data scientists, with milestones, pride, and a feeling of belonging to something greater than themselves, greater than data science itself. If you want your data scientists to champion your digital agenda, you must view them as genuine Pokémon: you must provide the right amount of candies and battles, for the purpose of evolution.

Contrary to many debates today – driven by a shortage of talent – that insist on encouraging “generalist” data scientists, or on segmenting data scientists into technical clusters, we would like to propose another view of a career path for data scientists: semi-verticalization.

A semi-vertical is a “domain”: customer analytics, lead and campaign management, asset maintenance and utilization, workforce management, pricing and discounting, capacity planning and scheduling, and so on. A domain has to be broader than a process, broader than a set of data, in order to cross silos and create added value. But it is smaller than a function like sales, operations or manufacturing. If you have an IT department decomposed into IT-domains, they might match, or you can gather a few IT-domains into one DS-domain. A semi-vertical is not a pure vertical, though, in the software sense, like hospitality revenue management. So, why promote verticalization instead of generalist skills?

Within semi-verticals:

Your data scientists will get used to the data they manipulate

They will react faster to new business needs, and identify at a glance, for instance, that the depth of the data the company owns doesn’t allow a good prediction. They will also be able to lobby IT-domains to improve the quality of the data they collect (passively or actively).

Most of all, they will know the biases and the routines to fix them, or at least the sets of customers or equipment eligible for data science. I have seen cases where, out of 1 terabyte of wind turbine data, less than 1 gigabyte was meaningful for predictive maintenance, without any loss of signal. Better to know that beforehand.

Astute algorithms – for customer deduplication, or for cleaning fake bookings – will become invisible assets of the company. Feature engineering is knowledge. Even though they don’t sound very sexy, these algorithms boost the quality of the data sets and minimize false expectations.

Your data scientists will get exposed to the economics behind the data

Take dispute data, for instance. Disputes are probably not the core product of your company, but due to poor processes, or clever fraud behavior, they might sharpen the insight you get into your customers. Usually, you seek to limit them. Imagine that disputes cost you 30 full-time employees and a poor customer satisfaction score, and that in 20% of the cases you end up agreeing with your customers and accepting their claims. Setting the data science dogs loose on dispute prediction might be a complete waste of time if they are not aware of the economic impact of a false positive. Asymmetry of risk does not lead to the same algorithm in the end.
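That last point can be made concrete with a toy sketch. All numbers below are invented for illustration (they are not from the article): it simply shows how asymmetric error costs move the optimal decision threshold of a hypothetical dispute-prediction model.

```python
# Sketch: choosing a decision threshold under asymmetric costs.
# Illustrative data only: (predicted probability, was it a real dispute?)

def expected_cost(threshold, cases, cost_fp, cost_fn):
    """Total cost of flagging every case whose predicted
    probability of being a real dispute exceeds `threshold`."""
    total = 0.0
    for p, is_real in cases:
        flagged = p >= threshold
        if flagged and not is_real:
            total += cost_fp   # false alarm: wasted handling effort
        elif not flagged and is_real:
            total += cost_fn   # missed dispute: claim paid, unhappy customer
    return total

cases = [(0.9, True), (0.7, False), (0.6, False),
         (0.5, True), (0.3, False), (0.2, True)]
thresholds = (0.1, 0.4, 0.8)

# With symmetric costs, a high threshold wins...
sym = min((expected_cost(t, cases, 1, 1), t) for t in thresholds)
# ...but if a missed dispute costs 5x a false alarm, the optimum drops.
asym = min((expected_cost(t, cases, 1, 5), t) for t in thresholds)
print(sym, asym)  # -> (2.0, 0.8) (3.0, 0.1)
```

Same model, same predictions: only the cost structure changed, and with it the algorithmic decision. That is what the data scientist cannot know without the economics of the domain.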

They will progressively get accustomed to the processes that produce the data and the ones that will consume the predictions

It’s not just about knowing the vocabulary of the business, it’s knowing its grammar. The only way to challenge upstream and downstream processes is to know them.

The data scientist is not a kind of Smaug, sleeping on a heap of data with some sort of jewel that he polishes day and night, deaf to dwarfs’ complaints.

It’s exactly the opposite: his mandate consists of producing a virtuous data stream, from the traces of past decisions, within their context, to future decisions, within their context.

Existing data streams are in general based on business rules. Say, “revise a lift every 41 days”. Such a business rule is partially rigid, or imposed by law. The law might say 42, but the company is used to 41 for safety’s sake. Gaining 2.5% of maintenance costs is no small feat when you maintain a fleet of 100,000 lifts. Unbiasing business rules, and challenging them one by one, is a very good warm-up for a data scientist before changing the game.
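A back-of-the-envelope check of that figure, assuming maintenance cost scales with the number of revisions:

```python
# Moving from a 41-day to the legally allowed 42-day revision cycle.
fleet = 100_000          # lifts under maintenance
days_per_year = 365

revisions_41 = fleet * days_per_year / 41   # current business rule
revisions_42 = fleet * days_per_year / 42   # legal maximum

saved = revisions_41 - revisions_42
saving_rate = saved / revisions_41          # = 1 - 41/42
print(f"{saved:.0f} fewer revisions per year ({saving_rate:.1%})")
# -> 21196 fewer revisions per year (2.4%)
```

About 21,000 revisions a year saved by moving one rule by one day, consistent with the rough 2.5% above.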

Are you sure that every price on your website must end with a 9?

They can better engage with business

Yes, we know that data scientists are a strange species. Sometimes the dialog with business is not easy, particularly when fuzzy logic or self-fulfilling prophecies have to meet formal constraints, predictions or recommendations. There might be fear, too, when the business rules have been tuned by subject matter experts for decades, and they are the ones you have to interview. The average human brain loves average reasoning (since the mammoth age) but is poorly equipped for reasoning with uncertainty, particularly when risk/benefit profiles are asymmetrical. A young data scientist, discovering the vertical, might feel uneasy challenging the status quo: eager to learn, craving to get knighted. But the kings of business rules sometimes have misleading intuitions, tweaked by a lack of data transparency. I have a few hundred names in mind.

I remember a meeting with the CEO and CMO of a large hotel group in France. We wanted to study the impact of increasing mid-week rates, and studied the dynamics of bookings, p days before arrival, for each family of stay (Monday to Thursday, etc.) across a few hundred hotels. Cross views mixed with basic elasticity predictions are not easy for marketing to digest when you have a few million bookings. But this is not rocket science at all. The CEO just laughed and said out loud, “Do we need mathematicians to know our customers?”. The mission was made easier after that, and both parties learned.
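For readers unfamiliar with the term, a “basic elasticity prediction” really fits in a few lines: the price elasticity of demand is the slope of log(bookings) against log(rate). The data below is invented for illustration, not the hotel group’s:

```python
import math

# Toy mid-week data: room rate (EUR) vs bookings observed at that rate.
rates    = [80, 90, 100, 110, 120]
bookings = [520, 480, 455, 430, 410]

# Ordinary least-squares slope in log-log space = constant elasticity.
xs = [math.log(r) for r in rates]
ys = [math.log(b) for b in bookings]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)

# slope = % change in bookings per % change in rate
print(f"elasticity = {slope:.2f}")
```

On this toy data the elasticity lands around -0.6: demand is inelastic, so a mid-week rate increase raises revenue, which is exactly the bet being discussed in such a meeting.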

After two years of full immersion into a domain, the data scientist is not a burden during interviews with business stakeholders. He might even deliver instant value, providing what-if insight and challenging the status quo, without any computation. This is more efficient for the products to be delivered, and far more comfortable for the DS. He has gained legitimacy.

Imagination has poor predictive power

When you don’t engage with business in a continuous mode, you are tempted to over-complexify the problems at stake. Your thirsty imagination generates some kind of mirage in the desert, listening to the GAFAs’ latest developments with 3 billion likes on cats and mojitos. Business daily life can be so disappointing for daydreamers.

Feeling connected with your semi-vertical’s day-to-day operations avoids many Deep Learning fantasies [sighs].

You can amortize efforts on a longer horizon

The difference between an external DS and an in-house DS is time, and of course reward. As a DS within a domain, you know that you will face the same questions for years, and as a DS you try to optimize your moves. You can even bet on the horizon and be more tactical with your customer: "I’ll use this year to convince them that increasing prices in March is not that risky, and good for revenue, in the campsite industry, and I’ll play with last-minute promotions next year, once they believe in the forecast."

Most of all, when it’s your domain, you tend to create assets, to pave the territory, to be ready for the next time, because you hate firefighting. This asset creation gives a sense of ownership that is extremely valuable for the whole data-to-decision loop. It nurtures loyalty, and even a healthy disobedience to process. It gives perspective. It’s harder on the business side to develop this asset-building mindset without someone to embody it.

We maintained the same workforce management software for 7 years, as a DS team for Bouygues and Genesys, and 15 years later I am still proud to see the progress made, and the 640,000 agents planned. Even if no line of code is mine now [which is fortunate].

Orphan verticals, so what?

What should you do if your company lacks the critical mass for, say, three data scientists per domain? Critical mass can be either data mass (too little volume to produce predictive models) or business mass (too little workforce involved on the business side of the domain for a good ROI on algorithms, for instance). Lacking critical mass, it’s better to outsource the DS skills for the little domains and focus on the ones you can capitalize on. For instance, if you are not very good at debt collection, with 1,000 debts a year, it’s very unlikely that you can build sharp DS models whose costs are covered by the benefits you can get out of them. On the contrary, a software editor, expert in debt collection, will have developed a rich predictive or prescriptive model that you can embed into your daily operations after a decent tuning phase. No need for data scientists in orphan domains; no pills worth developing. A generalist DS might help in the procurement process, making sure that the vendor implements state-of-the-art practice and doesn’t oversell a genius sledgehammer for elementary flies.

A pack of three DS per vertical allows fierce brain-teasing and a sound division of roles, and absorbs shocks in the long run.

Why not verticals instead of semi-verticals?

The data scientist is not only a process optimizer. He is a transformation agent, augmenting or replacing human decisions. He is even a digital champion, sometimes, when process disruption is necessary to leverage a new recommendation flow. Hence a pure vertical, say “workforce scheduling for bus drivers”, is not a panacea.

·        Lack of aggressiveness: spending too many years in contact with one type of business produces some strange kind of Stockholm syndrome. You, as a data scientist, overfit the process. For instance, your awareness of the constraints of existing data sources, or of unions’ reactions, limits your ability to challenge the status quo.

·        Lack of tech stimulation: we saw it when machine learning arrived in a world of time-series experts for forecasting software. Of course, time series are great when you have 4 years of history and every event can be qualified, but the stimulation provoked by a new technology, deployed in other contexts and exploiting other signals like web logs, can be healthy. Take the insurance industry, which a few years back was refusing machine learning: “I don’t want to use algorithms that are not explainable.” As if every phone call deserved the same level of expertise…

·        Boring career path: business and data science do not evolve at the same pace. Complexity is not a dense function in the business world. Decision trees are more than enough for many processes before you have enough data, or enough process stability, for extreme gradient boosting. You might need some fresh air, to develop new skills, to find the audacity of the beginner. Besides, no one is a prophet in their own vertical.

 

The buzz about AI, machine learning and data science in general is encouraged by many actors of the industry: GAFAs, software vendors, the Big Four and their allies, some self-proclaimed prophets, and CEOs who pack digital transformation together with analytics, to name a few. I am convinced that this buzz will give birth to fantastic software deliveries, disrupt many processes and create new families of products for the end customer. It is our job, as data scientists, to make it happen, to kill fat business rules, and beyond.

But in order to deliver on those promises, incumbent industries have to find a way to make it work: the operating model for data science doesn’t exist in management handbooks. Data science is not IT. It is not BI. It is not classic R&D either. And it’s risky before it pays off. It’s a new animal in the computer science zoo, a strange species, replacing decisions one by one with a mix of empathy and aggressiveness. A symbiote digging into the brain of its host, for its host’s own good, for the sake of better decision making. And the symbiote will grow if and only if data scientists can deploy their skills with the right dose of realism and ambition.

Olga Bershteyn

Data Scientist at Banedanmark

5y

Thanks for the article. But doesn't it come gradually to any data scientist (more widely, any specialist) that (s)he becomes more acquainted and involved with specific techniques, business insights and so on, if only (s)he works in the same company/same area for many years? Should heads of R&D departments do anything to support that kind of development in their team members, or can they just wait and harvest in time?

Frédéric Riera

SENIOR CONSULTANT / Team leader / Project Manager / Expert in Market Infrastructure, Blockchain/DLT and CBDC

5y

Very interesting, and I fully agree. But is it really specific to DS? I think that this kind of organisation could be beneficial for every high-potential person.

Thanks Benoit Rottembourg. I share your view about the relevance of semi-vertical focus.

Mohamed-Ali Aloulou, PhD

DELMIA Product Strategy | Supply Chain | Manufacturing | Operations Research | Data Science

5y

Inspiring as usual! Thanks Benoit :)
