Ensuring success of Data Science Projects

A Chief Data Scientist recently posted on LinkedIn about how a large number of Data Science projects fail. That made me reflect on what I did differently to help Data Science projects succeed. I headed the Data Science team at Info Edge while serving as Chief Product Officer at Naukri.com.

As you all know, Data Science is an experimental technique and can fail for many reasons. It can also create a poor experience for some customers even as it aims to improve the average experience. Also, very few business leaders understand what it is about, and fear of the unknown results in several projects never seeing the light of day. One of the reasons I believe I succeeded was my ability to seamlessly switch between my product management and Data Science hats.

1.   Getting a zero-loss path to launch – It is important to find a use case that is limited in its potential negative impact, or almost a zero-loss situation. Organizations are ok with experiments which do not damage their existing business. Here are a few real examples of what I mean:

a.   Naukri's Resume Database recommends similar CVs against the CV a recruiter is viewing; almost 10-15% of CVs viewed came from this route. The Data Science team's objective was to build a CV recommendation engine to beat this logic, which was based purely on collaborative filtering. A content-based CV recommender therefore had to compete with a robust algorithm that had scaled well in the past. However, a collaborative filtering approach suffers from a cold start problem, and several recently added CVs had no similar CVs against them. Hence there was a zero-loss opportunity to recommend CVs before the collaborative filtering kicked in (a minimal sketch of this fallback appears after these examples). Of course, a content-based approach has high computational complexity, and several compromises were made in the first launch. Since it was a zero-loss situation, a smooth go-live happened.

b.   CV parsing techniques have improved – however, they are far from perfect. A large number of CVs follow no pattern and are haphazardly organized. When we wanted to take the parser live, we identified a flow for job applications where the volume was relatively low. We also created an operational back-end to correct the errors thrown by the parser: the Operations team would manually fix the errors, and the system would use those corrections as feedback to improve the parser's quality.

c.   Shiksha Assistant – an assistant to help answer students' queries. Would it distract students from their visit intent and distort the student experience? Often, you learn only by trying, and a Beta go-live is better than no go-live. Since it was an add-on, and there was strong positive feedback about other chatbots, it was possible to go live early and improve the algorithm with ongoing customer feedback.
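To make the zero-loss idea in (a) concrete, here is a minimal, hypothetical sketch of the fallback logic: serve content-based recommendations only for CVs that the collaborative-filtering engine has no neighbours for yet. The data structures and function names are illustrative, not the production system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical stores: cf_neighbours holds precomputed collaborative-filtering
# results; cv_texts maps CV id -> raw CV text for the content-based fallback.
cf_neighbours = {"cv_101": ["cv_205", "cv_318"]}      # established CVs
cv_texts = {
    "cv_101": "java developer spring microservices",
    "cv_205": "senior java engineer spring boot",
    "cv_318": "backend developer java kafka",
    "cv_999": "java spring developer, recently added",  # cold-start CV
}

def recommend_similar_cvs(cv_id, top_k=2):
    """Return similar CVs: collaborative filtering when available,
    content-based similarity as the zero-loss fallback for new CVs."""
    if cf_neighbours.get(cv_id):
        return cf_neighbours[cv_id][:top_k]

    # Content-based fallback: TF-IDF over CV text plus cosine similarity.
    ids = list(cv_texts)
    matrix = TfidfVectorizer().fit_transform(cv_texts[i] for i in ids)
    sims = cosine_similarity(matrix[ids.index(cv_id)], matrix).ravel()
    ranked = sorted(zip(ids, sims), key=lambda x: x[1], reverse=True)
    return [i for i, _ in ranked if i != cv_id][:top_k]

print(recommend_similar_cvs("cv_999"))  # served only by the fallback
```

Because the fallback only fires where the incumbent algorithm has nothing to show, the worst case is no recommendation at all, which is exactly what the user would have seen anyway.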

2.   Own the code end to end – If the Data Science team is separate from the software engineering team, projects can suffer from “Not Invented Here” syndrome. Technology teams aspire to be data scientists. They also care about real-time performance and code stability, and they have legitimate concerns about the security and robustness of the code.

However, we learned that transferring the code and the knowledge to a different team to take data science projects live can be disastrous. In one case, the other team tried to optimize the code, and thereafter no one could figure out what went wrong. Therefore, the data science team has to write production-ready code. That means creating APIs, ensuring the performance and uptime of those APIs, fixing issues in real time, and following production discipline on code back-ups, deployment procedures, test server guidelines, and archiving and updating databases. In my view, if you are a small team of data scientists, it is better to do fewer projects but own them end-to-end.
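As an illustration of what owning the code end to end meant in practice, here is a minimal sketch of a model served behind an API that the data science team itself operates. Flask, the endpoint name and the dummy recommender are my assumptions for the example; the real stack, error handling and monitoring were more involved.

```python
import logging
from flask import Flask, jsonify, request

app = Flask(__name__)
logging.basicConfig(level=logging.INFO)

# Assumed model object with a .recommend(cv_id, top_k) method; in production
# this would be loaded from a versioned artifact at startup.
class DummyRecommender:
    def recommend(self, cv_id, top_k=5):
        return [f"cv_{i}" for i in range(top_k)]

model = DummyRecommender()

@app.route("/similar-cvs/<cv_id>")
def similar_cvs(cv_id):
    try:
        top_k = int(request.args.get("top_k", 5))
        return jsonify({"cv_id": cv_id, "similar": model.recommend(cv_id, top_k)})
    except Exception:
        # Fail safe: log and return an empty list so the caller's page still renders.
        logging.exception("recommendation failed for %s", cv_id)
        return jsonify({"cv_id": cv_id, "similar": []}), 500

if __name__ == "__main__":
    app.run(port=8000)
```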

3.   Do you have sufficient good quality data to succeed? – You cannot train an autoencoder with 100,000 data points when the number of possible combinations of data values exceeds a few million. It is therefore important to know how much data you need. Is there sufficient depth in the dataset? Are you capturing the most important parameters to build a model with sufficient predictive power? Unobserved variables can influence outcomes. Understanding the product and how actual users interact with the system is important to know which data to collect and from where. It is sometimes better to wait a few months and collect more data. Of course, data collection may require changes in the product to enable more relevant data capture, and over an extended period of time.
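A crude back-of-the-envelope check along these lines, using made-up feature cardinalities, can flag the mismatch early: compare the number of distinct value combinations the model is implicitly asked to cover against the rows you actually have.

```python
import math

# Hypothetical categorical features and their cardinalities (illustrative only).
feature_cardinalities = {"designation": 5000, "city": 300, "experience_band": 10}

rows_available = 100_000

# Upper bound on the distinct combinations the model is asked to cover.
combinations = math.prod(feature_cardinalities.values())
rows_per_combination = rows_available / combinations

print(f"possible combinations: {combinations:,}")
print(f"rows per combination:  {rows_per_combination:.4f}")

if rows_per_combination < 1:
    print("Far too sparse: collect more data, drop features, or bucket values.")
```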

4.   Getting the success metric right – Relative and usage metrics based on A/B testing can demonstrate the effectiveness of an algorithm. However, even these metrics have serious deficiencies as measures of user experience. When sending job alert mailers, we could track open rates, click rates, applies and applicants. However, these metrics are susceptible to mailer system design and failures.

It is always better to get absolute feedback, even if it is a simple “Yes or No”. I still recall a simple “Did you find these jobs relevant? – Yes or No” which was inserted in Job Alerts to measure feedback. The volume of feedback allowed us to dive deep into which segments were happy and which ones were not. Over time, that became a gold-standard metric.

Even when we went live with the Shiksha Assistant, we measured the relative improvement in engagement metrics when the Assistant was used. There is now absolute Yes/No feedback from the students, and the team analyzes the use cases where the feedback is negative; with special focus on those areas, it is possible to improve and demonstrate success.
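A simple way to slice that kind of Yes/No feedback is by segment, with a confidence interval so that small segments are not over-interpreted. The segment names and counts below are invented for illustration.

```python
import math

# Invented feedback counts per user segment: (yes votes, total votes).
feedback = {
    "freshers":      (4200, 6000),
    "mid_level_IT":  (5100, 9000),
    "senior_non_IT": (300, 1200),
}

def wilson_interval(yes, n, z=1.96):
    """95% Wilson score interval for a proportion."""
    p = yes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - margin, centre + margin

for segment, (yes, n) in feedback.items():
    lo, hi = wilson_interval(yes, n)
    print(f"{segment:15s} positive rate {yes/n:.2%}  (95% CI {lo:.2%} – {hi:.2%})")
```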

5.   You can win even when you lose – When two algorithms compete in an A/B test, one of them wins over the other. However, that is a win based on a metric averaged over the entire set of customers. It does not mean that Algorithm A is better than Algorithm B for every single customer segment. If you strongly believe in the intuitive appeal of the concept you have implemented, go to the next step: analyze the customer segments where Algorithm B is better, and then go live for those customer segments alone. A combination of two algorithms can be better than either algorithm working in isolation. A note of caution – this can cause a proliferation of algorithms and increase maintenance costs. However, for mission-critical applications, it is a worthy choice to make.
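Here is a minimal sketch of that idea: after the A/B test, compare the algorithms per segment and route each segment to whichever variant did better for it. The segment names and conversion rates are illustrative, not real results.

```python
# Illustrative A/B results: conversion rate per customer segment.
results = {
    # segment:           (algo_A_rate, algo_B_rate)
    "active_jobseekers":  (0.120, 0.105),
    "passive_jobseekers": (0.040, 0.055),   # B wins here despite losing overall
    "fresh_graduates":    (0.080, 0.078),
}

# Build a routing table: each segment is served by its winning algorithm.
routing = {seg: ("A" if a >= b else "B") for seg, (a, b) in results.items()}

def serve(segment):
    algo = routing.get(segment, "A")   # default to the overall winner
    return f"segment={segment} -> algorithm {algo}"

for seg in results:
    print(serve(seg))
```

In practice the per-segment differences should themselves be statistically significant before you split traffic this way, otherwise the routing table just chases noise.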

6.   Never give up – If your conceptual thinking is right, no project really fails; it either succeeds or is work in progress. Some projects just take multiple attempts to succeed. One such project was a “community behavior-based profile recommender system” for one of the Info Edge businesses. It took three attempts to beat a domain-rules-based recommendation logic which had been perfected over a period of time, and success did not come until some of the domain rules were embedded in the final algorithm.
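The eventual fix was of this flavour: the learned score is combined with, rather than replacing, the domain rules. The rules, weights and fields below are invented for illustration; the real ones came from the business.

```python
def rule_score(candidate, seeker):
    """Hypothetical domain rules, e.g. constraints the business insisted on."""
    score = 0.0
    if candidate["city"] == seeker["preferred_city"]:
        score += 0.5
    if abs(candidate["age"] - seeker["age"]) <= 5:
        score += 0.5
    return score

def model_score(candidate, seeker):
    """Stand-in for the learned similarity model's output in [0, 1]."""
    return 0.7  # placeholder value

def blended_score(candidate, seeker, w_model=0.6, w_rules=0.4):
    # Weighted blend; in practice the weights would be tuned against the A/B metric.
    return w_model * model_score(candidate, seeker) + w_rules * rule_score(candidate, seeker)

seeker = {"preferred_city": "Delhi", "age": 29}
candidate = {"city": "Delhi", "age": 31}
print(blended_score(candidate, seeker))   # 0.6*0.7 + 0.4*1.0 = 0.82
```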

7.   Hire right, problem solvers are needed – Often the difference between the success and failure of a project is the person executing it. Several challenges exist in data collection, data cleaning, understanding existing algorithms, understanding the domain, getting the support of the stakeholders, executing the algorithm, ensuring the performance of the code, integrating and going live, and finally, presenting the results. I have seen many projects linger until the right person arrived on the scene to take them to closure. Hire good quality data scientists and never underestimate the ingenuity of the person behind the project.

Some problems are hard, and success often depends on persistence. Support in terms of resources – both hardware and team members – can make a difference. My learnings are not comprehensive by any means; I am sure everyone's experience will differ depending on their specific context. I hope you found some of the above useful.

My other blog posts about Data Science:

1.    InfoEdge Merit Awards - Congratulations Naukri Data Science Team

2.    AI in Recruitment: Scoring Applies in terms of Relevance to a Job

3.    AI in Recruitment: Word2Vec Opens up Interesting Possibilities

4.    Naukri.com featured as an important case study in KrantiNation

5.    AI in Recruitment - Do Job Descriptions Represent the Intent of the Recruiter?

6.    AI in Recruitment - Understanding Designations and Skills

7.    AI in Recruitment: Is Mumbai closer to Delhi than Agra?

8.    My Keynote Presentation at Data Science Conclave 2017 in Chennai

Anand Mishra

Building LLMs and Agents | IIT Kanpur

Absolutely hit on points :)

Dhruvjot Sehgal

Lead Product Manager at Jubilant Foods | Ex-Times, Paytm, Naukri | FMS Delhi(2015), IIT Roorkee(2012)

Excellent article, Vivek. Really insightful about the DS-Tech ownership struggles

Rohit Manghnani

CxO, CPO, CBO, E-Comm @Walmart India, InfoEdge, Unilever, CEO & Founder@ Uniplatform & LoanAlexa

Great article Vivek. Always a pleasure to learn new things from you.

Moiz Saifee

Data Science | Venture Capital | IIT Kharagpur | Kaggle Master

Good summary Vivek, brought back memories from the past :-)
