“Fail Fast, Fail Cheap” to succeed more in Analytics / Data Science

We know the power of “continuous learning” or “continuous improvement” in anything and everything we do. In Analytics and Data Science, I feel it is even more relevant and applicable. The key to success is the “fail fast, fail cheap” model: learning from mistakes quickly.

The only way a baby learns to walk is by falling. The only way we learn to hit a bull's eye is by landing the arrow closer and closer to the center, over and over again. We humans are programmed to learn through failure, even though we also condition ourselves to avoid risk and plan to mitigate it. At the same time, failure is a necessary ingredient for innovation.

  1. Data Science demands innovation and an attitude that welcomes it.
  2. Innovation demands risk-taking and the ability to experiment, which naturally carry a high likelihood of failure.
  3. Embracing these approaches, and thus failing quickly, creates the opportunity to accelerate success.

In most processes, there is no single “right” or “wrong” answer; instead, we want an appropriate outcome given the context of the situation. In the Analytics and Data Science world, we deal with data: we explore and understand it, clarify the objective, work out which questions and insights can answer the business problem, consider which experiments are worth exploring, and judge which would be more relevant, appropriate, and accurate, so the business can make decisions effectively, quickly, or both.

When we fail fast and fail cheap, each experiment is small in both time and money. Multiple paths and approaches get explored, which opens the way to success relatively quickly.

I feel this model is increasingly appropriate because we see it at work more and more in Analytics / Data Science projects. Some key thoughts are below:

  1. Proof of Concept / Proof of Technology, or what we may call Proof of Approach: accomplish something quickly, then “scale” it further into a minimum viable product or a larger product demonstration.
  2. Multiple experiments during model building: we try different algorithms to see which one works better and comes closer to the “actual” expected outcome (a first sketch follows this list).
  3. Assumptions: assumptions will always exist, but they can be tweaked and updated continuously as we go along. We should start from the results we need to achieve, revisit our assumptions accordingly, and work with the business to accomplish them.
  4. Step-wise / building-block approach in Data Science: we typically gather all information into one system and start analyzing it through experiments, because for any organization and its business we need to understand which portion of the data is “relevant”, then refine as we progress. This takes a lot of experiments.
  5. Historical data: understanding how much historical data is available, and developing your analytical capabilities on top of it, saves time and money.
  6. Quality of data: thinking we can fix quality issues later is not appropriate in a Data Science scenario. Once we understand which information is key to decision-making, and what standardization and pre-processing are needed to analyze the data meaningfully, we have to do it NOW, trying multiple approaches by failing fast; then we are already AHEAD of the competition (a second sketch follows this list).
  7. Stop chasing perfection from step one: we need to accept this as a fact of analytics and data science and start down the “continuous learning” pathway.
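
As a first illustration, here is a minimal sketch of point 2 (multiple experiments during model building), assuming Python with scikit-learn; the dataset and the three candidate models are illustrative choices, not a recommendation:

```python
# Minimal sketch: cheap, fast model experiments compared via cross-validation.
# Assumes scikit-learn; the dataset and candidate models are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# Each experiment is small in both time and money: a 5-fold score, not a full build.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")
```

The candidate that clearly loses can be abandoned immediately; only the promising approaches earn a larger, more expensive build.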

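For point 6 (quality of data), a second minimal sketch of doing the checks NOW rather than later, again assuming Python with pandas and scikit-learn; the column names and values are hypothetical:

```python
# Minimal sketch: surface data-quality issues before any modelling starts.
# Assumes pandas and scikit-learn; columns and values are hypothetical.
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({
    "revenue": [120.0, None, 95.5, 110.2, 130.1],
    "region": ["east", "west", "west", None, "east"],
})

# Cheap, early checks: how much is missing, and are any rows duplicated?
print(df.isna().mean())        # fraction of missing values per column
print(df.duplicated().sum())   # number of duplicated rows

# Standardize numeric features up front so every later experiment
# starts from the same, comparable representation.
df["revenue"] = df["revenue"].fillna(df["revenue"].median())
df[["revenue"]] = StandardScaler().fit_transform(df[["revenue"]])
print(df)
```
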
At a ground level, the “fail fast, fail cheap” model is also propelling the top five trends that are becoming more prominent these days. On a lighter note, they happen to follow the first five letters of the alphabet (A-B-C-D-E).

• A – Analytics, more importantly predictive analytics: more businesses are leveraging predictive analytics and forecasting approaches as their arms and ammunition!

• B – Big data platforms: a growing number of organizations are adopting big data platforms such as Hadoop and Spark.

• C – Conversational interfaces and chatbots: the movement towards conversational interfaces and chatbots is accelerating.

• D – Deep learning: deep learning technology is going mainstream. It has been drawing attention for the past few years, with noticeable results in applications such as machine translation, other forms of language processing, object classification and detection in images, facial recognition, automatic game playing, pattern recognition, and more. Deep learning is also being adopted because of greater computing power and the accessibility of open-source frameworks such as TensorFlow and Deeplearning4j, which let us experiment at lower cost, fail fast, and converge quickly on the most appropriate approach across various algorithms (a small sketch follows this list).

• E – Enhanced data security: the need for stronger data security keeps rising. An increasing number of cyber-attacks and similar incidents is drawing attention to the question of data security. Security analytics may demand many experiments before we get answers to some critical problems, saving time and money for organizations and society at large. One recent trend in security analytics is the increased use of machine learning algorithms, including deep learning for anomaly detection, across multiple industries.
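
To make the “experiment at lower cost” point in the D item concrete, here is a minimal sketch of a deliberately small deep-learning experiment in TensorFlow/Keras, one of the open-source frameworks mentioned above; the data is synthetic and the tiny architecture is an assumption for illustration:

```python
# Minimal sketch: a deliberately small, cheap deep-learning experiment.
# Assumes TensorFlow/Keras; the data is synthetic, for illustration only.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.random((500, 10)).astype("float32")
y = (X.sum(axis=1) > 5).astype("float32")  # toy binary target

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Fail fast: a handful of epochs with a validation split tells us quickly
# whether this architecture deserves a larger, more expensive run.
model.fit(X, y, epochs=5, validation_split=0.2, verbose=2)
```

If the validation accuracy is no better than a simple baseline after this cheap run, the experiment has failed fast and cheap, and we can move on to the next approach.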

Thanks for reading. Have a great weekend!


Manoj Palaniswamy

Director & Principal Architect - Data Platform Architecture

7y

While working for tech giants, there is a lot at stake with respect to reputation, and we can't afford to fail in the first place, let alone fail fast. Customers come to you because you are the pioneers and you know how to do it, so you can't even talk to them about failure. I guess failing fast is not real failure but a lot of quick iterations to find the one the customers like, instead of showing them something after a lot of work that they end up not liking.

Sachin Kumar

Solving Customer’s Problems with Generative AI , Data Science , and Data Engineering to Drive Revenue Growth

7y

Failing fast means you can then deploy your resources elsewhere. That could be a big advantage in today's highly competitive world, with behemoth organizations struggling to get their act together. Fail fast, learn fast, as you said, is a powerful way for any organization to become great over time.
