Citizen Data Scientist, Jumbo Shrimp, and Other Descriptions That Make No Sense

Okay, let me get this out there: I find the term “Citizen Data Scientist” confusing. Gartner defines a “citizen data scientist” as “a person who creates or generates models that leverage predictive or prescriptive analytics but whose primary job function is outside of the field of statistics and analytics.”

While we teach business users to “think like a data scientist” in their ability to identify those variables and metrics that might be better predictors of performance, I do not expect business stakeholders to be able to create and generate analytic models. I do not believe, nor do I expect, that business stakeholders are going to be proficient enough with tools like SAS, R, Python, Mahout, or MADlib to 1) create or generate the models, and then 2) interpret the t-tests, F-scores, p-values and residuals necessary to ascertain the analytic model’s goodness of fit.

No one would say “Citizen Lawyer” or “Citizen Nuclear Physicist” or “Citizen Physician.” I guess a “Citizen Physician” would be someone who “practices medicine but whose primary job function is outside of the field of medicine” (meaning that they’ve had no training in medicine or medical procedures). They call those people quacks (not quants…he-he-he).

WebMD doesn’t make someone a doctor any more than analytics makes someone a data scientist. Analysis of the analytic results and insights is an important step in the process, particularly when the results contradict each other. Data scientists provide the necessary experience with the different analytic techniques and algorithms required to decipher the results, validate them, and then turn them into actions or recommendations.

What’s wrong with the definition is that it doesn’t properly acknowledge the deep training required in analytic disciplines such as machine learning, cognitive computing, data mining, computer programming, and applied mathematics. It also dismisses the critical importance of gaining hands-on data science experience through years of apprenticeship and tutelage under the guidance of master data scientists.

In order to understand the importance of the role of the data scientist, I solicited the help of the best data scientist that I know: Wei Lin. Wei and I have done numerous big data projects together, and every time I engage with Wei, I learn tons. So naturally, I’d call upon a true master data scientist to help me write this blog.

Data Scientist Capabilities are a Good Starting Point…

The data scientist discussion starts with an understanding of the types of tasks at which a data scientist must become proficient. Below is a summary of these tasks. I think you can quickly see that an effective data scientist requires a wide and deep range of capabilities, including:

  • Data acquisition. The data scientist is going to pull data from a wide variety of sources in a wide variety of formats. Some of the data will be accessible as tables using SQL. However, much of the data will be in log files and will be extracted using tools such as R and Python to grab the raw log files. Some of the data will be pulled from websites, in which case one can either use the provided APIs (if there are APIs) or screen scrape the data. A wide variety of expertise across a wide variety of tools is required to acquire the data from whatever the source may be – structured (tables, CSV), semi-structured (log files) and unstructured (text files, documents, images, video files).
  • Data preparation. The data scientist needs to go through a process of cleaning up the data (especially if screen scraping was used), normalizing, aligning, and enriching the data (adding new variables such as frequency, recency, monetary, and indices). There is a common-sense component required during this process to ensure that one is aligning like levels of granularity and comparing like entities. Tools used here include SQL, R, Python and Java.
  • Data exploration/data visualization. The data scientist then starts to explore the data, looking for outliers and visual correlations. This is where an inquisitive mind is useful, as the data scientist digs deeper and deeper into the granular data and looks for opportunities to link in other data sources. Missing values may be discovered in the exploration phase, in which case the data scientist needs to decide how to handle them. Tools used here include Tableau, Spotfire and R (ggplot2).
  • Model development. This is where the data scientist starts to quantify cause and effect by actually building predictive models. Quantifying correlation coefficients, statistical errors and residuals using tools such as SAS, R, Python, MADlib, and Mahout is required to ascertain whether the model being built is actually more predictive.
  • Model validation. The data scientist then needs to determine the model “goodness of fit” using measures such as F-tests, t-tests and p-values. The tools that were used to build the model (SAS, R, Python, MADlib, Mahout) provide the goodness of fit metrics (a minimal sketch of these two steps follows this list).
  • Results visualization. Once the data scientist has a model they are confident is “good enough” for the problem they are trying to address (see my blog “Understanding Type I and Type II Errors”), they need to use many of the same data visualization tools (Tableau, Spotfire, ggplot2) to determine the optimal way to present the results so that the users can understand and act on the analytic results.
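
To make the model development and validation steps a bit more concrete, here is a minimal, hypothetical sketch in Python (pandas plus statsmodels). The data is synthetic and the variable names (recency, frequency, monetary) are illustrative assumptions only; the point is simply to show where the t-statistics, p-values, F-statistic, and residuals appear when a model is fit and validated.

```python
# Minimal sketch of model development and validation on synthetic,
# "already prepared" data. Illustrative only; not a prescription for
# any particular business problem.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Pretend these columns came out of the data preparation step,
# already cleaned, aligned, and enriched with RFM-style variables.
n = 500
df = pd.DataFrame({
    "recency":   rng.integers(1, 365, n),    # days since last purchase
    "frequency": rng.poisson(5, n),          # purchases in the period
    "monetary":  rng.gamma(2.0, 50.0, n),    # average spend
})
# Synthetic target: next-period spend driven by the enriched variables plus noise.
df["next_period_spend"] = (
    200 - 0.3 * df["recency"] + 12 * df["frequency"] + 0.8 * df["monetary"]
    + rng.normal(0, 40, n)
)

# Model development: fit an ordinary least squares model.
X = sm.add_constant(df[["recency", "frequency", "monetary"]])
model = sm.OLS(df["next_period_spend"], X).fit()

# Model validation: the summary reports t-statistics, p-values, the
# F-statistic, and R-squared -- the "goodness of fit" measures above.
print(model.summary())

# Residuals can then be inspected (e.g., plotted) for structure the model missed.
print("Residual std dev:", model.resid.std())
```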

But The Key Is The Experience

Understanding algorithms is different from deciphering the results and translating that knowledge into business actions or client treatments. Going back to our WebMD example, a person who reads WebMD will have challenges trying to match their symptoms to the wide variety of potential diseases and illnesses (except for the easy, more frequent illnesses), and to properly prescribe the “right” mix of medications, treatments and therapy.

A data scientist often frames a question in terms of its business value and data context, which makes the question more approachable. Those questions can operate at several different levels, so rather than asking everything at once, the question itself can be broken down into smaller business questions. There are also methods to further reduce complexity, such as dimension reduction, variable decomposition, or principal component analysis.
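
As a purely illustrative example of that dimension-reduction point, here is a minimal Python sketch using scikit-learn’s PCA on synthetic data. The feature counts and the 95% variance threshold are assumptions made for the sake of the example, not recommendations.

```python
# Reduce many correlated raw variables down to a handful of principal components.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# 20 raw variables that are mostly noisy copies of 3 underlying drivers.
latent = rng.normal(size=(1000, 3))
X = latent @ rng.normal(size=(3, 20)) + 0.1 * rng.normal(size=(1000, 20))

# Standardize, then keep enough components to explain ~95% of the variance.
X_scaled = StandardScaler().fit_transform(X)
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X_scaled)

print("Original variables:", X.shape[1])
print("Components kept:   ", pca.n_components_)
print("Explained variance:", pca.explained_variance_ratio_.round(3))
```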

There are many analytic algorithms and modeling options, and choosing a proper algorithm can be a challenge. One alternative is to run a large number of algorithms and search across them, in which case a large number of results will need to be analyzed.
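
Here is a hedged sketch of what “run a number of algorithms and compare” can look like in practice, using scikit-learn cross-validation on a synthetic classification problem. The candidate models and the AUC scoring metric are illustrative choices only.

```python
# Score several candidate algorithms the same way so results are comparable.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=7)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=7),
    "gradient_boosting": GradientBoostingClassifier(random_state=7),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name:22s} AUC = {scores.mean():.3f} +/- {scores.std():.3f}")
```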

Interpreting results is a complex task. When a large number of algorithms is run, the results tend to partially converge or partially conflict. Resolving those conflicts and weighting the variables requires further modeling or an ensemble.
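
One simple form of that ensembling is sketched below with scikit-learn’s soft-voting classifier on synthetic data. Whether an ensemble (versus further modeling) is the right way to resolve conflicting results depends entirely on the problem; this is an illustration, not a prescription.

```python
# Combine partially conflicting models by averaging their predicted probabilities.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=7)

# Three models that may disagree on individual cases.
members = [
    ("lr", LogisticRegression(max_iter=1000)),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=7)),
    ("nb", GaussianNB()),
]

# Soft voting lets the members "outvote" each other where they conflict.
ensemble = VotingClassifier(estimators=members, voting="soft")
scores = cross_val_score(ensemble, X, y, cv=5, scoring="roc_auc")
print(f"Ensemble AUC = {scores.mean():.3f} +/- {scores.std():.3f}")
```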

Data Science Requires More Than Smarts

But it isn’t just the analytics capabilities, skills, training, apprenticeship and hands-on experience that make an outstanding data scientist. Our best data scientists also exhibit an outstanding “bedside manner,” or humility. They understand the power of humility, which immediately puts others at ease and allows for a more open and more inclusive conversation. To me, this is the real key to being an effective data scientist, where I define “effective” to mean “comes up with reasonable recommendations that the users can understand and take action on.” The best data scientists quickly learn that in order to deliver outstanding outcomes, they need to be able to engage, listen and learn from others of all types.

But I could argue that humility is the key to success no matter your profession. Whether you are becoming a physician, a nuclear physicist, a lawyer, a barista, or a teacher/coach, humility is imperative for continued growth and mastery of your craft.

As we like to say during our Big Data Vision Workshop engagements, all ideas are worthy of consideration, because the minute you think you know all the answers is the moment you are no longer relevant to the conversation.

To quote the “Lego Movie”:

“A special ‘Master Builder’ will defeat Lord Business and become the greatest ‘Master Builder’ of all. The key to true master building is to believe in yourself and follow your own set of instructions inside your head.”

Sounds like a Master Data Scientist to me (especially when said in Morgan Freeman’s voice)!

--------------------

Thanks for taking the time to read my post. I’m fortunate that I spend most of my time with very interesting clients who fuel many of my topics. I hope that you are able to leave a comment or some thoughts about the blog. If you would like to read my regular blogs, please follow me on LinkedIn and/or Twitter.

In case you are interested, here are some of my favorite posts:

  • Determining the Economic Value of Data
  • The Big Data Intellectual Capital Rubik’s Cube
  • How to Avoid “Orphaned Analytics”
  • To Achieve Big Data’s Potential, Get It Into The Boardroom
  • Vision Workshop
  • Big Data Business Model Maturity Index (animation)
  • How I’ve Learned To Stop Worrying And Love The Data Lake

I am the author of two Big Data books: “Big Data: Understanding How Data Powers Big Business” and “Big Data MBA: Driving Business Strategies with Data Science”.   I also teach the "Big Data MBA" at the University of San Francisco (USF) School of Management, where I was named the School of Management’s first Executive Fellow. The opportunity to teach at USF gives me the perfect petri dish to test new ideas and concepts both in the classroom and in the field with clients.


Thomas Speidel

Statistician and Data Scientist


Fantastic article! The issues raised are perfectly in line with both my experience and that of fellow data scientists I know. Yet, they are seldom grasped, or are willfully ignored by so many who are distracted by the latest hype. Data science is hard. Statistics is hard. Worse, it's often counterintuitive. Data preparation is 70% of the work. Let's stop pretending we can democratize it. We can't. And if we could, it would be like democratizing operating rooms.

Alan Simon

Author, Consultant, University Instructor


Hadn't heard the Citizen Data Scientist term before. I agree with you; makes about as much sense as those old Holiday Inn Express commercials where someone stays there and suddenly can perform brain surgery, talk down an angry bear, or whatever...
