Data Mining

Data mining is the process of extracting and discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems. Data mining is an interdisciplinary subfield of computer science and statistics with an overall goal of extracting information (with intelligent methods) from a data set and transforming the information into a comprehensible structure for further use. Data mining is the analysis step of the "knowledge discovery in databases" process, or KDD. Aside from the raw analysis step, it also involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating.

The term "data mining" is a?misnomer?because the goal is the extraction of?patterns?and knowledge from large amounts of data, not the?extraction (mining) of data itself.?It also is a?buzzword?and is frequently applied to any form of large-scale data or?information processing?(collection,?extraction,?warehousing, analysis, and statistics) as well as any application of?computer decision support system, including?artificial intelligence?(e.g., machine learning) and?business intelligence. The book?Data mining: Practical machine learning tools and techniques with?Java[8]?(which covers mostly machine learning material) was originally to be named?Practical machine learning, and the term?data mining?was only added for marketing reasons.?Often the more general terms (large scale)?data analysis?and?analytics—or, when referring to actual methods,?artificial intelligence?and?machine learning—are more appropriate.

The actual data mining task is the semi-automatic or automatic analysis of large quantities of data to extract previously unknown, interesting patterns such as groups of data records (cluster analysis), unusual records (anomaly detection), and dependencies (association rule mining, sequential pattern mining). This usually involves using database techniques such as spatial indices. These patterns can then be seen as a kind of summary of the input data, and may be used in further analysis or, for example, in machine learning and predictive analytics. For example, the data mining step might identify multiple groups in the data, which can then be used to obtain more accurate prediction results by a decision support system. Neither the data collection, data preparation, nor result interpretation and reporting is part of the data mining step, although they do belong to the overall KDD process as additional steps.
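
As an illustration of that last point, a "cluster, then predict" step might look like the following Python sketch; the synthetic data, scikit-learn models, and parameter values are illustrative assumptions rather than a prescribed method.

    # Cluster labels discovered in the mining step become an extra input
    # feature for a downstream predictive model.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=500, random_state=0)

    # Unsupervised mining step: identify groups in the data.
    clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

    # Supervised step: pass the discovered group labels to the predictor.
    X_aug = np.column_stack([X, clusters])
    model = LogisticRegression(max_iter=1000).fit(X_aug, y)
    print("training accuracy:", model.score(X_aug, y))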

The difference between data analysis and data mining is that data analysis is used to test models and hypotheses on the dataset, e.g., analyzing the effectiveness of a marketing campaign, regardless of the amount of data. In contrast, data mining uses machine learning and statistical models to uncover hidden patterns in a large volume of data.

The related terms data dredging, data fishing, and data snooping refer to the use of data mining methods to sample parts of a larger population data set that are (or may be) too small for reliable statistical inferences to be made about the validity of any patterns discovered. These methods can, however, be used in creating new hypotheses to test against the larger data populations.

Etymology

In the 1960s, statisticians and economists used terms like data fishing or data dredging to refer to what they considered the bad practice of analyzing data without an a priori hypothesis. The term "data mining" was used in a similarly critical way by economist Michael Lovell in an article published in the Review of Economic Studies in 1983. Lovell indicates that the practice "masquerades under a variety of aliases, ranging from 'experimentation' (positive) to 'fishing' or 'snooping' (negative)."

The term data mining appeared around 1990 in the database community, with generally positive connotations. For a short time in the 1980s, the phrase "database mining" was used, but because it was trademarked by HNC, a San Diego-based company, to pitch their Database Mining Workstation, researchers consequently turned to data mining. Other terms used include data archaeology, information harvesting, information discovery, knowledge extraction, etc. Gregory Piatetsky-Shapiro coined the term "knowledge discovery in databases" for the first workshop on the same topic (KDD-1989), and this term became more popular in the AI and machine learning communities. However, the term data mining became more popular in the business and press communities. Currently, the terms data mining and knowledge discovery are used interchangeably.

Background

The manual extraction of patterns from data has occurred for centuries. Early methods of identifying patterns in data include Bayes' theorem (1700s) and regression analysis (1800s). The proliferation, ubiquity, and increasing power of computer technology have dramatically increased data collection, storage, and manipulation ability. As data sets have grown in size and complexity, direct "hands-on" data analysis has increasingly been augmented with indirect, automated data processing, aided by other discoveries in computer science, especially in the field of machine learning, such as neural networks, cluster analysis, genetic algorithms (1950s), decision trees and decision rules (1960s), and support vector machines (1990s). Data mining is the process of applying these methods with the intention of uncovering hidden patterns in large data sets. It bridges the gap from applied statistics and artificial intelligence (which usually provide the mathematical background) to database management by exploiting the way data is stored and indexed in databases to execute the actual learning and discovery algorithms more efficiently, allowing such methods to be applied to ever-larger data sets.

Process

The knowledge discovery in databases (KDD) process is commonly defined with the stages:

  1. Selection
  2. Pre-processing
  3. Transformation
  4. Data mining
  5. Interpretation/evaluation

Many variations on this theme exist, however, such as the Cross-Industry Standard Process for Data Mining (CRISP-DM), which defines six phases:

  1. Business understanding
  2. Data understanding
  3. Data preparation
  4. Modeling
  5. Evaluation
  6. Deployment

or a simplified process such as (1) Pre-processing, (2) Data Mining, and (3) Results Validation.

Polls conducted in 2002, 2004, 2007, and 2014 show that the CRISP-DM methodology is the leading methodology used by data miners. The only other data mining standard named in these polls was SEMMA. However, 3–4 times as many people reported using CRISP-DM. Several teams of researchers have published reviews of data mining process models, and Azevedo and Santos conducted a comparison of CRISP-DM and SEMMA in 2008.

Pre-processing

Before data mining algorithms can be used, a target data set must be assembled. As data mining can only uncover patterns actually present in the data, the target data set must be large enough to contain these patterns while remaining concise enough to be mined within an acceptable time limit. A common source for data is a data mart or data warehouse. Pre-processing is essential to analyze the multivariate data sets before data mining. The target set is then cleaned. Data cleaning removes the observations containing noise and those with missing data.
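
As a concrete illustration, a minimal cleaning step might look like the following Python sketch using pandas; the file name and column names are hypothetical and stand in for whatever the target data set actually contains.

    # A minimal pre-processing sketch using pandas. "customers.csv" and
    # the "age" column are hypothetical examples, not from the article.
    import pandas as pd

    # Load the target data set, e.g. one extracted from a data mart or warehouse.
    df = pd.read_csv("customers.csv")

    # Remove observations with missing data.
    df = df.dropna()

    # Remove noisy observations: here, ages outside a plausible range.
    df = df[(df["age"] >= 0) & (df["age"] <= 120)]

    # Drop duplicate records before mining.
    df = df.drop_duplicates()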

Data mining

Data mining involves six common classes of tasks:

  • Anomaly detection (outlier/change/deviation detection) – The identification of unusual data records that might be interesting, or of data errors that require further investigation.
  • Association rule learning (dependency modeling) – Searches for relationships between variables. For example, a supermarket might gather data on customer purchasing habits. Using association rule learning, the supermarket can determine which products are frequently bought together and use this information for marketing purposes. This is sometimes referred to as market basket analysis (a short sketch of this follows the list).
  • Clustering – The task of discovering groups and structures in the data that are in some way or another "similar", without using known structures in the data.
  • Classification – The task of generalizing known structure to apply to new data. For example, an e-mail program might attempt to classify an e-mail as "legitimate" or as "spam".
  • Regression – Attempts to find a function that models the data with the least error; that is, it estimates the relationships among data or datasets.
  • Summarization – Providing a more compact representation of the data set, including visualization and report generation.
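
To make the market basket example concrete, the sketch below computes support and confidence for a single candidate rule in plain Python; the transactions and item names are invented for illustration.

    # Minimal market-basket sketch: support and confidence for the
    # hypothetical rule {bread} -> {butter} over made-up transactions.
    from itertools import combinations
    from collections import Counter

    transactions = [
        {"bread", "butter", "milk"},
        {"bread", "butter"},
        {"bread", "jam"},
        {"milk", "butter"},
        {"bread", "butter", "jam"},
    ]

    n = len(transactions)
    item_counts = Counter()
    pair_counts = Counter()
    for t in transactions:
        item_counts.update(t)
        pair_counts.update(combinations(sorted(t), 2))

    # Support: fraction of transactions containing both items.
    support = pair_counts[("bread", "butter")] / n
    # Confidence: of the transactions with bread, how many also have butter.
    confidence = pair_counts[("bread", "butter")] / item_counts["bread"]
    print(f"support={support:.2f}, confidence={confidence:.2f}")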

Results validation

An example of data produced by data dredging through a bot operated by statistician Tyler Vigen, apparently showing a close link between the winning word of a spelling bee competition and the number of people in the United States killed by venomous spiders.

Data mining can unintentionally be misused, producing results that appear to be significant but which do not actually predict future behavior and cannot be reproduced on a new sample of data, and therefore bear little use. This is sometimes caused by investigating too many hypotheses without performing proper statistical hypothesis testing. A simple version of this problem in machine learning is known as overfitting, but the same problem can arise at different phases of the process, and thus a train/test split (when applicable at all) may not be sufficient to prevent it from happening.
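
The effect is easy to reproduce. The toy simulation below (a minimal sketch, assuming only NumPy and SciPy) correlates many purely random features with a random target; roughly 5% of them pass an uncorrected 5% significance test by chance alone, which is exactly the kind of spurious "discovery" data dredging produces.

    # Toy demonstration of the multiple-hypotheses problem: pure noise
    # tested often enough yields apparently significant correlations.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    target = rng.normal(size=100)

    false_positives = 0
    for _ in range(1000):
        feature = rng.normal(size=100)   # unrelated to the target by design
        r, p = stats.pearsonr(feature, target)
        if p < 0.05:                     # "significant" at the 5% level
            false_positives += 1

    # Expect roughly 50 of the 1000 unrelated features to pass by chance.
    print(f"spurious discoveries: {false_positives} / 1000")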

The final step of knowledge discovery from data is to verify that the patterns produced by the data mining algorithms occur in the wider data set. Not all patterns found by the algorithms are necessarily valid. It is common for data mining algorithms to find patterns in the training set which are not present in the general data set. This is called overfitting. To overcome this, the evaluation uses a test set of data on which the data mining algorithm was not trained. The learned patterns are applied to this test set, and the resulting output is compared to the desired output. For example, a data mining algorithm trying to distinguish "spam" from "legitimate" e-mails would be trained on a training set of sample e-mails. Once trained, the learned patterns would be applied to the test set of e-mails on which it had not been trained. The accuracy of the patterns can then be measured by how many e-mails they correctly classify. Several statistical methods may be used to evaluate the algorithm, such as ROC curves.
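
A minimal sketch of this evaluation loop, using scikit-learn on synthetic data (a real spam filter would of course use features extracted from actual e-mails), might look like this:

    # Hold out a test set, train on the rest, score on unseen data.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score, roc_auc_score

    X, y = make_classification(n_samples=1000, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Apply the learned patterns to samples never seen during training.
    print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
    print("ROC AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))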

If the learned patterns do not meet the desired standards, it is necessary to re-evaluate and change the pre-processing and data mining steps. If the learned patterns do meet the desired standards, then the final step is to interpret the learned patterns and turn them into knowledge.
