WHAT IS BIG DATA
Ashish Ranjan
IT Recruiter- Talent Acquisition || B.TECH(EEE) || Tech & Non-Tech Hiring || Leadership Hiring || Corporate Hiring
Big data primarily refers to data sets that are too large or complex to be dealt with by traditional data-processing application software. Data with many entries (rows) offer greater statistical power, while data with higher complexity (more attributes or columns) may lead to a higher false discovery rate.[2] Though sometimes used loosely, partly because of a lack of formal definition, the interpretation that seems to best describe big data is the one associated with a large body of information that we could not comprehend when used only in smaller amounts.[3]
Big data analysis challenges include capturing data, data storage, data analysis, search, sharing, transfer, visualization, querying, updating, information privacy, and data source. Big data was originally associated with three key concepts: volume, variety, and velocity.[4] The analysis of big data presents challenges in sampling, and thus previously allowed for only observations and sampling. Thus a fourth concept, veracity, refers to the quality or insightfulness of the data. Without sufficient investment in expertise for big data veracity, the volume and variety of data can produce costs and risks that exceed an organization's capacity to create and capture value from big data.[5]
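To make veracity concrete, here is a minimal Python sketch of a data-quality check over hypothetical sensor records; the field names and plausibility range are illustrative assumptions, not drawn from the cited sources.

    # A veracity check: measure how trustworthy a feed is before relying on it.
    # The records, field names, and plausibility range here are hypothetical.
    records = [
        {"sensor_id": "a1", "temp_c": 21.4},
        {"sensor_id": "a1", "temp_c": None},    # missing reading
        {"sensor_id": "a2", "temp_c": 1200.0},  # physically implausible value
        {"sensor_id": "a2", "temp_c": 19.8},
    ]

    total = len(records)
    missing = sum(1 for r in records if r["temp_c"] is None)
    implausible = sum(
        1 for r in records
        if r["temp_c"] is not None and not (-50.0 <= r["temp_c"] <= 60.0)
    )
    print(f"missing: {missing / total:.0%}, implausible: {implausible / total:.0%}")
    # Output: missing: 25%, implausible: 25% -- volume without veracity adds risk.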
Current usage of the term big data tends to refer to the use of predictive analytics, user behavior analytics, or certain other advanced data analytics methods that extract value from big data, and seldom to a particular size of data set. "There is little doubt that the quantities of data now available are indeed large, but that's not the most relevant characteristic of this new data ecosystem."[6] Analysis of data sets can find new correlations to "spot business trends, prevent diseases, combat crime and so on".[7] Scientists, business executives, medical practitioners, advertisers, and governments alike regularly meet difficulties with large data sets in areas including Internet searches, fintech, healthcare analytics, geographic information systems, urban informatics, and business informatics. Scientists encounter limitations in e-Science work, including meteorology, genomics,[8] connectomics, complex physics simulations, biology, and environmental research.[9]
The size and number of available data sets have grown rapidly as data is collected by devices such as mobile devices, cheap and numerous information-sensing Internet of things devices, aerial (remote sensing) equipment, software logs, cameras, microphones, radio-frequency identification (RFID) readers, and wireless sensor networks.[10][11] The world's technological per-capita capacity to store information has roughly doubled every 40 months since the 1980s;[12] as of 2012, every day 2.5 exabytes (2.5×2⁶⁰ bytes) of data are generated.[13] An IDC report predicted that the global data volume would grow exponentially from 4.4 zettabytes to 44 zettabytes between 2013 and 2020, and that by 2025 there would be 163 zettabytes of data.[14] According to IDC, global spending on big data and business analytics (BDA) solutions was estimated to reach $215.7 billion in 2021.[15][16] A Statista report, meanwhile, forecasts that the global big data market will grow to $103 billion by 2027.[17] In 2011, McKinsey & Company reported that if US healthcare were to use big data creatively and effectively to drive efficiency and quality, the sector could create more than $300 billion in value every year.[18] In the developed economies of Europe, government administrators could save more than €100 billion ($149 billion) in operational efficiency improvements alone by using big data.[18] And users of services enabled by personal-location data could capture $600 billion in consumer surplus.[18] One question for large enterprises is determining who should own big-data initiatives that affect the entire organization.[19]
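The growth rates quoted above are easy to sanity-check. A short Python sketch of the implied arithmetic (the calculations are ours; the input figures come from the cited reports):

    # Doubling every 40 months implies roughly 8x growth per decade.
    doublings_per_decade = (10 * 12) / 40
    print(f"{2 ** doublings_per_decade:.0f}x storage capacity per decade")

    # IDC's projection of 4.4 ZB (2013) to 44 ZB (2020) implies a
    # compound annual growth rate of about 39%.
    cagr = (44 / 4.4) ** (1 / 7) - 1
    print(f"implied compound annual growth: {cagr:.0%}")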
Relational database management systems and desktop statistical software packages used to visualize data often have difficulty processing and analyzing big data. The processing and analysis of big data may require "massively parallel software running on tens, hundreds, or even thousands of servers".[20] What qualifies as "big data" varies depending on the capabilities of those analyzing it and their tools. Furthermore, expanding capabilities make big data a moving target. "For some organizations, facing hundreds of gigabytes of data for the first time may trigger a need to reconsider data management options. For others, it may take tens or hundreds of terabytes before data size becomes a significant consideration."[21]
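A minimal sketch of that "massively parallel" pattern on a single machine, using Python's standard multiprocessing module: partition the data, process the partitions in parallel workers, and merge the partial results. Systems such as Hadoop or Spark apply the same split-process-merge idea across many servers; the data here is a made-up example.

    from collections import Counter
    from multiprocessing import Pool

    def count_words(chunk):
        """Map step: word frequencies for one partition of the lines."""
        counts = Counter()
        for line in chunk:
            counts.update(line.split())
        return counts

    if __name__ == "__main__":
        lines = ["big data big", "data velocity", "volume variety velocity"] * 1000
        partitions = [lines[i::4] for i in range(4)]  # 4 roughly equal partitions
        with Pool(4) as pool:
            partials = pool.map(count_words, partitions)
        merged = sum(partials, Counter())             # reduce step: merge counts
        print(merged.most_common(3))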
Definition
The term big data has been in use since the 1990s, with some giving credit to John Mashey for popularizing it.[22][23] Big data usually includes data sets with sizes beyond the ability of commonly used software tools to capture, curate, manage, and process the data within a tolerable elapsed time.[24] Big data philosophy encompasses unstructured, semi-structured and structured data; however, the main focus is on unstructured data.[25] Big data "size" is a constantly moving target; as of 2012, it ranged from a few dozen terabytes to many zettabytes of data.[26] Big data requires a set of techniques and technologies with new forms of integration to reveal insights from data sets that are diverse, complex, and of a massive scale.[27]
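One of the simplest techniques for staying within a "tolerable elapsed time" and memory budget is to stream data rather than load it, so that memory use stays constant however large the input grows. A minimal Python sketch, assuming a hypothetical huge.csv with a numeric "value" column:

    import csv

    def running_mean(path):
        """Aggregate a file too large for memory, one row at a time."""
        total, count = 0.0, 0
        with open(path, newline="") as f:
            for row in csv.DictReader(f):  # streams; never loads the whole file
                total += float(row["value"])
                count += 1
        return total / count if count else 0.0

    # print(running_mean("huge.csv"))  # "huge.csv" is hypothetical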
"Variety", "veracity", and various other "Vs" are added by some organizations to describe it, a revision challenged by some industry authorities.[28]?The Vs of big data were often referred to as the "three Vs", "four Vs", and "five Vs". They represented the qualities of big data in volume, variety, velocity, veracity, and value.[4]?Variability is often included as an additional quality of big data.
A 2018 definition states, "Big data is where parallel computing tools are needed to handle data", and notes, "This represents a distinct and clearly defined change in the computer science used, via parallel programming theories, and losses of some of the guarantees and capabilities made by Codd's relational model."[29]
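One concrete consequence of that shift (our illustration, not part of the cited definition): under parallel processing, an aggregate must be decomposable into mergeable partial states. Averaging per-partition means is wrong when partitions differ in size; merging (sum, count) pairs is correct.

    partitions = [[1.0, 2.0, 3.0], [10.0]]  # partitions of unequal size

    # Wrong: average the partial means as if partitions were equal.
    naive = sum(sum(p) / len(p) for p in partitions) / len(partitions)

    # Right: merge (sum, count) states, then divide once at the end.
    states = [(sum(p), len(p)) for p in partitions]
    grand_total, n = map(sum, zip(*states))
    correct = grand_total / n

    print(naive, correct)  # 6.0 vs the true mean 4.0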
In a comparative study of big datasets, Kitchin and McArdle found that none of the commonly considered characteristics of big data appear consistently across all of the analyzed cases.[30] For this reason, other studies have identified the redefinition of power dynamics in knowledge discovery as the defining trait.[31] Instead of focusing on the intrinsic characteristics of big data, this alternative perspective pushes forward a relational understanding of the object, claiming that what matters is the way in which data is collected, stored, made available, and analyzed.
Big data vs. business intelligence
The growing maturity of the concept more starkly delineates the difference between "big data" and "business intelligence":[32]
- Business intelligence uses applied mathematics tools and descriptive statistics on data with high information density to measure things, detect trends, etc.
- Big data uses mathematical analysis, optimization, inductive statistics, and concepts from nonlinear system identification[33] to infer laws (regressions, nonlinear relationships, and causal effects) from large sets of data with low information density,[34] revealing relationships and dependencies or performing predictions of outcomes and behaviors (see the sketch after this list).[33][35]
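A toy Python sketch of the contrast drawn in this list, on made-up numbers: the descriptive step summarizes the data as it stands, while the inductive step fits a least-squares regression and predicts an unseen outcome.

    xs = [1.0, 2.0, 3.0, 4.0, 5.0]
    ys = [2.1, 4.0, 6.2, 7.9, 10.1]

    # Descriptive (business-intelligence style): summarize what is there.
    print(f"mean outcome: {sum(ys) / len(ys):.2f}")

    # Inductive (big-data style): infer a law and predict beyond the data.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    print(f"predicted outcome at x=6: {intercept + slope * 6:.2f}")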