Big Data and the changing nature of financial mathematics

The aftermath of the 2007–2009 financial crisis and the numerous flash crashes that followed prompted public outcry and ethical unease in the scientific community, leading to visible changes in quantitative finance.

Greater reliance on information technology and computer storage capacity across multiple industries has allowed Big Data to become mainstream. Its arrival is in some ways unique, and its scope far-reaching and profound. But it is essential to start by asking, “What is Big Data?” The term has been thrown around loosely for lack of a formal definition. The common thread running through its many definitions is that Big Data refers to a body of information so large that it yields insights we could not obtain from smaller amounts of the same data. Characterised this way, Big Data is about more than shrinking the confidence interval of an estimated parameter by increasing the sample size, which is the view statisticians most often repeat. Some have proposed the term “datafication” as a replacement for the confusing label “Big Data”, so that readers can look the term up rather than guess at its meaning.

Big Data can therefore be defined as any data that is too large, too complex or too unstructured to be processed or analysed by traditional data analysis tools and techniques. The implication is that conventional data management principles are stretched to their limits, and a different, more formalised approach is required to control data Volume, Velocity and Variety. These three dimensions, known as the “3 V’s”, are the fundamental elements of many definitions of Big Data.


The subjectivity of the word “Big” relates to the sheer size of the data. One example often quoted in the context of information storage is that there is currently roughly 320 times more data than there are inhabitants on this earth. Another way to put it: if all that data were saved on CDs and the CDs stacked on top of one another, the pile would reach the moon five times over. Yet another illustration is that around 90% of today’s data was created in the past few years. Such sheer volume forces us to question the quality and practical value of the data being stored.

The financial industry dates back more than three centuries, but serious efforts to mathematise the financial system only began in the last century, and progress has been slow and incremental. The internet and cloud computing, however, have driven exponential growth in data collection and modelling across the industry. In the wake of the sub-prime market crisis, governments worldwide pressed regulators to establish efficient risk-monitoring systems and to scrutinise modelling practices in order to prevent similar crises in the future.

Since Louis Bachelier’s foundational contributions to Mathematical Finance in the early 20th century, Quantitative Finance has seen incremental change driven by developments across STEM. Advances in Probability Theory, Computer Science, Physics and Statistical Mechanics have gradually made Quantitative Finance one of the most challenging STEM fields: its interdisciplinary blend of Mathematics, Computer Science, Physics, Economics and Finance makes it increasingly difficult to master in full. Today, Bayesian Statistics, Signal Processing, Game Theory and Machine Learning add further to that interdisciplinary complexity. Moreover, with the rise of data scientists and Big Data, the approach to modelling has shifted from the Top-Down approach traditionally used in financial mathematics to the Bottom-Up approach favoured by data scientists. With algorithmic trading aggressively changing the viewpoints of market participants, we notice an interesting rebalancing between models and data: a move away from the attitude that models come first and the data must fit them, towards one in which the data shapes the models, gradually rendering some notable Financial Mathematics models obsolete. It can be argued that the Sub-prime crisis was a trigger point for Machine Learning to grow in application while traditional Financial Mathematical models decline within the scope of Quantitative Finance.

It is important to note that Mathematics remains without doubt the best tool for analysing and deriving meaning from a given hypothesis, and Financial Mathematics is an ever-expanding, evolving field that continues to integrate data-analytic techniques. There is, however, a shift in emphasis from Big Data towards Fast Data. Sheer volume is no longer the main obstacle for financial institutions to process and analyse; data carries greater value when it is acted on promptly. This directs institutions’ attention towards real-time data processing systems, as both the volume and the rate of data production keep increasing. Cluster computing therefore continues to evolve, with ongoing development of real-time processing technologies such as Spark and Storm.
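To make the Fast Data point concrete, here is a minimal sketch of real-time stream processing with Spark Structured Streaming. It is illustrative only: the socket feed, host, port and “symbol,price” tick format below are hypothetical stand-ins for whatever market-data source an institution actually consumes, not anything described in this article.

    # Minimal PySpark (Structured Streaming) sketch; assumes pyspark is installed
    # and a hypothetical local socket feed emitting "symbol,price" trade ticks.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("FastDataSketch").getOrCreate()

    # Read a continuous stream of text lines from the (hypothetical) socket feed.
    ticks = (
        spark.readStream
        .format("socket")
        .option("host", "localhost")   # illustrative host
        .option("port", 9999)          # illustrative port
        .load()
    )

    # Parse each line into a symbol and a price, then keep a running average per symbol.
    parsed = ticks.select(
        F.split(F.col("value"), ",").getItem(0).alias("symbol"),
        F.split(F.col("value"), ",").getItem(1).cast("double").alias("price"),
    )
    running_avg = parsed.groupBy("symbol").agg(F.avg("price").alias("avg_price"))

    # Continuously print the updated averages as new ticks arrive.
    query = (
        running_avg.writeStream
        .outputMode("complete")   # emit the full updated aggregate on each trigger
        .format("console")
        .start()
    )
    query.awaitTermination()

The same pattern scales from this toy console sink to the low-latency pipelines discussed above; Storm offers a comparable streaming model built around spouts and bolts rather than streaming DataFrames.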

__________________________________________________________________________

Sadiq Alesia is the Co-founder and Managing Director at Ashford & Alesseea. You can follow him on Twitter @SadiqAlesia or contact him at [email protected].

Visit https://ashfordalesseea.com/blog to read and discuss other interesting topics.

David Jones

Senior Strategy and Research Manager

3 years ago

Great article as usual. It's essential for financial institutions to become more agile, especially at gathering and analysing data. Fast Data is the way forward to get a competitive edge.
