Leaving Randomness behind
Roberto Cintra
Portfolio Management, Risk Management, Asset and Liability Management, Applied Quantitative Modeling
Warning: if you are one of those blessed individuals capable of reading the future, don't waste your time reading this: it will only add uncertainty to your certainties.
From a previous text, we know that during the decades of the 1940s and 1950s, the Japanese Professor Kiyoshi Itô laid the foundations of the theory of Stochastic Integrals and Stochastic Differential Equations, now known as Itô Calculus. Itô's lemma stands as his most celebrated result and has been used persistently, and at times perhaps not always judiciously, in the realm of financial derivatives since the early 1970s.
In the 19th century, Sir William Hamilton generalized the so-called Lagrange Equations, establishing the basis for the synthesis of classical mechanics. In the same century, Carl Jacobi, a German mathematician, made central contributions to matrix calculus as well as dynamics. Richard Bellman, a brilliant American mathematician, among other spectacular contributions, established the basis of dynamic programming in the 1950s. (The initials of their surnames form the famous HJB acronym: Hamilton–Jacobi–Bellman.)
Amidst the various assumptions inherent in Itô Calculus, the paramount one requires modeling the random component as a Brownian motion. Particular attention should be paid to the boundaries these assumptions imply before drawing generalized conclusions.
The unpredictable nature of uncertainty gives rise to an infinite spectrum of potential futures, leading to a diverse array of alternative paths and outcomes. Confronted with this extensive range of plausible scenarios, as depicted in Figure 1, the question arises: What actions can be taken?
Individuals may choose a specific path based on their intuition or their expectations about the likely future, or they may opt for the straightforward approach of averaging across a vast number of equally probable simulated paths to determine the most likely outcome. Given the available information set, the answer seems simple: average them! However, it's essential to exercise caution and apply this strategy only if the generating process is in some sense stationary.
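As a rough illustration of the "average them" idea, the sketch below (plain Python, assuming a geometric Brownian motion as the generating process and purely illustrative parameters) simulates many equally probable paths and compares the average terminal price with the analytic expectation:

```python
import math
import random

def simulate_terminal_price(s0, mu, sigma, t, n_steps, rng):
    """One GBM path via the exact log-price increments; returns the terminal price."""
    dt = t / n_steps
    log_s = math.log(s0)
    for _ in range(n_steps):
        # exact log-price increment: (mu - sigma^2/2) dt + sigma dW
        log_s += (mu - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return math.exp(log_s)

rng = random.Random(42)
s0, mu, sigma, t = 100.0, 0.05, 0.20, 1.0  # illustrative parameters
n_paths = 20_000

avg_terminal = sum(simulate_terminal_price(s0, mu, sigma, t, 50, rng)
                   for _ in range(n_paths)) / n_paths
analytic = s0 * math.exp(mu * t)  # E[S_T] = S0 * exp(mu * T) for GBM
```

With enough paths, the Monte Carlo average converges to the analytic expectation — precisely because this particular generator is well-behaved.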
When using the Bootstrap with replacement, special care should be taken: resampling treats the observations as independent and identically distributed, which can wash out the serial dependence present in the original paths (as masterfully pointed out by López de Prado on several occasions).
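To see the effect López de Prado warns about, the hypothetical sketch below builds a series with strong serial dependence (an AR(1) process) and then resamples it with replacement; the standard IID bootstrap wipes out the lag-1 autocorrelation:

```python
import random

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation."""
    n = len(xs)
    mean = sum(xs) / n
    num = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

rng = random.Random(7)

# AR(1) series with strong serial dependence (phi = 0.8, illustrative)
phi, n = 0.8, 5000
x = [0.0]
for _ in range(n - 1):
    x.append(phi * x[-1] + rng.gauss(0.0, 1.0))

# IID bootstrap: resample observations with replacement, ignoring their order
boot = [x[rng.randrange(n)] for _ in range(n)]

ac_original = lag1_autocorr(x)    # close to phi = 0.8
ac_bootstrap = lag1_autocorr(boot)  # close to zero: dependence destroyed
```

The resampled series has the same marginal distribution as the original but none of its memory — a serious problem if the phenomenon being modeled depends on that memory.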
For those patient readers who have reached this point, real-life applications of modeled phenomena emerge, including both financial and gambling modeling, where decisions at the 'table' need to be made. These scenarios stand out as desired outcomes of this mathematical branch. For instance, determining the opportune time to either enter or exit a position poses a challenging question. Can this query be answered without resorting to fortune-telling? (Not suggesting anyone should do that.)
Regrettably, for those afflicted by profound uncertainties, a clear solution remains elusive, supplanted by an expected value among possible answers. Why does this prevail? Each specific sequence of values for the random component generates a unique path for cumulative return and final price. Predicting the exact sequence that will unfold is impossible; therefore, opting for a specific sequence seems both naive and uninformed. With an infinite number of potential answers, when weighted by their respective likelihoods and summed, we attain the expected value for the answer.
Even though prices (paths) aren't stationary, returns (for a specific time interval) are, and that's a significant difference. A potential issue here, as pointed out by López de Prado, is that too much information may be thrown out in order to obtain stationarity.
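A small sketch of this difference, again assuming a geometric Brownian motion generator with illustrative parameters: the log returns have essentially the same dispersion in both halves of the sample (stationary increments), while the price level itself wanders:

```python
import math
import random
import statistics

rng = random.Random(123)
mu, sigma, dt, n = 0.05, 0.2, 1 / 252, 5000  # illustrative daily parameters

# Simulated log prices; the increments (log returns) are IID Normal -> stationary
log_returns = [(mu - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
               for _ in range(n)]
log_prices = [math.log(100.0)]
for r in log_returns:
    log_prices.append(log_prices[-1] + r)

half = n // 2
# Returns: both halves share nearly the same standard deviation
sd_r1 = statistics.stdev(log_returns[:half])
sd_r2 = statistics.stdev(log_returns[half:])
# Prices: the level drifts and diffuses, so its sample moments depend on the window
m_p1 = statistics.mean(log_prices[:half])
m_p2 = statistics.mean(log_prices[half:])
```

Differencing the log prices recovers a stationary series, at the cost of discarding the level information — the trade-off López de Prado highlights.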
Beyond the stochastic process, one frequently examines functions where these processes serve as the arguments. For instance, when forecasting future gains and losses in connection with a stock price, mathematical expectations of price returns re-emerge in the quest for answers. Consequently, at least from a stylized perspective, we are dealing with both a return process generator and a function of those returns.
We've reached the point where we are better prepared to begin exploring more effective tools to tackle the question: should we leave the table now, or should we stay for a while?
To address the problem, we need to take two steps:
If the process can be approximated by a Stochastic Differential Equation of the form dZ(t) = A(t, Z(t)) dt + B(t, Z(t)) dW(t) -- actually, it is the solution of this equation that we work with -- where A and B are deterministic and well-behaved functions and W is a Brownian motion:
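A process of this form can be simulated with the standard Euler–Maruyama scheme. The sketch below is a minimal, illustrative implementation, with an arbitrary (assumed, not from the text) mean-reverting choice of A and constant B:

```python
import math
import random

def euler_maruyama(a, b, z0, t, n_steps, rng):
    """Simulate one path of dZ = A(t, Z) dt + B(t, Z) dW with the Euler-Maruyama scheme."""
    dt = t / n_steps
    z, s = z0, 0.0
    path = [z0]
    for _ in range(n_steps):
        dw = math.sqrt(dt) * rng.gauss(0.0, 1.0)  # Brownian increment ~ N(0, dt)
        z += a(s, z) * dt + b(s, z) * dw
        s += dt
        path.append(z)
    return path

# Illustrative example: mean-reverting drift A = -0.5 * Z, constant diffusion B = 0.3
rng = random.Random(0)
path = euler_maruyama(lambda t, z: -0.5 * z, lambda t, z: 0.3, 1.0, 5.0, 1000, rng)
```

Each call produces one of the infinitely many possible paths; re-running with different seeds produces the fan of trajectories discussed above.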
An example of such a process is apparent in the logarithm of the stock price of a specific company, where ln[price(t)] = Z(t). In this scenario, envision a process where the instantaneous return comprises a deterministic component proportional to time and a second component proportional to a standard Normal distribution:
Given that the ordinary differential of ln(X) is dX/X, the logarithmic function emerges as a natural candidate to be incorporated into the solution of the Stochastic Differential Equation mentioned above. If Z = ln(X), applying Itô's lemma yields the well-known solution:
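The Itô correction term (−σ²/2) can be checked numerically. The sketch below, with illustrative parameters, Euler-steps the price SDE dX = μX dt + σX dW directly and confirms that the mean of ln X(T) matches the drift (μ − σ²/2)T given by Itô's lemma, not the naive μT:

```python
import math
import random

def gbm_terminal_log(x0, mu, sigma, t, n_steps, rng):
    """Euler-step the price SDE dX = mu*X dt + sigma*X dW and return ln(X_T)."""
    dt = t / n_steps
    x = x0
    for _ in range(n_steps):
        x += mu * x * dt + sigma * x * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return math.log(x)

rng = random.Random(1)
x0, mu, sigma, t = 100.0, 0.08, 0.25, 1.0  # illustrative parameters
n_paths = 20_000

mean_log = sum(gbm_terminal_log(x0, mu, sigma, t, 100, rng)
               for _ in range(n_paths)) / n_paths

# Ito's lemma: E[ln X_T] = ln(x0) + (mu - sigma^2/2) * T, not ln(x0) + mu * T
ito_drift = math.log(x0) + (mu - 0.5 * sigma ** 2) * t
naive_drift = math.log(x0) + mu * t
```

The simulated mean lines up with the Itô drift; ignoring the −σ²/2 correction leaves a visible bias.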
It's important to note that this solution isn't the final answer sought; rather, it unveils the underlying generating process, serving the purpose of illustrating the steps taken.
One inspiration for defining the objective function is to adopt an Optimal Control approach (welcoming the HJB equation). This involves specifying a scalar cost function and then obtaining controls that minimize this cost function. In our specific context of interest, these controls must be linked to 'exiting the table', signifying the end of the investment within the current cycle before reaching either a pre-defined critical loss or a pre-defined desired gain; when the function hits those boundaries, a stopping time is said to have occurred.
The cost index is a combination of the carry cost, which accrues continuously over time, and a final cost associated with the point reached when the first stopping time occurs. The value function, v(x), represents the infimum of the expected cost index and is the candidate function to be minimized:
An alternative approach that disregards interim costs is often considered:
The reader may have already connected HJB, the value function, Itô's lemma and the brilliant Dynkin's formula, guiding us to a beautiful system of equations that, in turn, will furnish us with at least numerical proxies to help address the fundamental question of when to exit. The system of equations I've mentioned is not random; it consists of a Partial Differential Equation (PDE) along with at least two boundary conditions. Specifically, this system is recognized in the literature as a Two-Point Boundary Value (TPBV) problem.
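As a toy instance of such a TPBV problem (an assumption-laden sketch, not the article's exact formulation): take the log price to be an arithmetic Brownian motion with drift μ and volatility σ, let a constant carry cost c accrue until the first exit from an interval [a, b], and set the terminal cost at both boundaries to zero. Dynkin's formula then reduces the expected cost v(x) to the linear ODE (σ²/2)v'' + μv' + c = 0 with the two boundary conditions, which the code below discretizes by central finite differences and solves with the Thomas algorithm; for μ = 0 on [0, 1] the closed form v(x) = c·x(1−x)/σ² serves as a check:

```python
def solve_tridiagonal(lower, diag, upper, rhs):
    """Thomas algorithm for a tridiagonal linear system."""
    n = len(diag)
    d, r = diag[:], rhs[:]
    for i in range(1, n):
        w = lower[i] / d[i - 1]
        d[i] -= w * upper[i - 1]
        r[i] -= w * r[i - 1]
    x = [0.0] * n
    x[-1] = r[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (r[i] - upper[i] * x[i + 1]) / d[i]
    return x

def solve_tpbv(a, b, mu, sigma, carry, g_a, g_b, n):
    """Finite differences for (sigma^2/2) v'' + mu v' + carry = 0, v(a)=g_a, v(b)=g_b."""
    h = (b - a) / (n + 1)
    diff = 0.5 * sigma ** 2 / h ** 2
    drift = mu / (2 * h)
    lower = [diff - drift] * n   # coefficient of v[i-1] (entry 0 unused)
    diag = [-2.0 * diff] * n     # coefficient of v[i]
    upper = [diff + drift] * n   # coefficient of v[i+1]
    rhs = [-carry] * n
    rhs[0] -= (diff - drift) * g_a   # fold boundary values into the right-hand side
    rhs[-1] -= (diff + drift) * g_b
    v = solve_tridiagonal(lower, diag, upper, rhs)
    return [g_a] + v + [g_b]

# Illustrative check: mu = 0, sigma = 0.3, carry = 1 on [0, 1], zero terminal cost
v = solve_tpbv(0.0, 1.0, 0.0, 0.3, 1.0, 0.0, 0.0, 199)
analytic_mid = 0.25 / 0.09  # v(0.5) = c * 0.5 * (1 - 0.5) / sigma^2
```

Because the exact solution here is a quadratic, the second-order scheme reproduces it to machine precision; richer cost structures change only the right-hand side and boundary values.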
Prior to reaching this system of equations, expectations have transformed potentially infinite paths into a single trajectory. This transformation not only allows us to move away from randomness but also facilitates the identification of clear entry and exit points (which are the controls, in a certain sense). It is crucial to acknowledge that simplifying assumptions were made to derive these results.
It should be evident that the solution will manifest within a mean value "world," representing the cost incurred to eliminate uncertainty from the original equation. In practical terms, this implies that many outcomes will, ex post, turn out better than the mean-value answer, since they reflect the single trajectory that actually unfolded. A more comprehensive problem is the recurring one, in which the trader keeps buying, holding, and selling that stock over many cycles.
The questions arise: Are these assumptions reasonable, or do they deviate significantly from reality? What does the data reveal about the validity of these assumptions? It's essential to remember that models serve as pale representations of the real world. However, within the realm of Quantitative analysis, they stand as the best tools at our disposal.