Sure or Not? Expert Advice on FX Exchange Rates
TLDR
Recently, I had to make several FX transfers during a volatile stretch for the SGD/MYR exchange rate.
It felt unnerving at first to make these decisions so I sought out the opinions of various experts. They were all right and wrong to varying degrees.
Instead of making gut decisions on who to trust, I built a simple model to quantify and merge the various expert opinions.
Distilling an Opinion
For each expert, I asked 3 questions: what rate do you expect (the median)? What range would you bet on (the 25th to 75th percentile)? And what range would genuinely surprise you (the 5th to 95th percentile)? Together with a self-rated confidence score, this created a nice quantile spread. I felt this was a simple enough method to hold discussions with different people.
Here are some of the expert opinions (names redacted, of course!):
banker = {
"q05": 3.10,
"q25": 3.20,
"median": 3.25,
"q75": 3.33,
"q95": 3.35,
"confidence": 0.50,
}
youtuber = {
"q05": 2.90,
"q25": 3.10,
"median": 3.20,
"q75": 3.28,
"q95": 3.30,
"confidence": 0.25,
}
fx_trader = {
"q05": 3.20,
"q25": 3.25,
"median": 3.28,
"q75": 3.35,
"q95": 3.40,
"confidence": 0.75,
}
As you can see, the spread is quite significant.
The biggest takeaway: the greatest uncertainty is in how low the rate could go. That expectation shapes behaviour when the currency is trending downward and there's a time crunch to exchange some currency: wait, or bite the bullet?
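To make that concrete, here's a quick check against the three opinions above (a hypothetical helper, not from the repo): the disagreement on the downside quantile is three times the disagreement on the upside.

```python
# Downside (q05) vs upside (q95) disagreement across the three experts,
# values copied from the banker / youtuber / fx_trader dicts above.
q05s = [3.10, 2.90, 3.20]
q95s = [3.35, 3.30, 3.40]

downside_spread = max(q05s) - min(q05s)  # ~0.30
upside_spread = max(q95s) - min(q95s)    # ~0.10
```

The experts roughly agree on the ceiling; it's the floor they can't agree on.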
Building an ExpertOpinion Model
To quantify each expert's opinion, I decided on a beta distribution because of its flexibility.
The downside is that it's a bounded distribution. I attempted fitting upper and lower bounds with linear and S-curve (logistic) functions, but in the end, what worked best was simply taking the outermost quantiles and extending them by 0.1....
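For illustration, here's a minimal sketch of that bound-extension and normalization step (the function name and dict layout are my own assumptions, not the repo's interface):

```python
def normalize_quantiles(opinion: dict, pad: float = 0.1) -> dict:
    """Map raw FX-rate quantiles onto [0, 1] for a beta fit.
    Bounds are simply the outermost quantiles extended by `pad`."""
    lower = opinion["q05"] - pad
    upper = opinion["q95"] + pad
    span = upper - lower
    return {k: (opinion[k] - lower) / span
            for k in ("q05", "q25", "median", "q75", "q95")}

banker = {"q05": 3.10, "q25": 3.20, "median": 3.25, "q75": 3.33, "q95": 3.35}
norm = normalize_quantiles(banker)  # all values now strictly inside (0, 1)
```

The `pad` keeps the outer quantiles off the hard 0/1 edges, where a beta fit tends to misbehave.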
The objective function was a simple mix of errors at each quantile, with an emphasis on the median and outer bounds.
Here's a stub:
def _quantile_objective(self, params):
    """
    Objective function for beta parameter optimization,
    incorporating expert confidence and multiple quantiles.
    Assumes `from scipy import stats`.
    """
    alpha, beta = params
    # Basic quantile matching
    q25_err = (stats.beta.ppf(0.25, alpha, beta) - self.norm_q25) ** 2
    q75_err = (stats.beta.ppf(0.75, alpha, beta) - self.norm_q75) ** 2
    median_err = (stats.beta.ppf(0.5, alpha, beta) - self.norm_median) ** 4
    confidence_weight = 1 + self.expert.confidence
    # Basic error: emphasize the median more for confident experts
    error = median_err**confidence_weight + q25_err + q75_err
    # Add tail quantile matching if available ("is not None" so a
    # normalized quantile of 0.0 isn't silently skipped)
    q05_err = (
        (stats.beta.ppf(0.05, alpha, beta) - self.norm_q05) ** 4
        if self.norm_q05 is not None
        else 0
    )
    error += q05_err**confidence_weight
    q95_err = (
        (stats.beta.ppf(0.95, alpha, beta) - self.norm_q95) ** 4
        if self.norm_q95 is not None
        else 0
    )
    error += q95_err**confidence_weight
    # Shape penalty: keep the mode near the median to prevent "fat" curves
    # (the closed-form mode only holds for alpha, beta > 1)
    mode = (alpha - 1) / (alpha + beta - 2) if alpha + beta > 2 else 0.5
    mode_penalty = ((mode - self.norm_median) ** 2) * self.expert.confidence
    # Penalty for extreme parameters (also discourages "fat" curves)
    param_penalty = 0.01 * (alpha**2 + beta**2) / (alpha + beta) ** 2
    return error + mode_penalty + param_penalty
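The post doesn't show the optimization driver itself, so here's a standalone sketch of how such a fit might be run with SciPy. The function name and starting values are my own assumptions, and the objective is simplified to plain squared quantile errors:

```python
from scipy import stats, optimize

def fit_beta_to_quantiles(targets: dict, init=(2.0, 2.0)):
    """Find (alpha, beta) whose quantiles match `targets`,
    a mapping of quantile level -> normalized value in (0, 1)."""
    def objective(params):
        a, b = params
        if a <= 0 or b <= 0:
            return 1e9  # keep the optimizer in the valid parameter region
        return sum((stats.beta.ppf(p, a, b) - v) ** 2
                   for p, v in targets.items())
    result = optimize.minimize(objective, init, method="Nelder-Mead")
    return result.x

# Symmetric targets should recover a roughly symmetric beta
alpha, beta = fit_beta_to_quantiles({0.25: 0.40, 0.5: 0.50, 0.75: 0.60})
```

Nelder-Mead avoids needing gradients of `ppf`, and the explicit positivity guard is one way to stop the "blow up" mentioned below when the optimizer wanders into invalid parameters.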
For each expert opinion, I fit a beta distribution using numerical optimization. The outcome was... acceptable. I found that the beta parameter estimation would "blow up" if the bounds and initial values were not chosen carefully.
Side Note:
While the curves looked good enough, the fitted parameter values didn't feel intuitive.
After some reading, I tried fitting a skew-normal distribution using MCMC methods. I loved how conceptually intuitive this felt.... however, the results needed more finessing. This now lives in the "someday maybe" pile.
Additional note: half the battle was setting up the correct environment for pymc5, but I highly recommend it over its predecessors. It feels a lot leaner.
Aggregating the Models
Great: I've interviewed some experts and converted each opinion into a distribution.... Now comes the exciting part: with N expert opinions, how do I aggregate them?
I used a straightforward but effective strategy: an empirical sampling approach. A meta-sampler queries the model group and generates an aggregate sample dataset, arbitrarily taking 10_000 samples per expert.
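A minimal sketch of that meta-sampler (the tuple layout for the fitted models is my own; the repo's interface may differ):

```python
import numpy as np
from scipy import stats

def aggregate_samples(fitted, n_per_expert=10_000, seed=42):
    """Pool draws from each expert's fitted beta, rescaled back to FX space.
    `fitted` holds one (alpha, beta, lower, upper) tuple per expert, where
    lower/upper are the bounds used during normalization."""
    rng = np.random.default_rng(seed)
    pools = []
    for alpha, beta, lower, upper in fitted:
        u = stats.beta.rvs(alpha, beta, size=n_per_expert, random_state=rng)
        pools.append(lower + u * (upper - lower))  # back to rate units
    return np.concatenate(pools)

# Two made-up fits just to exercise the pooling
samples = aggregate_samples([(5.0, 5.0, 3.00, 3.45), (4.0, 6.0, 2.80, 3.40)])
```

From there, `pandas.Series(samples).describe()` produces the pocket summary, and a histogram of `samples` gives the aggregate view.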
Here's a simple histogram plot:
... and some pocket-summaries:
fx_rate
mean 3.243234
std 0.102834
min 2.831691
25% 3.184129
50% 3.255911
75% 3.317466
max 3.47799
Thoughts & Someday Maybe
Some rough edges:
For future consideration, a fun next step would be to convert this into a full Bayesian model.
WIP Github Repo: https://github.com/lennardong/fx_bayesian