AI-driven Trading Algorithms Using Data Science - TIDES Newsletter - 24
Kalilur Rahman
Director @ Novartis | Technology Transformation Leader | Author | Ex-Accenture/Cognizant/TCS | Life Long Learner | Quizzer | Mentor | Speaker | Influencer | Operations | Consulting | Quality Engineering
I had the opportunity to complete several Coursera courses thanks to the supportive learning environment provided by my current employer. This took place over the course of a few weeks, mainly during weekends and any available free time.
In this article, we will explore how Data Science and Python can be harnessed to assess stock performance. For this two-part analysis, I have selected two groups of stocks: one comprising the top-performing stocks in the US, and the other featuring some of the key stocks from the Indian Stock market. Using the knowledge gained from the course and employing key ratios commonly used by investment bankers to predict stock performance, we will compare these stocks. The code used for this analysis is rather basic, providing a preliminary assessment with room for further optimization and fine-tuning.
The comparative analysis of leading stocks was conducted over three distinct time periods: 2015 to 2020 (end of October), the one-year window from 1-Sep-2019 to 31-Aug-2020, and the longer span from 1-Jan-2010 to 12-Sep-2020.
The five baseline stocks, often referred to as FAAAM or FAMGA stocks, are Amazon (AMZN), Apple (AAPL), Alphabet/Google (GOOG), Facebook (FB), and Microsoft (MSFT). Additionally, we will assess the performance of three reference stocks: Netflix (NFLX), NVIDIA (NVDA), and Tesla (TSLA).
Let's delve into the performance of these stocks. Over the past five years, NVIDIA, Amazon, and Tesla have emerged as leaders in terms of returns, with NVIDIA boasting an impressive 20x+ return during this period.
The performance of the FAMGA stocks over the past five years has seen Amazon, Microsoft, and Apple emerge as the leaders in this group, with Google showing impressive gains, albeit at a 2x+ return rate.
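For context, the "Nx return" multiples quoted here are simply the ratio of the final adjusted close to the initial one over the window; a minimal sketch, assuming the `data` DataFrame of adjusted closes built in the notebook code below:
# Total return multiple over the window: last adjusted close divided by the first
total_multiple = data.iloc[-1] / data.iloc[0]
print(total_multiple.sort_values(ascending=False))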
When we examine the correlation of these stocks' daily returns with one another, we notice the following patterns (see the sketch after this list):
1. Google and Microsoft exhibit the highest correlation among the eight stocks.
2. Tesla and Netflix, on the other hand, represent the least correlated pair, with Tesla standing out as an outlier.
3. Microsoft demonstrates a reasonable correlation with both Apple and Amazon.
4. Amazon exhibits a stronger correlation with Facebook compared to its correlation with Apple.
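These pairings can be read straight off the correlation matrix of daily returns; a minimal sketch, assuming the `returns` DataFrame of daily percentage changes computed in the notebook code below:
corr = returns.corr()
pairs = corr.abs().unstack().sort_values(ascending=False)
pairs = pairs[pairs < 1.0]    # drop self-correlations; each remaining pair appears twice (A,B) and (B,A)
print(pairs.head(2))          # most correlated pair
print(pairs.tail(2))          # least correlated pair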
Considering the risk-return trade-off for these stocks, NVIDIA emerges as a strong performer, followed by Tesla, Amazon, Netflix, Microsoft, Apple, Facebook, and Google.
In terms of risk profiles, Google has the lowest risk, followed by Microsoft, Apple, and Amazon, while Tesla carries the highest risk, closely followed by NVIDIA.
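Here, "risk" is the standard deviation of daily returns. A rough way to reproduce this risk-return ranking, again assuming the `returns` DataFrame and the numpy import from the notebook below (risk-free rate ignored for simplicity):
ann_return = returns.mean() * 252         # annualized mean return
ann_risk = returns.std() * np.sqrt(252)   # annualized volatility
print((ann_return / ann_risk).sort_values(ascending=False))   # naive return-per-unit-risk ranking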
In summary, the overall performance of the FAMGA stocks over the past five years aligns with their status as key players in the market. It's worth noting that various performance metrics, including Sharpe, Adjusted Sharpe, Burke, Martin, Modigliani, Sortino, Alpha, Beta, and others, all highlight the exceptional performance of these five stocks. The same can be said for Netflix, NVIDIA, and Tesla. While each of these stocks faces unique challenges, their consistent front-runner positions make them clear winners. Detailed analysis, including images and graphs, can be found in the attached markdown files and PDFs for those interested in a more comprehensive examination.
Stay tuned for the next post, which will delve into the bellwether stocks of the Indian Stock market.
# coding: utf-8
# # Indian School of Business - Trading Specialization Course
# ## A comparative study of Leading Stocks
# ### 3-Way Comparison
# #### Data from 2015-2020 - End of October
# #### Data from 1 year: 1-Sep-2019 to 31-Aug-2020
# #### Data from 1-Jan-2010 to 12-Sep-2020
#
# This comparison will give us an indication of how the following stocks performed <br>
#
# 1. **Amazon** - __'AMZN'__
# 2. **Apple** - __'AAPL'__
# 3. **Alphabet / Google** - __'GOOG'__
# 4. **Facebook** - __'FB'__
# 5. **Microsoft** - __'MSFT'__
# 6. **Netflix** - __'NFLX'__
# 7. **NVidia** - __'NVDA'__
# 8. **Tesla** - __'TSLA'__
# In[1]:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import math
import seaborn as sns
get_ipython().run_line_magic('matplotlib', 'inline')
import warnings
warnings.filterwarnings("ignore")
from pandas_datareader import data as pdr
#!pip install fix_yahoo_finance --user
#import fix_yahoo_finance as yf
import yfinance as yf
yf.pdr_override()
# In[2]:
symbols = ['AMZN','AAPL', 'GOOG', 'FB', 'MSFT', 'NFLX', 'NVDA','TSLA']
symbols_2 = ['AMZN','AAPL', 'GOOG', 'FB', 'MSFT']
# In[3]:
start = '2015-09-13'
end = '2020-09-12'
# In[4]:
data = pdr.get_data_yahoo(symbols, start, end)['Adj Close']
data2= pdr.get_data_yahoo(symbols_2, start, end)['Adj Close']
# In[5]:
data.head(10)
# In[6]:
data2.head(10)
# In[7]:
data.columns
# In[8]:
data.describe()
# In[9]:
data2.describe()
# In[10]:
# Normalize each stock to 1.0 at the start of the window
normalize_stocks = data.apply(lambda x: x / x.iloc[0])
normalize_stocks.plot(figsize=(12,6)).axhline(1, lw=1, color='black')
plt.xlabel("Date")
plt.ylabel("Adj Close (normalized)")
plt.grid()
plt.title("Normalized Adjusted Close Price")
plt.show()
# In[11]:
data.plot(figsize=(16,10))
# In[12]:
# Normalize each of the top-five stocks to 1.0 at the start of the window
normalize_stocks = data2.apply(lambda x: x / x.iloc[0])
normalize_stocks.plot(figsize=(12,6)).axhline(1, lw=1, color='black')
plt.xlabel("Date")
plt.ylabel("Adj Close (normalized)")
plt.grid()
plt.title("Normalized Adjusted Close Price")
plt.show()
# In[13]:
data2.plot(figsize=(16,10))
# In[14]:
# Correlation of adjusted close prices across all eight stocks
corr_rest = data.corr()
corr_rest
# In[15]:
# Top 5
corr_rest = data2.corr()
corr_rest
# In[16]:
corr_rest = data.corr()
# In[17]:
pair_value = corr_rest.abs().unstack()
pair_value.sort_values(ascending = False)
# In[18]:
corr_rest['AAPL'].sort_values(ascending=False)
# In[19]:
corr_rest['GOOG'].sort_values(ascending=False)
# In[20]:
corr_rest['FB'].sort_values(ascending=False)
# In[21]:
corr_rest['MSFT'].sort_values(ascending=False)
# In[22]:
corr_rest['NFLX'].sort_values(ascending=False)
# In[23]:
corr_rest['TSLA'].sort_values(ascending=False)
# In[24]:
corr_rest['AMZN'].sort_values(ascending=False)
# In[25]:
corr_rest['NVDA'].sort_values(ascending=False)
# In[26]:
#!pip install pandas.tools.plotting
#from plotting import scatter_matrix
from pandas.plotting import scatter_matrix
scatter_matrix(corr_rest, figsize=(16,12), alpha=0.3)
# In[27]:
# Returns
# Daily percentage returns for all eight stocks
returns = data.pct_change()
returns.head()
# In[28]:
returns = returns.dropna()
# In[29]:
sns.pairplot(returns[1:])
# In[30]:
# Worst Single Day Returns
returns.idxmin()
# In[31]:
# Best Single Day Returns
returns.idxmax()
# In[32]:
def plot_bar_chart(pd_module, plot_value, plot_title):
    # Plot a bar chart; callers pass the pandas module itself as the first argument
    plotdata = pd_module.DataFrame(plot_value)
    plt.style.use('seaborn-dark-palette')
    plotdata.plot(kind="bar", colormap='Spectral', title=plot_title, legend=None)
# In[33]:
returns.std()
#pd_df = pd.DataFrame(returns.std())
#plot_bar_chart(pd_df,returns.std(),"Standard Returns")
# In[34]:
sns.pairplot(returns.dropna())
# In[35]:
# Upper triangle: scatter plots; lower triangle: KDE plots; diagonal: histograms of the daily returns
returns_fig = sns.PairGrid(returns.dropna())
returns_fig.map_upper(plt.scatter, color='seagreen', alpha=0.5)
returns_fig.map_lower(sns.kdeplot, cmap='coolwarm')
returns_fig.map_diag(plt.hist, bins=30, alpha=0.6)
# In[36]:
sns.heatmap(returns.corr(),annot=True)
# In[37]:
rest_rets = returns.corr()
rest_rets
# In[38]:
#Plot Scatter Matrix
scatter_matrix(rest_rets, figsize=(16,10))
plt.show()
# In[39]:
rest_rets.hist(bins=15, figsize=(16,12))
# In[40]:
rets = returns.dropna()
area = np.pi * 20
plt.figure(figsize=(16,8))
plt.scatter(rets.mean(), rets.std(), alpha=0.5, s=area)
# Set the x and y limits of the plot and the axis titles
plt.ylim([0.0, 0.04])
plt.xlim([0.0, 0.005])
plt.xlabel('Expected returns')
plt.ylabel('Risk')
plt.title("Risk vs. Expected Returns")
for label, x, y in zip(rets.columns, rets.mean(), rets.std()):
    plt.annotate(
        label,
        xy=(x, y), xytext=(50, 50),
        textcoords='offset points', ha='right', va='bottom',
        arrowprops=dict(arrowstyle='-', connectionstyle='arc3,rad=-0.3'))
# In[41]:
rest_rets = returns.corr()
pair_value = rest_rets.abs().unstack()
pair_value.sort_values(ascending = False)
# In[42]:
# Normalized Returns Data
Normalized_Value = ((returns[:] - returns[:].min()) /(returns[:].max() - returns[:].min()))
Normalized_Value.head()
# In[43]:
Normalized_Value.corr()
# In[44]:
normalized_rets = Normalized_Value.corr()
normalized_pair_value = normalized_rets.abs().unstack()
normalized_pair_value.sort_values(ascending = False)
# In[45]:
#symbols = ['AAPL','AMZN', 'GOOG', 'FB', 'MSFT', 'NFLX', 'TSLA','NVDA']
stocks = ['GOOG', 'AAPL', 'MSFT', 'AMZN','FB']
# In[46]:
relate_industry = pdr.get_data_yahoo(stocks, start, end)["Adj Close"]
# In[47]:
relate_industry.head()
# In[48]:
relate_ind_summary=relate_industry.describe()
relate_industry.describe()
# In[49]:
sns.heatmap(relate_ind_summary,annot=True,cmap='RdYlBu_r', fmt='g')
# In[50]:
# Normalize each stock to 1.0 at the start of the window
normalize_stocks = relate_industry.apply(lambda x: x / x.iloc[0])
normalize_stocks.plot(figsize=(12,6)).axhline(1, lw=1, color='black')
plt.xlabel("Date")
plt.ylabel("Adj Close (normalized)")
plt.grid()
plt.title("Normalized Adjusted Close Price")
plt.show()
# In[51]:
corr_tech= relate_industry.corr()
corr_tech
# In[52]:
sns.heatmap(corr_tech,annot=True)
# In[53]:
AAPL = data
AAPL = AAPL.assign(AAPL = relate_industry['AAPL'].values)
AAPL.head()
# In[54]:
AAPL_rets = AAPL.pct_change().dropna()
AAPL_rets.head()
# In[55]:
AAPL_rets.corr()
# In[56]:
AAPL_rets = AAPL_rets.corr()
pair_value_1 = AAPL_rets.abs().unstack()
pair_value_1.sort_values(ascending = False)
# In[57]:
GOOG = data
GOOG = GOOG.assign(GOOG = relate_industry['GOOG'].values)
GOOG.head()
# In[58]:
GOOG_rets = GOOG.pct_change().dropna()
GOOG_rets.head()
# In[59]:
GOOG_rets.corr()
# In[60]:
GOOG_rets = GOOG_rets.corr()
pair_value_2 = GOOG_rets.abs().unstack()
pair_value_2.sort_values(ascending = False)
# In[61]:
MSFT = pd.concat([data, relate_industry['MSFT']], axis=1)
MSFT.head()
# In[62]:
MSFT_rets = MSFT.pct_change().dropna()
MSFT_rets.head()
# In[63]:
MSFT_rets.corr()
# In[64]:
MSFT_rets = MSFT_rets.corr()
pair_value_3 = MSFT_rets.abs().unstack()
pair_value_3.sort_values(ascending = False)
# In[65]:
FB = pd.concat([data, relate_industry['FB']], axis=1)
FB.head()
# In[66]:
FB_rets = FB.pct_change().dropna()
FB_rets.head()
# In[67]:
FB_rets.corr()
# In[68]:
FB_rets = FB_rets.corr()
pair_value_4 = FB_rets.abs().unstack()
pair_value_4.sort_values(ascending = False)
# In[69]:
AMZN = pd.concat([data, relate_industry['AMZN']], axis=1)
AMZN.head()
# In[70]:
AMZN_rets = AMZN.pct_change().dropna()
AMZN_rets.head()
# In[71]:
AMZN_rets.corr()
# In[72]:
AMZN_rets = AMZN_rets.corr()
pair_value_5 = AMZN_rets.abs().unstack()
pair_value_5.sort_values(ascending = False)
# ### Z-Value Rules Strategies
# #### To decide when to enter and exit a pairs-trading position, first calculate the mean and standard deviation of the spread over the formation period; from these, compute each day's price spread and z-score.
# #### Entry: when the absolute value of the z-score exceeds 1, enter a position by buying the lower-priced stock and selling the higher-priced one.
# #### Exit: when the absolute value of the z-score falls back below 1 (i.e., the spread reverts toward its mean), exit by unwinding both legs: sell the stock that was bought and buy back the stock that was sold.
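# #### A minimal sketch of these rules, assuming `S1` and `S2` are the two price series (as defined
# #### in the cells below) and the formation-period statistics are taken over the full window:
spread = S1 / S2                               # price ratio used as the spread
z = (spread - spread.mean()) / spread.std()    # z-score versus the formation period
long_entry = z < -1        # spread unusually low: buy S1, sell S2
short_entry = z > 1        # spread unusually high: sell S1, buy S2
exit_signal = z.abs() < 1                      # spread has reverted: unwind both legs
signals = pd.DataFrame({'z': z, 'long': long_entry, 'short': short_entry, 'exit': exit_signal})
signals.tail()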
# In[73]:
import statsmodels
from statsmodels.tsa.stattools import coint
S1 = relate_industry['AAPL']
S2 = data['MSFT']
score, pvalue, _ = coint(S1, S2)
print(pvalue)
ratios = S1 / S2
ratios.plot()
plt.axhline(ratios.mean())
plt.legend(['Ratio'])
plt.show()
# In[74]:
S1 = data['AAPL']
S2 = data['GOOG']
score, pvalue, _ = coint(S1, S2)
print(pvalue)
ratios = S1 / S2
ratios.plot()
plt.axhline(ratios.mean())
plt.legend(['Ratio'])
plt.show()
# In[75]:
S1 = data['AAPL']
S2 = data['AMZN']
score, pvalue, _ = coint(S1, S2)
print(pvalue)
ratios = S1 / S2
ratios.plot()
plt.axhline(ratios.mean())
plt.legend(['Ratio'])
plt.show()
# In[76]:
def zscore(series):
    return (series - series.mean()) / np.std(series)
# In[77]:
zscore(ratios).plot()
plt.axhline(zscore(ratios).mean())
plt.axhline(1.0, color='red')
plt.axhline(-1.0, color='green')
plt.show()
# In[78]:
zscore(ratios).plot(figsize=(14,8))
plt.axhline(zscore(ratios).mean(), color='black')
plt.axhline(1.0, c='r', ls='--')
plt.axhline(-1.0, c='g', ls='--')
plt.legend(['Spread z-score', 'Mean', '+1', '-1'])
# In[79]:
ratios = relate_industry['GOOG'] / data['AAPL']
print(len(ratios))
# In[80]:
ratios = data['AAPL'] / data['GOOG']
print(len(ratios))
# In[81]:
ratios = data['AAPL'] / data['AMZN']
print(len(ratios))
# # Pair Measurements
# # Alpha
# # Beta
# # Volatility
# # Sharpe Ratios
# # Treynor Measure
# # Appraisal
# # M square
# # T square
# # Jensen Measure
# # Sortino Ratios
# # Total Return
# # Beta of market
# # Beta of portfolio
# In[82]:
# Market Data TSLA
TSLA = yf.download('TSLA',start,end)['Adj Close']
TSLA.head()
# In[83]:
# Daily returns for TSLA, stored in a plain variable rather than as an attribute on the Series
TSLA_rets = TSLA.pct_change(1).dropna()
# In[84]:
TSLA_risk = TSLA.std()
TSLA_risk
# In[85]:
rf = 0.018           # annual risk-free rate
rf_daily = rf / 252  # converted to a daily rate to match the daily returns
excess_returns_1 = relate_industry['AAPL'].pct_change(1).dropna() - rf_daily
excess_returns_1.head()
# In[86]:
excess_returns_2 = data['GOOG'].pct_change(1).dropna() - rf_daily
excess_returns_3 = data['MSFT'].pct_change(1).dropna() - rf_daily
excess_returns_4 = data['FB'].pct_change(1).dropna() - rf_daily
excess_returns_5 = data['AMZN'].pct_change(1).dropna() - rf_daily
# In[87]:
ret = pd.concat([relate_industry['AAPL'].pct_change(1).dropna(),
                 data['GOOG'].pct_change(1).dropna(),
                 data['MSFT'].pct_change(1).dropna(),
                 data['FB'].pct_change(1).dropna(),
                 data['AMZN'].pct_change(1).dropna()],
                axis=1)
ret.head()
# In[88]:
excess_returns = pd.concat([excess_returns_1, excess_returns_2, excess_returns_3, excess_returns_4,excess_returns_5],axis=1)
excess_returns.head()
# In[89]:
mean_excess = excess_returns.mean()
print(mean_excess.dtype)
plot_bar_chart(pd, mean_excess, "Excess Returns Mean")
# In[90]:
excess_returns.std()
plot_bar_chart(pd, excess_returns.std(), "Excess Returns Std")
# In[91]:
sharpe_ratio = excess_returns.mean() / excess_returns.std()   # daily Sharpe ratio (not annualized)
print('Sharpe Ratio:')
print(sharpe_ratio)
plot_bar_chart(pd, sharpe_ratio, "Sharpe Ratio")
# In[92]:
from scipy import stats
# Regress each stock's daily returns against TSLA's daily returns.
# linregress returns slope (beta), intercept (alpha), r, p-value, and standard error.
for sym in ['MSFT', 'AAPL', 'GOOG', 'FB', 'NFLX', 'AMZN', 'NVDA']:
    beta, alpha, r_value, p_value, std_err = stats.linregress(data[sym].pct_change(1).dropna(), TSLA_rets)
    print("%s and TSLA" % sym)
    print("Beta: %f" % beta)
    print("Alpha: %f" % alpha)
    print("R-Squared: %f" % r_value**2)
    print("p-value: %f" % p_value)
    print("Standard Error: %f" % std_err)
# In[99]:
rf = 0.018
# Total TSLA return over the window, computed from prices (TSLA serves as the market proxy)
mrk_rate_ret = (TSLA.iloc[-1] - TSLA.iloc[0]) / TSLA.iloc[0]
# Beta = Cov(stock, market) / Var(market)
beta = np.cov(ret['AAPL'], TSLA_rets)[0, 1] / np.var(TSLA_rets)
er = rf + beta * (mrk_rate_ret - rf)   # CAPM expected return
tr = (er - rf) / beta                  # Treynor ratio
print('Treynor Ratio: ', tr)
# In[100]:
# Beta
print("Beta:")
beta_arr = []
for column in ret:
    # Beta = Cov(stock, market) / Var(market), with TSLA as the market proxy
    beta = np.cov(ret[column], TSLA_rets)[0, 1] / np.var(TSLA_rets)
    print(ret[column].name, beta)
    beta_arr.append(beta)
print(beta_arr)
plot_bar_chart(pd, beta_arr, "Beta")
# In[101]:
# Alpha
print("Alpha:")
alpha_arr = []
for column in ret:
    beta = np.cov(ret[column], TSLA_rets)[0, 1] / np.var(TSLA_rets)
    # Alpha = mean stock return minus beta times mean market return
    alpha = np.mean(ret[column]) - beta * np.mean(TSLA_rets)
    print(ret[column].name, alpha)
    alpha_arr.append(alpha)
print(alpha_arr)
plot_bar_chart(pd, alpha_arr, "Alpha")
# In[102]:
# Unsystematic Risk or Total Risk
Close = pd.concat([relate_industry['AAPL'], data['MSFT'], data['AMZN'], data['GOOG'], data['FB']], axis=1)
Stock_risk = Close.std()
# Note: uses the beta from the last loop iteration above as a rough approximation
Unsystematic_risk = Stock_risk - beta * TSLA_risk
print('Unsystematic Risk:')
print(Unsystematic_risk)
plot_bar_chart(pd, Unsystematic_risk, "Unsystematic Risk")
# In[103]:
# Treynor Ratio
print("Treynor Ratio:")
treynor_arr = []
mrk_rate_ret = (TSLA.iloc[-1] - TSLA.iloc[0]) / TSLA.iloc[0]
for column in ret:
    beta = np.cov(ret[column], TSLA_rets)[0, 1] / np.var(TSLA_rets)
    er = rf + beta * (mrk_rate_ret - rf)
    tr = (er - rf) / beta
    print(ret[column].name, tr)
    treynor_arr.append(tr)
print(treynor_arr)
plot_bar_chart(pd, treynor_arr, "Treynor Ratio")
# In[104]:
# Modigliani Ratio, M2
print("Modigliani Ratio:")
mod_arr = []
rf = 0.018
mrk_rate_ret = (TSLA.iloc[-1] - TSLA.iloc[0]) / TSLA.iloc[0]
for column in ret:
    beta = np.cov(ret[column], TSLA_rets)[0, 1] / np.var(TSLA_rets)
    er = rf + beta * (mrk_rate_ret - rf)
    rdiff = ret[column] - rf   # stock returns over the risk-free rate
    bdiff = TSLA_rets - rf     # market returns over the risk-free rate
    mr = (er - rf) * np.std(rdiff) / np.std(bdiff) + rf
    print(ret[column].name, mr)
    mod_arr.append(mr)
print(mod_arr)
plot_bar_chart(pd, mod_arr, "Modigliani Ratio")
# In[105]:
print("Information Ratio:")
ir_arr = []
for column in ret:
    diff = ret[column] - TSLA_rets   # active return versus the TSLA benchmark
    ir = np.mean(diff) / np.std(diff)
    print(ret[column].name, ir)
    ir_arr.append(ir)
print(ir_arr)
plot_bar_chart(pd, ir_arr, "Information Ratio")
# In[106]:
print('Omega Ratio:')
or_arr = []
annual_return_threshold = 0.0
daily_return_thresh = pow(1 + annual_return_threshold, 1 / 252) - 1
for column in ret:
    returns_less_thresh = ret[column] - daily_return_thresh
    numer = sum(returns_less_thresh[returns_less_thresh > 0.0])
    denom = -1.0 * sum(returns_less_thresh[returns_less_thresh < 0.0])
    omega_ratio = numer / denom if denom > 0.0 else np.nan
    print(ret[column].name, omega_ratio)
    or_arr.append(omega_ratio)
print(or_arr)
plot_bar_chart(pd, or_arr, "Omega Ratio")
# In[107]:
print('Sortino Ratio:')
so_arr = []
for column in ret:
    returns = ret[column]
    numer = pow((1 + returns.mean()), 252) - 1   # annualized return
    # Note: a true Sortino ratio uses downside deviation; this simplified version
    # uses total annualized volatility as the denominator.
    denom = returns.std() * np.sqrt(252)
    sortino_ratio = numer / denom if denom > 0.0 else np.nan
    print(ret[column].name, sortino_ratio)
    so_arr.append(sortino_ratio)
print(so_arr)
plot_bar_chart(pd, so_arr, "Sortino Ratio")
# In[108]:
print("Calmar Ratio:")
cr_arr = []
rf = 0.018
mrk_rate_ret = (TSLA.iloc[-1] - TSLA.iloc[0]) / TSLA.iloc[0]
for column in ret:
    beta = np.cov(ret[column], TSLA_rets)[0, 1] / np.var(TSLA_rets)
    er = rf + beta * (mrk_rate_ret - rf)
    # Rough maximum-drawdown proxy computed on the returns series itself
    max_dd = 1.0 - (ret[column] / np.maximum.accumulate(ret[column])).min()
    calmar_r = (er - rf) / max_dd
    print(ret[column].name, calmar_r)
    cr_arr.append(calmar_r)
print(cr_arr)
plot_bar_chart(pd, cr_arr, "Calmar Ratio")
# In[109]:
print("Sterling Ratio:")
sr_arr = []
rf = 0.018
mrk_rate_ret = (TSLA.iloc[-1] - TSLA.iloc[0]) / TSLA.iloc[0]
for column in ret:
    beta = np.cov(ret[column], TSLA_rets)[0, 1] / np.var(TSLA_rets)
    er = rf + beta * (mrk_rate_ret - rf)
    # Rough average-drawdown proxy computed on the returns series itself
    average_dd = 1.0 - (ret[column] / np.maximum.accumulate(ret[column])).mean()
    sterling_r = (er - rf) / average_dd
    print(ret[column].name, sterling_r)
    sr_arr.append(sterling_r)
print(sr_arr)
plot_bar_chart(pd, sr_arr, "Sterling Ratio")
# In[110]:
print("Appraisal Ratio:")
# Note: uses the alpha from the last loop iteration above as a rough approximation
appraisal_ratio = alpha / Unsystematic_risk
print(appraisal_ratio)
plot_bar_chart(pd, appraisal_ratio, "Appraisal Ratio")
# In[111]:
print("Burke Ratio:")
br_arr = []
rf = 0.018
mrk_rate_ret = (TSLA.iloc[-1] - TSLA.iloc[0]) / TSLA.iloc[0]
for column in ret:
    returns = ret[column]
    beta = np.cov(returns, TSLA_rets)[0, 1] / np.var(TSLA_rets)
    er = rf + beta * (mrk_rate_ret - rf)
    # Rough squared-drawdown proxy computed on the returns series itself
    average_dd_squared = 1.0 - ((returns / np.maximum.accumulate(returns)).mean()) ** 2
    round_average_dd = round(average_dd_squared, 4)
    burke_r = (er - rf) / math.sqrt(abs(round_average_dd))
    print(ret[column].name, burke_r)
    br_arr.append(burke_r)
print(br_arr)
plot_bar_chart(pd, br_arr, "Burke Ratio")
# In[112]:
# Ulcer Index 14 days
max14 = Close.rolling(window=14,min_periods=1).max()
drawdown_percent = 100*((Close-max14)/max14)
avg_sq = round(drawdown_percent * drawdown_percent, 2)
Ulcer = np.sqrt(avg_sq.rolling(window=14).mean())
Ulcer_index = Ulcer.dropna()
Ulcer_index.head()
# In[113]:
# Martin Ratio
print('Martin Ratio:')
rf = 0.018
annual_return = ret.mean() * 252   # annualized return per stock
martin_ratio = (annual_return - rf) / Ulcer_index.sum()
print(martin_ratio)
plot_bar_chart(pd,martin_ratio,"Martin Ratio")
# In[114]:
# Pain Index
max14 = Close.rolling(window=14,min_periods=1).max()
drawdown = 100*((Close-max14)/max14)
pain = drawdown.rolling(window=14).mean()
pain_index = pain.dropna()
pain_index.head()
# In[115]:
# Pain Ratio
print('Pain Ratio:')
rf = 0.018
annual_return = ret.mean() * 252   # annualized return per stock
pain_ratio = (annual_return - rf) / pain_index.sum()
print(pain_ratio)
plot_bar_chart(pd,pain_ratio,"Pain Ratio")
# In[116]:
from scipy.stats import kurtosis, skew
# In[117]:
print("Skewness:")
sk_arr=[]
for column in ret:
    stock_skewness = skew(ret[column])
    print(ret[column].name, stock_skewness)
    sk_arr.append(stock_skewness)
print(sk_arr)
plot_bar_chart(pd, sk_arr, "Stock Skewness")
# In[118]:
print("Kurtosis:")
sku_arr=[]
for column in ret:
    stock_kurtosis = kurtosis(ret[column])
    print(ret[column].name, stock_kurtosis)
    sku_arr.append(stock_kurtosis)
print(sku_arr)
plot_bar_chart(pd, sku_arr, "Stock Kurtosis")
# In[119]:
# Adjusted Sharpe Ratio
print("Adjusted Sharpe Ratio:")
# Notes: scipy's kurtosis() already returns excess kurtosis, so it is used directly here, and the
# skewness/kurtosis values come from the last stock in the loops above (a rough simplification).
Adj_SR = sharpe_ratio * (1 + (stock_skewness / 6.0) * sharpe_ratio - (stock_kurtosis / 24.0) * sharpe_ratio**2)
print(Adj_SR)
plot_bar_chart(pd, Adj_SR, "Adjusted Sharpe Ratio")
# In[120]:
# Downside Risk
downside_risk = ret[ret < ret.mean()].std(skipna = True) * np.sqrt(252)
downside_risk
plot_bar_chart(pd,downside_risk,"Downside Risk")
# In[121]:
# Upside Risk
upside_risk = ret[ret > ret.mean()].std(skipna = True) * np.sqrt(252)
upside_risk
plot_bar_chart(pd,upside_risk,"Upside Risk")
# In[122]:
print("Bernardo Ledoit Ratio:")
blr_arr = []
threshold = 0.0
order = 1
for column in ret:
    # Higher partial moment: gains above the threshold
    diff = (ret[column] - threshold).clip(lower=0)
    hpm = sum(diff ** order) / len(ret)
    # Lower partial moment: losses below the threshold
    diff_1 = (threshold - ret[column]).clip(lower=0)
    lpm = sum(diff_1 ** order) / len(ret)
    gain_loss = hpm / lpm
    print(ret[column].name, gain_loss)
    blr_arr.append(gain_loss)
print(blr_arr)
plot_bar_chart(pd, blr_arr, "Bernardo Ledoit Ratio")
# In[123]:
print("Value at Risk (VaR):")
var_arr = []
var_alpha = 0.05   # assumed 95% confidence level; the original reused the CAPM `alpha` here by mistake
for column in ret:
    sorted_returns = np.sort(ret[column])
    index = int(var_alpha * len(sorted_returns))
    VaR = abs(sorted_returns[index])
    print(ret[column].name, VaR)
    var_arr.append(VaR)
print(var_arr)
plot_bar_chart(pd, var_arr, "Value at Risk")
# In[124]:
print("Conditional VaR:")
cvar_arr = []
for column in ret:
    sorted_returns = np.sort(ret[column])
    index = int(var_alpha * len(sorted_returns))
    # Average of the worst `index` daily returns
    CVaR = abs(sorted_returns[:index].mean())
    print(ret[column].name, CVaR)
    cvar_arr.append(CVaR)
print(cvar_arr)
plot_bar_chart(pd, cvar_arr, "Conditional Value at Risk")
# In[125]:
# Cumulative returns (simple sum of daily returns as an approximation)
print("Cumulative:")
for column in ret:
    cum_ret = ret[column].cumsum()
    print(cum_ret.head())
# In[126]:
cum_ret = ret.cumsum()
cum_ret.head()
# In[127]:
avg_ret = ret.mean()
avg_ret
plot_bar_chart(pd, avg_ret, "Average Returns")
# In[128]:
avg_win = avg_ret.where(avg_ret > 0, 0)   # mean daily return where positive, else 0
print(avg_win)
plot_bar_chart(pd, avg_win, "Average Wins")
# In[129]:
avg_loss = avg_ret.where(avg_ret <= 0, 0)  # mean daily return where non-positive, else 0
print(avg_loss)
plot_bar_chart(pd, avg_loss, "Average Loss")
def plot_bars_sns(data_map, title, xlab, ylab, plotter):
    """Barplot using seaborn backend.
    :param data_map: A dictionary of labels and values.
    :param title: Plot title.
    :param xlab: X axis label.
    :param ylab: Y axis label.
    :param plotter: A wub.vis.report.Report instance.
    """
    data = pd.DataFrame({'Value': list(data_map.values()), 'Label': list(data_map.keys()),
                         'x': np.arange(len(data_map))})
    ax = sns.barplot(x="x", y="Value", hue="Label", data=data, hue_order=list(data_map.keys()))
    ax.set_title(title)
    ax.set_xlabel(xlab)
    ax.set_ylabel(ylab)
    ax.set_xticks([])
    plotter.pages.savefig()
    plotter.plt.clf()
What do you think?
I hope you enjoyed reading this as much as I enjoyed writing it. I am all ears to hear from you. Sharing is caring! Feel free to like, share, or comment with your thoughts, and please tag me if you forward this.
Credits: The header and most of the images were designed using Canva. The course was taken on Coursera, and GitHub is used for storage. All other linked quotes and images are freely available on the public Internet, and respective trademarks are owned by the corresponding firms. The opinions expressed are from a personal-experience standpoint and in no way reflect the views of my current or past employers or clients.
#WhatInspiresMe #datascience #trading #investment #algorithmictrading #KRPoints #topperformingstocks #technicalanalysisusingpython #investmentadvice #investmentportfolio