Feed Forward NN Regression for Alzheimer Disease Prediction
Original Research by Double Check Consulting: Alzheimer Disease Prediction


There’s tremendous interest in predicting Alzheimer disease and related dementias across all ethnic populations. In the USA, the current cost of treatment is $277 billion, and by 2050 costs could reach $1.1 trillion (https://www.hpcwire.com/2019/08/26/how-ai-can-help-identify-the-risk-of-alzheimers-disease/ ). One could look at behavioral, genetic, and claims components to find the best way to obtain meaningful results; methods include regression and NLP (https://www.healthcarefinancenews.com/news/optum-using-ai-predict-alzheimers-disease ). One issue researchers face is small sample size, because relevant genetic markers are present in only a limited subset of any ethnic group; success therefore depends on the richness of the data: cognitive, behavioral, genetic, and more (https://undark.org/2019/03/04/artificial-intelligence-ai-alzheimers/ ). Our data had behavioral components such as drinking and smoking, genetic markers, and outcome variables such as heart rate, stress test results, and scores from cognitive tests. Our method is a Feed Forward Neural Network that indexed each row and scored it for predictive analytics. A regression model was then applied to the indexed data to find how effective each component was, alone and in combination, toward predicting Alzheimer disease. We also found the maximum predictive score possible for each component at this limited sample size.
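To make the two-stage pipeline concrete, here is a minimal sketch in Python: a small feed-forward network scores each row, then a regression is fit on those scores. The layer sizes, column counts, and synthetic data are illustrative assumptions, not our production configuration.

```python
# Minimal sketch (not the exact production pipeline): a feed-forward
# network assigns each row a score, then a linear model is fit on the scores.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Illustrative stand-ins: 240 rows, 5 genetic + 6 behavioral predictors,
# and a composite outcome (e.g., a cognitive test result).
X = rng.normal(size=(240, 11))
y = X[:, 0] * 0.8 + rng.normal(scale=0.5, size=240)  # synthetic outcome

# Step 1: the feed-forward network scores each row for predictive analytics.
ffnn = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
ffnn.fit(X, y)
row_scores = ffnn.predict(X).reshape(-1, 1)

# Step 2: regression on the scored rows to judge each component's
# contribution (linear term shown; quadratic/cubic terms are added the same way).
linear = LinearRegression().fit(row_scores, y)
print(f"Linear R-square on scored rows: {linear.score(row_scores, y):.2%}")
```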

Results of the 5 genetic factors included in this research showed that the linear component was strongest in predicting Alzheimer disease outcomes. R-square was 29%, and the summary statistics show statistical significance (P < .05). Our sample size was 240; the maximum prediction score was 830. The expectation was to reach 1000, but the low sample size didn’t allow that. Hence the minimum sample size needed is 830 x 170 (1000 - 830) = 141,100 unique rows. One should start a proof of concept (POC) with at least that many unique rows when analyzing such genetic factors responsible for Alzheimer disease.
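As a quick illustration, the rule of thumb can be written as a one-line function. The function name and the 1000-point ceiling default are ours, taken from the numbers in this paragraph.

```python
def min_unique_rows(max_score, ceiling=1000):
    """Sample-size rule of thumb from the text: the achieved prediction
    score times its shortfall from the ceiling."""
    return max_score * (ceiling - max_score)

# 5 genetic factors: score 830 on a 1000-point ceiling.
print(min_unique_rows(830))  # 830 * 170 = 141,100 unique rows
```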

Our results show the 6 non-genetic (NG) factors didn’t do well for Alzheimer disease prediction. The slope was negative, and the R-squares for the linear, quadratic, and cubic models were disappointing. We took 240 rows, and the maximum prediction score found was 890, where the maximum score expected was 1000. The difference is 110, so 890 x 110 = 97,900 unique rows. The genetic factors were only at 830 but statistically significant, a gap of 890 - 830 = 60. So the penalty for the lack of statistical significance is 97,900 x 60 = 5,874,000. The interpretation is that if the same behavioral components were used for prediction with black-box models, at least 6 million unique rows would be needed in the POC itself. Attaining a score of 890 in a small data set of 240 while lacking statistical significance was interpreted as a small region of high signal among the behavioral variables; more such variables at that sample size should bring out enough signal regions to attain respectable predictive accuracy and practical usability.
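The penalty arithmetic in this paragraph can be sketched the same way; the variable names are ours, and the numbers come straight from the text.

```python
# Non-genetic factors: score 890, but not statistically significant.
base_rows = 890 * (1000 - 890)        # 97,900 unique rows
score_gap = 890 - 830                 # 60 points above the significant genetic score
penalty_rows = base_rows * score_gap  # 5,874,000 -> "at least 6 million" rows
print(f"{penalty_rows:,}")
```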

The summary statistics of the behavioral components (non-genetic factors) show a large difference between R-square and adjusted R-square, which suggests that adding non-linear models inflated R-square. Hence R-square is a biased estimator here, and the linear component may be less important for prediction from behavioral variables. The summary statistics also show that the 95% confidence interval on the slope included zero, and P > .05. Needless to write, this shows why so much research includes more genetic factors in Alzheimer disease prediction. Our recommendation: before switching on a black-box model, have at least 6 million unique rows AND include a large number of behavioral components. Intense variable selection is needed via General Linear Models (GLM) and Generalized Linear Models. Logistic regression could be even more helpful on such a group of predictors; it will add significant transparency before a black-box model is delivered. This research also indicates that prediction from behavioral components alone with a black-box model is a risky business. Our research indicates behavioral prediction is difficult, and high accuracy is no guarantee of successful drug development for Alzheimer disease (https://www.outsourcing-pharma.com/Article/2019/08/21/Machine-learning-model-forecasts-Alzheimer-s-driven-cognitive-decline ).
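One concrete way to do that variable selection transparently is an L1-penalized logistic regression, which zeroes out weak behavioral predictors before anything reaches the black-box stage. This is a generic sketch on synthetic data – scikit-learn's L1 logistic regression is our illustrative choice, not the exact model used in this study.

```python
# Hedged sketch: L1-penalized logistic regression as a transparent
# variable-selection step before any black-box model. Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(240, 6))                          # 6 behavioral predictors
y = (X[:, 1] + rng.normal(size=240) > 0).astype(int)   # binary outcome

X_std = StandardScaler().fit_transform(X)
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
model.fit(X_std, y)

# Coefficients driven to exactly zero are dropped; the survivors form a
# transparent shortlist to hand to the black-box stage.
for i, coef in enumerate(model.coef_[0]):
    print(f"behavioral_{i}: {coef:+.3f}{'  (dropped)' if coef == 0 else ''}")
```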

The picture above shows the regression after the feed-forward network scores each row of independent and dependent variables; the non-genetic factors (behavioral components) are combined with the genetic factors to investigate how these together impact outcomes for Alzheimer disease. The linear R-square is 9% here, versus 29% from genetic factors alone. Such a steep decrease in R-square is statistically significant (P < .01). Needless to write, behavioral variables are more like small DNA coding regions amid an ocean of non-coding regions; the high predictive score without statistical significance mentioned above means exactly that. It is critical for data scientists to be very careful when cleaning behavioral variables, because a small region of high signal can easily be cleaned away, and the already-low predictive ability will drop further. Hence, we feel growing the sample size from hundreds to thousands to millions BEFORE moving to billions is a wise strategy – each iteration helps us optimize analytics processes, time, and talent. This type of “Upstream Learning” adds trust to the business process while eliminating the less-than-optimal #AI project deliveries after black-box methods that we read about so often.
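A learning-curve check is one simple way to practice this Upstream Learning before scaling up. The sketch below uses scikit-learn's learning_curve on synthetic data; the tool and the staged sample sizes are our illustrative choices.

```python
# Sketch of "Upstream Learning": check how the cross-validated score grows
# with sample size before committing to a larger data pull. Synthetic data;
# the staged sizes mirror the hundreds -> thousands progression in the text.
import numpy as np
from sklearn.model_selection import learning_curve
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 11))
y = X[:, 0] * 0.8 + rng.normal(scale=0.5, size=2000)

sizes, train_scores, val_scores = learning_curve(
    LinearRegression(), X, y,
    train_sizes=[100, 240, 500, 1000, 1600], cv=5)

for n, score in zip(sizes, val_scores.mean(axis=1)):
    print(f"n={n:5d}  cross-validated R-square={score:.2%}")
```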

Lastly, #AI explainability strategy involves best practices as well. H2O.ai's Patrick Hall says that if you want to make your #model explainable, don't make it too #complex (https://www.zdnet.com/article/data-2020-outlook-part-ii-explainable-ai-and-multi-model-databases/ ). Furthermore, most existing #AI researchers focus on new explanation methods rather than #usability, practical interpretability, and #efficacy for real users (https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8466590 ).

Navin Sinha, MS, MBA

Owner and CEO at Double Check Consulting (BPO): #AI 4 #Healthy #Food and #Humans

4y

The goal was to obtain a high R-square for behavioral variables, because many studies show a high R-square for genetic components yet fail in drug discovery - as one referenced article showed. Another objective was the combining ability of behavioral variables with genetics. The multiple R-square for the behavioral variables was 11%, and in combination with genetics the R-square for the linear and quadratic models was 9%, so the combined R-square came close to the 11% limit. The day someone finds an R-square of 50% for such behavioral and genetic components, a billion-dollar drug will be born. So the search continues for behavioral variables that yield a high R-square alone and in combination with genetic factors. Such research is known as genotype*environment interaction, which so much research ignores but which has been established science for the last 50 years in agriculture and human genetics. So our research continues. Our question is: can we get statistically significant results for behavioral variables from a small sample? How many variables, and which ones? Another contribution of our research is sample-size determination for corporate analytics, which didn't exist before. A third contribution is "upstream learning".
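For readers unfamiliar with genotype*environment interaction, it can be tested directly with an interaction term in an ordinary regression. A minimal sketch on synthetic data follows; the variable names G and E are illustrative stand-ins for one genetic marker and one behavioral variable.

```python
# Hedged sketch: testing a genotype-by-environment (behavioral) interaction
# with an explicit interaction term. Data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
df = pd.DataFrame({"G": rng.normal(size=240), "E": rng.normal(size=240)})
df["outcome"] = df.G + 0.5 * df.E + 0.3 * df.G * df.E + rng.normal(size=240)

# 'G * E' expands to G + E + G:E, so the G:E coefficient directly answers
# whether the behavioral variable modifies the genetic effect.
fit = smf.ols("outcome ~ G * E", data=df).fit()
print(fit.summary().tables[1])
```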

Calvin Bradshaw, Ph.D.

Polymath | Author | STEM Professional | Empowerment Enthusiast | Retired USAF Officer

4y

Navin Sinha, MS, MBA: this is very interesting. Even with NNs, the R-squares don’t look very promising. Thanks for sharing!
