The Mission of My New Medium Publication, "dataDL.ai"
A while ago, I decided to share my knowledge of machine/deep learning in a way that people like my past self could understand easily.
So I created a publication called "Deep Math Machine Learning.ai," where I mostly talk about the math and theory of machine learning.
And I keep adding stories to it.
Now I want to share the practical side of machine learning, so I am starting this publication, "dataDL.ai," where I will talk about problem solving in machine learning.
My goal is to share my findings and thoughts about solving machine learning problems, and of course, I learn a lot this way.
So here are the things I am going to write about in this publication.
Exploring open source datasets (Vision and NLP)
In simple terms, I take a dataset and I visualize it, understand it, modify it, and codify it.
This involves many subtasks like data preprocessing, data augmentation, data reduction, data vector generation, and so on.
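To give a flavor of that workflow, here is a minimal sketch of the understand → preprocess → reduce loop, using scikit-learn's bundled digits dataset purely as a stand-in for whatever dataset a post would explore:

```python
# A minimal sketch of the explore -> preprocess -> reduce workflow,
# using scikit-learn's bundled digits dataset as a stand-in.
from sklearn.datasets import load_digits
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

digits = load_digits()
X, y = digits.data, digits.target  # 1797 samples, 64 pixel features

# Understand: basic shape of the data
print(X.shape)  # (1797, 64)

# Preprocess: zero-mean, unit-variance features
X_scaled = StandardScaler().fit_transform(X)

# Reduce: project onto the top 2 principal components for visualization
X_2d = PCA(n_components=2).fit_transform(X_scaled)
print(X_2d.shape)  # (1797, 2)
```

Real posts would go much deeper (augmentation, custom vectorization, and so on); this is just the skeleton.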
Model building and Improvements
Building a good model for a dataset is what most deep learning practitioners do, and it is one of the most important jobs in the field.
So I will talk about building better models: first I take a baseline model, then I understand it and either build a better one or improve the current one based on the dataset.
This involves many subtasks like model/network visualization, exploring the optimization/solution space, and so on.
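The baseline-then-improve loop above can be sketched in a few lines. The two scikit-learn models here are placeholders for whatever architectures a real post would compare:

```python
# A sketch of "start with a baseline, then improve": train a simple
# model, then a stronger one, and compare held-out accuracy.
# Both models are placeholders, not a recommendation.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Baseline: a simple linear classifier
baseline = LogisticRegression(max_iter=2000).fit(X_train, y_train)
baseline_acc = baseline.score(X_test, y_test)

# Candidate improvement: a more expressive ensemble model
improved = RandomForestClassifier(n_estimators=200,
                                  random_state=0).fit(X_train, y_train)
improved_acc = improved.score(X_test, y_test)

print(f"baseline: {baseline_acc:.3f}, improved: {improved_acc:.3f}")
```

In practice the "improvement" step is driven by understanding the model and the data, not just swapping in a bigger model.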
Hyperparameter tuning
Well, nowadays anyone can build deep learning models very easily because the available frameworks are simple to use, but if we lack an understanding of the problem statement, data, model, or math, we can't fine-tune them properly.
So my goal is to spend a lot of time understanding the models and to share my findings about hyperparameter tuning (both general and dataset-specific).
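As a tiny illustration of what tuning looks like mechanically, here is a grid search over two SVM hyperparameters with scikit-learn's `GridSearchCV`. The grid values are arbitrary examples, and real tuning is guided by understanding, not brute force:

```python
# A minimal hyperparameter search: try a small grid of SVM settings
# with cross-validation and keep the best combination.
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# Two hyperparameters, a handful of candidate values (arbitrary examples)
param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.001]}

search = GridSearchCV(SVC(), param_grid, cv=3)
search.fit(X, y)

print(search.best_params_)          # the winning combination
print(f"{search.best_score_:.3f}")  # its cross-validated accuracy
```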
Domain specific tasks (vision, NLP, reinforcement learning)
I also want to talk about concepts from these domains that we use to solve problems in them,
like word vectors from different models, transfer learning with different models, artificial general intelligence, and so on.
And of course, we will code them and use them to solve problems.
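Take word vectors as an example of the kind of concept I mean: words become dense vectors whose cosine similarity reflects relatedness. The tiny 3-d vectors below are made up purely for illustration; real models like word2vec or GloVe learn hundreds of dimensions from large corpora:

```python
# A toy illustration of the word-vector idea. These 3-d vectors are
# invented for the example; real embeddings are learned from data.
import numpy as np

vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.7, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction, 0.0 means unrelated
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

related = cosine(vectors["king"], vectors["queen"])   # high
unrelated = cosine(vectors["king"], vectors["apple"]) # lower
print(related, unrelated)
```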
State-of-the-art (SOTA) models
I basically want to read the latest papers and implement them myself to reproduce state-of-the-art results for some tasks,
and I will share my findings about those papers and my experiments.
Software Development of Models
I basically want to deliver an end-to-end solution for each problem I take on, so the deployment of the final model is super important to me.
For example, if I build a solution for the MNIST data, my goal is to run that model instantly on anyone's phone or computer, so I will either create a sample app or a cloud service that anyone can try immediately.
This involves tasks like saving lightweight models for production, serving them on the web, and so on.
I may create an Android app and/or use Django and JavaScript for web visualizations.
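The first step of that deployment story is simply persisting a trained model so a separate serving process can load it. Here is a hedged sketch with `pickle` and a scikit-learn model standing in for the real framework; an actual mobile or web deployment might export to TensorFlow Lite or ONNX instead:

```python
# Save a trained model once, then load it in a (simulated) serving
# process that only predicts. pickle + scikit-learn are stand-ins
# for whatever framework the final model uses.
import pickle
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
model = LogisticRegression(max_iter=2000).fit(X, y)

# Training side: serialize the fitted model to bytes (or a file)
blob = pickle.dumps(model)

# Serving side: deserialize and predict, no training code needed
served = pickle.loads(blob)
print(served.predict(X[:1]))
```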
And that's it; that's all I would love to share.
The target audience would be:
- deep learning researchers
- deep learning practitioners
- data scientists (junior or senior)
- DL developers
This is really a fun thing for me to do, so let me know if you have any suggestions, questions, thoughts, or criticism.
These are my LinkedIn and Twitter profiles, so feel free to reach out to me.
Have a good day/night.