Neural Networks Made Fun With TensorFlow Playground!
Introduction
When we embark on Deep Learning, we are inevitably confronted with the difficulty of understanding not only the very concept of a neural network but also how it relates to its configuration. Indeed, it is hard to fully grasp each adjustment lever of a mechanism that seems so abstract!
Personally, I have always found it magical that we can model almost anything with such a seemingly simple system (at least from the point of view of a single neuron). But what is the real impact on the whole system if:
Interface presentation
The interface is rather simple and uncluttered; let's go through it part by part.
The upper strip allows you to:
Then we have the neural network inputs:
Note: Orange points have a value of -1 while blue points have a value of +1
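As a side note, the Playground generates its data in the browser, but a rough NumPy sketch of a comparable two-class dataset could look like the following; the labels -1/+1 are from the note above, while the blob positions and spread are my own illustrative choices, not the Playground's code:

```python
import numpy as np

def two_gaussians(n_per_class=100, seed=0):
    """Two Gaussian blobs in 2D, labelled -1 (orange) and +1 (blue),
    roughly in the spirit of the Playground's 'Gaussian' dataset."""
    rng = np.random.default_rng(seed)
    blue = rng.normal(loc=[2.0, 2.0], scale=1.0, size=(n_per_class, 2))
    orange = rng.normal(loc=[-2.0, -2.0], scale=1.0, size=(n_per_class, 2))
    X = np.vstack([blue, orange])
    y = np.concatenate([np.ones(n_per_class), -np.ones(n_per_class)])
    return X, y

X, y = two_gaussians()
print(X.shape, y[:5])  # (200, 2) [1. 1. 1. 1. 1.]
```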
We then have the output:
The output graph shows the result of the neural network. What is particularly interesting at this level is that, above the result, we have the curves of the cost function (loss) per epoch. This is one of the most important things to watch once you start training, to see whether your network is performing well or not. The training cost curve is gray while the test one is black.
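To get a feel for the same train/test loss curves outside the browser, here is a minimal Keras sketch; the dataset, layer sizes and learning rate are illustrative assumptions rather than the Playground's exact configuration:

```python
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

# Toy 2-D binary classification data (labels 0/1 here, unlike the Playground's -1/+1).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2)).astype("float32")
y = (X[:, 0] * X[:, 1] > 0).astype("float32")  # XOR-like pattern

# A tiny tanh network, similar in spirit to a Playground setup.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(4, activation="tanh"),
    tf.keras.layers.Dense(2, activation="tanh"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.03),
              loss="binary_crossentropy")

# validation_split holds out part of the data, playing the role of the Playground's test set.
history = model.fit(X, y, epochs=100, validation_split=0.3, verbose=0)

# Gray = training loss, black = test loss, matching the Playground's color code.
plt.plot(history.history["loss"], color="gray", label="training loss")
plt.plot(history.history["val_loss"], color="black", label="test loss")
plt.xlabel("epoch"); plt.ylabel("loss"); plt.legend(); plt.show()
```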
Most interesting of all (saved for last): the design of the network itself, shown graphically in the middle:
The network is of course presented as vertical layers. You can add up to 8 hidden layers by clicking on the + (or remove them with the -). On each layer, you can also add or remove neurons with the + and - above the layers.
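These + and - buttons map quite naturally onto code. As a hedged sketch (the helper name and default sizes are mine, not part of the Playground), a network can be built from a simple list of hidden-layer widths:

```python
import tensorflow as tf

def build_playground_like_model(hidden_layer_sizes, activation="tanh"):
    """Build a small dense network from a list of hidden-layer widths,
    mirroring the Playground's +/- buttons for layers and neurons."""
    layers = [tf.keras.Input(shape=(2,))]  # two input features, as in the Playground
    for units in hidden_layer_sizes:
        layers.append(tf.keras.layers.Dense(units, activation=activation))
    layers.append(tf.keras.layers.Dense(1, activation="sigmoid"))  # binary output
    return tf.keras.Sequential(layers)

# e.g. three hidden layers with 8, 4 and 2 neurons
model = build_playground_like_model([8, 4, 2])
model.summary()
```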
What to note here (once you have started training):
Some tests
We're ready for a few tries now. Run it several times, changing one parameter at a time to see its impact. Look particularly at the cost curves: both of course need to end up as low as possible. Be careful that they do not diverge, which would mean, for example, that you are overfitting (training curve at the bottom and test curve higher).
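If you want to automate that check outside the Playground, one common reflex when the two curves diverge is to stop training as soon as the test (validation) loss stops improving. A minimal, self-contained sketch, with illustrative data and layer sizes of my own choosing:

```python
import numpy as np
import tensorflow as tf

# Same kind of toy data as before.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2)).astype("float32")
y = (X[:, 0] * X[:, 1] > 0).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(8, activation="tanh"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="sgd", loss="binary_crossentropy")

# Stop once the validation loss has not improved for 10 epochs,
# and roll back to the best weights seen so far.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=10, restore_best_weights=True)

history = model.fit(X, y, epochs=500, validation_split=0.3,
                    callbacks=[early_stop], verbose=0)
print("stopped after", len(history.history["loss"]), "epochs")
```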
To go further and, for example, observe a vanishing gradient, try the data distribution in the form of a spiral:
You will see right away that training takes a lot longer. It will undoubtedly be quite unstable, with phases of stagnation (plateaus) followed by recovery, and so on. If you have added more than 3 hidden layers, also notice how the weights in the layers on the left evolve compared to those on the right.
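To reproduce this experiment in code, the hedged sketch below generates a two-spiral dataset (my own formula, not the Playground's) and prints the per-layer gradient norms of a deliberately deep tanh network; the left-most layers typically show the smallest gradients, which is the vanishing-gradient effect you can also see in the Playground.

```python
import numpy as np
import tensorflow as tf

def two_spirals(n_per_class=200, noise=0.2, seed=0):
    """Two interleaved spirals, similar in spirit to the Playground's spiral dataset."""
    rng = np.random.default_rng(seed)
    t = np.sqrt(rng.uniform(0, 1, n_per_class)) * 3 * np.pi
    x1 = np.stack([t * np.cos(t), t * np.sin(t)], axis=1)
    x2 = -x1  # the second spiral is the point reflection of the first
    X = np.vstack([x1, x2]) + rng.normal(scale=noise, size=(2 * n_per_class, 2))
    y = np.concatenate([np.zeros(n_per_class), np.ones(n_per_class)]).astype("float32")
    return X.astype("float32"), y

X, y = two_spirals()

# A deliberately deep tanh network, to make the shrinking gradients visible.
model = tf.keras.Sequential(
    [tf.keras.Input(shape=(2,))]
    + [tf.keras.layers.Dense(8, activation="tanh") for _ in range(6)]
    + [tf.keras.layers.Dense(1, activation="sigmoid")]
)
loss_fn = tf.keras.losses.BinaryCrossentropy()

with tf.GradientTape() as tape:
    loss = loss_fn(y, tf.squeeze(model(X), axis=-1))
grads = tape.gradient(loss, model.trainable_variables)

# Print the gradient norm of each kernel: the early (left-most) layers
# usually get the smallest ones.
for var, grad in zip(model.trainable_variables, grads):
    if "kernel" in var.name:
        print(var.name, float(tf.norm(grad)))
```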
Conclusion
This tool is above all a fun one, but it lets you feel the importance of the major adjustment levers of a neural network. So if you are embarking on this Deep Learning adventure, I highly recommend that you play with this simulator.