Hacking Darts, part two
The first part of this post concluded with “recognizing the exact position of the darts in the board is rather problematic”. What I didn’t write (for obvious reasons ;-)) is that the approach we took resulted in a misclassification rate of about 93%. Not something to brag about or show at events… Fortunately we had the luxury of additional time, plus support from SAS Marketing and Facilities to set up a more permanent development environment. Another fortunate event was the availability of Rik de Ruiter, who, together with Jaimy van Dijk, developed an improved version of the solution with the reverse outcome: a correct classification of about 95% of darts thrown. Wait… from a 93% misclassification rate (just a little better than a random guess) to only 5%? That’s quite an achievement, so how did they do it?
The first step was to sit down with the team and evaluate the project results from the hackathon. We quickly decided to abandon the ‘classic’ approach of taking pictures, labeling them, and training a neural network to classify new events. Then we started thinking about what we were actually trying to achieve: determining the correct score based on camera input. We also realized that a dartboard has fixed dimensions (a 34-centimeter diameter divided into 63 sections). This means it should be possible to calibrate the camera in such a way that there is always a correct identification of which section is being hit. Or, better stated, which pixel belongs to which section of the board.
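To give an idea of what such a pixel-to-section mapping can look like, here is a minimal sketch in Python. It is not the team’s actual implementation; it simply assumes the board centre and the pixels-per-millimetre scale have already been found during calibration, uses the standard dartboard ring dimensions, and converts a pixel to a score via polar coordinates.

```python
# Minimal sketch: map a calibrated pixel position to a dartboard score.
# Assumptions (not from the original post): the camera looks at the board
# head-on, and calibration gives the board centre (cx, cy) in pixels plus
# a pixels-per-millimetre scale factor.
import math

# Standard ring radii in millimetres, measured from the board centre.
BULLSEYE_R, BULL_R = 6.35, 15.9
TRIPLE_IN_R, TRIPLE_OUT_R = 99.0, 107.0
DOUBLE_IN_R, DOUBLE_OUT_R = 162.0, 170.0

# Segment values clockwise, starting with 20 at the top of the board.
SEGMENTS = [20, 1, 18, 4, 13, 6, 10, 15, 2, 17, 3, 19, 7, 16, 8, 11, 14, 9, 12, 5]

def pixel_to_score(x, y, cx, cy, px_per_mm):
    """Return the score of the section that pixel (x, y) falls in."""
    dx, dy = x - cx, y - cy
    r_mm = math.hypot(dx, dy) / px_per_mm

    if r_mm <= BULLSEYE_R:
        return 50                      # inner bull
    if r_mm <= BULL_R:
        return 25                      # outer bull
    if r_mm > DOUBLE_OUT_R:
        return 0                       # outside the scoring area

    # Angle measured clockwise from 'straight up'; each wedge spans 18 degrees
    # and is centred on its number, hence the +9 degree offset.
    angle = math.degrees(math.atan2(dx, -dy)) % 360.0
    segment = SEGMENTS[int((angle + 9.0) // 18.0) % 20]

    if TRIPLE_IN_R <= r_mm <= TRIPLE_OUT_R:
        return 3 * segment
    if DOUBLE_IN_R <= r_mm <= DOUBLE_OUT_R:
        return 2 * segment
    return segment
```

In other words, once the calibration is done, scoring a pixel is pure geometry: the distance from the centre decides the ring (single, double, triple, bull) and the angle decides the number.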
In the pictures above you can see how the red and black segments are identified by the program; the same is done for the white and green segments. This leads to a board description in mathematical terms, where every pixel is now part of an identified section. The rest is easy then, right? Well, not so fast… There’s still the challenge of figuring out where the dart hits the board, and of course you want to be able to throw three darts in a row and get the correct score for each of them. The first step to solve this puzzle is image subtraction. With each turn, an image of the empty dartboard is stored as a baseline. Then, when a dart is thrown, the image changes (which is detected automatically). By subtracting the empty-board image from the new one, you end up with just the dart. The second step is to process that image using SAS Viya to clean it up and figure out where the tip of the dart is. Finally, when the location of that pixel is found, it can be mapped to the correct section and a score can be shown. Well, not just shown: the computer will tell you the score and, after three darts, also read out the total of the turn. In the animation below you can see the entire flow:
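Before the animation, here is a rough illustration of the subtraction step in Python/OpenCV. This is not the actual SAS Viya pipeline; the threshold values and the tip heuristic are assumptions made purely for the sake of the example.

```python
# Rough illustration (not the SAS Viya pipeline): subtract the stored
# empty-board baseline from the current frame, keep only the pixels that
# changed, and estimate the dart tip from the changed region.
import cv2
import numpy as np

def find_dart_tip(baseline_bgr, frame_bgr, min_area=50):
    """Return the (x, y) pixel of the estimated dart tip, or None if no dart."""
    baseline = cv2.cvtColor(baseline_bgr, cv2.COLOR_BGR2GRAY)
    frame = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

    # Subtract the empty board so only the newly added dart remains.
    diff = cv2.absdiff(frame, baseline)
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

    # Clean up sensor noise and small shadows with a morphological opening.
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # Keep the largest changed region; anything smaller is treated as noise.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    dart = max(contours, key=cv2.contourArea)
    if cv2.contourArea(dart) < min_area:
        return None

    # Assumption: the dart enters the image from the top, so its tip is the
    # lowest changed pixel (the point with the largest y coordinate).
    tip = max(dart.reshape(-1, 2), key=lambda p: p[1])
    return int(tip[0]), int(tip[1])
```

The tip pixel found this way can then be fed into a mapping like the `pixel_to_score` sketch above to produce the spoken score.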
Although the results right now are good enough for fun and demos, it’s not perfect yet. And to be frank, I don’t think 100% accuracy is even possible with this setup. Things like lighting conditions, shadows, dart angles, overlapping darts in the image or slight vibrations all have a negative effect on determining the correct score. Even for humans it’s often hard to see from a distance where a dart has landed. But unlike cameras, we can walk up to the board and take a closer look. One thing we could do to improve our solution, for instance, is add more cameras. Or, of course, make the dartboard smart enough to catch the darts at the right position regardless of where they’re aimed :-) But until then, I don’t think dart referees will be out of a job anytime soon.
To be continued!