KODAKOne Platform Development | Update IV
Volker Brendel
Senior Director @ Capgemini Invent, Intelligent Industries Chemicals, Life science & Pharma
A brief introduction to artificial intelligence and the essential frameworks we use
Dear KODAKOne enthusiasts,
KODAKOne is revolutionizing the way rights management solutions are handled. Let’s take a closer look at how we combine cutting-edge technologies - artificial intelligence and blockchain - in one unique service platform.
The following article was written by Emna Amor, our data science lead. She is driving this topic within the KODAKOne team.
A place where artificial intelligence and blockchain collide
In addition to being a blockchain-based image rights management platform, KODAKOne is one of the first platforms that offers artificial intelligence solutions to its users.
The platform does not simply function as a “watchman” - finding violations across the web and offering licenses that convert infringers into customers - it also analyzes visual content (images and video). This is where machine learning plays an important role.
What is machine learning?
“Machine learning is the science of getting computers to learn and act like humans do, and improve their learning over time in autonomous fashion, by feeding them data and information in the form of observations and real-world interactions.”
Source: Daniel Faggella for TechEmergence
The above definition encapsulates the ideal objective behind the KODAKOne platform. The platform tackles a wide range of computer vision tasks, such as image quality assessment, image classification, image aesthetics, object recognition, nudity detection (abusive content), celebrity recognition, scene understanding and many more.
How KODAKOne technology works
KODAKOne uses deep learning, a subset of machine learning, to understand any type of visual content, such as videos and images. Deep learning plays an important role in modern image analysis and computer vision research. It’s the most effective way to analyze massive amounts of data and solve problems that were impossible to solve before.
Which technologies does KODAKOne use?
KODAKOne uses multiple technologies to deliver its high-performance models. Here is a list of the technologies we use most:
Tensorflow
TensorFlow is one of the most well-maintained and extensively used frameworks for machine learning. TensorFlow is open source and easy to deploy across a variety of platforms.
Keras
Keras is an open-source software library designed to simplify the creation of deep learning models. KODAKOne mostly uses Keras on top of TensorFlow because of its user-friendliness, modularity, and ease of extensibility.
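To illustrate that user-friendliness, here is a minimal sketch of how an image classifier can be defined in Keras on top of TensorFlow. The architecture and shapes are illustrative assumptions, not the actual KODAKOne production models:

```python
# Minimal sketch: a small convolutional image classifier in Keras.
# Architecture, input size and class count are illustrative only.
from tensorflow import keras
from tensorflow.keras import layers

def build_classifier(num_classes: int = 2) -> keras.Model:
    """Build a tiny CNN for image classification (illustrative)."""
    model = keras.Sequential([
        layers.Input(shape=(224, 224, 3)),           # RGB input image
        layers.Conv2D(32, 3, activation="relu"),     # learn local features
        layers.MaxPooling2D(),                       # downsample
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.GlobalAveragePooling2D(),             # collapse spatial dims
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_classifier()
```

Each layer is a modular building block that can be swapped or extended, which is exactly the property that makes Keras attractive for iterating on many models.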
Caffe
Caffe is a machine learning framework that focuses on expressiveness, speed, and modularity.
What are the basic KODAKOne platform AI models?
In KODAKOne, we have a diversity of AI models that we can divide into two categories:
- Face-related models
- Non-face-related models
__________________________________________________________________________
Face-related models:
Face detection, Emotion detection, Celebrity recognition, Skin color, Age range detection, Eye closed/open, Object detection, Eye contact (yes / no), On phone (yes / no), Image quality
Non-face-related models:
Event detection, Red carpet detection, Nudity detection, Image aesthetics, Stage (concert / theater) detection, Object detection, Categorizer (people, animals, architecture and buildings, flowers and plants, food and drink, nature and landscape, sports, abstract), Image quality
__________________________________________________________________________
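The split into the two categories can be sketched as a simple routing step: face-related models only make sense once a face detector has found faces, while the non-face-related models run on every image. All model functions below are placeholders, not the real KODAKOne implementations:

```python
# Hypothetical sketch of routing an image through the two model families.
# Every detector here is a stand-in returning dummy values.
from typing import Callable, Dict, List

FACE_MODELS: Dict[str, Callable] = {
    "emotion": lambda faces: "neutral",      # placeholder emotion detector
    "age_range": lambda faces: "25-35",      # placeholder age estimator
}
GLOBAL_MODELS: Dict[str, Callable] = {
    "nudity": lambda image: False,           # placeholder nudity flag
    "aesthetic": lambda image: 0.72,         # placeholder aesthetic score
}

def detect_faces(image) -> List[dict]:
    """Placeholder face detector; returns one dummy bounding box."""
    return [{"box": (0, 0, 64, 64)}]

def analyze(image) -> dict:
    results = {}
    faces = detect_faces(image)
    if faces:  # face-related models only run when faces are present
        for name, model in FACE_MODELS.items():
            results[name] = model(faces)
    for name, model in GLOBAL_MODELS.items():  # global models always run
        results[name] = model(image)
    return results
```

Calling `analyze(image)` on an image with a face would return results from both families; an image without faces would skip the face-related branch entirely.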
All of the models mentioned above are created and tested by the KODAKOne data science team. We continually improve them by retraining the models with new data on a powerful GPU cluster, so the results keep providing value to our platform users.
What did we learn?
We took our time ideating the right model for each specific use case and organized the models in a pipeline with sequential and parallel processing. We used a lambda-based approach, which proved to be the most effective. We also learned not to search too long for existing models; building our own was often more useful.
We learned that GPU power is a must for training - but a wasteful cost in production. You will find that most models serve inference well enough on CPU clusters. Think about a good mix of GPU and CPU clusters in your machine learning infrastructure.
Shrinking each model down to a single narrow task makes it less complex. Running more of these small models as microservices increases throughput and makes scaling out more effective, because the models can be bound together into smaller AI chains.
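The pipeline idea described above - sequential stages, with independent models inside a stage fanned out in parallel - can be sketched as follows. The stage functions are placeholders for the real services:

```python
# Sketch of a pipeline with one sequential stage followed by a
# parallel fan-out over independent models. All stages are stand-ins.
from concurrent.futures import ThreadPoolExecutor

def preprocess(image):
    """Sequential stage: must complete before any model runs."""
    return {"image": image, "meta": {}}

# Independent placeholder models; each returns a (name, result) pair.
def quality(ctx):    return ("quality", 0.9)
def categorize(ctx): return ("category", "nature")
def aesthetics(ctx): return ("aesthetic", 0.7)

PARALLEL_STAGE = [quality, categorize, aesthetics]

def run_pipeline(image) -> dict:
    ctx = preprocess(image)                          # sequential step
    with ThreadPoolExecutor() as pool:               # parallel fan-out
        results = dict(pool.map(lambda f: f(ctx), PARALLEL_STAGE))
    return results
```

In a serverless setup, each of the small functions above would map to its own deployable unit, which is what makes the scale-out of individual AI chains cheap.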
We have seen that some non-profit projects go in the same direction as our approach and run AI in separate pipelines - e.g. the imaginem project. Go serverless! We are not on Azure at our end, but this project is similar to what we have built on the AI side of KODAKOne.
I must thank Emna Amor, our Data Science Lead at KODAKOne, for her hard work over the last three years to make this happen. Thank you again, Emna - and the whole team - for giving this short introduction to the AI part of our platform.
Get involved!
For me this is now an opportunity to share more publicly what we have been working on at RYDE Holding, Inc. to build our KODAKOne platform. None of this would have been possible without our truly remarkable development teams and the amazing work from all involved departments and partners.
Because this will be one of many posts to come, I would love to hear your thoughts regarding our choices and the overall development of the platform. Actually, any feedback is more than welcome!
Yours truly,
Volker