THE WATSON CHRONICLES -- PART II
If you read my prior post, you know that we've had an IBM AC922 at my company for a while, and have recently installed the system with the base Watson machine learning software; this is WML-CE, or "Watson Machine Learning - Community Edition". WML-CE is actually an umbrella term that encompasses more than a dozen pieces of software. In no particular order, the main components are: TensorFlow, PyTorch, Apex, Arrow, Bazel, Caffe, DALI, Dask, DDL, Fastavro, Gflags, Graphsurgeon, Keras, Magma, and, of course, PowerAI and CUDA, this last being Nvidia's toolkit and drivers for high-performance computing on its GPUs, without which Watson don't go.
One of the main reasons to do the base install is to use -- and develop -- what are called "Jupyter Notebooks": programming code written in Python that takes advantage of the AI capabilities of the system. You can actually do this install of WML-CE on any Power or x86 system; there are community (free) distributions for both platforms. But the key is the Nvidia Graphics Processing Unit(s); without this hardware, your code won't run well, and probably won't run at all. Having said that, I've installed WML-CE on an HP laptop (with a GPU, of course), and my team and I are using this little machine for training.
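Whether it's the laptop or the big box, the first thing I do after an install is confirm the framework can actually see the GPU before starting a long training run. The little Python snippet below is just my own sketch (not an official WML-CE script), using the PyTorch that ships with WML-CE:

# Sanity check (my own sketch, not an official WML-CE script):
# confirm PyTorch can see the Nvidia GPU(s) before kicking off training.
import torch

if torch.cuda.is_available():
    print("CUDA is available; GPU count:", torch.cuda.device_count())
    for i in range(torch.cuda.device_count()):
        print(f"  GPU {i}: {torch.cuda.get_device_name(i)}")
else:
    print("No GPU visible to PyTorch; training will crawl (or fail) on CPU.")

If that last line is what prints, you're on the wrong machine.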
The AC922 that sits in our data center is a beast: it has 40 Power9 cores, 256 GB of memory, and about a million 10Gb network ports. Various storage. And, of course, four Nvidia Tesla V100 GPUs. It's supremely cool to run a Jupyter notebook or a large data model on this system and watch as the machine "learns" to, say, distinguish a beagle from a German Shepherd, or a sprained ankle from a brain tumor. With one command, you look into the brain of the machine as it sifts through, classifies, and ranks data, then gives you an answer to whatever question you ask it. On the Jupyter notebook side, you watch as the machine works through "Epochs", which are complete passes over the data sets you've provided, during which WML refines its determination of which answer to your question is most likely the correct one; status bars show you WML's opinion of how it's doing in any given Epoch. At the end of the Jupyter output is something humorously called -- at least I think so -- a "Confusion Matrix", where WML gives you an analysis of what it thinks it got right and wrong, with percentages also given for false positives and negatives.
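To give a flavor of what that looks like in code, here's a stripped-down sketch of the kind of thing a notebook does under the covers. The two-class setup, the random stand-in data, and the tiny network are my own placeholders for illustration (our real models run against actual image sets), but the epoch-by-epoch status bars and the confusion matrix at the end are exactly the output I'm describing.

# A stripped-down illustration (placeholders, not our real data or model):
# train a tiny image classifier for a few epochs, then print its confusion matrix.
import numpy as np
from tensorflow import keras
from sklearn.metrics import confusion_matrix

# Stand-in data: 1,000 small random "images" in two classes (think beagle vs. shepherd)
x_train = np.random.rand(1000, 64, 64, 3).astype("float32")
y_train = np.random.randint(0, 2, size=1000)

model = keras.Sequential([
    keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Each epoch is one complete pass over the training data; Keras prints a status bar
# with the running loss and accuracy for every epoch, just like the notebook output.
model.fit(x_train, y_train, epochs=5, batch_size=32, validation_split=0.2)

# The confusion matrix: rows are the true classes, columns the predicted classes,
# so the off-diagonal cells are the false positives and false negatives.
preds = np.argmax(model.predict(x_train), axis=1)
print(confusion_matrix(y_train, preds))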
This week, with IBM's help, I installed "Maximo Visual Inspection", an IBM product that lets you build huge data models simply by dragging and dropping your input data sets into wells, then pointing and clicking to build models and projects to solve whatever you're looking for. When you're dealing with tens of thousands of X-rays, MRIs, or PET scans, loading the data becomes even simpler; got all your images in a huge zip file? Just drag 'em and drop 'em, and let Visual Inspection take care of the rest.
Oh, I forgot: the whole shebang runs on Red Hat Enterprise Linux 7.6. You can also run Watson on Ubuntu, but to my mind, the Red Hat support structure blows Ubuntu's out of the water... especially since IBM acquired them.
I think that's enough for now. More to come as we integrate Watson more fully into our Enterprise!