What is machine learning?
Machine learning is to data scientists what mining automation is to gold diggers. That is to say, it is now profitable to extract gold from large piles of rubble and sand that were previously considered too expensive, or even impossible, to process manually.
Machine learning is ushering in a paradigm shift in the way software is designed and developed. In traditional software engineering, coders write step-by-step instructions telling a machine how to transform data. These instructions are written in a programming language and translated into a program the machine can execute. With machine learning, a software engineer instead describes the problem as a ‘machine learning model’ and lets a machine learning algorithm automatically discover how best to tune that model, based on the training data. Take the work of a business consultant as an analogy. She can either define a business process model top down, based on industry best practices, regulations and personal experience, or she can observe informal processes within an organisation, conduct interviews and then summarise her findings as one or more business process models, based on what she has learned.
While machine learning has been a very active research field for a long time, it has only recently been adopted by the wider industry, following game-changing success stories from Internet giants like Google, Microsoft and Facebook. But why are we only now seeing wide adoption of machine learning?
First, there is a prerequisite: for a machine learning algorithm to perform well, it needs a lot of data, which is now increasingly available thanks to the Internet. The more data you throw at it, the more accurately a machine learning algorithm can learn a representation. The second reason is affordable specialised processors, initially GPUs (Graphics Processing Units) developed for gaming, that can crunch through these huge datasets in reasonable time. Thirdly, machine learning frameworks like Keras or TensorFlow are now available and greatly facilitate the development, training and deployment of very powerful machine learning solutions.
How can it help the IoT and supply chains?
What could we do with machine learning that couldn’t be done before? Automated business workflows and rules? Well, if you know the rules that govern your data, then you don’t need machine learning. A rules engine such as EVRYTHNG’s powerful Reactor™ is a better fit! Data exploration? Check out our latest dashboard widgets and query tools!
Machine learning and in particular, deep learning, lets you extract insights from massive amounts of data when visualizations become too complex and writing rules practically impossible because of the number of permutations.
The IoT is generating data at an unprecedented rate and this is only the beginning. Machine learning frameworks will help distill information from vast data pools containing unstructured, semi-structured or well structured data, and can be used in the following example use cases:
Gray market detection: Gray market detection can be seen as a classification task. By classifying products according to their expected market, a machine learning algorithm can learn the context, route, etc. of products in each class. Products that are purchased somewhere other than their intended market are considered sold on the gray market.
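To make the classification framing concrete, here is a minimal toy sketch of the idea. The data, features (scan coordinates) and the nearest-centroid classifier are all hypothetical illustrations, not the model actually used in the platform:

```python
# Toy sketch (hypothetical data and features): learn one centroid per
# expected market from past scans, classify each new scan by its nearest
# centroid, and flag scans that disagree with the product's intended
# market as potential gray-market activity.
from collections import defaultdict

# Training data: ((longitude, latitude) of a scan, intended market).
training = [
    ((2.35, 48.85), "EU"),    # Paris
    ((13.40, 52.52), "EU"),   # Berlin
    ((-74.0, 40.7), "US"),    # New York
    ((-118.2, 34.1), "US"),   # Los Angeles
]

# Compute one centroid (mean location) per market class.
sums = defaultdict(lambda: [0.0, 0.0, 0])
for (x, y), market in training:
    s = sums[market]
    s[0] += x; s[1] += y; s[2] += 1
centroids = {m: (s[0] / s[2], s[1] / s[2]) for m, s in sums.items()}

def predict_market(x, y):
    """Return the market whose centroid is closest to the scan location."""
    return min(centroids,
               key=lambda m: (centroids[m][0] - x) ** 2
                           + (centroids[m][1] - y) ** 2)

# A product intended for the US market but scanned near Madrid is flagged.
observed = predict_market(-3.7, 40.4)   # -> "EU"
suspicious = observed != "US"           # -> True
```

A production model would of course use far richer context (route, timing, retailer, product class) and a proper classifier rather than two-dimensional centroids, but the decision it supports is the same: does the observed market match the intended one?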
Product authenticity: We can use machine learning to add further intelligence to scans of physical products (known as THNGS). This is done by training a machine learning model on the context of scans of authentic and counterfeit products. Once deployed, each scan will result in a probability of authenticity.
Replenishment: The EVRYTHNG platform makes it easy to train a predictive machine learning model on appliance telematics data coming from a collection of similar appliances, for example coffee machines or washing machines. We are able to predict when to reorder coffee beans based on the vibration and usage duration of the coffee machine. Because the accuracy of machine learning improves with more data, this collective “knowledge” will yield an overall more reliable, personalised reordering service. Furthermore, our platform can act as a mediator between the coffee machine and the supplier. With a simple Reactor™ rule, we can reorder coffee from our favourite supplier when there are only two days of coffee bean supplies left, ensuring that we always have coffee at the office. You don’t need machine learning to predict what happens when software engineers are deprived of coffee!
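The two-day reorder rule can be sketched in a few lines. This is a hypothetical, deliberately naive illustration (made-up usage figures, a simple moving-average forecast in place of a trained model); on the real platform the threshold check would live in a Reactor™ script:

```python
# Hypothetical sketch: estimate the remaining days of coffee-bean supply
# from recent daily consumption, and trigger a reorder when the estimate
# drops to two days or less.

# Usage events reported by the machine: grams of beans consumed per day.
daily_usage = [42.0, 38.5, 45.0, 40.5, 39.0, 44.0, 41.0]

def days_of_supply(stock_grams, usage_history):
    """Naive forecast: remaining stock divided by mean daily consumption."""
    avg = sum(usage_history) / len(usage_history)
    return stock_grams / avg

def should_reorder(stock_grams, usage_history, threshold_days=2.0):
    """Reorder once the projected supply falls to the threshold or below."""
    return days_of_supply(stock_grams, usage_history) <= threshold_days

should_reorder(80.0, daily_usage)    # under two days of beans left -> reorder
should_reorder(500.0, daily_usage)   # plenty of stock -> no action
```

A learned model would replace the simple average with a forecast that accounts for weekday patterns, seasonality and signals such as vibration, but the surrounding reorder logic stays this simple.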
Preventive maintenance: This is similar to the replenishment use case. The difference is that we use the Reactor™ to dispatch a maintenance notification to a person, and that we leverage third-party data sources, such as current weather reports, to augment operational data from appliances. Read our appliance telematics blog to learn more about how we make this a reality.
Applying machine learning to detect gray markets
Let’s focus on one of these use cases specifically. Customers often come to us with seemingly simple questions, such as: can you tell me if a product is being sold on the gray market or not?
The only way for brands to detect gray market problems is by having visibility over their supply chain: product digitization and item-level traceability are a must. This is an area in which EVRYTHNG specializes – our platform makes it easy to integrate different information systems and write apps and analytics tools to gain insights from vast data pools. The EVRYTHNG Labs team has looked at ways to use deep learning to help our customers make even more use of the vast amounts of data they manage in our platform.
If each step in the supply chain is logged, detecting parallel imports should be straightforward: just look for products that were bought where they were not supposed to be sold. Unfortunately, this only works in theory, because it requires every step to be recorded and every party in the supply chain to use the same ‘vocabulary’, which is often far from being the case. In many supply chains, which are only partially instrumented, the data is either wrong or missing. Could a person possibly go through millions of records and figure out the correct destination of every product? No, but our newly developed machine learning feature can!
Naturally, it is essential to keep a human in the loop. We do this, for example, by predicting the value of every new record, whether or not that value is actually missing. This way users can see how well the model is performing, which helps improve the model over time.
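The idea of predicting every record's value, even when it is already present, can be sketched as follows. The record format, field names and the constant "model" are hypothetical placeholders for illustration only:

```python
# Sketch (hypothetical record format): predict the destination field of
# every incoming record. Missing values are filled in with the prediction;
# present values are compared against it, so a human can monitor how often
# the model agrees with the recorded data.
records = [
    {"product": "A1", "destination": "EU"},
    {"product": "A2", "destination": "US"},
    {"product": "A3", "destination": None},   # missing value to fill in
]

def predict_destination(record):
    # Stand-in for a trained model; a constant guess keeps the sketch simple.
    return "EU"

agreements, comparisons = 0, 0
for rec in records:
    predicted = predict_destination(rec)
    if rec["destination"] is None:
        rec["destination"] = predicted        # fill the gap
    else:
        comparisons += 1
        agreements += int(predicted == rec["destination"])

# Agreement rate on records where the value was already known:
accuracy = agreements / comparisons           # 1 match out of 2 -> 0.5
```

Surfacing this agreement rate to users is what keeps the human in the loop: a drop in accuracy is an early warning that the model or the data has drifted.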
Getting started with machine learning for your products
After over a year of research in this space, we are now in the process of introducing machine learning features into our platform. Rather than offering raw machine learning capabilities, our productization approach is to ‘pre-package’ trained networks that can be used to achieve specific goals. The idea is that our customers can activate these trained networks within a few clicks and start learning from their incoming data: for instance, as explained above, to detect gray market activity, verify product authenticity, or enhance automatic replenishment and preventive maintenance.
We’ll announce soon when our new machine learning capabilities are generally available, but in the meantime, if you’d like to trial these exciting features please contact us.