
Authors: Joël Vogt (Research Engineer) & Dominique Guinard (CTO & Co-founder)

We’re excited to report on a machine learning project we recently completed. The goal was to build an interactive tool that lets our sales team give customers a hands-on experience of EVRYTHNG’s machine learning capabilities. Perhaps more important was what we learned about developing user-centric, machine learning-based products with the EVRYTHNG platform.

The first blog post in our machine learning series described how we built a machine learning component on top of the EVRYTHNG platform that helps our customers detect gray market issues with 94% accuracy. This post is about the human factor of machine learning. After all, the purpose of any machine learning project is to serve people: understand their requirements and needs, then build a model that makes useful predictions.

Let’s look at a use case to better understand the context and how the tool we built fits into the EVRYTHNG proposition.

Use Case: An online insurance company for home appliances

Imagine the following use case: you had the brilliant idea to start an online insurance company for home appliances.

The premium is based on the types of appliances included in the policy. Sounds simple enough, but consider some of the complications. How would a customer get a quote? Dispatching agents to each customer’s home defeats the purpose of online insurance. You could ask customers to self-declare their appliances online, but forcing them through a massive catalogue of appliances will scare away all but the bravest.

Rather, what if each appliance could identify itself, based on observable features it emits when it runs?


The images above show multiple patterns over time, attributed to two different types of coffee machines.

Observe an appliance long enough and a unique pattern emerges. This is what machine learning is all about: recognizing patterns to transform data into logic — in this case a model that maps vibrations to appliances.

To accomplish this, we can use cheap sensors to take measurements of the appliances’ vibrations, then send the measurements to a service in the cloud. This service will classify the appliances based on the measurements received and return a personalized insurance policy. Now we have the business model and technology solution for our online insurance company!


Figure 1: The workflow, supported by the EVRYTHNG Platform, that maps vibrations to appliances.

Let’s put it all together: a customer, after registering for an insurance policy, orders a WiFi-enabled multi-sensor device for each of their appliances. Once attached to an appliance, each multi-sensor begins sending measurements to the EVRYTHNG platform. A machine learning component then predicts the type of appliance from the measurements received from the Pycom IoT device and creates an action containing the predicted type. In turn, this action triggers one or more Reactor™ rules on the EVRYTHNG Platform, such as a rule to generate a personalized insurance policy and another to send a text message noting the newly detected appliance.
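To make the hand-off concrete, here is a minimal Python sketch of that last step: the classifier publishing its prediction as a custom action for Reactor rules to pick up. The action type, Thng ID, and API key are placeholders, and the exact request shape should be checked against the EVRYTHNG API documentation.

```python
import requests

API_URL = "https://api.evrythng.com"
API_KEY = "YOUR_API_KEY"  # placeholder credential

def publish_prediction(thng_id, appliance_type):
    """Record the classifier's output as a custom action on the sensor's
    Thng; Reactor rules subscribed to this action type take it from there."""
    response = requests.post(
        f"{API_URL}/actions/_applianceDetected",  # custom action types start with "_"
        headers={"Authorization": API_KEY, "Content-Type": "application/json"},
        json={"thng": thng_id,
              "customFields": {"applianceType": appliance_type}},
    )
    response.raise_for_status()
    return response.json()
```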


We built the demo tool using an IoT sensor running our agent and placed on a coffee machine.

Lessons learned

It’s easy to forget that machine learning, and artificial intelligence in general, was beyond the reach of most people until very recently. We’ve all been users, directly or indirectly, of specialized AI applications such as spam filters or autopilots. But we’re now witnessing the dawn of the democratization of AI.

Managing expectations

It’s imperative that end users understand what machine learning is and the types of problems it can solve. We found that users tend to get super excited at first and then a bit disappointed when reality doesn’t meet expectations. A number of machine learning algorithms are essentially glorified pattern matchers. Artificial intelligence today can generalize locally and do specific tasks very well, but it can’t think outside its very narrow box.

It’s also important to use the right tools and methods for a given task. Take neural networks, for example. They are very well suited to supervised learning problems and huge datasets, but they are not necessarily the best tool for typical IoT problems such as anomaly detection, which calls for unsupervised learning techniques. Since we were familiar with neural networks and Keras, a popular deep learning framework, we stuck to supervised classification and left anomaly detection out of the first release.
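For a sense of scale, a supervised classifier along these lines can be tiny. The sketch below is not our production model; the window size, layer sizes, and class count are illustrative assumptions.

```python
from tensorflow import keras

WINDOW = 128   # accelerometer samples per event window (assumed)
N_CLASSES = 4  # e.g. two coffee machines, washing machine, fridge

# A small fully connected network over flattened vibration windows.
model = keras.Sequential([
    keras.Input(shape=(WINDOW * 3,)),  # x, y, z per sample
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# X: (n_events, WINDOW * 3) float array, y: integer class labels
# model.fit(X, y, epochs=30, validation_split=0.2)
```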

Show me the data

Traditional software engineering is about writing algorithms that precisely state what a machine does. With machine learning, it’s the data, not the program, that does the heavy lifting.

If you want to detect a new type of appliance, all you need to do is measure its vibration patterns and update the model. You won’t need to write an additional line of code. As machine learning pioneer Pedro Domingos puts it: “[machine] learners turn data into algorithms”. But there is a catch: machine learning algorithms require a lot of data to elicit patterns. And in the case of supervised learning, someone has to painstakingly label the training data.

Had we wanted to recognize different spin cycles of a washing machine, we would have had to run every cycle several times, then manually label the sensor data with the name of the cycle it measured. This had profound implications for what we could realistically deliver, which is why we settled for classifying appliances by type: we could simply place a few IoT sensors on the appliances in our London office and wait for the data to be collected by coffee-drinking colleagues!
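In practice, labelling boils down to slicing each recording into fixed-size windows and tagging every window with the appliance it came from. A rough sketch (window and step sizes are illustrative):

```python
import numpy as np

def label_windows(samples, label, window=128, step=64):
    """Slice a raw accelerometer stream of shape (n, 3) into
    overlapping, flattened windows, each tagged with one label."""
    windows, labels = [], []
    for start in range(0, len(samples) - window + 1, step):
        windows.append(samples[start:start + window].reshape(-1))
        labels.append(label)
    return np.array(windows), np.array(labels)

# e.g. every recording of the office coffee machine gets label 0:
# X, y = label_windows(coffee_machine_readings, label=0)
```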

Consider the why, where, when and how of end users

Unlike sensor networks that sit out of people’s reach (inside jet engines, on wind turbines, etc.), we collected data in an open environment. Inevitably we had to deal with “noise”: doors being slammed, curious colleagues playing with the IoT devices, vibrations picked up from nearby appliances, and more. We had to come up with strategies to deal with this noise, otherwise the model would have been less accurate.

One solution was to collect a lot of data to drown out the noise; to that end, we drank up to ten cups of coffee a day. We also ignored events below a certain duration threshold: we were mainly dealing with coffee machines, so we had a rough idea of how long it takes to make a cup of coffee.
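The thresholding itself can be as simple as dropping any event shorter than a plausible brew cycle. A sketch with an invented event format and an illustrative cut-off:

```python
MIN_EVENT_SECONDS = 25  # illustrative: shorter than any real brew cycle

def is_plausible_event(event):
    """Drop short vibration bursts (slammed doors, curious hands)
    that cannot be a full appliance cycle."""
    return event["end_ts"] - event["start_ts"] >= MIN_EVENT_SECONDS

# events = [e for e in raw_events if is_plausible_event(e)]
```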

Getting good training data is really important, but you should also ask yourself how the tool will be used (for us, it’s a way for our sales team to demo the capabilities of machine learning with the EVRYTHNG platform). Understand who the end users are, where they will use the tool, and what they will try to accomplish with it.

The first time our tool was shown to customers, the demo almost failed. We hadn’t anticipated that the sensor device would be handed around and examined before being placed on the coffee machine. The device picked up vibrations from being passed around and sent a message announcing that the washing machine was running (probably because the washing machine cycle covered so many different sub-patterns); we hadn’t trained the model to recognize human activities! Luckily, the second event contained the measurements from the coffee machine in action, which the model predicted correctly.
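One general safeguard against such out-of-distribution inputs (a technique we describe here for illustration, not something our first demo did) is to reject low-confidence predictions instead of always reporting the most likely class:

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.8  # illustrative cut-off

def classify_or_reject(model, window, class_names):
    """Return a class name only when the model is confident,
    otherwise report "unknown" instead of guessing."""
    probs = model.predict(window[np.newaxis, :], verbose=0)[0]
    best = int(np.argmax(probs))
    return class_names[best] if probs[best] >= CONFIDENCE_THRESHOLD else "unknown"
```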

Machine learning should be the least of your concerns

One reason machine learning is such a hot topic is that a number of frameworks are freely available, allowing software engineers without a formal background in artificial intelligence to build powerful machine learning solutions.

Take another look at the workflow in Figure 1 of this blog. Only one step is about artificial intelligence. We built our model using Keras, which means we could easily deploy the same model on AWS or Microsoft Azure. Even if we wanted to move to another deep learning framework, the APIs are similar enough that we could probably recreate a comparable model relatively easily. From a technical perspective, the tricky part is getting the data to the model and managing the entire experience end to end in one seamless workflow.
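That portability is largely because a trained Keras model serializes to a single artifact. A sketch, reusing the model object from the earlier snippet (the file name is arbitrary):

```python
from tensorflow import keras

# Train once, then ship the same artifact to any host that can run Keras.
model.save("appliance_classifier.keras")

# On AWS, Azure, or a local server:
restored = keras.models.load_model("appliance_classifier.keras")
# predictions = restored.predict(windows)
```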

For the Internet of Things, the EVRYTHNG platform is clearly worth considering. Its data model provides a high-level taxonomy for Internet of Things resources without being too dogmatic; we found this a healthy trade-off that facilitated data collection and transformation. The EVRYTHNG platform also uses open Web protocols, which means that most devices can talk to it “out of the box”.
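As a flavour of that taxonomy, registering a sensor as a Thng is a single REST call. A hedged sketch with a placeholder API key; check field names and headers against the API reference:

```python
import requests

API_KEY = "YOUR_OPERATOR_API_KEY"  # placeholder credential

# Register one Thng per multi-sensor device; measurements and actions
# then hang off this resource.
response = requests.post(
    "https://api.evrythng.com/thngs",
    headers={"Authorization": API_KEY, "Content-Type": "application/json"},
    json={"name": "Office coffee machine sensor",
          "tags": ["demo", "vibration"]},
)
response.raise_for_status()
print(response.json()["id"])
```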

You can read more about our API on our developer portal. Or, if you seek a deeper understanding of the Internet of Things, the book “Building the Web of Things”, written by two of our co-founders, is really insightful.

What’s next?

We wanted to share our journey towards a more intelligent, user-centric Internet of Things. Hopefully this post will help you avoid some of the challenges we encountered and give you a better understanding of the types of IoT problems machine learning can solve. If you’d like to use EVRYTHNG to build your next machine learning and IoT project, check out our detailed tutorial or contact us. We look forward to hearing about your projects!
