
Authors Joël Vogt (Research Engineer) & Dominique Guinard (CTO & Co-founder)

We’re excited to report on a machine learning project that we recently completed. The goal was to build an interactive tool that lets our sales team give customers a hands-on experience of EVRYTHNG’s machine learning capabilities. Perhaps more important was what we learned about developing user-centric machine learning products with the EVRYTHNG platform.

The first blog post in our machine learning series was about how we built a machine learning component on top of the EVRYTHNG platform to help our customers detect gray market issues with 94% accuracy. This blog post is about the human factor of machine learning. After all, the purpose of any machine learning project is to work with people, understand their requirements and needs, and then build a model that makes useful predictions.

Let’s look at a use case to better understand the context and how the tool we built fits into the EVRYTHNG proposition.

Use Case: An online insurance company for home appliances

Imagine the following use case: you had the brilliant idea to start an online insurance company for home appliances.

The premium is based on the types of appliances included in the policy. Sounds simple enough, but consider some of the complications. How would a customer get a quote — would you be required to dispatch agents to each customer’s home? That defeats the purpose of online insurance. You could ask customers to self-declare their appliances online, but the drawback here is that going through a massive catalogue of appliances will scare away all but the bravest customers.

Rather, what if each appliance could identify itself, based on observable features it emits when running?


The images above show vibration patterns over time from two different types of coffee machines.

Observe an appliance long enough and a unique pattern emerges. This is what machine learning is all about: recognizing patterns to transform data into logic — in this case a model that maps vibrations to appliances.

To accomplish this, we can use cheap sensors to take measurements of the appliances’ vibrations, then send the measurements to a service in the cloud. This service will classify the appliances based on the measurements received and return a personalized insurance policy. Now we have the business model and technology solution for our online insurance company!


Figure 1: The workflow, supported by the EVRYTHNG Platform, that maps vibrations to appliances.

Let’s put it all together: a customer, after registering for an insurance policy, orders a WiFi-enabled multi-sensor device for each of their appliances. Once a multi-sensor has been attached to an appliance, it begins to send measurements to the EVRYTHNG platform. Next, a machine learning component predicts the type of appliance based on the measurements received from the Pycom IoT device. The same component then creates an action containing the predicted appliance type. In turn, this action triggers one or more Reactor™ rules on the EVRYTHNG Platform, such as a rule to generate a personalized insurance policy and another to send a text message noting the new appliance that was detected.
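To make the first hop of this workflow concrete, here is a minimal Python sketch of how a device-side agent might post a window of vibration readings to the platform as a custom action. The endpoint path, the custom action type and the Thng identifier are illustrative assumptions, not the exact contract our agent uses; the developer portal documents the real API.

```python
"""Minimal sketch of the device-to-cloud step: post a window of vibration
readings to the EVRYTHNG platform as a custom action.

Assumptions: the endpoint path, the custom action type (_vibration_sample)
and the placeholder credentials are illustrative only.
"""
import requests

API_URL = "https://api.evrythng.com"            # assumed base URL
DEVICE_API_KEY = "<device-api-key>"             # placeholder credential
THNG_ID = "<thng-id-of-the-appliance>"          # placeholder Thng identifier


def post_vibration_sample(readings):
    """Send one window of accelerometer readings as a custom action."""
    payload = {
        "type": "_vibration_sample",            # assumed custom action type
        "customFields": {"readings": readings},
    }
    resp = requests.post(
        f"{API_URL}/thngs/{THNG_ID}/actions/_vibration_sample",
        json=payload,
        headers={"Authorization": DEVICE_API_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()


# Example: a short window of accelerometer magnitudes
post_vibration_sample([0.02, 0.31, 0.28, 0.05, 0.27])
```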


We built the demo tool using an IoT sensor running our agent and placed on a coffee machine.

Lessons learned

It’s easy to forget that machine learning, and artificial intelligence in general, was beyond the reach of most people until very recently. We’ve all been users, directly or indirectly, of specialized AI applications such as spam filters or autopilots. But we’re now witnessing the dawn of the democratization of AI.

Managing expectations

It’s imperative that end users understand what machine learning is and the types of problems it can solve. We found that users tend to get super excited at first and then a bit disappointed when reality doesn’t meet expectations. A number of machine learning algorithms are essentially glorified pattern matchers. Artificial intelligence today can generalize locally and do specific tasks very well, but it can’t think outside its very narrow box.

It’s also important to use the right tools and methods for a given task. Take neural networks, for example. Neural networks are very well suited to supervised learning problems and huge datasets, but they are not necessarily the best tool for typical IoT problems such as anomaly detection, which calls for unsupervised learning techniques. Since we were familiar with neural networks and Keras, a popular deep learning framework, we decided to stick with supervised classification and leave anomaly detection out of the first release.
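For readers curious what such a supervised classifier can look like, here is a minimal Keras sketch that maps fixed-length vibration windows to appliance types. The window length, layer sizes and class labels are assumptions for illustration, not the model we actually deployed.

```python
"""A small Keras classifier for fixed-length vibration windows.
Window length, architecture and class labels are illustrative assumptions."""
import numpy as np
from tensorflow import keras

WINDOW = 128                  # samples per labelled vibration window (assumed)
CLASSES = ["coffee_machine", "washing_machine", "dishwasher"]  # example labels

model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(WINDOW,)),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(len(CLASSES), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# X: labelled vibration windows, y: integer class indices
X = np.random.rand(1000, WINDOW).astype("float32")   # stand-in for real data
y = np.random.randint(0, len(CLASSES), size=1000)    # stand-in for real labels
model.fit(X, y, epochs=10, validation_split=0.2)
```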

Show me the data

Traditional software engineering is about writing algorithms that precisely state what a machine does. With machine learning, it’s the data, not the program, that does the heavy lifting.

If you want to detect a new type of appliance, all you need to do is measure its vibration patterns and update the model. You won’t need to write an additional line of code. As machine learning pioneer Pedro Domingos puts it: “[machine] learners turn data into algorithms”. But there is a catch: machine learning algorithms require a lot of data to elicit patterns. And in the case of supervised learning, someone has to painstakingly label the training data.

Had we wanted to recognize the different spin cycles of a washing machine, we would have had to run every cycle several times, then manually label the sensor data with the name of the cycle it measured. This had profound implications for what we could realistically deliver, which is why we settled for classifying appliances by type: we could simply place a few IoT sensors on the appliances in our London office and wait for our coffee-drinking colleagues to generate the data!

Consider the why, where, when and how of end users

Unlike sensor networks that are out of reach of people (inside jet engines, on wind turbines, etc.), we collected data in an open environment. Inevitably we had to deal with “noise”: doors being slammed, curious colleagues playing with the IoT devices, vibrations picked up from nearby appliances, and more. We had to come up with strategies to deal with this noise, otherwise the model would have been less accurate.

One solution was to collect a lot of data to drown out the noise. To that end, we drank up to ten cups of coffee a day. We also ignored events below a certain threshold: we were mainly dealing with coffee machines, so we had a rough idea of how long it takes to make a cup of coffee.
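In code, that noise gate amounts to very little. The sketch below, with made-up threshold values, shows the idea: keep only events that last long enough and vibrate strongly enough to plausibly be an appliance run.

```python
"""Illustrative noise gate: discard events that are too short or too faint.
The threshold values are invented for the example."""

MIN_DURATION_S = 20.0   # a cup of coffee takes at least this long (assumed)
MIN_AMPLITUDE = 0.05    # ignore faint vibrations, e.g. a door slamming nearby


def keep_event(event):
    """Return True if a recorded event looks like a real appliance run."""
    duration = event["end_ts"] - event["start_ts"]
    peak = max(abs(x) for x in event["readings"])
    return duration >= MIN_DURATION_S and peak >= MIN_AMPLITUDE


events = [
    {"start_ts": 0.0, "end_ts": 2.0, "readings": [0.4, 0.3]},    # door slam
    {"start_ts": 10.0, "end_ts": 45.0, "readings": [0.2, 0.6]},  # espresso run
]
training_events = [e for e in events if keep_event(e)]
```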

Getting good training data is really important, but you should also ask yourself how the tool will be used (for us, it’s a way for our sales team to demo the capabilities of machine learning with the EVRYTHNG platform). Understand who the end users are, where they will use the tool, and what they will try to accomplish with it.

The first time our tool was shown to customers, the demo almost failed. We hadn’t anticipated that the sensor device would be handed around and examined before being placed on the coffee machine. The device picked up vibrations from being passed around and sent a message reporting that a washing machine was running (probably because the washing machine’s cycles cover so many different sub-patterns); we hadn’t trained the model to recognize human activity! Luckily the second event contained the measurements from the coffee machine in action, which the model correctly predicted.

Machine learning should be the least of your concerns

One reason machine learning is such a hot topic is that a number of frameworks are freely available, allowing software engineers without a formal background in artificial intelligence to build powerful machine learning solutions.

Take another look at the workflow in Figure 1 of this blog. Only one step is about artificial intelligence. We built our model using Keras, which means we could have easily deployed the same model on AWS or Microsoft Azure. Even if we wanted to move to another deep learning framework, the APIs are similar enough that we could probably create a comparable model with relatively little effort. From a technical perspective, the tricky part is getting the data to the model and managing the entire experience end to end in one seamless workflow.

For the Internet of Things, the EVRYTHNG platform is clearly worth considering. Its data model provides a high-level taxonomy for Internet of Things resources without being too dogmatic, a healthy trade-off that facilitated data collection and transformation. The EVRYTHNG platform also uses open Web protocols, which means that most devices can talk to it “out of the box”.

You can read more about our API on our developer portal. Or if you seek a deeper understanding of the Internet of Things, the book “Building the Web of Things”, which was written by two of our cofounders, is really insightful.

What’s next?

We wanted to share our journey towards a more intelligent, user-centric Internet of Things. Hopefully this blog will help you avoid some of the challenges we encountered and give you a better understanding of the types of IoT problems machine learning can address. If you’d like to use EVRYTHNG to build your next machine learning and IoT project, check our detailed tutorial or contact us. We look forward to hearing about your projects!

Author Dominique Guinard (CTO & Co-founder)

Continuing our quest to provide our customers with a broad range of integrations they can use to make their products smarter, we are happy to announce an integration with a new partner: OriginTrail. This integration allows EVRYTHNG’s customers to push selected transactions from and about products to OriginTrail’s cutting-edge blockchain solution, leveraging some of the core features of decentralization, such as tamper-proof transactions and consensus-based verification.

OriginTrail is building a specialized protocol for supply chains based on blockchain technology. Their goal is to augment the supply chain with some of the very things blockchains are good at. For instance, OriginTrail makes supply chain transactions immutable, and it allows several actors in a supply chain to validate data without necessarily having to trust each other. These are really exciting features, and this integration continues EVRYTHNG’s approach of leveraging the most interesting aspects of blockchains without reinventing the wheel.

How does it work?

To make this integration a reality, both teams sat together and collaborated on a connector. The connector was built as a Reactor script, allowing our (or OriginTrail’s) customers to build a scalable bridge between the two platforms. The bridge converts EVRYTHNG supply chain transactions (called Actions in our world) tagged with a createOriginTrail=true custom field into the GS1 EPCIS standard that both platforms use to communicate. Each transaction is then automatically pushed to a decentralized OriginTrail node via its API and made available in the OriginTrail platform, which is built on top of the Ethereum public blockchain.
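To give a feel for what the bridge does, here is a simplified Python sketch of its core logic. The production connector runs as a JavaScript Reactor script and produces proper GS1 EPCIS documents; the node URL, the field names and the EPCIS-style payload below are placeholder assumptions.

```python
"""Simplified sketch of the connector's filtering-and-forwarding logic.
The OriginTrail node URL and the payload shape are placeholders, not the
real API contract; the production connector is a JavaScript Reactor script.
"""
import requests

OT_NODE_URL = "https://<your-origintrail-node>/api/import"   # placeholder


def to_epcis_like_event(action):
    """Very rough stand-in for the EVRYTHNG Action -> GS1 EPCIS mapping."""
    return {
        "eventTime": action["timestamp"],
        "bizStep": action["type"],
        "epcList": [action["thng"]],
        "customFields": action.get("customFields", {}),
    }


def forward_if_tagged(action):
    """Push the Action to the OriginTrail node only if it carries the tag."""
    if action.get("customFields", {}).get("createOriginTrail") not in (True, "true"):
        return None
    resp = requests.post(OT_NODE_URL, json=to_epcis_like_event(action), timeout=10)
    resp.raise_for_status()
    return resp.json()
```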


An OriginTrail verified transaction as seen in the EVRYTHNG Dashboard.

Making Barry even smarter!


Barry the Bear at the EU Commission in Brussels!

We tested the integration on Barry, the incredibly smart bear we created with GS1 for the GS1 Global Forum. Some of the key steps of Barry’s supply chain journey were sent through the OriginTrail-EVRYTHNG bridge to make them tamper-proof, thanks to the immutability of blockchains. This effectively builds an authenticity certificate for Barry that can be verified on OriginTrail.


Barry’s Web app showing OriginTrail-EVRYTHNG certified transactions.


Verification tool to check the transaction via the OriginTrail protocol

These features were added to the Web app served via the EVRYTHNG platform when Barry’s tag is scanned, closing the loop by making the verification features available to consumers.

Our friends at OriginTrail then took the new version of Barry to the European Commission, where they were invited to present solutions that can help fight counterfeit-related crime in the digital age, in the frame of a memorandum of understanding pledging to fight the sale of counterfeit goods online, signed by companies like Alibaba, Nike, Adidas, and Chanel. We heard Barry made quite an impression on the EU Commission 🙂 .

Next steps

After this successful Proof of Concept, both teams are now looking at packaging the connector and making it available to any of our respective customers who want to connect to OriginTrail’s network and leverage some of the unique features decentralization has to offer.

This step will also include EVRYTHNG operating its own OriginTrail nodes to create a scalable and secure bridge between the two platforms. Stay tuned for updates on that and meanwhile check our other blockchain integrations.

Author Iker Larizgoitia Abad (Program Manager & Research Engineer)

EVRYTHNG and recycl3R collaborate with Carrefour and The Circular Lab to pilot a new initiative leveraging Smart Products to improve recycling habits.

Logroño, 13th April. A collaboration between EVRYTHNG, recycl3R, Carrefour, and The Circular Lab has developed a mobile application, Recicla Ya, that allows customers of Carrefour to scan products and receive information on how to properly recycle them. Customers are incentivized by a reward system to use the app and recycle their products.

The retailer has successfully launched the first trial of the project at the Circular Lab, the innovation center of Ecoembes, located in Logroño, Spain. During the pilot, 50 Carrefour customers had the opportunity to try the Recicla Ya app and give feedback.


Consumers in the supermarket area selecting products and going through the checkout process

In the pilot, consumers experienced shopping with a twist. They took a simulated visit to a Carrefour supermarket, selected some of their regular products, and went through the checkout process as usual.

In this case, the barcode on the receipt had an enhanced use: scanning it with the Recicla Ya app catalogues the purchased products within the app. Consumers can then use the app to keep track of their products and get information on how to sort their waste, depending on the parts of the products and the corresponding recycling scheme in the area (plastic/metal, cardboard, glass or regular waste).


Consumers scanning their receipts and checking their products in the app

The app also geolocates the closest waste and recycling containers and displays information about the collection schedule. Additionally, street bins become “Smart Bins” through Smart Tags deployed on them, using QR codes and NFC technology (the latter provided by our partner Thinfilm). Consumers can dispose of their garbage and use the Recicla Ya app to check and acknowledge which container they used.


Consumers interact with the SmartTags on the bins

All of the actions consumers carried out in the application were rewarded through a virtual point system. As part of the pilot, we surveyed the consumers to test their reactions to the idea of being rewarded in different ways for improving their recycling habits. We’ll be sharing the results and technical aspects in an upcoming blog post, so stay tuned!

Next steps include deploying the system at scale in two cities in Spain — Logroño and Palma de Mallorca — in the following months.

EVRYTHNG’s Smart Products Platform provides the capabilities to deploy and run such a system at scale, giving a digital identity to all of Carrefour’s products and to all the street bins deployed in the city. It also integrates seamlessly with external data systems, such as the recycling information service developed by recycl3R and the NFC tagging technology provided by Thinfilm, contributing to the digital transformation of fast-moving consumer goods, in this case with the aim of improving consumers’ recycling habits.

 

This initiative was carried out within the TagItSmart project, part of the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 688061. http://tagitsmart.eu/

Author Dominique Guinard (CTO & Co-Founder)

There is no shortage of hype around how the blockchain can help the IoT and the supply chain. Yet, there are very few real-world and practical examples of how this can work.

Thanks to our work for the Blockchain Research Institute over the past year we had the chance to deep-dive into the subject with our labs team and look at the opportunities and challenges. This allows us to take a pragmatic and reasonable approach to introducing Distributed Ledger Technologies (DLT) into our products. Our 2018 agenda is packed with innovations in this space, and we are happy to report on the first integration here: building on the PoC we launched last year, we now make it possible for our customers to use the benefits of a public blockchain (e.g., Ethereum or Bitcoin) in conjunction with EVRYTHNG to verify supply chain transactions!

It’s all about trust!

The specific problem this solves is simple: it helps fix trust issues! As an example, consider the state of consumer trust around product provenance. It should be no big surprise that a number of data manipulation scandals around provenance have made consumers increasingly critical. Do you really trust that your product is organic? How can you be sure its country of origin was not changed?

This is where public blockchains can help, as they have one pretty important property: they are immutable and publicly auditable. That means the data you put on a public blockchain is available to everyone and cannot be modified by anyone, you included! To be 100% accurate, it could be modified, but the modification would pretty quickly be detected by the decentralized network and discarded.

Similarly, this system can help fix the lack of trust between the partners within a supply chain, leading to better transparency.

Don’t be naive: beware of the blockchain.

Great, so why don’t we put all supply chain transactions on a public blockchain? Well, here’s the bad news: blockchains alone won’t save the supply chain. Quite the contrary: naively implemented blockchains are arguably the worst performing, most expensive, and least sustainable supply chain databases ever.

Implementing a supply chain information system with a blockchain where every transaction in the supply chain leads to a transaction on a blockchain is simply not going to work beyond a prototype: it would not be able to support the scale of any significant brand.

Without going into too much detail (our full research project with the BRI on the subject will be made public on May 22, 2018), it is impractical to store large numbers of transactions with large amounts of associated data on a public blockchain, primarily because the throughput of public blockchains is small: on the order of 50 to 1,000 transactions per minute. Compare this to the 5 billion GS1 barcode scans that happen every day, or the spikes of 1.5 million transactions per minute EVRYTHNG currently manages, and you quickly understand the issue. Beyond scalability, the energy consumption of blockchain transactions and their cost are also important concerns. All these problems are being researched and worked on (see for example the discussions on validation algorithms). However, because the benefits and security of blockchains are actually linked to these slow transaction processing times, solutions are not trivial.
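To make the gap concrete, here is the back-of-the-envelope arithmetic implied by these figures, treating 1,000 transactions per minute as an optimistic upper bound for a public chain. The shortfall is more than three orders of magnitude.

```python
"""Back-of-the-envelope arithmetic behind the scalability argument above."""
gs1_scans_per_day = 5_000_000_000
gs1_scans_per_minute = gs1_scans_per_day / (24 * 60)   # ~3.5 million per minute

evrythng_peak_per_minute = 1_500_000                    # observed spike
public_chain_per_minute = 1_000                         # optimistic upper bound

print(f"GS1 scans: ~{gs1_scans_per_minute:,.0f}/min")
print(f"EVRYTHNG peak: {evrythng_peak_per_minute:,}/min")
print(f"Gap vs. a public chain: ~{gs1_scans_per_minute / public_chain_per_minute:,.0f}x")
```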

As a consequence, blockchain based solutions for the supply chain (and generally the IoT) today only really make sense as extensions of centralized supply chain information systems.

In short: store most transactions and most of the data in a high-availability centralized platform, but replicate some transactions on a public blockchain when it is needed and valuable.


Figure 1: The building blocks of an Active Digital Identity™ associated with a unique product item on the EVRYTHNG platform, extending the standard Web Thing Model to support blockchain constructs for certain transactions.

How does it work?

Our solution is based on a protocol called Chainpoint, which provides two things. First, it offers a more efficient way of linking transactions to a public blockchain by allowing us to batch them: instead of sending one blockchain transaction for each supply chain operation you wish to verify, we can batch thousands of them into a single transaction on the blockchain.


Figure 2: Chainpoint protocol – source: https://chainpoint.org/

Second, it verifies the integrity and existence of data without relying on a trusted third party. In other words, it implements exactly what we need: a system that can leverage a public blockchain to validate EVRYTHNG-stored supply chain transactions.

Our current integration relies on an implementation of the Chainpoint protocol provided by a decentralized platform, Tierion, combined with a new service that can be deployed in the powerful EVRYTHNG Reactor, our customizable rules engine.

Real-world example?


Barry the Smart Bear

The system is being piloted by a number of EVRYTHNG’s customers. It will also be used to power the provenance data of Barry the Bear, a demo we presented at the GS1 Global Forum to showcase our new product support for the web-enablement of GS1 identifiers. Barry the Bear is a teddy bear with an EVRYTHNG Active Digital Identity (ADI), offering a lot of powerful features.


Web application for Barry the Bear, providing provenance data verified by blockchain transactions.

One of these features is verified provenance data. All the steps in the manufacturing and supply chain of the bears have been recorded using EPCIS transactions in the EVRYTHNG platform. Two of these steps were selected to be validated on the Bitcoin or Ethereum blockchain as well. Because these blockchains are public and immutable, the blockchain validation process ensures that the provenance data was not tampered with or altered after being recorded in the EVRYTHNG platform.

Implementing this was as simple as enabling our open source Reactor integration of the Chainpoint protocol.


Figure 3: Reactor script for Chainpoint integration

This extension script fits in 100 lines of code and essentially ensures that each EVRYTHNG Action (i.e., step in the supply chain) that uses a special parameter gets hashed and transmitted to the blockchain. The resulting blockchain transaction IDs (hashes) are then received by the EVRYTHNG platform via the Reactor script, and the corresponding blockchain verification Actions are created. From then on they can be audited on the blockchain, and verifiers can attest that the Action data was not modified.
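For illustration, here is roughly what that hashing step looks like in Python (the actual Reactor script is JavaScript, and the example Action fields and the “special parameter” shown are assumptions). The SHA-256 digest, rather than the raw Action data, is what gets submitted for anchoring.

```python
"""Hash an Action's payload with SHA-256; the digest is what gets anchored.
Field names and the tagging custom field are illustrative assumptions."""
import hashlib
import json


def hash_action(action: dict) -> str:
    """Canonicalise the Action as JSON and hash it with SHA-256."""
    canonical = json.dumps(action, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


action = {
    "type": "_commissioned",                    # example supply chain step
    "thng": "exampleThngId",                    # placeholder identifier
    "timestamp": 1525000000000,
    "customFields": {"blockchain": "true"},     # assumed "special parameter"
}
print(hash_action(action))
```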


Figure 4: The EVRYTHNG dashboard showing blockchain verification Action for several steps in the supply chain

Note for the tech-savvy readers: the transactions are not directly transmitted to the blockchain but first aggregated into a Merkle Tree whose root is then sent to the Bitcoin or Ethereum blockchain. This is done at the top of the hour via the Tierion service using the Chainpoint protocol.
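A minimal sketch of that aggregation step, assuming plain pairwise SHA-256 hashing (the Chainpoint specification defines the exact encoding), looks like this:

```python
"""Aggregate many per-Action hashes into a Merkle tree; only the root goes
on-chain. Plain pairwise SHA-256 is used here for illustration; Chainpoint
defines the exact encoding."""
import hashlib


def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def merkle_root(leaves: list) -> bytes:
    """Hash the leaves, then pairwise-hash each level until one root remains."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])             # duplicate the odd node out
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]


action_hashes = [f"action-{i}".encode() for i in range(1000)]
print(merkle_root(action_hashes).hex())         # this single value goes on-chain
```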


Figure 5: Blockchain transaction (root of the Merkle tree) for one of the teddy bears. See it on the blockchain for yourself: https://blockexplorer.com/tx/f256da7cd4e1f85941593aaffb8e9163edd8ec3de478a07e3cac2dfb945d81ef

Want to get started with blockchain validations?

We are very excited to offer our customers the ability to integrate a blockchain validation process for supply chain transactions in a small number of steps. If you are interested in trying this for your use cases, check our detailed technical tutorial or contact us for more information. We are eager to hear more about your blockchain-supported use cases!

Authors Joël Vogt (Research Engineer) & Dominique Guinard (CTO & Co-Founder)

As part of our contribution to the EU-funded research project, TagitSmart, the EVRYTHNG Labs team have been working on environment-sensitive product tags connected to the Internet of Things.

That is: low-cost tags that can record data about the environment they are deployed in.

TagItSmart & Environment-sensitive, web-enabled tags

TagItSmart is an EU-funded IoT initiative that’s developing a new technology for ink-based, serializable tags. Tags assign a specific value to a product; a number or URL, for example.

Once a traditional barcode (1D or 2D) is printed, the data in it, such as a product code or URL, cannot be changed or updated.


TagItSmart example: Thermochromic ink printed as a QR code. This tag contains two indicators. “ON” indicates that the tag has started measuring, and “Above High Limit” indicates if the temperature limit has been exceeded.

In contrast, an environment-sensitive tag is a barcode whose values can automatically change when certain environmental conditions change. These tags will be able to recognize their current and past environmental exposure (such as temperature fluctuations) and update the physical products they are attached to with dynamic digital information.

These types of “sensitive” barcodes are not new. For example, these technologies are already being used in cold-chain and food-safety applications. What is new, however, is the convergence of environment-sensitive tags with open IoT technologies.

Supply-chain visibility and brand protection as use cases

Item-level identification of each physical and digital object is at the core of the IoT and environment-sensitive tags.

Being able to put information into context is powerful. The more that is known about an object’s physical context, such as its environment, the more useful the object becomes to decision makers analyzing products and business processes.

If this sounds a bit abstract, let us look at two use cases, supply chain visibility and brand protection, to develop an understanding of when environment-sensitive tags could be used.

Supply-chain visibility

Item-level supply-chain visibility is needed to monitor cold chains for food safety and ‘keepability’, to ensure that consumers are only being sold food that is safe for consumption. With the growing focus on food waste, this is a booming area: according to Supply Chain Quarterly¹, the market for cold-chain monitoring is expected to reach $6.2 billion by 2022.

It’s also necessary in healthcare to make sure drugs are being kept within a specific temperature range during transportation and storage.

Brand protection

Despite high-tech anti-counterfeit measures, counterfeiting is still on the rise. The Economist² quotes a 2016 OECD³ report estimating that fake goods account for 2.5% of all global trade, a total of $461 billion. And counterfeiters aren’t just harming the perfume and apparel industries.

Increasingly, counterfeiters have begun to target pharmaceuticals, plane parts, children’s toys, and beverages. Through their actions, counterfeiters are introducing unsafe products to the market that potentially pose a risk to people’s lives.

Product authenticity through product passports

Environment-sensitive tags provide the basis for building product passports: labels that provide a certificate of authenticity by storing the “fingerprint” of each product, a combination of digital data in the cloud and local data about the product’s interactions with the physical environment. They can visibly change when a product expires or show the environmental conditions a product was stored in, making tampering much more difficult.

Like your personal passport, a product passport will allow you to verify the authenticity of each and every product in a tamper-proof manner, by combining physical properties that make products hard to copy with a digital trace that a counterfeiter cannot replicate. The collective histories of similar products also open up the possibility of using machine learning algorithms for fraud detection.

The combination of physical and digital information therefore makes it harder, and considerably more expensive, for counterfeiters to copy products.

Outlook

To wrap up, environment-sensitive tags, as proposed by TagItSmart, combined with identifiers that are connected to the Web through an IoT platform such as EVRYTHNG, have the potential to address these customer needs. We focused on specific use cases in this blog, but others will rapidly emerge too. Transparency, authenticity and visibility have always proved difficult for manufacturers, but the technology is now there to make them considerably easier.

 

Endnotes

  1. http://www.supplychainquarterly.com/news/20161212-cold-chain-monitoring-market-to-reach-623-billion-by-2022/
  2. http://www.economist.com/news/international/21697218-china-grew-richer-and-more-innovative-people-assumed-it-would-counterfeit-less-think
  3. http://dx.doi.org/10.1787/9789264252653-en
