
Author Dominique Guinard (CTO & Co-founder)
Published

We are thrilled to announce the launch of our Blockchain Integration Hub, a milestone in our exploration of what blockchain technologies can bring to the supply chain and the IoT, and in particular to the digitization of CPG and apparel products. In short, the hub packages integrations with several blockchains and makes it easy for consumer product brands to rapidly test and scale applications that deliver traceability, transparency, authenticity, data sharing and reward tokens to customers through distributed ledger technologies (DLT). What really makes the hub valuable are the integrations and partnerships we have built with great partners such as OriginTrail, Tierion and BlockV.

What’s this Hub about?

When we started our exploration of blockchain about two years ago and joined the Blockchain Research Institute to write a 35-page report on the influence of DLT on IoT and supply chains, it became clear that blockchain was going to have an impact on our business and on our customers. It also became clear that many of the solutions were not ready for prime time, with significant scalability, interoperability and maturity challenges ahead of them. That said, the reality is that we rarely have a customer conversation in which blockchain is not mentioned.

So, we had two options:

1) Reinvent the wheel and build our own blockchain (which seems to be a trend these days…), adding yet another blockchain to the interoperability challenge.

2) Focus on what we do best: providing mass-scale digital identities, intelligence and analytics for products, and partner with a number of leading blockchain protocols and solutions to act as a “hub”, making it possible for our customers to test and combine the best of what each solution has to offer.

We picked the second option. Let me share a little more about the Blockchain Integration Hub, because we believe it is going to fundamentally change the consumer products landscape.


The EVRYTHNG Blockchain Integration Hub

The EVRYTHNG Blockchain Integration Hub consists of packaged scripts running in EVRYTHNG’s most popular component: our rules engine, the Reactor. This creates a powerful and scalable integration layer between the EVRYTHNG platform and the platforms of our blockchain partners. In essence, it enables Actions (e.g., steps and locations in the supply chain, consumer scans) and Property updates (e.g., the temperature of goods, step counts) to be replicated to different blockchains.
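To make this more concrete, here is a minimal sketch of what such a Reactor script could look like. The onActionCreated hook, done() callback and logger object follow Reactor conventions, while the partner endpoint, Action types and payload shape are purely illustrative assumptions rather than the code of an actual integration.

```javascript
// Minimal, hypothetical Reactor-style script: replicate selected Actions to a
// partner blockchain API. The endpoint and filtering logic are purely illustrative.
const https = require('https');

const PARTNER_URL = 'https://blockchain-partner.example.com/transactions'; // placeholder

function forwardToPartner(payload) {
  return new Promise((resolve, reject) => {
    const req = https.request(
      PARTNER_URL,
      { method: 'POST', headers: { 'Content-Type': 'application/json' } },
      res => {
        let body = '';
        res.on('data', chunk => (body += chunk));
        res.on('end', () => resolve(body));
      }
    );
    req.on('error', reject);
    req.write(JSON.stringify(payload));
    req.end();
  });
}

// Called by the Reactor whenever an Action is created on the EVRYTHNG platform.
function onActionCreated(event) {
  const action = event.action;
  // Replicate only the Action types we care about, e.g. supply chain steps and scans.
  // A similar hook can cover Property updates (temperature, step counts, ...).
  if (!['_supplyChainStep', 'scans'].includes(action.type)) {
    return done();
  }
  forwardToPartner({
    productId: action.thng,
    type: action.type,
    timestamp: action.timestamp,
    location: action.location
  })
    .then(() => done())
    .catch(err => {
      logger.error(err.message);
      done();
    });
}
```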

How does it help make products smarter?

This is why we are so excited! Working together we are able to extend the smart capabilities of digitized products with the decentralized features of integrated blockchains.

For instance:

  • The integration with OriginTrail allows decentralized data sharing across supply chain partners, based on the Ethereum blockchain. This makes it possible for our customers to share specific parts of their data: for instance, information required for compliance purposes, or data that provides greater transparency in environments where trust is lacking.
  • The Tierion integration allows anchoring hashes of supply chain transactions to the public Bitcoin blockchain. This ensures that these transactions are immutable and cannot be modified by anyone, which is a useful feature when it comes to trustworthy product provenance data.
  • Finally, with the BlockV integration, EVRYTHNG Active Digital Identities™ (ADIs) can be linked to virtual objects that are unique, verifiable and tradable. For instance, this allows embedding physical products into Virtual Reality and Augmented Reality games and creating very compelling, token-based loyalty programs.

Even better, all of these features can be combined through the EVRYTHNG platform, acting as an orchestration hub! This enables interoperability across different blockchain solutions, a very important feature in a blockchain world experiencing an unprecedented rate of innovation and change.

Give me technical details!


Architecture of the EVRYTHNG Blockchain Integration Hub

The best place to get all the nitty-gritty details is our developer portal, but let me summarize the blockchain hub integration pattern here. As mentioned before, the integrations were built as scripts running in our powerful Reactor service. This service is capable of running custom code (Node.js) securely and at scale for any transaction sent to the EVRYTHNG platform: from supply chain tracking data, to consumers engaging with products, to live data from LPWAN-tracked containers. This code is responsible for translating transactions from the EVRYTHNG model (based on the W3C Web Thing Model) to the models used by our blockchain partners. It then pushes the transactions to the selected blockchain(s).

This is done either via the public API of the blockchain partner (e.g., BlockV) or via blockchain nodes hosted within the EVRYTHNG platform (e.g., OriginTrail).

Finally, the script receives back a transaction hash, which it stores on EVRYTHNG. This ensures the transaction can be leveraged by apps using both the EVRYTHNG platform API and the APIs of the blockchain platforms. As we further strengthen the integrations we will blog about each one separately here, but you can already have a glimpse at the Tierion or OriginTrail integrations and build your own integrations based on this pattern.
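As a rough sketch of that last step, the hash returned by the partner could be written back as a dedicated Action, for example. The '_blockchainVerification' Action type and the use of the app scope here are illustrative assumptions, not the shipped integration.

```javascript
// Hypothetical last step of the pattern: store the transaction hash returned by the
// blockchain partner back on EVRYTHNG, here as a dedicated Action.
function storeTransactionHash(originalAction, txHash) {
  return app.action('_blockchainVerification').create({
    thng: originalAction.thng,
    customFields: {
      sourceAction: originalAction.id,
      transactionHash: txHash
    }
  });
}

// Inside the Reactor hook, after the partner call resolves:
//   forwardToPartner(payload)
//     .then(txHash => storeTransactionHash(action, txHash))
//     .then(() => done());
```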

What’s next? Looking for more blockchain partners!

Integrations with our first partners are readily available from our developer portal, but we are busy deploying new blockchain nodes and will make them available to our customers soon. We are also looking at partnering with other blockchain protocols and platforms, so if you believe you can help us make CPG and apparel products smarter with your blockchain solution, do contact us; we’d love to hear from you!

Finally, after a successful real-world pilot with the now famous Barry the Bear, we are excited to be onboarding our first large-scale customers. For instance, Almond will allow consumers to scan a unique ADI on cans of Fact, an organic, flavored water launching next week, to unlock cash reward tokens and reveal the product’s story as told through blockchain-based supply chain data.

Authors Joël Vogt (Research Engineer) & Dominique Guinard (CTO & Co-founder)
Published

We’re excited to report on a machine learning project that we’ve recently completed. The goal was to build a tool that lets our sales team engage customers with an interactive experience of EVRYTHNG’s machine learning capabilities. Perhaps more important is what we learned about developing user-centric, machine learning-based products with the EVRYTHNG platform.

The first blog post in our machine learning series was about how we built a machine learning component on top of the EVRYTHNG platform to help our customers detect gray market issues with 94% accuracy. This blog post is about the human factor of machine learning. After all, the purpose of any machine learning project is to work with people, understand their requirements and needs, and then build a model that makes useful predictions.

Let’s look at a use case to better understand the context and how the tool we built fits into the EVRYTHNG proposition.

Use Case: An online insurance company for home appliances

Imagine the following use case: you had the brilliant idea to start an online insurance company for home appliances.

The premium is based on the types of appliances included in the policy. Sounds simple enough, but consider some of the complications. How would a customer get a quote — would you be required to dispatch agents to each customer’s home? That defeats the purpose of online insurance. You could ask customers to self-declare their appliances online, but the drawback here is that going through a massive catalogue of appliances will scare away all but the bravest customers.

Rather, what if each appliance could identify itself, based on observable features that are emitted when they run?


The images above show multiple patterns over time, attributed to two different types of coffee machines.

Observe an appliance long enough and a unique pattern emerges. This is what machine learning is all about: recognizing patterns to transform data into logic — in this case a model that maps vibrations to appliances.

To accomplish this, we can use cheap sensors to take measurements of the appliances’ vibrations, then send the measurements to a service in the cloud. This service will classify the appliances based on the measurements received and return a personalized insurance policy. Now we have the business model and technology solution for our online insurance company!


Figure 1: A model, supported by the EVRYTHNG Platform, that shows how vibrations are mapped to appliances.

Let’s put it all together: A customer, after registering for an insurance policy, orders a WiFi-enabled multi-sensor device for each of their appliances. Once a multi-sensor has been attached to each appliance, it begins to send measurements to the EVRYTHNG platform. Next, a machine learning component predicts the type of appliance based on the measurements received from the Pycom IoT device. This component then creates an action containing the type of appliance. In turn, the action triggers one or more Reactor™ rules on the EVRYTHNG Platform, such as a rule to generate a personalized insurance policy and another rule to send a text message noting the new appliance that was detected.
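To illustrate the last step of that workflow, here is a hypothetical sketch of a Reactor rule reacting to a classification action. The Action type, field names and helper functions are invented for the example and stubbed out; only the onActionCreated/done/logger conventions follow the Reactor.

```javascript
// Hypothetical Reactor rule reacting to a classification Action created by the
// machine learning component. Action type, field names and helpers are illustrative.

// Stub: in a real deployment this could call a policy service.
function generatePolicyQuote(thngId, appliance) {
  logger.info(`Would add ${appliance} (thng ${thngId}) to the policy`);
  return Promise.resolve();
}

// Stub: in a real deployment this could call an SMS gateway.
function sendTextMessage(message) {
  logger.info(`Would send SMS: ${message}`);
  return Promise.resolve();
}

function onActionCreated(event) {
  const action = event.action;
  if (action.type !== '_applianceDetected') {
    return done();
  }
  const appliance = action.customFields.applianceType; // e.g. 'coffee_machine'
  generatePolicyQuote(action.thng, appliance)
    .then(() => sendTextMessage(`New appliance detected: ${appliance}`))
    .then(() => done())
    .catch(err => {
      logger.error(err.message);
      done();
    });
}
```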


We built the demo tool using an IoT sensor running our agent and placed on a coffee machine.

Lessons learned

It’s easy to forget that machine learning, and artificial intelligence in general, was beyond the reach of most people until very recently. We’ve all been users, directly or indirectly, of specialized AI applications such as spam filters or autopilots. But we’re now witnessing the dawn of the democratization of AI.

Managing expectations

It’s imperative that end users understand what machine learning is and the types of problems it can solve. We found that users tend to get super excited at first and then a bit disappointed when reality doesn’t meet expectations. A number of machine learning algorithms are essentially glorified pattern matchers. Artificial intelligence today can generalize locally and do specific tasks very well, but it can’t think outside its very narrow box.

It’s also important to use the right tools and methods for a given task. Take neural networks for example. Neural networks are very well suited for supervised learning problems and dealing with huge datasets. But they are not necessarily the best tool for typical IoT examples, such as anomaly detection, which would require unsupervised learning techniques. Since we were familiar with neural networks and Keras, a popular deep learning framework, we decided not to do anomaly detection for the first release.

Show me the data

Traditional software engineering is about writing algorithms that precisely state what a machine does. With machine learning, it’s the data, not the program, that does the heavy lifting.

If you want to detect a new type of appliance all you need is to measure its vibration patterns, for example, and update the model. You won’t need to write an additional line of code. As machine learning pioneer Pedro Domingos puts it: “[machine] learners turn data into algorithms”.  But there is a catch — machine learning algorithms require a lot of data to elicit patterns. And in the case of supervised learning, someone will have to painstakingly label the training data.

Had we wanted to recognize different spin cycles of a washing machine, we would have had to run every cycle several times, then manually label the sensor data with the name of the cycle it measured. This had profound implications for what we could realistically deliver, which is why we settled for classifying appliances by type: we could simply place a few IoT sensors on the appliances in our London office and wait for the data to be collected by coffee-drinking colleagues!

Consider the why, where, when and how of end users

Unlike sensor networks that are out of reach of people (inside jet engines, on wind turbines, etc.), we collected data in an open environment. Inevitably we had to deal with “noise” that included doors being slammed, curious colleagues playing with the IoT devices, vibrations picked up from nearby appliances, and more. We had to come up with strategies to deal with this noise, otherwise the model would have been less accurate.

One solution was to collect a lot of data to drown out the noise. To that end, we drank up to ten cups of coffee a day. We also ignored events below a certain threshold: since we were mainly dealing with coffee machines, we had a rough idea of how long it takes to make a cup of coffee.
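As a rough illustration of this thresholding idea, the sketch below filters out events that are too short or too faint to be a brewing cycle. The threshold values and event shape are made up for the example, not the ones we actually used.

```javascript
// Illustrative noise filter: drop vibration events that are too short or too weak
// to plausibly be a brewing cycle. Thresholds and event shape are made-up examples.
const MIN_DURATION_SECONDS = 20;   // a cup of coffee takes at least this long
const MIN_RMS_AMPLITUDE = 0.05;    // ignore faint vibrations (doors, nearby appliances)

function isLikelyApplianceEvent(event) {
  return event.durationSeconds >= MIN_DURATION_SECONDS &&
         event.rmsAmplitude >= MIN_RMS_AMPLITUDE;
}

const events = [
  { durationSeconds: 3, rmsAmplitude: 0.2 },   // door slam: too short
  { durationSeconds: 45, rmsAmplitude: 0.01 }, // nearby appliance: too faint
  { durationSeconds: 40, rmsAmplitude: 0.12 }  // plausible coffee cycle
];

console.log(events.filter(isLikelyApplianceEvent)); // keeps only the last event
```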

Getting good training data is really important, but you should also ask yourself how the tool will be used (for us, it’s a way for our sales team to demo the capabilities of machine learning with the EVRYTHNG platform). Understand who the end users are, where they will use the tool, and what they will try to accomplish with it.

The first time our tool was shown to customers, the demo almost failed. We hadn’t anticipated that the sensor device would be handed around and examined before being placed on the coffee machine. The device picked up vibrations from being handed around and sent a message reporting that a washing machine was running (probably because the washing machine cycle covers so many different sub-patterns); we hadn’t trained the model to recognize human activities! Luckily the second event contained the measurements from the coffee machine in action, which the model correctly predicted.

Machine learning should be the least of your concerns

One reason machine learning is such a hot topic is that a number of frameworks are freely available and allow software engineers without a formal background in artificial intelligence to build powerful machine learning solutions.

Take another look at the workflow in Figure 1 of this blog. Only one step is about artificial intelligence. We built our model using Keras, which means we could have easily deployed the same model on AWS or Microsoft Azure. Even if we wanted to move to another deep learning framework, the APIs are similar enough that we could probably create a comparable model relatively easily. From a technical perspective, the tricky part is getting the data to the model and managing the entire experience end to end in one seamless workflow.

For the Internet of Things, the EVRYTHNG platform is clearly worth considering. Our model provides a high-level taxonomy for Internet of Things resources without being too dogmatic. We found this a healthy trade-off that facilitated data collection and transformation. The EVRYTHNG platform also uses open Web protocols, which means that most devices can talk to it “out of the box”.

You can read more about our API on our developer portal. Or if you seek a deeper understanding of the Internet of Things, the book “Building the Web of Things”, which was written by two of our cofounders, is really insightful.

What’s next?

We wanted to share our journey towards a more intelligent, user-centric Internet of Things. Hopefully this blog will help you avoid some of the challenges we encountered and give you a better understanding of the types of IoT problems machine learning can address. If you’d like to use EVRYTHNG to build your next machine learning and IoT project, check our detailed tutorial or contact us. We look forward to hearing about your projects!

Author Dominique Guinard (CTO & Co-founder)
Published

Continuing our quest to provide our customers with a broad range of integrations they can use to make their products smarter, we are happy to announce an integration with a new partner: OriginTrail. This integration allows EVRYTHNG’s customers to push selected transactions from and about products to OriginTrail’s cutting-edge blockchain solution, leveraging core features of decentralization such as tamper-proof transactions and consensus-based verification.

OriginTrail is building a specialized protocol for supply chains based on blockchain technology. Their goal is to augment the supply chain with some of the very things blockchains are good at. For instance, OriginTrail allows supply chain transactions to be made immutable. It also allows several actors in a supply chain to validate data without necessarily having to trust each other. These are really exciting features, and the integration is a natural continuation of EVRYTHNG leveraging the interesting aspects of blockchains without reinventing the wheel.

How does it work?

To make this integration a reality, both teams sat together and collaborated on a connector. The connector was built as a Reactor script, allowing our (or OriginTrail’s) customers to build a scalable bridge between the two platforms. This bridge converts EVRYTHNG supply chain transactions (called Actions in our world) tagged with a createOriginTrail=true custom field into the GS1 EPCIS standard that both platforms use to communicate. The transaction is then automatically pushed to a decentralized OriginTrail node via its API and made available in the OriginTrail platform, built on top of the Ethereum public blockchain.
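To give a feel for the connector pattern (the actual connector is available from our developer portal), here is a simplified, hypothetical sketch. The node URL, the EPCIS-like payload shape and the helper names are illustrative assumptions; only the Reactor hook conventions are assumed from the platform.

```javascript
// Hypothetical sketch of the connector pattern: forward tagged Actions to an
// OriginTrail node as simplified EPCIS-like events. Names and URL are placeholders.
const https = require('https');

const ORIGINTRAIL_NODE_URL = 'https://origintrail-node.example.com/api/import'; // placeholder

// Minimal, simplified EPCIS-like payload built from an EVRYTHNG Action.
function toEpcisEvent(action) {
  return {
    eventTime: action.timestamp,
    epcList: [action.thng],
    bizStep: action.type,
    readPoint: action.location
  };
}

function postToNode(payload) {
  return new Promise((resolve, reject) => {
    const req = https.request(
      ORIGINTRAIL_NODE_URL,
      { method: 'POST', headers: { 'Content-Type': 'application/json' } },
      res => res.on('data', () => {}).on('end', resolve)
    );
    req.on('error', reject);
    req.write(JSON.stringify(payload));
    req.end();
  });
}

function onActionCreated(event) {
  const action = event.action;
  // Only forward Actions explicitly tagged for OriginTrail.
  if (!action.customFields || !action.customFields.createOriginTrail) {
    return done();
  }
  postToNode(toEpcisEvent(action))
    .then(() => done())
    .catch(err => {
      logger.error(err.message);
      done();
    });
}
```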


An OriginTrail verified transaction as seen in the EVRYTHNG Dashboard.

Making Barry even smarter!


Barry the Bear at the EU Commission in Brussels!

We tested the integration on Barry, the incredibly smart bear we created with GS1 for the GS1 Global Forum. Some of the key steps of Barry’s supply chain journey were sent through the OriginTrail-EVRYTHNG bridge to make them tamper-proof, thanks to the immutability of blockchains. This essentially builds an authenticity certificate for Barry that can be verified on OriginTrail.


Barry’s Web app showing OriginTrail-EVRYTHNG certified transactions.


Verification tool to check the transaction via the OriginTrail protocol

These features were added to the Web app served via the EVRYTHNG platform when scanning Barry’s tag, closing the loop by making the verification features available to consumers.

Our friends at OriginTrail then took the new version of Barry to the European Commission, where they were invited to present solutions that can help fight counterfeit-related crime in the digital age, in the frame of a memorandum of understanding pledging to fight the sale of counterfeit goods online, signed by companies like Alibaba, Nike, Adidas, and Chanel. We heard Barry made quite an impression on the EU Commission 🙂.

Next steps

After this successful proof of concept, both teams are now looking at packaging the connector and making it available to any of our respective customers who want to connect to OriginTrail’s network and leverage some of the unique features decentralization has to offer.

This step will also include EVRYTHNG operating its own OriginTrail nodes to create a scalable and secure bridge between the two platforms. Stay tuned for updates, and meanwhile check out our other blockchain integrations.

Author Iker Larizgoitia Abad (Program Manager & Research Engineer)
Published

EVRYTHNG and recycl3R collaborate with Carrefour and The Circular Lab to pilot a new initiative leveraging Smart Products to improve recycling habits.

Logroño, 13th April. A collaboration between EVRYTHNG, recycl3R, Carrefour, and The Circular Lab has produced a mobile application, Recicla Ya, that allows Carrefour customers to scan products and receive information on how to properly recycle them. Customers are incentivized by a reward system to use the app and recycle their products.

The retailer has successfully launched the first trial of the project at the Circular Lab, the innovation center of Ecoembes, located in Logroño, Spain. During the pilot, 50 Carrefour customers had the opportunity to try the Recicla Ya app and give feedback.


Consumers in the supermarket area selecting products and going through the checkout process

In the pilot, consumers experienced shopping with a twist. They took a simulated visit to a Carrefour supermarket, selected some of their regular products, and went through the checkout process as usual.

The barcode on the receipt has an enhanced use in this case. By scanning it with the Recicla Ya app, the purchased products appear and are catalogued within the app. Consumers can then use the app to keep track of their products and get information on how to sort their waste, depending on the parts of each product and the corresponding recycling scheme in the area (plastic/metal, cardboard, glass or regular waste).


Consumers scanning their receipts and checking their products in the app

The app also geolocates the closest waste and recycling containers and displays information about the collection schedule. Additionally, street bins become “Smart Bins” through Smart Tags deployed on the bins with QR codes and NFC technology (the latter provided by our partner Thinfilm). Consumers can dispose of their garbage and use the Recicla Ya app to check and acknowledge which container they used.


Consumers interact with the SmartTags on the bins

All of the actions carried out by consumers in the application were rewarded virtually through a points system. As part of the pilot, we surveyed the consumers to test their reactions to the idea of being rewarded in different ways for improving their recycling habits. We’ll be sharing the results and technical aspects in an upcoming blog, so stay tuned!
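As a rough idea of how such a rule could look on the platform, here is a hypothetical sketch of awarding points when a bin interaction is confirmed. The Action types, field names and point values are invented for the example; only the Reactor hook conventions are assumed.

```javascript
// Illustrative sketch of the reward logic: when a consumer confirms a disposal via a
// Smart Bin, award points by recording a reward Action. All names/values are made up.
const POINTS_PER_DISPOSAL = 10;

function onActionCreated(event) {
  const action = event.action;
  if (action.type !== '_binScanConfirmed') {
    return done();
  }
  // Record the reward as another Action so the app can sum each user's points.
  app.action('_pointsAwarded').create({
    thng: action.thng,
    customFields: { user: action.user, points: POINTS_PER_DISPOSAL }
  })
    .then(() => done())
    .catch(err => {
      logger.error(err.message);
      done();
    });
}
```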

Next steps include deploying the system at scale in two cities in Spain — Logroño and Palma de Mallorca — in the following months.

EVRYTHNG’s Smart Products Platform provides the capabilities to execute and deploy such a system at scale, giving a digital identity to all of Carrefour’s products and to all the street bins deployed in the city. It also provides seamless integration with external data systems, such as the recycling information service developed by recycl3R and the NFC tagging technology provided by Thinfilm, contributing to the digital transformation of fast-moving consumer goods, in this specific case with the aim of improving consumers’ recycling habits.


This initiative was carried out within the TagItSmart project, funded by the European Union’s Horizon 2020 research and innovation programme under grant agreement No 688061. http://tagitsmart.eu/

Author Dominique Guinard (CTO & Co-Founder)
Published

There is no shortage of hype around how the blockchain can help the IoT and the supply chain. Yet, there are very few real-world and practical examples of how this can work.

Thanks to our work for the Blockchain Research Institute over the past year we had the chance to deep-dive into the subject with our labs team and look at the opportunities and challenges. This allows us to take a pragmatic and reasonable approach to introducing Distributed Ledger Technologies (DLT) into our products. Our 2018 agenda is packed with innovations in this space, and we are happy to report on the first integration here: building on the PoC we launched last year, we now make it possible for our customers to use the benefits of a public blockchain (e.g., Ethereum or Bitcoin) in conjunction with EVRYTHNG to verify supply chain transactions!

It’s all about trust!

The specific problem this solves is simple: it helps fix trust issues! As an example, consider the state of affairs in consumers’ trust around product provenance. It should be no big surprise that a number of data manipulation scandals around provenance have made consumers increasingly critical. Do you really trust your product is organic? How can you be sure its country of origin was not changed?

This is where public blockchains can help, as they have one pretty important feature: they are immutable and publicly auditable. This means that the data you put on a public blockchain is available to everyone and cannot be modified by anyone, you included! To be 100% accurate, it could be modified, but this would pretty quickly be detected by the decentralized network and discarded.

Similarly, this system can help fix the lack of trust between the partners within a supply chain, leading to better transparency.

Don’t be naive: beware of the blockchain.

Great, so why don’t we put all supply chain transactions on a public blockchain? Well, here’s the bad news: blockchains alone won’t save the supply chain. On the contrary, naively implemented blockchains are arguably the worst performing, most expensive, and least sustainable supply chain databases ever.

Implementing a supply chain information system with a blockchain where every transaction in the supply chain leads to a transaction on a blockchain is simply not going to work beyond a prototype: it would not be able to support the scale of any significant brand.

Without going into too much detail (our full research project with the BRI on the subject will be made public on May 22, 2018), it is impractical to store large numbers of transactions with large amounts of associated data on a public blockchain, primarily because the throughput of public blockchains is small: in the order of 50-1,000 transactions per minute. Compare this to the 5 billion GS1 barcode scans per day (roughly 3.5 million scans per minute), or the spikes of 1.5 million transactions per minute EVRYTHNG currently manages, and you quickly understand the issue. Beyond scalability, important concerns are the energy consumption of blockchain transactions as well as their cost. All these problems are being researched and worked on (see for example discussions on validation algorithms). However, because the benefits and security of blockchains are actually linked to these slow transaction processing times, solutions are not trivial.

As a consequence, blockchain based solutions for the supply chain (and generally the IoT) today only really make sense as extensions of centralized supply chain information systems.

In short: store most transactions and most of the data in a high-availability centralized platform, but replicate some transactions on a public blockchain when needed and valuable.


Figure 1: This shows the building blocks of an Active Digital Identity™ associated with a unique product item on the EVRYTHNG platform, extending the standard Web Thing Model to support blockchain constructs for certain transactions.

How does it work?

Our solution is based on a protocol called Chainpoint. Chainpoint provides two things. First, it provides an optimized way of linking transactions to a public blockchain by allowing us to batch them: instead of sending one transaction for each supply chain operation you wish to verify, it lets us batch thousands of them into a single transaction on the blockchain.


Figure 2: Chainpoint protocol – source: https://chainpoint.org/

Second, it verifies the integrity and existence of data without relying on a trusted third party. In other words, it implements exactly what we need: a system that can leverage a public blockchain to validate EVRYTHNG-stored supply chain transactions.

Our current integration relies on an implementation of the Chainpoint protocol provided by a decentralized platform, Tierion, combined with a new service that can be deployed in the powerful EVRYTHNG Reactor, our customizable rules engine.

Real-world example?


Barry the Smart Bear

The system is being piloted by a number of EVRYTHNG’s customers. It will also be used to power the provenance data of Barry the Bear, a demo we presented at the GS1 Global Forum to showcase our new product support for the web-enablement of GS1 identifiers. Barry the Bear is a teddy bear with an EVRYTHNG Active Digital Identity (ADI), offering a lot of powerful features.


Web application for Barry the Bear, providing provenance data verified by blockchain transactions.

One of these features is verified provenance data. All the steps in the manufacturing and supply chain of the bears have been recorded using EPCIS transactions in the EVRYTHNG platform. Two of these steps were selected to be validated on the Bitcoin or Ethereum blockchain as well. Because these blockchains are public and immutable, the blockchain validation process ensures that the provenance data was not tampered with or altered after being recorded in the EVRYTHNG platform.

Implementing this was as simple as enabling our open source Reactor integration of the Chainpoint protocol.


Figure 3: Reactor script for Chainpoint integration

This extension script fits in 100 lines of code and essentially ensures that each EVRYTHNG Action (i.e., step in the supply chain) that uses a special parameter gets hashed and transmitted to the blockchain. The resulting blockchain transaction IDs (hashes) are then received by the EVRYTHNG platform via the Reactor script, and the corresponding blockchain verification Actions are created. From then on they can be audited on the blockchain, and verifiers can attest that the Action data was not modified.
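To illustrate the pattern (this is not the open-source script itself, just a simplified sketch of the idea), an Action carrying the special parameter could be hashed and recorded as follows. The helper names, field names and Action type are assumptions; the anchoring submission is stubbed out.

```javascript
// Simplified illustration of the pattern: hash an Action's payload, submit it for
// anchoring, then record the result as a blockchain verification Action.
const crypto = require('crypto');

function hashAction(action) {
  // Deterministic SHA-256 over the fields we want to be able to verify later.
  const payload = JSON.stringify({
    id: action.id,
    type: action.type,
    thng: action.thng,
    timestamp: action.timestamp
  });
  return crypto.createHash('sha256').update(payload).digest('hex');
}

// Stub standing in for the submission to the anchoring service.
function submitHashForAnchoring(hash) {
  logger.info(`Would submit hash ${hash} for anchoring`);
  return Promise.resolve({ anchorId: 'example-anchor-id' });
}

function onActionCreated(event) {
  const action = event.action;
  if (!action.customFields || !action.customFields.verifyOnBlockchain) {
    return done(); // only Actions carrying the special parameter get anchored
  }
  const hash = hashAction(action);
  submitHashForAnchoring(hash)
    .then(receipt => app.action('_blockchainVerification').create({
      thng: action.thng,
      customFields: { sourceAction: action.id, hash: hash, anchorId: receipt.anchorId }
    }))
    .then(() => done())
    .catch(err => {
      logger.error(err.message);
      done();
    });
}
```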


Figure 4: The EVRYTHNG dashboard showing blockchain verification Action for several steps in the supply chain

Note for the tech-savvy readers: the transactions are not transmitted directly to the blockchain but are first aggregated into a Merkle tree, whose root is then sent to the Bitcoin or Ethereum blockchain. This is done at the top of the hour via the Tierion service, using the Chainpoint protocol.
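For the curious, here is a toy illustration of how such batching works, collapsing a list of hashes into a single Merkle root. This is a generic sketch of the idea, not the Chainpoint implementation.

```javascript
// Toy Merkle root construction over a batch of Action hashes. Only the single
// root hash needs to be written to the public blockchain; each individual hash
// can later be proven to belong to the batch via its Merkle proof.
const crypto = require('crypto');

const sha256 = data => crypto.createHash('sha256').update(data).digest('hex');

function merkleRoot(leaves) {
  if (leaves.length === 0) throw new Error('empty batch');
  let level = leaves;
  while (level.length > 1) {
    const next = [];
    for (let i = 0; i < level.length; i += 2) {
      const left = level[i];
      const right = level[i + 1] || left; // duplicate the last node on odd levels
      next.push(sha256(left + right));
    }
    level = next;
  }
  return level[0];
}

// Example: a batch of three (fake) Action hashes collapses into one root.
const hashes = ['a1', 'b2', 'c3'].map(sha256);
console.log(merkleRoot(hashes));
```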


Figure 5: Blockchain transaction (root of the Merkle tree) for one of the teddy bears. See it on the blockchain for yourself: https://blockexplorer.com/tx/f256da7cd4e1f85941593aaffb8e9163edd8ec3de478a07e3cac2dfb945d81ef

Want to get started with blockchain validations?

We are very excited to offer our customers the ability to integrate a blockchain validation process for supply chain transactions in a small number of steps. If you are interested in trying this for your use cases, check our detailed technical tutorial or contact us for more information. We are eager to hear more about your blockchain supported use cases!
