Mapping the Last Mile

Maps have long been a killer app for smartphones, and are a core strategic asset for today’s Web companies. The high-profile coverage of Apple’s own goal with iOS 6 Maps, and more recently the controversy around Google Maps on Windows Phone devices, were timely reminders of that.

A few years ago, digital maps were “GPS”-style devices geared towards guiding you down motorways, not helping you find your way to the nearest cashpoint machine, petrol station or wi-fi cafe.

What really drove the change was an increase not just in overall image quality, but in data fidelity. Suddenly we moved beyond roads and streets: buildings and points of interest came into play, making maps useful as we went about our daily lives.

Over the past few years, we’ve seen location develop into something of a search API in itself: not just the ubiquitous “location-based services” check-in apps, but apps that help you find a partner or a property nearby.

You can envisage a future where maps become a user interface for just about anything we want to do in the physical world. There are already some interesting forays into this area, from OpenTable to Ban.jo. Yet whether through a lack of contextual understanding or imprecise natural language processing, we’re not there yet.

The potential for maps to be a powerful API for the real world may be there, but there’s not yet enough granularity to model the bits of data that really matter to people, so we have to do the work of finding that missing information ourselves.

If you’re looking for a pub, say, you’re often looking for one with specific characteristics. It might need to be showing a specific sports game at a certain time, not be too crowded, preferably stock real ale, and serve drinks at reasonable prices. None of this information is readily accessible through maps today, so unless we happen to stumble upon a particularly helpful review, we’re out of luck.
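To make that gap concrete, here is a minimal sketch of the kind of structured, queryable venue data a map would need in order to answer that question. The Pub type, its field names, the sample values and the findSuitablePubs helper are all hypothetical; nothing like them is exposed by current mapping APIs.

```typescript
// Hypothetical, illustrative data model – not any real mapping API.
interface Pub {
  name: string;
  lat: number;
  lng: number;
  showingFixtures: string[];          // e.g. "Arsenal v Spurs, 20:00"
  crowdLevel: "quiet" | "busy" | "packed";
  realAleOnTap: boolean;
  averagePintPrice: number;           // in GBP
}

// Pubs showing a given fixture, not too crowded, stocking real ale,
// and with drinks at a reasonable price.
function findSuitablePubs(pubs: Pub[], fixture: string, maxPrice = 4.5): Pub[] {
  return pubs.filter(
    (p) =>
      p.showingFixtures.includes(fixture) &&
      p.crowdLevel !== "packed" &&
      p.realAleOnTap &&
      p.averagePintPrice <= maxPrice
  );
}
```

The point is not the code itself, but that every field in it is live, local knowledge that today you can only gather by going there or trawling reviews.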

And while streets, buildings and points of interest are readily available, we can’t identify lamp posts, park benches or trees, other than perhaps visually in a grainy satellite image, let alone a parking space we can use right now.

That’s the real problem with the current generation of mapping solutions. We can’t get the kind of precise, genuinely useful information we need from maps, so we end up going that last mile ourselves.

At EVRYTHNG, we want to help move mapping along that last mile. We’d like to see a world where you can identify the individual chairs, tables, walls, pictures, books, bicycles, cars and so forth as active entities that can be followed digitally and dynamically.

In other words, we’d love developers to use the EVRYTHNG Engine to mash up those kinds of detailed, live data streams with current mapping APIs and create a new kind of fine-grained, real-time, real-world map.
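As a rough illustration of what that mash-up could look like, the sketch below polls a hypothetical live-data endpoint and overlays the results on a base map, using Leaflet purely as an example mapping library. The endpoint URL, the LiveThng payload shape and its field names are assumptions for illustration, not the EVRYTHNG Engine’s actual API.

```typescript
import * as L from "leaflet";

// Assumed shape of a live object with a location and a current state –
// an illustrative payload, not a real Engine response.
interface LiveThng {
  id: string;
  name: string;        // e.g. "Parking bay 42"
  lat: number;
  lng: number;
  status: string;      // e.g. "free", "occupied"
  updatedAt: string;   // ISO timestamp of the last reading
}

// Standard Leaflet setup: a base map of streets and buildings.
const map = L.map("map").setView([51.5074, -0.1278], 16);
L.tileLayer("https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png", {
  attribution: "&copy; OpenStreetMap contributors",
}).addTo(map);

const overlay = L.layerGroup().addTo(map);

// Poll the (hypothetical) live feed and redraw the overlay, so the map
// shows what is there right now rather than when the imagery was captured.
async function refreshOverlay(): Promise<void> {
  const res = await fetch("https://example.com/api/live-thngs?near=51.5074,-0.1278");
  const thngs = (await res.json()) as LiveThng[];

  overlay.clearLayers();
  for (const t of thngs) {
    L.marker([t.lat, t.lng])
      .bindPopup(`${t.name}: ${t.status} (as of ${t.updatedAt})`)
      .addTo(overlay);
  }
}

refreshOverlay();
setInterval(refreshOverlay, 30_000); // refresh every 30 seconds
```

The same pattern would work with any mapping API that accepts point overlays; the only new ingredient is the live, object-level data feed behind it.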

A map is all the more useful, surely, if it shows what is there right now rather than what was there when the picture was taken – and that will give us all a more intelligent and nuanced relationship with the world around us.

———

Street Light icon by SimpleScott, The Noun Project; Tree icon by Valentina Piccione, The Noun Project; Bench icon by Giorgia Guarino, The Noun Project