So, what is context really? In short, context can be seen as everything that describes a situation you find yourself in.
That’s still pretty vague, so let’s delve a little deeper through an example that describes my current situation:
- I’m a 35-year-old data geek with a beard and a Basset Hound.
- I am located at the Argus Labs HQ in Antwerp (coordinates: 4.394252, 51.216356).
- It’s about 11:14 in the morning.
- I am – happily – writing a blog post on a WordPress site.
- I just poured myself a cup of coffee with a little bit of sugar, no milk. (I got only 4 hours of sleep last night. Maps MOOC and all that.)
- I am listening to Endless Night by Graveyard.
- Before I started, I sent out two tweets and a Facebook status update.
- I rode in by bike this morning, which took me 19 minutes and 31 seconds (a new record!) at an average speed of 22.2 km/h, burning 126 calories in the process.
- I’m wearing a t-shirt I got off shirt.woot.
- Five of my colleagues are present: Ann (sneezing), Ben (on Reddit), Tim (typing), Svend (thinking) and Roel (who has his birthday today!).
- It is sunny, about 20 °C with clear skies, a wind speed of 6 km/h and a 0% chance of precipitation.
- I need to take the dog to the vet tonight.
All of these things describe my current context: who I am, what I am doing, what is happening around me… As you can see, the boundaries of what constitutes my context can be fuzzy. I may have ridden my bike only this morning, but the effects of that exercise linger for a while, impacting my attention levels. So does Ann’s sneezing.
A lot of the data that makes up a person’s context can be accessed programmatically through APIs made available by companies such as Last.fm (for music), Foursquare (for location-based venues), Facebook (for social media updates), Runkeeper (for fitness data) and many more. Additionally, the proliferation of sensor-based devices, most notably smartphones, enables us to capture contextual information more broadly and accurately than ever.
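To make this concrete, here is a minimal sketch of what pulling events from different services into one common shape might look like. The field names, payloads and `normalize_*` helpers are entirely made up for illustration; they are not the real Last.fm or Foursquare API schemas.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Event:
    """One generic, timestamped piece of contextual data."""
    source: str              # which service it came from, e.g. "lastfm"
    kind: str                # what happened, e.g. "music.play"
    timestamp: datetime
    payload: dict = field(default_factory=dict)

def normalize_lastfm(track: dict) -> Event:
    """Turn a (hypothetical) Last.fm scrobble into a generic Event."""
    return Event(
        source="lastfm",
        kind="music.play",
        timestamp=datetime.fromtimestamp(track["played_at"]),
        payload={"artist": track["artist"], "title": track["title"]},
    )

def normalize_foursquare(checkin: dict) -> Event:
    """Turn a (hypothetical) Foursquare check-in into a generic Event."""
    return Event(
        source="foursquare",
        kind="checkin",
        timestamp=datetime.fromtimestamp(checkin["created_at"]),
        payload={"venue": checkin["venue"]},
    )
```

The point is the shape, not the services: once every source is normalized into the same event type, everything downstream (aggregation, timelines, querying) only has to deal with one structure.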
I’ve got Context. Now what’s the use?
Well, there’s tons, really…
Say you’re an app developer (or work for a company that makes apps), you can make your app context-aware by using a user’s context to drive the functionality of your app.
Tapingo, for instance, is an app that brings context-awareness to food delivery. It learns where you usually get your morning coffee so that, once it’s confident enough, it can offer to order one on your behalf at your favourite place as soon as you leave your house. Once you get to the coffee shop, your venti cappuccino is ready to go. First, it monitored your context to find out when (in the morning) and where (the Starbucks a block away from work) you get your coffee. Then, it used your context to prompt you to order your coffee at the right time (in the morning, when you leave your home).
Google Now will show cards, little singular data apps, automatically based on your context. Think a weather card that shows current and expected weather conditions based on your location and the time of day. Or a commute card that shows your expected time to get home from work as soon as you leave the office.
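The core of both examples is a simple mapping from a context snapshot to actions. Here is a toy sketch in that spirit; the rules, thresholds and context fields are invented for illustration and have nothing to do with how Google Now or Tapingo actually work.

```python
def cards_for(context: dict) -> list[str]:
    """Pick which 'cards' to show for a given context snapshot."""
    cards = []
    # Weather card: relevant whenever we know where the user is.
    if "location" in context:
        cards.append("weather")
    # Commute card: the user just left a known place in the evening.
    if context.get("just_left") == "work" and context.get("hour", 0) >= 17:
        cards.append("commute-home")
    # Coffee card: a morning departure from home, Tapingo-style.
    if context.get("just_left") == "home" and context.get("hour", 24) < 11:
        cards.append("order-coffee")
    return cards
```

A real system would of course learn these rules from observed behaviour rather than hard-code them, but the input/output contract is the same: context in, relevant functionality out.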
As you can see, context-aware apps and computing have the potential to open up a whole new world of personalized user experiences.
Cool! Can you help me get started?
Sure thing, that’s why we built our contextualization platform! (That, and I’d like a context-aware dog-walking app that notifies me where the heavy metal loving Basset Hound owners meet-up is.)
On our platform, you can link up a bunch of web services: Foursquare, Twitter, Instagram, among others. We import the data they produce as events into our system. We also have an Android app under development that sends additional events, such as fine-grained location data and activities.
All of these events go through a contextualization step where we analyze, process and aggregate them into a timeline of context-items: small, interconnected blocks of contextual information. This is followed by an enrichment step where we augment context-items with additional information, such as weather data for a location or gig data for music bands, to create a rich knowledge graph of a person’s context.
This context timeline can then be queried to find out what a person’s current and previous context was at any given time. (We’re wondering what you will be querying it for! Tell us here.)
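To sketch what "query the timeline at a given time" means, here is a minimal, illustrative version of the idea: a sorted list of timestamped context-items, with a binary-search lookup for whichever item was current at a given moment. This is just one way to model it, not our platform’s actual API.

```python
from bisect import bisect_right
from datetime import datetime

class ContextTimeline:
    """A toy timeline of context-items, queryable by timestamp."""

    def __init__(self):
        self._times = []   # sorted timestamps
        self._items = []   # context-item active from that timestamp on

    def add(self, when: datetime, item: dict):
        """Append a context-item; assumes chronological insertion."""
        self._times.append(when)
        self._items.append(item)

    def at(self, when: datetime):
        """Return the context-item that was current at `when`, or None."""
        i = bisect_right(self._times, when)
        return self._items[i - 1] if i > 0 else None
```

Asking "where was I at 11:14?" then becomes a single `at()` call, and richer queries (ranges, filters on item type) are straightforward extensions of the same structure.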
Great! So now I know everything, right?
Well… You now know enough to get started, but it doesn’t end here!
We talked mainly about people and their context, but other things can have context too. Cars, buildings, bus lines, crowds, etc. can all be relevant subjects of context timelines. Additionally, contextualization and enrichment are just the first two steps on the way to profiling, where we detect patterns in context and build mood, habit and environment descriptions. More on all this – and more IQ points – in a future blog post!