We'll All Have a Personal Army of Specialized Smart Agents Soon

November 26, 2013 · Mobile, Opinion

“By 2017 your smartphone will be smarter than you,” Gartner titled its press release on the future of smart devices. I understand the need for a snappy headline, but there should be boundaries to copywriting freedom. For starters, your smartphone, on its own, isn’t that smart. It is merely an impressive heap of hardware – sensors, camera, mobile data and wifi chips, and so on. There is a little reasoning ability embedded in its apps and operating system, but most of the algorithmic machine thinking happens remotely, in the cloud. Disconnect it, and your phone becomes outright dumb.

Add to that the fact that the human brain is incredibly complex. It instantaneously stores, processes, predicts and adjusts information across a wide range of subjects – from how to park, to anticipating what someone is about to say. Algorithms, on the other hand, are designed for a specific purpose. You might develop an algorithm that can tell you exactly what to wear to work on a Tuesday when it is sunny and warm, but that same algorithm won’t know what to answer when your colleague texts you that he’ll be late. We can offload certain reasoning processes to machines, but even when algorithms are specifically written to handle complex situations, they will only function within that one, well-defined complex situation. As an example, software that can win at Starcraft will still get pwned when playing Age of Empires. So if our mobiles can’t out-smart us by 2017, then what can they mean for us?

Its awkward headline aside, Gartner’s write-up of the Gartner Symposium/ITxpo 2013 in Barcelona did contain a great summary of the near future of mobile devices:

By 2017 mobile phones will be smarter than people not because of an intrinsic intelligence, but because the cloud and the data stored in the cloud will provide them with the computational ability to make sense of the information they have so they appear smart.

So let’s stop talking about how smartphones will out-smart us, and let’s start talking about how technology can help us live more smartly:

Where has the intelligence gone?

A smartphone is a delivery channel. Not a brain. We love our smartphones, but we should recognize them for what they are: interconnected sensory devices whose computational power is largely limited by the lack of long-lasting battery and energy resources. Let’s take a step back and look at our own body. We believe that we “see through our eyes”, which is a flawed thesis. Human eyes are merely designed to capture light. The process of “seeing”, including recognizing objects, is purely the result of cognitive processes in our brain – yet we give our eyes the credit for me seeing my hands typing this blog post.

Using the same analogy, it is a flawed assumption to treat your smartphone as the smart agent, sentient device or cognitive system when discussing the intelligence of these devices. Smartphones don’t have the computing power to compete with the processing capacity that a cloud infrastructure offers. As such, the computational algorithms that provide a decent level of intelligence to machines will be cloud-based, while the predominant and popular delivery channel today is your smartphone.

Technology is able to interpret actions.

Technology will not only be able to know what you’re doing, but also make a model-defined guess at why you’re doing it, using contextual data such as your whereabouts, your calendar and other personal data. So no longer “Ann is driving”, but “Ann is driving, because she needs to get from Home to Work”. Machines will also look at less personal information to explain certain actions. The weather might influence your choice of transport. The fact that it is a holiday might influence whether you’re going into work at all, even though it is a Monday. When named-entity recognition and sentiment analysis are added, “Ann sent a Tweet” becomes “Ann sent an agitated Tweet about the E313, which is a highway.” That makes for enriched context.
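To make the idea of enriched context concrete, here is a minimal Python sketch. The keyword lookups are toy stand-ins for real named-entity recognition and sentiment models, and the entity list, word list and function names are all hypothetical:

```python
# Toy illustration of "enriched context": a raw observation plus personal
# and environmental data becomes an interpreted, explained observation.
from dataclasses import dataclass

KNOWN_ENTITIES = {"E313": "a highway"}          # toy entity gazetteer
NEGATIVE_WORDS = {"stuck", "agitated", "jam"}   # toy sentiment lexicon


@dataclass
class RawEvent:
    person: str
    action: str        # e.g. "sent a Tweet", "is driving"
    text: str = ""     # payload, if any


def interpret(event: RawEvent, calendar_hint: str = "") -> str:
    """Turn a bare observation into an enriched, explained one."""
    sentence = f"{event.person} {event.action}"
    tokens = event.text.split()

    # Enrich the *what* with toy sentiment analysis and entity recognition.
    if any(word.lower() in NEGATIVE_WORDS for word in tokens):
        sentence = sentence.replace("sent a", "sent an agitated")
    for token in tokens:
        if token in KNOWN_ENTITIES:
            sentence += f" about the {token}, which is {KNOWN_ENTITIES[token]}"

    # Explain the *why* with contextual data such as the calendar.
    if calendar_hint:
        sentence += f", because she needs to get from {calendar_hint}"
    return sentence + "."


print(interpret(RawEvent("Ann", "is driving"), calendar_hint="Home to Work"))
print(interpret(RawEvent("Ann", "sent a Tweet", "stuck on the E313 again")))
```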

(Image: Mobile, Smart and Context-Aware)

Technology is able to predict a person’s next move or purchase, in real time.

As machine learning algorithms gather more and more historical data on the what and why of our actions, they’ll get better at predicting what we need and want under which circumstances. Ever-growing volumes of data, combined with huge advances in the prediction technologies themselves, mean that data collection, processing and responses will happen in real time. Once an application can predict that I’ll leave for work by car around 6 am, it can preemptively monitor the traffic (or look at predicted traffic jams) and notify me if I need to leave earlier than usual to arrive on time.
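As an illustration of that 6 am scenario, here is a small sketch. The traffic-delay lookup is a hypothetical stub standing in for a real traffic or routing API, and the fixed times stand in for values a model would learn from my history:

```python
# Sketch of a preemptive departure-time check based on predicted traffic.
from datetime import datetime, timedelta


def predicted_traffic_delay(route: str, departure: datetime) -> timedelta:
    """Hypothetical stand-in for a traffic-prediction service."""
    return timedelta(minutes=25)  # pretend the E313 is jammed again


def departure_advice(usual_departure: datetime,
                     usual_duration: timedelta,
                     arrive_by: datetime,
                     route: str = "Home -> Work") -> str:
    delay = predicted_traffic_delay(route, usual_departure)
    expected_arrival = usual_departure + usual_duration + delay
    if expected_arrival <= arrive_by:
        return "Leave at the usual time."
    leave_at = arrive_by - usual_duration - delay
    minutes = delay.seconds // 60
    return f"Leave by {leave_at:%H:%M} to arrive on time ({minutes} min of traffic expected)."


if __name__ == "__main__":
    today = datetime.now().replace(hour=6, minute=0, second=0, microsecond=0)
    print(departure_advice(usual_departure=today,
                           usual_duration=timedelta(minutes=40),
                           arrive_by=today.replace(hour=7)))
```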

Technology is able to assist with our day-to-day lives, from a social, knowledge, entertainment and productivity point of view

It starts out ‘as simple’ as Google Now looking up the address of your next appointment before you even enquire, but next up, your phone will help you out by automatically – automagically? – scheduling your appointments:

The first services that will be performed “automatically” will generally help with menial tasks — and significantly time consuming or time wasting tasks — such as time-bound events (calendaring) such as booking a car for its yearly service, creating a weekly to-do list, sending birthday greetings, or responding to mundane email messages. Gradually, as confidence in the outsourcing of more menial tasks to the smartphone increases, consumers are expected to become accustomed to allowing a greater array of apps and services to take control of other aspects of their lives – this will be the era of cognizant computing. (Gartner, 2013)

Gartner sees four distinct steps towards this ‘smart agent’ future (infographic) where technology acts on our behalf – their four phases of cognizant computing:

Sync Me (store copies of my digital assets and keep them in sync across all end points and contexts) and See Me (know where I am and have been, digitally and physically, and understand my mood to better align services) are currently happening. Know Me (understand what I want and need, and proactively present it to me) is the type of anticipatory computing making its first appearances in Siri and Google Now.

The last of the four phases is not that far away either. Be Me (act on my behalf based on learned or explicit rules) is starting to emerge in combination with the Internet of Things. If you’re almost home, your phone will let the internet know, which will forward this information to your thermostat, which will then automatically turn the heating on. But who’s the smart agent here? Your phone? The thermostat?
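A minimal sketch of such a ‘Be Me’ rule might look like the following. The geofence test, the coordinates and the thermostat call are all hypothetical stand-ins for whatever location feed and IoT platform would actually be involved:

```python
# A "Be Me" rule: when the phone's location crosses a geofence near home,
# the rule forwards the event to a connected thermostat.
from math import hypot

HOME = (51.2194, 4.4025)        # lat/lon of "home" (Antwerp, as an example)
GEOFENCE_DEGREES = 0.02         # crude radius in degrees (roughly 2 km)


def almost_home(position: tuple) -> bool:
    """Very rough geofence test on raw coordinates."""
    return hypot(position[0] - HOME[0], position[1] - HOME[1]) < GEOFENCE_DEGREES


def set_thermostat(target_celsius: float) -> None:
    """Hypothetical stand-in for a call to the thermostat's API."""
    print(f"Thermostat set to {target_celsius} C")


def on_location_update(position: tuple) -> None:
    # The "smart agent" is really this rule plus the cloud plumbing around it,
    # not the phone or the thermostat on their own.
    if almost_home(position):
        set_thermostat(21.0)


on_location_update((51.2300, 4.4100))   # approaching home -> heating turns on
on_location_update((50.8500, 4.3500))   # still in Brussels -> nothing happens
```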

A Personal Army of Specialized Smart Agents

Smartphones will play an important part in the evolution towards technology assisting us in every aspect of our lives, as a delivery and monitoring channel, and as a familiar interface for controlling technology’s automated behaviour when needed. Yet by 2017, your car will negotiate directly with your calendar (and, possibly, your bank account) to schedule that maintenance appointment.

That smart agent we’re all talking about is not hidden somewhere in your smartphone. It’s all the underlying technology – mobile, sensors, data, cloud, analytics, prediction algorithms and so on – and all the devices connected to it, that will give everybody their own personal army of specialized, context-aware smart agents, each with its own expertise. At Jini, we’ll happily assist you with creating exactly the smart agent your service needs.
