Technology Frontiers, Part 2: IoT and Voice Recognition

We previously wrote about some of the technology frontiers we are exploring, and described three that are exciting:

  • 3D Printing
  • Clustered Computing
  • Latest Applications, Operating Systems, and Devices

But much like explorers pushing into untouched territory, we have two more frontiers that are both exciting and showing real promise for the future of technology and how it affects our lives. These are the Internet of Things (IoT) and Voice Recognition, especially when paired with artificial intelligence and machine learning. We describe both of these in this article.

Internet of Things (IoT)

The Internet was originally an environment where we hooked our computers up to an internet provider and started using email or the World Wide Web (WWW). Humans were clicking links, watching videos, and sending emails. We initiated the majority of traffic by our explicit and direct actions, predominantly in a web browser.

But the use of the Internet as a super-highway for information has changed: now devices and things generate most of the traffic zipping through our data lines. In fact, a Cisco study estimated that “Data created by IoT devices will hit 507.5 ZB per year by 2019, up from 134.5 ZB in 2014.” (source: ZDNet article: http://www.zdnet.com/article/cloud-traffic-to-surge-courtesy-of-iot-says-cisco/). In case you are wondering, a “ZB” is a zettabyte, or 1 billion terabytes – and that is a lot!
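To put those figures in perspective, here is a quick back-of-the-envelope calculation using the Cisco estimates quoted above:

```python
# Back-of-the-envelope check of the Cisco IoT traffic estimates cited above.
ZB_IN_TB = 1_000_000_000  # 1 zettabyte = 10^21 bytes = 1 billion terabytes

traffic_2014_zb = 134.5   # estimated IoT data per year in 2014
traffic_2019_zb = 507.5   # projected IoT data per year by 2019

growth = traffic_2019_zb / traffic_2014_zb
print(f"2019 projection: {traffic_2019_zb * ZB_IN_TB:.3e} TB per year")
print(f"Growth over five years: {growth:.1f}x")
```

That works out to roughly a fourfold increase in five years.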

So what is the Internet of Things (hereafter abbreviated “IoT”)?  It is the collection of devices that are connected to the internet and generating, sending, or receiving data.  It is sometimes described as Machine-to-Machine (M2M) communication, with no humans involved.  Some examples:

  • Your cell phone’s GPS coordinates while you are using maps
  • A Nest thermostat in your home that you can connect to and raise the temperature, and which “learns” your life’s patterns to automatically start managing the system based on your history.
  • A location-based tracking beacon that shows you where your keys were left behind.
  • Public trash cans that use real-time data collection and alerts to let municipal services know when a bin needs to be emptied.
  • Wireless sensors embedded within concrete foundations to ensure the integrity of a structure; these sensors provide load and event monitoring both during and after construction.
  • Activity sensors placed on an elderly loved one that monitor daily routines and give peace of mind for their safety by alerting you to any serious disruptions detected in their normal schedule.
  • And so many more…

In every case it is some device that is communicating data, not a person directly doing so.
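To make the M2M idea concrete, here is a minimal sketch of the kind of message a connected sensor might generate entirely on its own. The device ID, field names, and reporting format are hypothetical, invented purely for illustration:

```python
import json
import time

def build_reading(device_id: str, temperature_c: float) -> str:
    """Package a sensor reading as the JSON payload an IoT device
    would report to its collection service -- no human involved."""
    payload = {
        "device_id": device_id,        # hypothetical device identifier
        "temperature_c": temperature_c,
        "timestamp": int(time.time()),  # when the reading was taken
    }
    return json.dumps(payload)

# A thermostat-like device reporting on a schedule; a real device would
# send this payload to a collection endpoint over HTTP or MQTT.
message = build_reading("thermostat-42", 20.5)
print(message)
```

Multiply a small message like this by billions of devices reporting every few seconds, and the zettabyte-scale traffic estimates start to make sense.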

Given both the utility and the total volume of data being collected, we can quickly see where this can explode.  Instead of you personally collecting and transmitting data, a device will do this for you. It is, in effect, the old dream of the refrigerator that sends a replenishment list to the local grocery store (and by the way, Amazon now offers a “Dash Button” designed to order common household items at the push of a button).

Voice Recognition

We are using Voice Recognition more and more every day: in applications like Apple Siri or Google Now, when we call an insurance company’s automated attendant and speak our date of birth or policy number to a computer, or when we use voice-to-text capabilities. You have likely used one of these recently without really thinking about it. It has become commonplace, and it is expanding into an option of choice for interacting with data.

Like most people, I interact with a lot of email – usually between 100 and 200 legitimate, critical emails per day.  Although I am sitting at a PC, I tend to grab my iPhone and use the microphone key to answer emails with my voice.  A quick press and I am orally stating my response, or dictating a new email.  I also use Dragon products on both Windows and Mac OS X to generate larger documents.  In fact, this article is about 95% voice generated on a Windows laptop with Dragon NaturallySpeaking.  I use it to dictate the text, select text and apply formatting like bold or italics, and other advanced capabilities.  I confess that I do not type very well (if only I had joined the mostly female typing class in my high school!), so the ability to use my voice is a tremendous advantage. It is not only a convenience; it is a huge productivity boost: I have generated documents of thousands of words in an afternoon.

And while I love the ability to simply state my words and see them appear in an email or Word document, when voice recognition is combined with artificial intelligence such as Siri or Microsoft Cortana, it becomes a truly personal digital assistant – one that knows what I am looking for. Here are some examples.

  • On my iPhone, I long-press the home button, Siri pops up, and I say “When do the Cleveland Browns play?” Siri responds orally and on screen with the opponent and date/time of the next game.
  • On my Windows 10 PC I ask the same question and Cortana (the Microsoft voice persona) answers with the same basic info, but on screen she also shows the probability of victory – a 70.2% chance for the Pittsburgh Steelers today. And by the way, Cortana has gone 140-84 through 16 NFL weeks.
  • On my Windows PC, I can ask “what documents did I work on today?” with my voice, and see a list of everything.
  • On my iPad, I can ask “What is my schedule tomorrow?” and see and hear a list of my appointments.
  • On almost any device, I can ask, “What is the temperature over the next 3 days?” and get a nice three-day forecast (it is getting colder…brrr…).
  • On my iPhone, I long press the home button, and say “Remind me to let the dogs in in 10 minutes” and a reminder is created that dutifully goes off 10 minutes later.
  • On my Android tablet I say “Ok Google”, then “email to John Smith”, “subject Client X need”, “message We need to call them back today”, and it sends an email with that info to John on my team.

In other words, I can ask questions that are personal to me (“What is my schedule?”) or about my world (“What is the temperature over the next three days?”) and get a context-specific reply. Or I can give instructions to do something I need (“Remind me in 10 minutes to let the dogs in”).  It feels like I am asking a human who knows what I want, and the reply is appropriate for the context in which I asked.
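Under the hood, an assistant has to map the recognized text to a structured action before it can respond. Real assistants use far more sophisticated language understanding than this, but a toy sketch of that text-to-intent step (with made-up intent names) might look like:

```python
import re

def parse_command(text: str) -> dict:
    """Toy intent parser: map recognized speech text to a structured action.
    Only illustrates the idea; the intents and patterns are invented."""
    text = text.lower().strip()

    # "Remind me to <task> in <N> minutes"
    m = re.match(r"remind me to (.+) in (\d+) minutes?", text)
    if m:
        return {"intent": "reminder", "task": m.group(1), "minutes": int(m.group(2))}

    # "What is my schedule ...?"
    if re.match(r"what is my schedule", text):
        return {"intent": "calendar_query"}

    # "What is the temperature over the next N days?"
    if re.match(r"what is the temperature over the next \w+ days?", text):
        return {"intent": "forecast"}

    return {"intent": "unknown"}

print(parse_command("Remind me to let the dogs in in 10 minutes"))
```

Once the text is reduced to an intent plus parameters, the device can schedule the reminder, query the calendar, or fetch the forecast.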

These functions are easy to use, and I highly recommend that you try them out.  If you want a place to start, try one of the following:

  • On your Windows 10 PC, click the Cortana microphone and say “Help me Cortana”; she will show a list of suggested capabilities to get you started.
  • Try the same thing on your iPhone: hold down the home button until it responds, and say “Help me Siri” to get a list of suggested actions (you can also configure it to respond to “Hey Siri”).
  • On an Android device, try saying “Ok Google”, then say “help”.

What you can see is that your devices can interact with you on your terms.  It is not perfect; sometimes we get the famous and usually funny (and sometimes embarrassing) auto-correct responses when we use our voice, but overall it is working quite well.

Summary of Technology Frontiers

There are waves of technology shifts that represent new frontiers for users and business organizations, and each raises some questions: What is this?  How can it help me?  What are the risks? We are looking at these so you know we have an eye on what may make a difference for you!

This week, some of us from Keystone will be at the Consumer Electronics Show (CES) in Las Vegas, which is the largest expo of technology directed at consumers and organizations that serve them.  We are excited to continue to dig in and see what is coming down the road that will affect all of our lives!