There has been much written about Machine Learning and Artificial Intelligence lately. Across virtually every industry, you can’t open a publication without reading about how AI stands to transform organisations and make them more productive.
This focus on AI’s transformational power has also given rise to a fear that jobs, both menial and highly skilled, will be replaced. I believe successful AI gives us additional tools. Tools that can be used to great effect in myriad ways, enriching our lives and making our occupations more interesting.
A lot of the talk about automation is around unskilled labour, but automation is also replacing highly skilled labour. From JPMorgan Chase & Co using machine learning to parse financial deals that used to keep legal teams busy for thousands of hours, to the rollout of 2,500 self-service McDonald’s kiosks, to research by Stanford and the startup iRhythm showing that AI can be used to identify heart problems more efficiently than trained doctors, examples of AI improving efficiency are everywhere.
Healthcare is one area where the potential applications of AI could vastly improve both practitioner workflows and patient outcomes. Google has trained an AI system to identify tumours in images of potentially cancerous breast tissue, and claims 89% accuracy compared with 73% for a human pathologist.
A related example is the Tel Aviv startup Zebra Medical, which claims it can use AI to detect some types of cancerous cells with 91% accuracy, versus 88% for a trained radiologist. Founder Elad Benjamin says: “In five or seven years, radiologists … are going to have analytics engines or bots like ours that will be doing 60, 70, 80 per cent of their work.”
It’s fascinating, but the gaping hole in this coverage is the user experience. That matters, given how reliant we’ve become on mobile user experiences for most aspects of our lives, from checking the weather to ordering products via a voice assistant and streaming podcasts from our smart car dashboard.
What will this AI revolution mean for the end user experience? In what follows, I’ll present some examples of how machine learning can be used to power rich user experiences that provide strong value to users.
But before we get to that, why is all this happening now?
AI is not new. It has been around as a field since the 1950s. Machine Learning, or ML, is the approach currently driving most progress in AI. In ML, a system learns patterns from data rather than being provided with explicit, hand-crafted rules.
We’ve been surrounded by systems that rely on ML for decades now, but most of us have been unaware. They exist in applications such as spam filters, search engines, and character recognition systems, amongst others.
ML started gaining mainstream attention about five years ago with stunning results achieved by a technique called deep learning, triggered in large part by the results of a Google Brain project.
Since then, deep learning techniques, including convolutional and recurrent neural networks, have continued to break performance records at tasks ranging from image captioning to speech recognition, handwriting recognition, and natural language translation.
Reinforcement Learning (RL), a technique for bots to learn effective actions, has also had newsworthy successes recently. This has been famously demonstrated with video games, where an agent learns to play the game proficiently with only the same input a human player would have: the pixels on the screen, and without being given any domain knowledge (i.e. the rules of the game). It started with DeepMind’s application to Atari video games at the end of 2013, and has seen a lot of attention and improvements by DeepMind and other groups since.
The recent success of ML has also been driven by the availability of huge datasets and modern compute power. It takes a lot of data, and a lot of time, to train a model effectively. Graphics Processing Units (GPUs) and custom chips are being used to make training deep models practical.
Maybe the most critical factor in bringing ML to the mainstream has been increased accessibility. Open source tools such as TensorFlow, Torch, and Spark allow developers to build ML programs, such as deep networks capable of digit recognition, using only a few lines of code.
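To give a feel for just how few lines of code are needed, here is a minimal sketch of a digit recogniser. It uses scikit-learn and its bundled digits dataset rather than the deep-learning frameworks named above, purely as an illustration, so the exact library and accuracy figure are my own choices, not claims from any vendor.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Load 8x8 grayscale images of handwritten digits (0-9)
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small feed-forward neural network; trains in seconds on a laptop
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```

A decade ago, reaching this level of accuracy on handwritten digits was a research project; today it is a ten-line script.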
Machine Learning has also been converted into a service by the major tech companies and others, with APIs that allow you to train and use a predictive model cheaply, with a few web service calls and a dataset. You can find examples of these from Google Cloud AI, AWS ML, Microsoft Azure, IBM Watson, and BigML.
I’ve really only touched the surface here on ‘why now’. You can find more information in the popular articles, industry reports, and white papers that have been published lately. A good start is this Deloitte report and the very comprehensive Obama Administration US government report on “opportunities, considerations, and challenges of Artificial Intelligence”. I also recommend this interview with Obama about AI — he seriously gets it. These publications give comprehensive descriptions of the history of ML and its implications.
So much for the background on why ML is so hot right now. What I really wanted to cover in this post is how ML is going to fundamentally transform the user experience (UX) as we know it. At Melbourne IT, we’ve been actively experimenting with ML for some time now and have unearthed some fascinating insights into how ML can be used to alter the way users experience a product or service.
Before I share the use cases we’ve been working on, I want to provide a brief description of some of the core components. Most ML fits into a few very broad techniques. The first is clustering, or finding structure in your data: for example, segmenting user data into a number of groups distinguished by a given set of features.
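The user-segmentation example above can be sketched with k-means clustering; the feature values below are invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical user features: [visits per week, average spend in $]
users = np.array([
    [1, 10], [2, 12], [1, 8],      # infrequent, low-spend users
    [7, 150], [6, 140], [8, 160],  # frequent, high-spend users
])

# Ask for two segments; k-means discovers the group structure itself
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(users)
print(kmeans.labels_)  # users in the same segment share a label
```

No one told the algorithm which users were “low spend” or “high spend”; it found that structure in the data, which is what makes clustering useful for segmentation.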
Then there is classification, where predictions are made about new data points based on previous examples. When the prediction is a continuous value, the technique is known as regression: for example, predicting the value of a house from previous sales and the features of the house, such as the number of rooms and the size of the land.
Classification can also be applied to categories, such as predicting which number is being represented by a handwritten digit. In this scenario, the classification would be based on an existing body of handwritten digits where the number they represent is known to the system.
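The house-price example above can be sketched with a simple linear regression. The sale records below are entirely made up for illustration; a real model would use far more features and far more data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical past sales: [number of rooms, land size in m^2] -> sale price
features = np.array([[2, 300], [3, 450], [4, 600], [5, 800]])
prices = np.array([350_000, 500_000, 650_000, 820_000])

# Fit a linear model to the previous sales
model = LinearRegression().fit(features, prices)

# Predict the value of an unseen house: 3 rooms on 500 m^2 of land
predicted = model.predict(np.array([[3, 500]]))[0]
print(f"predicted price: ${predicted:,.0f}")
```

The same fit-then-predict pattern applies to categorical targets such as handwritten digits; only the model and the label type change.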
Another major area of machine learning is reinforcement learning, which is important for agent-based systems that act in their environment. The agent learns an effective policy for choosing actions that yield the best possible reward, and the reward itself is chosen to produce the desired outcome.
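A toy sketch of that reward-driven loop: a Q-learning agent in a five-cell corridor, where the only reward sits at the rightmost cell. The environment and all the parameter values here are invented for illustration; they are not from any of the systems mentioned above.

```python
import random

random.seed(0)

N_STATES = 5           # cells 0..4; the reward sits at cell 4
ACTIONS = [-1, +1]     # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: move the estimate toward reward + discounted future value
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, the learned policy steps right in every cell
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

Note that the agent is never told “go right”; it discovers that policy purely by chasing the reward, which is the same principle behind the Atari-playing agents described earlier, just at a vastly smaller scale.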
These fundamental approaches to ML come in many forms, and are used, either alone or in combination, to perform higher order specific functions. Some well established areas are:
Speech Recognition: A more familiar application of ML, where spoken natural language is converted into text. Speech recognition is required for dictation, and most voice based assistants will first convert speech to text before applying Natural Language Processing.
Natural Language Processing (NLP): A very broad area, covering the interpretation of natural language (speech or text) to understand its meaning. For example: what was the intention of a spoken utterance, what was the request or command, and what were the topic and sentiment? The next three items are branches of NLP.
Topic Modelling: The identification of the salient topics discussed in a body of text.
Sentiment detection: The identification of the sentiment of a body of text. For example, is the content of a review ‘happy’, ‘angry’, ‘aggressive’? Sentiment detection is often used for interpreting the meaning of reviews and tweets.
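At its very simplest, sentiment detection can be sketched as a lexicon lookup. Production systems use trained classifiers rather than word lists, and the tiny lexicons below are invented for illustration.

```python
# Toy sentiment lexicons; real systems learn these associations from data
POSITIVE = {"great", "love", "happy", "excellent", "fun"}
NEGATIVE = {"angry", "terrible", "hate", "slow", "broken"}

def sentiment(text: str) -> str:
    """Classify text as positive/negative/neutral by counting lexicon hits."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this product it is excellent"))  # positive
print(sentiment("terrible service and a broken app"))    # negative
```

This naive approach fails on negation and sarcasm (“not bad”, “great, another outage”), which is exactly why the trained models mentioned above are used in practice.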
Conversational User Interface: Also referred to as Dialogue Systems and more recently Chatbots. Conversational UI is the combination of NLP and business logic to create a system that can carry a conversation.
A conversational UI can be simple, with pre-configured responses to a limited range of utterances, or more complex, capable of accepting open conversation. Digital assistants such as Siri, Alexa, and Cortana, as well as Chatbot apps like Quartz and Lark are examples of Conversational UI. Dialogue systems have been around for a long time, but have become very popular in the last few years.
Computer Vision: The understanding of the content of images. Humans are so expert at interpreting images that we take for granted the level of processing required to recognise even a simple object.
SLAM (Simultaneous Localisation and Mapping): Building a map of the environment as you move about within it, whilst figuring out where you are in that map. This has been a staple of robotics researchers for decades, and is now emerging in mainstream consumer devices such as Google Tango-compatible smartphones.
Predictive Analytics: Predicting future results based on past examples. Predictive analytics underpins most of the mainstream commercial applications of ML.
Recommendation Engines: Making appropriate and relevant recommendations to a user. Usually the recommendations are based on similar users and/or similar products/services to those that have been previously purchased or consumed. This is most famously used by companies such as Amazon and Netflix.
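The “similar users” approach described above can be sketched with cosine similarity over a purchase matrix. The users and items below are invented; real recommenders work over millions of users with far more sophisticated models.

```python
import math

# Rows are users, columns are items; 1 = purchased. Data invented for illustration.
purchases = {
    "alice": [1, 1, 0, 0],
    "bob":   [1, 1, 1, 0],
    "carol": [0, 0, 1, 1],
}
items = ["book", "lamp", "kettle", "rug"]

def cosine(u, v):
    """Cosine similarity between two purchase vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(user):
    """Recommend items bought by the most similar other user, but not by `user`."""
    neighbour = max((u for u in purchases if u != user),
                    key=lambda u: cosine(purchases[user], purchases[u]))
    return [items[i] for i, (mine, theirs)
            in enumerate(zip(purchases[user], purchases[neighbour]))
            if theirs and not mine]

print(recommend("alice"))  # alice's closest neighbour is bob, who also bought a kettle
```

This “people like you also bought” logic is the essence of collaborative filtering, the family of techniques behind the Amazon and Netflix examples.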
Now that we understand the key building blocks of ML, how are these components used to transform user experiences? How do they help users with a ‘job to be done’, and how do they make that job easy and enjoyable?
Below are some examples of applications that are in the market today, as well as ideas that we’ve come up with in our Emerging Tech Workshops. Some examples involve cases where ML is performed on a significant body of data in the cloud, with insights surfaced via the User Interface. Other examples illustrate how ML is being used on a device to enhance the experience or create a previously impossible one.
Smart budgeting is a feature being built into banking apps. It automatically categorises a user’s spending by type (e.g. entertainment) and advises when a budget is being approached or exceeded. It provides insights into a user’s spending patterns and suggestions on how spending could be changed to meet budgetary goals.
The application could provide notification of an upcoming bill, or suggest good times to buy or not to buy an item. Personal finance tools such as Mint.com are already starting to implement these kinds of features, and we’re likely to see that trend continue with extensions focused on text- or voice-operated virtual chatbot assistants. I’ll be surprised if major banks aren’t offering this functionality soon.
Quartz and Lark are a couple of good examples of well thought out chatbots. Quartz is a news outlet that delivers content on the web, but also as a text-based chat conversation in its smartphone apps, complete with GIFs and emojis. It’s a familiar way to interact with your device, and it makes consuming news easy and fun. The user is prompted to indicate whether they are interested in more detail, so the conversation is directed according to the user’s interests.
News chatbots could be extended further to do such things as monitor social media in a user’s networks, detect topics and sentiment, and then use that content to engage the user in highly relevant topics.
Lark is a health app that engages you via a conversational UI that works like a lifestyle coach. You tell Lark about a meal you’ve had and it will give you tips about how you can make smarter food choices. It can also give you feedback on your drinking, exercise, and sleep patterns, all helping you achieve your goals.
We see passwords increasingly being replaced by biometrics for user identification and authentication. In addition to fingerprint verification, this is extending into voice biometrics and facial recognition. As well as enhanced security, the benefit for the user is that remembering passwords will soon be a thing of the past.
Personalisation has, for some time now, been a focus for businesses seeking to provide more relevant and contextual experiences for their customers. AI will help to transform the personalisation of experiences by moving beyond today’s predominantly rule-based systems to ML-based systems that learn from user behavioural patterns: for example, adjusting the positioning of UI elements so that favourite or related elements are more prominent.
ML presents a fantastic opportunity to transform the store experience for shoppers, allowing brands to provide a more immersive and differentiated retail experience. One application of AI in-store involves customers holding up their smartphone to a product to receive a dynamic 3D display. In a supermarket, this could be recipe suggestions, or nutritional information. In fashion retail, complementary suggestions could be provided — this goes with that — or suggestions around what other buyers who purchased a given garment also purchased.
eBay have built a chatbot using Facebook Messenger that allows users to take a photo of anything and then uses image recognition to present similar items for sale. The application takes comparison shopping to a whole new level.
With so much content being published these days it can be a chore to trawl through mountains of articles published online. A chatbot could aid this process by selecting only relevant articles — based on previous reading patterns — and summarising the content in a few brief paragraphs.
Taking data from a range of sources (your social media accounts would be a great place to start), a chatbot could form holiday packages based on historical patterns, friends, and trending holidays. Looking at your photo collections and ‘likes’ of others’ photos could help form a view of suggested destinations and things to do. A conversational UI could then be used to engage you in a conversation about these packages to further narrow and refine suggestions.
As you can see from the selection above, there are a wide range of use cases for ML to enrich and re-imagine the User Experience; in reality, the limits of this technology are bound only by imagination. Understanding the building blocks and some of the use cases should fuel your thinking about how ML could be relevant in your business and how it could be applied to improve the User Experience.
In recent times, the technology has reached a level of maturity and accessibility that means running an experiment is relatively straightforward if you know what you’re doing. So think about running an ML experiment in your business that enhances the User Experience. The technology is readily available and there are experts out there who can pull together a prototype quickly to help test your hypothesis.