When deblackboxing a speech-activated virtual assistant application like Google Home, parallels begin to emerge with other virtual assistant applications (like Siri). Using a mix of structured and unstructured data, Google Home's machine learning processes take note of the information we provide, and through machine learning and convolutional neural networks, Google Home begins to accommodate and adapt to the primary user of the virtual assistant.
The structured data can come from direct sources of information: Google Home has a functionality where users can issue typed commands and receive visual responses (Google Assistant, Wikipedia), which constitutes direct data. Likewise, the information Google Home collects through direct verbal commands is a direct form of data, which is then logged both for machine learning purposes and for Google Home's future predictive interactions. As for unstructured data, Google Home also collects data from the indirect forms of communication a user conducts through any account linked to the Google Home. This could mean your email, texts, contacts, Spotify, YouTube… essentially any device or application that you link with your Google Assistant (Google Assistant, Google).

The patents for an intelligent automated assistant describe two inputs, user input and other events/facts, which correspond to the direct and indirect, structured and unstructured data that Google Home both listens to and records. From there, the virtual assistant application breaks the requested input/command into groups to determine what is being said, what needs to be done (in the most efficient manner, based on action patterns), how it will be done, and what will be said (Intelligent Automated Assistant, Google Patents). Once the virtual assistant determines all of that, within seconds the initial input is turned into output in the form of words and actions. The patent application also describes the "parts" of a virtual assistant: input, output, storage, and memory, which are the four core "interactions," followed by the overall processor that decodes and recodes the input, and lastly the machine itself, the intelligent automated assistant. It's important to recognize that all parts of a virtual assistant work together in a network to achieve the common goal at hand.
That’s what makes it an intelligent machine learning service.
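To make the patent's staged pipeline concrete, here is a minimal sketch in Python of those steps: take an input, determine what is being said, decide what will be done, and produce what will be said. Everything here is illustrative; the intent table, function names, and keyword matching are assumptions standing in for Google's actual trained models, not the real implementation.

```python
from dataclasses import dataclass

@dataclass
class AssistantResponse:
    action: str        # what will be done
    spoken_reply: str  # what will be said

# Toy "intent" table: a stand-in for the trained language models
# that decode a user's request in the real assistant.
INTENTS = {
    "play": ("start_media_playback", "Playing your music."),
    "weather": ("fetch_weather_report", "Here is today's forecast."),
    "lights": ("toggle_smart_lights", "Okay, adjusting the lights."),
}

def handle_command(utterance: str) -> AssistantResponse:
    """Decode a spoken/typed command into an action and a reply."""
    words = utterance.lower().split()          # break the input into groups
    for keyword, (action, reply) in INTENTS.items():
        if keyword in words:                   # determine what is being said
            return AssistantResponse(action, reply)
    return AssistantResponse("no_op", "Sorry, I didn't understand that.")

print(handle_command("please play some jazz").action)   # → start_media_playback
print(handle_command("hello there").spoken_reply)       # → Sorry, I didn't understand that.
```

A real assistant replaces the keyword lookup with speech recognition and learned language models, and logs each interaction (the storage and memory "parts") so future predictions improve, but the decode-then-recode flow is the same.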