First of all, I am not saying the Echo is a great product; I actually don't own one because I personally don't see the point. But I have to note that the Echo is definitely not what I pictured as the next step in technological advancement. In my mind, the so-called "next step" looks more like VR, where humans directly interact with the symbolic environment, touching, drawing, and creating new layers of abstraction without any physical constraints, like in this video:
Voice control, on the other hand, is somewhat imperfect in my opinion. There are so many memes about "Siri fails," and frankly people look kind of dumb when they use it… like the Echo:
Anyways, back to this week's topic. The readings reveal that the evolution of interfaces and interaction is not a matter of one mode of mapping replacing another, but rather a process of softening the divide between hardware and software. For instance, the keyboard we use to type on our smartphones is based on the physical keyboard, and that keyboard is itself just a step forward from the typewriter.
In the case of VR and voice control, both technologies fit this description yet take opposite approaches. VR maximizes its affordances by creating a neutral language between human and machine via a "friendly" interface. Voice control maximizes its affordances by making the machine function more like a human, teaching it syntax and semantics. My instinct was that VR is better than voice control because it's friendlier and more direct, which lets me do more things. After this week's readings, however, I think voice control might be the future.
Dictation technology has existed for a long time; I remember my old desktop PC had Dragon Dictation installed when I was learning English. It wasn't a big commercial deal until Apple introduced "Siri," then came "Cortana" and "Alexa" (all have default female voices and feminine names, by the way). It seems that modern technology is able to make voice control software more reliable through a lot of engineering and semantic training. Apple is pushing this idea by bringing Siri to the desktop with macOS Sierra and by creating the Apple Watch, whose small screen encourages users to give voice commands. Amazon's Echo pushes the concept even further by eliminating the visual interface altogether, forcing users to "interact" with the machine directly.
This is where things get interesting: human language has historically been one of the reasons for the separation between hardware and software, because human language is too nuanced for machines, and machine code is incomprehensible to ordinary humans. A voice interface makes the dynamic between human and machine more natural because it allows the two to interact directly at the symbolic level, without any physical constraints. With further development in voice control, we might be able to do so much more, because we can offload all the laborious syntax-building to the machine and engage directly with new ideas. So imagine that one day we can code using just our voice, rather than earning a degree in computer science or grinding through Codecademy.
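To make that dream a little more concrete, here's a toy sketch of what "offloading syntax to the machine" could look like. Everything in it is invented for illustration (the command grammar, the rules, the function name); a real voice-coding system would start from actual speech recognition and a far richer semantic parser. But the core move is the same: the human supplies intent in plain English, and the machine supplies the syntax.

```python
import re

# Hypothetical "voice-to-code" rules: each pair maps an English command
# pattern to a Python statement template. This tiny grammar is made up
# for illustration only.
RULES = [
    (r"set (\w+) to (\d+)", r"\1 = \2"),
    (r"add (\d+) to (\w+)", r"\2 += \1"),
    (r"print (\w+)", r"print(\1)"),
]

def transcribe_to_code(command):
    """Translate one spoken-style English command into a Python statement."""
    for pattern, template in RULES:
        match = re.fullmatch(pattern, command.strip().lower())
        if match:
            return match.expand(template)
    raise ValueError(f"Don't know how to compile: {command!r}")

# "Dictating" a tiny program: the human states what they want,
# the machine handles the syntax-building labor.
program = [transcribe_to_code(c) for c in [
    "set total to 10",
    "add 5 to total",
    "print total",
]]
exec("\n".join(program))  # prints 15
```

Of course, three regex rules are nothing like understanding human language, which is exactly the point: the hard part isn't emitting syntax, it's the semantic training that lets the machine figure out what we meant.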