The announcement of Amazon’s new wall-mounted Echo Show 15 also revealed a new CPU with some interesting (or potentially unsettling) applications. The AZ2 chip builds on the machine-learning capabilities that premiered with the AZ1, which allowed Amazon devices to better recognize your voice, and extends them to facial recognition as well. This fits Amazon’s new focus on what it calls “Ambient Intelligence.”
Let’s explain what the AZ2 does on paper before we dive into the implications of this hardware. The AZ2 can perform 22 times as many operations per second as the AZ1, which means it can process speech and facial recognition simultaneously on the device itself. The information the AZ2 learns about your face will be part of what Amazon is calling “Visual ID,” a feature users must specifically enroll in. This enables the Echo Show 15 to recognize you and display custom content based on your Alexa profile.
Much like its predecessor, the AZ2 is a neural edge processor, which means it uses on-device machine learning to cut down on the amount of data it needs to send to or receive from the cloud. This not only reduces latency but also limits how much of your data ends up stored in the cloud.
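To make the edge-processing trade-off concrete, here is a rough back-of-the-envelope sketch. Everything in it is an illustrative assumption, not Amazon's actual pipeline: it compares the size of the raw audio a cloud-only device would have to upload against the tiny structured result an edge device could send after recognizing speech locally.

```python
import json

def raw_audio_size_bytes(seconds: float, sample_rate: int = 16_000,
                         bytes_per_sample: int = 2) -> int:
    """Uncompressed mono audio a cloud-only device would upload.
    16 kHz / 16-bit are common speech-recognition assumptions."""
    return int(seconds * sample_rate * bytes_per_sample)

def edge_process(seconds: float) -> dict:
    """Stand-in for on-device inference: the device interprets the
    speech locally and produces only a compact structured result."""
    return {"intent": "play_music", "confidence": 0.97}  # placeholder output

def payload_size_bytes(result: dict) -> int:
    """Size of the metadata an edge device would actually transmit."""
    return len(json.dumps(result).encode("utf-8"))

# A 3-second voice request: ~96,000 bytes of raw audio vs. a few
# dozen bytes of metadata after local processing.
audio_bytes = raw_audio_size_bytes(3.0)
edge_bytes = payload_size_bytes(edge_process(3.0))
print(audio_bytes, edge_bytes)
```

The three-orders-of-magnitude gap between the two numbers is the intuition behind both claims in the paragraph above: less data on the wire means lower latency, and less data uploaded means less stored in the cloud.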