The Voice Assistant Battle! (2017)
Science & Technology
Introduction
Marques Brownlee, better known as MKBHD, compares the four most widely used voice assistants of 2017: Google Assistant, Siri, Amazon Alexa, and Bixby. He equips each with the latest software updates and connects all four to the same Wi-Fi network to minimize variables. The tests span purely factual questions, conversational interactions, and complex command execution, ultimately revealing each assistant's strengths and weaknesses.
Factual Questions
The voice assistants are first subjected to basic factual questions, which should ideally be easy to answer. Here's a summary of how they performed:
Google Assistant:
- Weather in Kearny: "82°F and sunny"
- 20 × 15 − 8: "292" (see the worked arithmetic after these lists)
- The Home Depot closing time: "Open until 10 p.m."
- Tesla's stock price: "$348"
- Height of the Empire State Building: "1,250 ft"
- Distance to the Moon: "238,900 miles"
Siri:
- Weather in Kearny: "79°F, partly sunny"
- 20 × 15 − 8: "292"
- The Home Depot closing time: "10 p.m."
- Tesla's stock price: Didn't know the answer
- Height of the Empire State Building: "1,250 ft"
- Distance to the Moon: "239,000 miles"
Amazon Alexa:
- Mixed results; failed to understand some commands.
- For example, it couldn't set timers or alarms.
Bixby:
- Showed inconsistent results.
- Accurate answers included the Empire State Building's height, and it handled alarm setting.
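For reference, the arithmetic question that both Google Assistant and Siri got right follows standard operator precedence, with the multiplication evaluated before the subtraction:

20 × 15 − 8 = 300 − 8 = 292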
Conversational Interactions
This round measures how well each assistant handles conversational queries:
Google Assistant:
- Accurately handled chained questions about Barack Obama, the NBA Championship, and more.
- Maintained context across follow-up queries (what that means is sketched after this round's results).
Siri:
- Good with initial questions but struggled to maintain context.
- Failed to identify the Golden State Warriors' point guard.
Amazon Alexa:
- Struggled significantly with context; couldn't handle chained questions well.
Bixby:
- Delivered fragmented results with random, sometimes inaccurate responses.
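"Maintaining context" in practice means carrying the subject of one question into the next, so a follow-up like "How tall is he?" resolves against the entity from the previous turn. Below is a deliberately toy Python sketch of that idea. It is purely illustrative and nothing like any of these assistants' real pipelines, which rely on full coreference and entity-recognition models; every name in it (ContextualAssistant, last_entity) is hypothetical:

```python
# Toy illustration of conversational context: substitute pronouns in a
# follow-up query with the most recently mentioned entity.
# Purely hypothetical; real assistants use full coreference/NER models.

PRONOUNS = {"he", "she", "it", "they", "him", "her", "them"}

class ContextualAssistant:
    def __init__(self):
        self.last_entity = None  # last named entity, e.g. "Barack Obama"

    def ask(self, query: str) -> str:
        words = query.rstrip("?").split()
        # Replace any pronoun with the remembered entity, if one exists.
        resolved = " ".join(
            self.last_entity if w.lower() in PRONOUNS and self.last_entity else w
            for w in words
        )
        # A real system would extract entities automatically; this demo
        # just remembers one hard-coded subject when it appears.
        if "Barack Obama" in resolved:
            self.last_entity = "Barack Obama"
        return resolved  # the resolved query would go to search

assistant = ContextualAssistant()
print(assistant.ask("Who is Barack Obama?"))  # Who is Barack Obama
print(assistant.ask("How tall is he?"))       # How tall is Barack Obama
```

Roughly speaking, the failures described above amount to dropping that remembered entity between turns.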
Complex Commands
This round involves chained commands and deeper tasks within the phone (a toy sketch of command chaining follows the results below):
Google Assistant:
- Successfully opened Instagram, set timers, showed photos, and took selfies.
Siri:
- Successfully set timers and alarms, and directed the user to the App Store for Uber.
Amazon Alexa:
- Struggled to perform mobile-integrated tasks like setting timers and opening apps.
Bixby:
- Mixed results: completed basic tasks successfully but failed more complex commands.
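A compound request like "open Instagram and set a timer" has to be split into separate intents and executed in order, which is roughly what chaining commands means here. Another deliberately toy Python sketch, hypothetical end to end (handle, open_app, set_timer are made-up names) and not any vendor's actual design:

```python
# Toy command chaining: split a compound utterance on "and", match each
# clause to a known intent, and run them in order. Hypothetical sketch only.

def open_app(name: str) -> None:
    print(f"Opening {name}...")

def set_timer(spec: str) -> None:
    print(f"Timer set for {spec}.")

INTENTS = {
    "open": open_app,
    "set a timer for": set_timer,
}

def handle(utterance: str) -> None:
    for clause in utterance.lower().split(" and "):
        for trigger, action in INTENTS.items():
            if clause.startswith(trigger):
                action(clause[len(trigger):].strip())
                break
        else:
            # No known intent matched this clause.
            print(f"Sorry, I didn't understand: {clause!r}")

handle("open Instagram and set a timer for 10 minutes")
# Opening instagram...
# Timer set for 10 minutes.
```

Completing one clause while fumbling the next is essentially the "mixed results" pattern described above.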
The Rap Battle
To conclude, each assistant attempts a rap, humorously showcasing its character:
- Google Assistant: Provides a simple poem.
- Siri: Attempts humor but is somewhat cringey.
- Amazon Alexa: Proclaims itself the "baddest AI" but fails humorously.
- Bixby: Tries too hard, revealing its robotic essence.
Keywords
- Voice Assistants
- Google Assistant
- Siri
- Amazon Alexa
- Bixby
- Factual Questions
- Conversational Interactions
- Complex Commands
- MKBHD
- Comparative Review
FAQ
Q: Which voice assistant performed best with factual questions? A: Google Assistant performed the best, consistently providing accurate answers.
Q: How did Siri handle conversational interactions? A: Siri managed the initial questions well but struggled to maintain context over multiple queries.
Q: Were Amazon Alexa and Bixby capable of handling complex commands efficiently? A: Both assistants had mixed performances. Alexa particularly struggled with mobile-specific tasks, while Bixby was inconsistent.
Q: What was the purpose of the rap segment? A: The rap showcased the assistants' characters humorously, highlighting their strengths and weaknesses in a light-hearted manner.
Q: Which voice assistant emerged as the overall best? A: Google Assistant came out on top, especially in areas like factual accuracy and handling complex commands, but all assistants have their own strengths and weaknesses.