HomeKit is Apple’s framework for controlling and communicating with connected accessories in the house.
Amazon even released a software development kit for its Alexa assistant, so your third-party connected light bulb can be adjusted by telling the Echo to dim the lights.
Viva la voice-enabled home!
Well, not so fast.
As reviews of voice-enabled technology surface, from first-party products like the Echo to third-party products integrated with HomeKit, a central point resounds: that voice-enabled tech is, in fact, “quite stupid.”
While nearly all reviews temper their disappointment by noting the technology’s great potential, developers’ ability to build great voice-controlled tech is being suppressed by the limited creative freedom Apple, Amazon and Microsoft afford them.
The name-brand smart voice products we know are not made to be the unifying voice interface glue in the smart home. Here’s why.
One size does not fit all
According to Gartner, the average household will have more than 500 connected devices by 2022, but as it stands right now, developers using HomeKit cannot design unique scenarios — specific voice commands and resulting actions — for their products. This means developers are not able to implement voice by themselves and are using a “one size fits all” voice-command approach when building a connected thermometer, light bulb or electrical socket.
And IoT developers using HomeKit are not provided with relevant tools to innovate and create solutions that surpass what Apple has cooked up in-house.
To put things into perspective, imagine your iPhone with just Apple-made apps like Newsstand, Mail and Notes, and no Flipboard, Gmail or Evernote. Or put another way, imagine if Apple told developers they had to use one specific graphical user interface for their apps.
That is effectively what Apple is doing with voice user interfaces.
Amazon and Microsoft, however, are taking a different approach, opening Alexa and Cortana to developers at least partially. Even so, the tools can still be time-consuming to implement and limited in functionality.
For instance, a developer can let users issue a one-shot voice command, such as telling a music player to start a song, but once inside the app there is little or no support for an ongoing conversation to clarify what someone wants. This matters because, as we all have experienced, we are rarely understood correctly on the first try. An ongoing conversation is one of the things that will make using the massive number of devices in our connected homes fluid and empowering.
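To make the idea of an ongoing conversation concrete, here is a toy sketch of multi-turn slot filling: instead of failing when a command is ambiguous, the assistant asks a follow-up question until it has everything it needs. All names here (the slots, the `naive_extract` stand-in for real speech understanding) are illustrative assumptions, not any vendor’s actual API.

```python
# Toy multi-turn dialog: keep asking clarifying questions until every
# required slot is filled, then act on the completed request.

REQUIRED_SLOTS = {"artist": "Which artist?", "room": "In which room?"}

def handle_turn(state, utterance, extract):
    """Merge slots extracted from one utterance into the dialog state.

    Returns (state, reply): reply is a clarifying question while slots
    are still missing, or a confirmation once the request is complete.
    """
    state = {**state, **extract(utterance)}
    for slot, question in REQUIRED_SLOTS.items():
        if slot not in state:
            return state, question  # ask a follow-up instead of giving up
    return state, f"Playing {state['artist']} in the {state['room']}."

def naive_extract(utterance):
    """A fake extractor standing in for real natural-language understanding."""
    slots = {}
    words = utterance.lower().split()
    if "bowie" in words:
        slots["artist"] = "Bowie"
    if "kitchen" in words:
        slots["room"] = "kitchen"
    return slots
```

A session then looks like a short back-and-forth: “play some music” yields “Which artist?”, the answer “Bowie” yields “In which room?”, and “in the kitchen” completes the request. Today’s platform tools mostly stop after the first turn, which is exactly the limitation described above.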
Confusion in a fragmented ecosystem
With Siri, Alexa and Cortana all vying for space in the smart home, developers must make decisions about what voice technology to support now and what to put on the roadmap for later.
Precious time that could be spent adding new features, fixing bugs and making other improvements has to be allocated to simply supporting different voice platforms.
This fragmented voice interface ecosystem creates confusion not only among developers but also among manufacturers unsure which standard to adopt.
Do what you do best
By creating operating systems or platforms like HomeKit and Google’s Brillo, Apple and Google are taking the proper role in the ecosystem and providing the framework to make devices work together using unified ways to communicate.
Developers should have access to fully stocked toolkits to unleash their creative freedom, as it’s what they do best. If given the chance, developers will make killer voice interfaces designed specifically with their creations in mind, but with limited options available, developers will remain stifled, and products will remain subpar.
Are you an IoT developer? Have you created a voice interface for your app or device? What do you think of the options available now and what would you like to see?
Ilya Gelfenbeyn is the founder and CEO of Api.ai, the industry leader in intelligent voice interfaces for connected devices and software. He has a BS in mathematics from Novosibirsk State University (Russia) and an MBA from the University of Brighton (U.K.). His areas of interest include artificial intelligence, computational linguistics, human-computer interaction and conversational agents. Reach him @gelfenbeyn.
This article originally appeared on Recode.net.