Local Technologies for the Smart Home (Google I/O’19)

[MUSIC PLAYING] [APPLAUSE] CARL VOGEL: I hope,
today, you were able to listen to some of
the great talks we've had and explore the
different sandboxes. And if you haven't
seen the Smart Home section in the
Assistant sandbox, I highly encourage you to do so. Thank you. We have a great team there
and some really cool demos, including showing you how
the Smart Home API can help you grow a better garden. On behalf of our
team, we're really excited to share with you some
of the Smart Home technologies that we've been
building since we last spoke to you at I/O 2018. My name is Carl Vogel. I'm a product manager
on the Smart Home team. MANIT LIMLAMAI:
My name is Manit. I am a software engineer. GAURAV NOLKHA: I'm Gaurav. I'm a solutions
engineer on Smart Home. CARL VOGEL: I'd like
to begin with a story. One of my friends recently
purchased some Smart Home devices, in
particular, light bulbs from one company and smart
plugs from a different one.

And he put them in an area
that he calls the downstairs. And when he gets ready for
bed, he walks upstairs, and he says, hey, G,
turn off the lights. Well, he was telling me
that oftentimes, they don't respond together. One set of lights often responds
a half second or a second faster than the other. Or sometimes they just
take a really long time to respond in general, or
they don't respond at all. As Google and our
developer community work together to
grow Smart Home, we need to work together
to solve these challenges. We believe one method
is to shift processing from the cloud to the
local environment. We've taken our first
step in this direction to improve the experience
for users like him and the millions of
other users that use our products on a daily basis.

We're happy to introduce that
first step: the Local Home SDK. [LAUGHS] [APPLAUSE] The Local Home SDK enables you
to locally process and fulfill Smart Home commands received
from the Google Assistant. We do this by inviting you to
build and run your Smart Home business logic locally on Google
Home speakers and Google Nest displays. Then we securely give you
access to the lower level radios to communicate over
the local area network with your smart devices. Through this, we
can together deliver substantial improvements
in latency and reliability. But first, before I talk
about the Local Home SDK, let me give you an intro
to the Smart Home API and then show you how the
Local Home SDK layers on top. The Smart Home API
is the foundation of our Smart Home program.

Let's begin with how devices
are defined and integrated into the Assistant. First, developer-specified
device type. This is really the
"what is it" factor. Is it a light bulb? Is it a microwave, a camera? Device type is our
method to classify the overall essence of the
device in the user's home. Second, developers
specify a device trait. And this really describes
the overall functionality of the device. What can it do? How can users control it? Devices oftentimes
have multiple traits. For example, a light bulb
may have the traits on/off, brightness, and color setting. Once the device type
and trait are specified, we can then bring these
devices into the Assistant and specifically into our Home
Graph using the SYNC intent. Home Graph is our database that enables us to build a topology of the user's devices in their home.
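
To make this concrete, here is a rough sketch of how such a light bulb might be described in a SYNC response, with hypothetical IDs and names; the type and trait identifiers are the standard Smart Home API ones for on/off, brightness, and color:

```typescript
// A minimal sketch of a SYNC response entry for a light bulb (hypothetical IDs and names).
const syncResponse = {
  requestId: 'ff36a3cc',                // echoed from the SYNC request (shortened here)
  payload: {
    agentUserId: 'user-123',            // your ID for this user
    devices: [{
      id: 'bulb-1',                     // your ID for this device
      type: 'action.devices.types.LIGHT',
      traits: [
        'action.devices.traits.OnOff',
        'action.devices.traits.Brightness',
        'action.devices.traits.ColorSetting',
      ],
      name: {name: 'Downstairs lamp'},
      willReportState: true,
    }],
  },
};
```

Now that the devices are defined and in our Home Graph, let's see what happens when a user issues a command for your device.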

When our user says
a command, such as, for example, hey, Google,
turn on the lights, we send that to the Assistant
server as a waveform. And then the Assistant server processes this waveform and ultimately determines which
device or devices the user is trying to move into what state. Then we send that
as a structured JSON payload to the developer to
be processed and ultimately fulfilled. And as we see, the
developer communicates with the end device. Through this very simple
method to integrate devices into the Assistant and
have users control them, we have grown
exponentially since 2017. In fact, we work with over
30,000 unique devices, among 3,500 brands. We encourage you to
join us and to integrate if you haven't done so already. And a great place to start
is the Smart Home 101 talk that some of our friends
gave earlier this morning. Although successful by
almost all accounts, cloud-to-cloud integrations
have inherent limitations. We may have a Google Home
speaker and a smart device within 10 feet of each other. Yet this command needs to
travel hundreds if not thousands of miles before returning
back to the same room.

This takes time and, of
course, provides opportunities for dropped commands. So we started to
think at Google, can we leverage the
fact that these are on the same local area network? The Local Home SDK is
our method to do so. At Google, the
developer experience is core to our thinking. And this deeply influenced
our design tenets. With the Smart Home API
being the foundation, we wanted to leverage
its capabilities and layer the SDK on top as
a deeper integration, not as an either/or.

We also implemented what
we call a "come as you are" philosophy, in that we want
it to work with your devices as is today without requiring
any further modifications. Lastly, we've heard from you. Thousands love the
Smart Home API. And so we wanted to mirror
that familiar interface onto the SDK. So with these tenets
now understood, let's go ahead and
revisit what happens when a user issues a command
using the Local Home SDK. As we can see, the user
still says a command. And it still goes to
the Assistant server to be processed. However, if we know that this
device is locally controllable, we'll actually send that command as a JSON payload down to the
Google Home speaker or Google Nest display. We'll then pull up the
appropriate developer's JavaScript file that has their
Smart Home business logic on it to process the intent. And then we'll provide access
to the lower-level Wi-Fi radio to ultimately communicate with
your device over the local area network. And notice, the developer
still controls the smart device but can now process and
fulfill that command locally.

And since we still have this
cloud-to-cloud integration, we now have the
cloud as a fallback. So what does adding this local path do? Well, first, it allows
us to reduce the latency after a payload leaves
the Assistant server to less than 300 milliseconds. This will provide a very,
very noticeable benefit for your users. Second, it allows us to
drive the reliability that a Smart Home command
reaches the device to well above 99.9%. And we primarily
achieve this by having redundancy in the system. So the next natural
question is, it sounds cool. But will this work
with my device? Well, I'm happy to say this
works with all the device types, including the 16 new ones
that we just launched at I/O. And, in particular, this is
not just for Wi-Fi devices. If you use a hub or a
gateway for your Smart Home integration, for example,
a BLE hub or a Zigbee hub, you can talk locally
to that hub as well.

In addition, we
support all the traits that you use today
with the one exception of two-factor authentication. Recall from the tenets
that our goal was not to require any firmware changes
on the part of the developer. And so we set out to support
the most popular discovery and control protocols as
shown on the slide behind me. And lastly, which devices can
host or run this JavaScript? We're really happy
to say here that it works with all our Google
Home speakers and Google Nest displays, including the
new Google Nest Hub Max. So we've talked a lot about
the experience for developers.

And this is I/O, of course. But what about users? What do users have to
do to gain this benefit? And the answer? Absolutely nothing. Once you integrate
the Local Home SDK, we go ahead and
establish the local path for all the users
that are already on your Smart Home project. So with that, I
want to turn it over to Manit to talk more about
the technical architecture. AUDIENCE: Woo! [APPLAUSE] MANIT LIMLAMAI: The Smart
Home ecosystem today relies on developers bringing
in devices that are discovered and controlled on the local
network in different ways.

In many cases, the application
layer or the business layer logic is openly documented
but nonetheless custom. Early on, we decided to make
the Local Home platform flexible enough such that our Smart
Home developers would be able to leverage the
platform without compromising the ability to bring out
their unique features. At a high level, let's
look into the elements to make this flexibility happen. We'll talk about
the Google device and what's happening there. We'll talk about the overall
Google infrastructure and how the Google
device fits into it. And then we'll go
into the specifics of discovery and control. To give a bit of
background, a few years ago, we built Chromecast.

Chromecast runs
on Chromium, which is the open-source project
behind Google Chrome. Chromecast, or more specifically
the Google Cast SDK, created a way for
developers to run their code on a Google-built device. This code doesn't run
natively but rather as an app in its own container. Since we're running on
something like Google Chrome, this container happens
to be a browser window. And the app is JavaScript. Leveraging this
browser technology, we're able to run multiple media
apps in their own sandboxes securely and simultaneously. Building on the
Cast foundation, we created the Local Home SDK
and the Local Home platform. The Local Home SDK
and the platform together are the interface
between developer Smart Home apps and
the low-level radios used to talk to smart devices. Local Home platform
has two important tasks to take care of. First, interfacing with
the Google Assistant, such that we can leverage
the Smart Home API as is, and second, to provide
controlled access to socket communications using TCP, UDP, or HTTP/HTTPS protocols.

I just spent a couple minutes
talking about the Google Home device and how you can soon
run Smart Home JavaScript apps on it. In order to communicate with
your devices on the user's local network, we also had
to build up some components in the larger Smart Home
ecosystem with the Google Assistant and with
some help from you. If you already have a
cloud-to-cloud Smart Home integration, you'll be
familiar with the Actions on Google Console
and Home Graph. There are some additions to
both of those systems in order for you, the developer,
to help your users benefit from this new local path. Let's look at the
high-level flow to discover and
control your devices in the user's local environment
in the next two slides. Before we talk about
the code that you'll write to run Google
Home, you'll need to add some data via
the Actions on Google Console and the
cloud-to-cloud integration. In the Actions on Google Console, you'll tell the Google Home
how to find your devices in the user's local network.

We implemented the common
discovery protocols, like mDNS, UDP broadcast, and UPnP. Next, you'll update
the SYNC response to include a hint to the
Local Home platform that will help with identifying whether a locally-discovered device is the same device that appears in
the cloud-to-cloud integration SYNC device list. Once the discovery
information is added via the Actions on Google Console and you've added this
hint to SYNC response, Google Assistant will
send this information to all Google Home devices
that a user is linked to. If the Local Home
platform can match a locally-discovered
device to a device in the list from your cloud,
we've established a local path. Great. So now we have a local path. And the user says, hey,
Google, turn on the lights. The request from
Google Assistant is dynamically routed to
a Google Home device that has claimed that local route. The JavaScript app
running on the Google Home can now handle this
request and can communicate with the device using
application-layer protocols, like HTTPS or TCP
and UDP sockets. GAURAV NOLKHA: So
when we set out to build the Local
Home SDK, we wanted to make sure that developers
have the best experience.

So let's look at
the developer flow. From building your application, to debugging it, to certifying your application and even launching it, we've taken care of the complete spectrum. And in the next few sections of the slides, we'll look into each of these. So first, let's start with developing your TypeScript application that will help us discover and control
the devices locally. To help discover and
control devices locally, you, as a developer, need
to do three key things. First, like Manit mentioned
earlier, the scan config in the Actions on Google Console. Second, a little bit of help from your cloud-to-cloud integration, where you
update your SYNC response to give us a hint
that these devices may be locally controllable.

And third, your TypeScript app. This is the app that will
run on Google Home devices. Now a quick note about
the app itself, we've been talking about
JavaScript as the app that runs on the devices. But we highly recommend
developing your app in TypeScript. It's just a better
developer flow. So what does this
app do locally? It needs to handle
two key events. First, when Google
Home devices discover a device in the
local network that belongs to you as a provider,
we fire an IDENTIFY intent. And that needs to be
handled by your app. The second one is
REACHABLE_DEVICES, which is a special case
of the first one for when we have discovered a hub or a bridge device. And we'll go into the details in the coming sections. And finally, when the user
wants to control the device, the platform fires
EXECUTE intent.

And for those who are familiar
with the Smart Home API, it's the same EXECUTE intent
that your JavaScript receives, like your cloud
endpoint receives. MANIT LIMLAMAI: Let's go
into the details of what the discovery flow looks like. We talked a lot about this
Actions on Google Console. So what does it look like? In a few days, you'll be able
to see this new UI, which allows you to update how we
find your devices in the user's local network. In this particular
example, we've added the ability to upload
the UDP broadcast packet along with the input and output ports
required for UDP broadcast.
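
Purely as an illustration, the kind of information that goes into such a UDP scan config might look like this; all values here are hypothetical, and you enter them through the console UI rather than in code:

```typescript
// Illustrative only: the pieces of a UDP scan config, with made-up values.
const udpScanConfig = {
  broadcastAddress: '255.255.255.255',  // address the discovery packet is broadcast to
  broadcastPort: 8888,                  // output port the packet is sent on
  listenPort: 8889,                     // input port the Google Home listens on for replies
  discoveryPacket: 'A5A5A5A5',          // hex payload of the UDP broadcast packet
};
```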

Next, we'll take a
look at exactly what you need to do to update
the SYNC response. We've added a new field
called otherDeviceIds, which hints to the Google Home
to start looking for a device. And we'll use the
information in this field to help deduplicate a
device that we find locally to a device that you told us
about via the cloud-to-cloud integration. Here's a sample of what that looks like. You'll notice that this otherDeviceIds field appears at the device level. So you, as the developer, can choose which devices you want to be locally controlled or not.
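
As a rough sketch of what such an entry might look like (building on the earlier SYNC example, with hypothetical IDs), the addition is just one extra field per device:

```typescript
// Sketch of a SYNC device entry with the otherDeviceIds hint added (hypothetical IDs).
const syncDevice = {
  id: 'bulb-1',
  type: 'action.devices.types.LIGHT',
  traits: ['action.devices.traits.OnOff', 'action.devices.traits.Brightness'],
  name: {name: 'Downstairs lamp'},
  willReportState: true,
  // The hint: local device IDs that the Local Home platform should try to match
  // against devices it discovers on the local network.
  otherDeviceIds: [{deviceId: 'local-bulb-1'}],
};
```

GAURAV NOLKHA: So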
now, we're going to jump into the TypeScript app. And this is the app that
has the business logic that can control your devices. It is simple, but it's
a critical component in this whole process. And just a quick reminder, we
need to handle two intents. Let's quickly put these
intents in perspective. So for those who are familiar
with the Smart Home API, we have SYNC, QUERY, and EXECUTE
intent that Google servers send to your cloud services.

For local, we are adding
two new intents, IDENTIFY, which is fired by the platform
when we have scanned a device that belongs to you. And second is the
REACHABLE_DEVICES intent, which is optional
but required if we have scanned a bridge or a hub. Let's get started with
the TypeScript app. But before we do that, let's
look at the SDK interface. And when this launches
in June, you'll be able to download sample and boilerplate code from GitHub. So the interface exposed by the
SDK is pretty straightforward. It has two main classes,
first, the DeviceManager class. The DeviceManager class
provides methods to communicate with your devices. And like Manit
mentioned earlier, it could be TCP,
UDP, or HTTP/HTTPS. The second is the app class. This provides the methods to
attach the intent handlers. So let's look at the typings
for the DeviceManager class. Here's the send method, which takes in a CommandRequest-type object as input and returns a promise, which is resolved when the command is completed.
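
Roughly, that typing looks like the following sketch, where the placeholder types stand in for the SDK's own request and result types:

```typescript
// Rough sketch of the DeviceManager interface described above. CommandRequest and
// CommandSuccess are placeholders for the SDK's own types (an HTTP, TCP, or UDP
// request description and the corresponding result).
type CommandRequest = object;
type CommandSuccess = object;

interface DeviceManager {
  // Sends one command to a device on the local network; the returned promise
  // resolves when the command has completed.
  send(command: CommandRequest): Promise<CommandSuccess>;
}
```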

And like I said earlier, it
could be a HTTP, TCP, or UDP. Let's look at the typing
for the app class. So for those who are familiar
with the actions of Google node library that you use for
the cloud side integration, you will realize that there
we have onExecute, onSync, and onQuery as the
handlers you can attach to. Here we have onIdentify,
onReachableDevices, and onExecute
methods, which you can call to attach the
handlers for your app. After your app is
attached to the handlers, you call the listen API. And that's an
indicator to the SDK that the app is now ready
to process these intents. And notice that these methods are chainable. Finally, when you are ready to
communicate with your device, you'll call the
getDeviceManager API to get the singleton DeviceManager object and use the send API. Let's put this interface
into perspective by looking at the
skeleton of a sample app.

So in the sample app, I have identifyHandler and executeHandler. And in the constructor for this class, I create an instance of the Local Home App, get the DeviceManager object, attach the two handlers, and call the listen API.
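
Here is a sketch of that skeleton, assuming the SDK's typings are available under the global smarthome namespace; the exact type names may differ slightly in the shipped SDK:

```typescript
// Skeleton of the sample app described above. It assumes the Local Home SDK is loaded
// and exposes the global `smarthome` namespace; the handler bodies are filled in later.
class LocalExecutionApp {
  private readonly app: smarthome.App;
  private readonly deviceManager: smarthome.DeviceManager;

  constructor() {
    this.app = new smarthome.App('1.0.0');               // instantiate the Local Home app
    this.deviceManager = this.app.getDeviceManager();    // get the DeviceManager singleton
    this.app
        .onIdentify(this.identifyHandler.bind(this))     // attach the IDENTIFY handler
        .onExecute(this.executeHandler.bind(this))       // attach the EXECUTE handler
        .listen()                                        // tell the SDK we're ready for intents
        .then(() => console.log('Ready to handle intents'));
  }

  identifyHandler(request: smarthome.IntentFlow.IdentifyRequest):
      Promise<smarthome.IntentFlow.IdentifyResponse> {
    throw new Error('not implemented yet');  // see the IDENTIFY sketch below
  }

  executeHandler(request: smarthome.IntentFlow.ExecuteRequest):
      Promise<smarthome.IntentFlow.ExecuteResponse> {
    throw new Error('not implemented yet');  // see the EXECUTE sketch below
  }
}

const localApp = new LocalExecutionApp();
```

Now let's start looking at the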
events that happen at runtime and what your app does
to handle those events. MANIT LIMLAMAI: Gaurav,
I'm a visual learner. So let's take a look
at this in pictures. You've already updated the scan config via the Actions on Google Console. You've updated
the sync response. And now the information has
been sent down to a Google Home. The Google Home starts
a state machine where we look for local devices. This process repeats. So whenever a user
plugs in a new device, we'll find that too. When the smart device
responds to one of our scans, the Local Home
platform generates an intent called IDENTIFY, like
Gaurav has been mentioning. And we then call your app's IDENTIFY handler. GAURAV NOLKHA: So let's look at the IDENTIFY handler. So here's the signature of the IDENTIFY handler.

The input is the object
of type IdentifyRequest. And we expect the response
to be a promise that resolves to IdentifyResponse. The key information in
IdentifyRequest object is the scanData. And this depends
upon the scan that we use to scan for your device,
as it could be UDP, MDNS, UPNP. IdentifyResponse. The key information we look from
IdentifyResponse for the device that we just found is
the verificationId. And this must match one
of the otherDeviceIds that we got from
your SYNC response. And if we find a match, we would
have established a local path. Now there are two
other flags that are also important,
isProxy and isLocalOnly. And they are set to
false for this device if this device was an end device
that we wanted to control.
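
Putting that together, an IDENTIFY handler for an end device might look roughly like this; it assumes a UDP scan, and the way the verificationId is derived is hypothetical and depends entirely on your device's protocol:

```typescript
// Sketch of an IDENTIFY handler, assuming a UDP scan config. How you derive the
// verificationId from the scan data is up to your device protocol; a fixed
// placeholder is used here.
function identifyHandler(request: smarthome.IntentFlow.IdentifyRequest):
    Promise<smarthome.IntentFlow.IdentifyResponse> {
  const device = request.inputs[0].payload.device;
  // For other scan types this would be mdnsScanData or upnpScanData instead.
  console.log('UDP scan data:', device.udpScanData);

  return Promise.resolve({
    requestId: request.requestId,
    intent: smarthome.Intents.IDENTIFY,
    payload: {
      device: {
        id: device.id || '',
        // Must match one of the otherDeviceIds from your SYNC response; in a real
        // app you would parse this out of the scan data.
        verificationId: 'local-bulb-1',
        // isProxy and isLocalOnly are left unset (false) for an end device.
      },
    },
  });
}
```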

But what if we find a hub? MANIT LIMLAMAI: Great
question, Gaurav. Similarly, when we
find a hub, we'll trigger an IDENTIFY intent
and the fields isProxy and isLocalOnly
will be set to true. And that will tell the Local
Home platform to then trigger a REACHABLE_DEVICES intent. GAURAV NOLKHA: As the
name kind of suggests, REACHABLE_DEVICES
intent is supposed to return all the devices that
are reachable from this hub. The signature for the
handler looks very similar to the IdentifyRequest. The key information to look for in the ReachableDevicesRequest object is the proxyDevice. This is your hub that you told us about in the response to IDENTIFY. Now, in the response object, we expect an array of devices. And again, the key information for each one of those devices is the verificationId. And that has to match one of the otherDeviceIds from your SYNC response.
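
A sketch of that handler, with hypothetical device IDs, could look like this; in a real app you would query the hub (the proxy device from the request) for the devices it can currently reach rather than returning a fixed list:

```typescript
// Sketch of a REACHABLE_DEVICES handler for a hub (hypothetical IDs).
function reachableDevicesHandler(request: smarthome.IntentFlow.ReachableDevicesRequest):
    Promise<smarthome.IntentFlow.ReachableDevicesResponse> {
  return Promise.resolve({
    requestId: request.requestId,
    intent: smarthome.Intents.REACHABLE_DEVICES,
    payload: {
      devices: [
        {verificationId: 'local-bulb-1'},  // each ID must match an otherDeviceIds
        {verificationId: 'local-bulb-2'},  // entry from your SYNC response
      ],
    },
  });
}
```

MANIT LIMLAMAI: Like you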
know, I'm a visual learner. So let's take a look at
this lovely animation. We start establishing the
local path with information from you: the scan config via the Actions on Google Console and the updated SYNC response
from your cloud-to-cloud integration.

Once the Google Assistant
receives this information, it will send it down
to all Google Home devices the user is linked to. Once the Google Home
receives this information, it begins looking for
devices on the local network. When a smart device
responds, the Google Home generates an IDENTIFY intent
to the appropriate JavaScript. The JavaScript responds
with a verificationId. And the Local Home
platform does some magic to determine if there
is a local path. If there is, Google Home will
update the Google Assistant with this optimized route. Let's dive into the details
of what that magic is. If you take a look
at the first step, we've updated SYNC response. And you've told us via
the otherDeviceIds field that we should start looking
for a device locally.

Once we find a
device locally, we'll ask you to give us
a verificationId via the JavaScript. If we find a match between
the verificationId and any one of the otherDeviceIds
field, we'll call that a deduplicated match. And we'll tell the Assistant
that that device can go local. If not, that's OK. We'll still go to your cloud-to-cloud integration.

GAURAV NOLKHA: Great. So now that local path is
available thanks to the SYNC response, IDENTIFY,
and REACHABLE_DEVICES, let's make sure when the user says a command, it goes local. So the user says, hey, G, lights on. The Assistant then sends a message
to Google Home, in this case, because we have
established a local path. At that point, the platform generates an EXECUTE intent. And the EXECUTE intent handler in your JavaScript app gets called. The key information to look for
in the ExecuteRequest object is the list of devices that the user wanted to control and the command and parameters that the user really wanted. It could be on/off, brightness, whatever. So your app is going
to create a command for each one of those devices or
a series of commands, actually, and then use the
deviceManager send API to communicate with the device. And for your help, we have
a utility builder function available that helps you
create the ExecuteResponse. And you can specify the
success or the failure state for each one of those devices. So one thing to note here
is that your app does not have direct access to the IP address of the device.

And we expect your app
to use the CommandRequest object to communicate
with the platform and eventually to your device. And so we are
showing here again, you could use a TCP or UDP socket or an HTTP/HTTPS request.
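
Pulling those pieces together, an EXECUTE handler might look roughly like this sketch; it assumes a hypothetical device that accepts an HTTP POST of the command parameters, and the class names follow the interfaces described above but may differ slightly in the shipped SDK:

```typescript
// Sketch of an EXECUTE handler that sends an HTTP command per device. The device's
// port, path, and payload format are hypothetical; `app` stands for the smarthome.App
// instance created in the skeleton earlier (declared here so the sketch type-checks).
declare const app: smarthome.App;

function executeHandler(request: smarthome.IntentFlow.ExecuteRequest):
    Promise<smarthome.IntentFlow.ExecuteResponse> {
  const command = request.inputs[0].payload.commands[0];
  const execution = command.execution[0];  // e.g. action.devices.commands.OnOff
  const response = new smarthome.Execute.Response.Builder()
      .setRequestId(request.requestId);

  const results = command.devices.map((device) => {
    // Build one local command per device. Note that we never see the device's IP
    // address; the platform routes the request based on the deviceId.
    const httpCommand = new smarthome.DataFlow.HttpRequestData();
    httpCommand.requestId = request.requestId;
    httpCommand.deviceId = device.id;
    httpCommand.method = smarthome.Constants.HttpOperation.POST;
    httpCommand.port = 80;                               // hypothetical
    httpCommand.path = '/control';                       // hypothetical
    httpCommand.data = JSON.stringify(execution.params || {});
    httpCommand.dataType = 'application/json';

    return app.getDeviceManager()
        .send(httpCommand)
        .then(() => response.setSuccessState(device.id, execution.params))
        .catch((err: smarthome.IntentFlow.HandlerError) =>
            response.setErrorState(device.id, err.errorCode));
  });

  return Promise.all(results).then(() => response.build());
}
```

MANIT LIMLAMAI: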
So to recap, what you need to do as a developer
to develop your local Smart Home app that will run on a Google Home device are these steps. You'll tell the Google Home
how to find your device on a user's local network via
the Actions on Google Console.

You'll update the SYNC
response with a hint to establish this local path. And you'll write an
app that will handle IDENTIFY and EXECUTE
intents and, optionally, a REACHABLE_DEVICES intent. GAURAV NOLKHA: Great. So moving on now that the app
is written and it's TypeScript, so let's start right building
and running this app. So TypeScript,
simple, you're going to use a TypeScript compiler
to generate the JavaScript app. And the good thing is
you can use whichever module system you want. And as long as the
target you choose is supported by Chrome
browser, you're good to go.
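
A minimal tsconfig.json along these lines would do; the values here are just one reasonable choice, since any module system and any Chrome-supported target works:

```json
{
  "compilerOptions": {
    "target": "es2017",
    "module": "commonjs",
    "outDir": "dist",
    "strict": true
  },
  "include": ["*.ts"]
}
```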

Because remember, this is
an app that conceptually is running in a browser tab. So far, we've talked about a JavaScript app running in the browser. But technically, it's an HTML page. So look at the sample HTML. It really doesn't do much; it only loads the SDK and your app. And during development,
you can actually host this HTML page
on your local machine or on a hosting server. And once you have that
URL, go back to the Actions on Google Console. And on the device testing page, there's an input box for you
to enter this URL. Once you save this URL, allow 30 minutes for our servers to propagate this information.

And at that point, if you
reboot your Google Home devices, then you can imagine
a tab coming up. And it's loading
your JavaScript app. And if all of that
works, moment of truth. Hey, G, lights on. Did that work? And did it go local? Well, that question brings
us to our next section. MANIT LIMLAMAI: Testing
and debugging your app can be a little complex
because the app is running on the Google Home device. But we've leveraged
a few familiar tools to make it easier. Open chrome://inspect on a
new Chrome tab on a machine that's on the same Wi-Fi network
as the Google Home device. Make sure your network doesn't
block packets between devices on the same Wi-Fi. And you should see your
app listed, like this image on the screen behind me. Click the inspect link
underneath your JavaScript, and you can open up DevTools
to remote debug your app. But what if your app
isn't in this list? Because your Actions on
Google Console project is not yet in
production, a few things need to be right before
your code kicks into action.

Let's go through that checklist. Make sure the linked user
on the Google Home device has access to your
Actions on Google project. Second, make sure
your SYNC response is updated and contains at least
one otherDeviceIds field filled in. Finally, the scan
config and your app URL should be correctly entered in
the Actions on Google Console. Now let's assume that all those
worked and your app is loading. Let's make sure it
loads without errors. To ensure that, you can
look at the console section of the DevTools page. It will look something like
this if there is a problem. For the IDENTIFY handler, make sure that the verificationId is correct and it matches one of the otherDeviceIds fields so that we can do the
magic to go local. Next, for the
EXECUTE handler, make sure that the
commands are working, either TCP, UDP, or HTTP/HTTPS.

And finally, make sure that
you are returning a promise from each of your handlers. GAURAV NOLKHA: So now, to ensure
the great user experience, it's important that the Smart
Home integration that you just did is complete and all
the golden queries work. So how do you do that? Smart Home Test
Suite is your friend when it comes to testing
your integration. And we're going to talk
more about Smart Home Test Suite in detail in tomorrow's
talk at 9:30 AM on stage 5. So join us. Finally, let's quickly look at the remainder of the app lifecycle. So your app works. Smart Home Test Suite
says it's working. All the tests pass. And at that point, it's time
to upload your JavaScript. So go back to the Console,
upload your JavaScript, hit Save. After you feel ready, you
hit the Submit button. And that starts the
certification process on Google's end to certify
this new JavaScript and the integration. Once certified, your project
launches to all the users.

Once it's launched, you can manage your integration. And again, the Actions on Google Console is your window to doing that. You can monitor
the ops dashboard. You can look at the
Stackdriver logging for all the error logs
that are happening in production on Google's end. And if you find issues or
errors with your JavaScript, go back to the Console and, again, upload version 2 of your JavaScript. And hit Submit. If you see that in production
there is a JavaScript bug and you need to roll back
to a working version, like V1 of your
JavaScript, you can, again, work with us through the
Console to kind of help you roll back your JavaScript. So we've covered
a lot of details about the complete
developer flow, from writing your app
to launching your app. If you want to learn
more about the tools available and for faster
approval and submission process, join us at the Tools
For Creating Better Smart Home talk on Thursday
morning at 9:30 AM.

And now to know what's
next, I'll invite Carl. CARL VOGEL: Thanks, Gaurav. So we have a busy couple
months ahead of us. The link behind me,
g.co/localhomesdk, is now live. So you can go ahead, visit
that link to learn more. And we'll also be
posting updates throughout the next couple
of months to that page. In just a few weeks,
we'll launch the SDK into developer preview. And at this time, you'll
be able to build and test your JavaScript app in
the local environment and complete the
self-certification program that Gaurav was talking about. And although we don't think
it will take you a long time to complete this
integration, we wanted to make sure we gave
you plenty of time. And so we'll begin launching projects to production in October and bringing this amazing speed to users. We'd be remiss if we
didn't give a big thank you to some of the partners
on the slide behind me for providing engineering
time and energy to test out the platform and SDK to make
sure we deliver a rock star product to you in June.

So while we've talked about going local this whole talk, local execution is just
the tip of the iceberg. We have much, much more in mind. I want to briefly talk
about two technologies we're building that leverage
local communication to improve the device setup experience and
to extend the Assistant even further. One of the things
we've heard from users is that setup and account linking of smart devices is hard. In fact, it can take
upwards of 10-plus steps for users, including
downloading a new app, creating a new user
name and password, setting up the
device, taking an OTA, going back to the Google Home
app or Google Assistant app to link, reentering
those credentials. It's not easy for users. And one of the other things we heard is that users have a lot of apps on their phones to manage their smart home.

And for all you smart home
enthusiasts out there, you'll recognize this
phone on this screen behind me, that you need a
folder to actually manage your smart home. So we took our first step
towards solving this device setup problem with GE lighting
and developing a seamless setup experience that we delivered
first in the Google Smart Light Starter Kit. We gave users the ability to
natively setup C by GE Smart Lights in the Google Home app
without needing to download any additional apps. And instead of this
10-plus step process, we reduced it to three
steps and about 30 seconds. Let's take a look and
see what it looks like. So when a Google Home device
discovers a C by GE light, we prompt the user,
would you like to set up your smart light? Then we go ahead and
connect to the bulb and discover services,
at which point then the bulb will
begin to blink. And this will let the user know
which bulb they're setting up.

They click Setup and
choose which room they want it to go in
and give it a name. At this point, we're
provisioning the bulb to the local network and
registering it with Home Graph. And in just about
one second, you'll see that the smart light is
now set up and at which point it can go ahead and start
taking Google Home commands. And so 30 seconds is really
incredible for our users. And we've heard really
great feedback so far. And we accomplished this by allowing GE to run their code
on Google Home devices. And yes, as you
may have guessed, they also used the
Local Home SDK. However, to do
seamless setup, there are more intents to handle than just the IDENTIFY and EXECUTE we
talked about today, including INDICATE, PROVISION,
UNPROVISION, et cetera. And those are all
part of the SDK. And also, this SDK can be
used for more than just the Wi-Fi radio. As part of early access,
we allow this SDK to also leverage the Bluetooth
radios for direct connection to BLE devices.

And through this seamless setup
experience with BLE devices, we use the Google Home as a hub. So you don't need to go out
and buy an additional BLE hub or gateway. We're growing the
seamless setup program now and focusing on BLE
devices in the near term. So if you're
interested, let us know by visiting the link on
the screen behind me. Next, I want to talk about
a Assistant Connect, which is something that you may
have heard about at CES and we've been continuing
to invest in since.

It leverages the same
Local Home platform that Manit talked
about to extend the reach of the
Google Assistant, which we call Assistant extensions. And we've classified
these into two categories. The first is input
extensions, which enable a simple
method for a user to activate the Assistant to do
everything from simple queries, such as, what's the weather, to
triggering advanced smart home routines. Here, we have a simple,
programmable button so that, instead of always requiring users to say, hey, G, they can go ahead and
just push the button. It's really great for some
of those frequent queries.

In addition, we also
have output extensions that enable devices to
show Assistant responses, such as, what's the
weather, or their schedule from their Google Calendar. So this is currently in
early access right now. And throughout 2019, we have a
really busy year ahead of us. We have some product launches
coming up later this year. So stay tuned. And our teams are finalizing
the reference design and preparing the Assistant
Connect SDK for public release later this year. And by 2020, we
expect developers to have self-service
access and the ability to even more easily and
deeply integrate the Assistant into their products. So to recap, we believe that
driving logic from cloud to on device is
central to our strategy to create even better
experiences for users. And we believe that
by going local, we can also invite
developers to integrate more deeply with Google. Secondly, the
developer experience is key to building a great
smart home ecosystem.

Our ecosystem is only as strong
as our developer community, you all. We've taken many steps
to make onboarding as simple as
possible, for example, by not requiring firmware
updates to garner the benefits of local execution. And we always welcome feedback
on how we can further improve. So definitely let us know. And lastly, a big
focus for us in 2019 is reducing friction and
making the device setup and linking more seamless. I encourage you to explore
our programs and learn more. So with that, I realize that at a 5:30 PM talk, you're forgoing happy hour. So we thank you for coming here
today and listening to learn more about local technologies. If you have any
additional questions, check out the links in
the slide behind me, visit us in the sandbox, or
swing by our office hours.

And with that, enjoy the
rest of your I/O. Thank you. [MUSIC PLAYING].
