Hacking Amazon Alexa at AWS re:Invent 2017

People attend AWS re:Invent for a number of reasons: the sessions, the keynote speakers, finding the best partners and software for their cloud needs, and of course the infamous re:Play party. But re:Invent offers more than just speaking and listening events; some of the activities are a little more hands-on than the rest.

To put this into context, we are talking about AWS re:Invent 2017, which took place from November 27 to December 1 in Las Vegas, Nevada.

When signing up for sessions at this event, it's hard to choose which ones to attend. This year was no exception; in fact, it was especially tough for a few reasons:

  • The sessions seemed to fill up in a matter of hours.

  • The whole conference was spread out along the entire Las Vegas strip, which at times made it difficult to get to a certain session on time.

An event like a hackathon can take up a whole day, so participating in one can feel like losing valuable session time. Instead of looking at this as a drawback, I took another approach. Relus Cloud brought just a handful of technical staff to re:Invent, so I knew I would have minimal time for sessions anyway. I figured a hackathon's hands-on work could be one of the best ways to come out of the conference with a few more skills, and it had the potential to bring great attention to Relus Cloud by showcasing what we can do for our customers.

The premise behind a hackathon is that you have to complete working code and submit it in three hours and forty-five minutes. Three hours and forty-five minutes. That's it! And it's not just about the code being complete; you also have to write a description of your project and put together a presentation (many of the teams even produced demo videos in that time). In the Alexa Smart Cities Skills Hackathon, unlike some of the others, there were no set tasks to complete; you had to develop a concept, architecture, and design within that three-hour-and-forty-five-minute limit.

[Image: SpotMe]

The Alexa skill my team developed is called SpotMe, and it has two separate parts: one for consumers and one for public-sector officials. The consumer side looks at pictures from a parking lot, for example feeds from the Apple Store parking lot, and tells you, through an Alexa-enabled car or the Alexa app on your smartphone, whether a parking spot is open near you while you are on your way to go shopping. The whole premise was to help keep up the holiday cheer and, in turn, keep you from becoming a Grinch while navigating the always-packed parking lots this time of year; in short, the feature eases parking pain. Using mocked-up pictures, which in production could come from a CCTV parking lot feed, we calculated confidence scores for the gathered images and returned any free spaces back to the end user.
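To give a sense of how the image analysis could work, here is a minimal sketch in Python with boto3. It is illustrative only, not our actual hackathon code: the find_free_spots name, the one-cropped-image-per-spot structure, and the 80% confidence threshold are all my own assumptions.

    import boto3

    # Illustrative sketch, not the team's actual hackathon code.
    # Assumes spot_images maps a spot ID to JPEG bytes cropped to that spot.
    rekognition = boto3.client("rekognition", region_name="us-east-1")

    def find_free_spots(spot_images, min_confidence=80.0):
        """Return the IDs of spots where Rekognition detects no vehicle."""
        free = []
        for spot_id, image_bytes in spot_images.items():
            response = rekognition.detect_labels(
                Image={"Bytes": image_bytes},
                MinConfidence=min_confidence,
            )
            labels = {label["Name"] for label in response["Labels"]}
            # If no vehicle-related label clears the confidence
            # threshold, treat the spot as open.
            if not labels & {"Car", "Automobile", "Vehicle"}:
                free.append(spot_id)
        return free

The skill's backend would then turn that list of open spots into Alexa's spoken response.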

The second part, aimed at those public-sector officials, used the text detection feature of the Amazon Rekognition service. It would let you search for license plates in the lot by asking Echo whether it had seen a given plate.
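Again as a hedged sketch rather than our actual code, the lookup could be as simple as normalizing the requested plate and comparing it against Rekognition's text detections; the plate_seen helper and the normalization step are my own illustration.

    import boto3

    # Illustrative sketch of the license plate lookup, not the team's code.
    rekognition = boto3.client("rekognition", region_name="us-east-1")

    def plate_seen(lot_images, plate, min_confidence=80.0):
        """Check whether a plate number appears in any lot image."""
        target = plate.replace(" ", "").upper()
        for image_bytes in lot_images:
            response = rekognition.detect_text(Image={"Bytes": image_bytes})
            for detection in response["TextDetections"]:
                text = detection["DetectedText"].replace(" ", "").upper()
                # Match normalized text above the confidence threshold.
                if detection["Confidence"] >= min_confidence and text == target:
                    return True
        return False

A request like "Alexa, ask SpotMe if it has seen plate ABC 123" would map to a call like this inside the skill's backend (the invocation phrasing here is illustrative too).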

During the presentations, I noticed we were the only team that walked through the underlying technologies: how we analyzed the different pictures and derived confidence scores through the Amazon Rekognition service. Those confidence scores were the entire basis for how the Alexa skill determined whether a parking spot was open.

In addition to showing the confidence scores for the pictures we ran through Rekognition, we were also the only team to present a full architecture of our skill.

[Image: SpotMe parking architecture diagram]

There were several great projects presented at the Alexa Smart Cities Hackathon. Some were a bit more aspirational than others, but a few could be put to use today with a little tweaking.

Some of the ones I remember most from the presentations:

  • A skill that would embed Alexa inside street lamps to help the homeless population find assistance or the nearest shelter.

  • A consumer skill that let you ask for a doctor and then texted your phone the Google Maps coordinates of the doctor's office.

  • A skill that controlled stage lighting by telling Alexa which lights to turn on and off and which colors to set them to, making a lighting control board unnecessary. This one, which I found particularly impressive, included a full working demo with a virtualized 3D rendering.

  • A demo that leaned less on Alexa skills and more on camera-based facial recognition: it determined whether you were happy or sad and then recommended nearby movies to match your mood.

Everything people created was quite impressive for the Alexa service, especially given the short time frame. The funny part was that we had no idea Amazon CTO Werner Vogels' keynote was going to talk about how this was the future. AWS CEO Andy Jassy also recognized the potential future use of some of these skills when he presented the awards to the hackathon winners. My team was fortunate enough to win second place in the Alexa Smart Cities Hackathon, a big win for Relus Cloud. Taking part in this hands-on event gave me the opportunity to show the AWS world what Relus Cloud can do for companies of all sizes, and it proved that we are a force to be reckoned with on the cloud front. So, when planning your next re:Invent schedule, keep in mind that it might be worth missing a couple of talk-track sessions for a hackathon in order to showcase your skills to the world.

[Image: Andy Jassy, AWS CEO]