To Jeff Bezos: Please Consider Chattanooga for HQ2!

Chattanooga, TN, future home of Amazon HQ2?

Dear Jeff,

Super excited that you are going to open the second Amazon headquarters!

I know this is a new project for you guys, so I’ll keep it brief and follow Amazon’s new-project protocol. Below is the press release for when you announce Chattanooga, TN as the location for Amazon HQ2:

Amazon Opens Second Headquarters in Chattanooga, TN: The City With the Country’s Fastest Internet, Rated Best Outdoor Town

New headquarters will bring 50,000 employees, attracting talent from across the East Coast, Southeast and Midwest.

Amazon unveils new $5 billion headquarters design plans for Amazon HQ2 in the South’s “Scenic City” that are equal parts modern and beautiful

SEATTLE — (BUSINESS WIRE) — Sep. 7, 2018 — (NASDAQ: AMZN) — Amazon today announced plans to open Amazon HQ2, a second company headquarters, in Chattanooga, Tennessee, after a yearlong search. Amazon expects to invest over $5 billion in construction and grow this second headquarters to include as many as 50,000 high-paying jobs. In addition to Amazon’s direct hiring and investment, construction and ongoing operation of Amazon HQ2 are expected to create tens of thousands of additional jobs and tens of billions of dollars in additional investment in Chattanooga and the surrounding metro areas.

Amazon decided to open the second headquarters in Chattanooga based on several unique attributes of the city: the combination of technology infrastructure, quality of life and proximity to talent across the East Coast, Southeast and Midwest. While Chattanooga’s metro population of approximately 550,000 is under the initial 1 million metro-area population requirement, the city is within a day’s drive of 50% of the United States’ population, as well as major technical universities such as Duke, Georgia Tech and Carnegie Mellon.

Chattanooga Facts:

“After visiting many wonderful cities across America that would have made a great home for Amazon HQ2, our team fell in love with the combination of outdoor beauty, forward-thinking technology investments and the kindness of Chattanooga’s people,” said Jeff Bezos, Amazon founder and CEO. “We think Chattanooga is the right choice financially for our employees as well as our business. We look forward to HQ2 being close to so many of our customers and partners. Honestly, the closer for me was all the great rock climbing and microbrews!” (note: I totally wish you would say that last line.)

Amazon is adding to its existing presence in Chattanooga, a distribution center it opened in 2012. The expansion of Amazon’s headquarters to the Southeast will dramatically impact the economy of the region and its people.

— — — — — — End of Release — — — — — — —

Ok, that’s my pitch of a press release for HQ2 coming to Chattanooga. The really smart leaders of our community will reach out through official channels, I’m sure. I really hope you’ll give us a chance, even though we’re a wee bit under the metro population requirement. Sometimes big ideas look small at first.

If you do visit, please come by my office so we can show you the AIs we’re building for Alexa! My company, Pylon ai, built the Bartender and Tasted skills for Alexa with a team based here in Chattanooga. Turns out, we have a lot of great startups here, many focused on transportation, and one that, according to NASA, might change the world. Happy to show you around and meet everyone.

Thanks for the consideration!

Announcing Tasted!

Regan Burns Cafiso, our Head of Content Strategy, doing a demo of Tasted

I wanted to give a bit of an update on a product I’ve been working on for the last year. Today, our company Pylon ai is launching a new conversational media brand called Tasted. Tasted wants to be your cooking companion, helping you discover great recipes and walking you step by step through preparation. You can try it today by asking your Alexa to “Enable Tasted,” or by saying “Open Tasted” to your Google Home.

Tasted is a multimodal (that means you can use either your voice or your hands to interact with it), cross-platform (that means it will know who you are whether you use it on Alexa, Google Home, Cortana, Slack, FB Messenger, etc.) cooking companion service.

Pylon ai (just “Pylon” for short) was started last year with my friend, mentor and co-founder Shelby Bonnie. We started Pylon because we are very excited about conversational AI products and want to be part of the new wave of companies defining this new type of media. Having lived through major platform shifts like the web and mobile, we are thrilled to be in this early wave of conversational media companies. Conversational media happens when you can ask personalized questions of a media brand and it responds and remembers what you ask. We fundamentally believe conversational media will, over time, become the primary way consumers receive information, acquire goods and services, and accomplish tasks.

Pylon has built a publishing platform that enables human experts (a.k.a. editors) to scale their knowledge to their customers individually on voice-enabled platforms like Amazon’s Alexa, Google Home and Microsoft Cortana, as well as chat platforms like Facebook Messenger and Slack. Instead of broadcasting, our experts are able to program to a consumer at the individual level. The ability to have a conversation with a computer trained by a category expert may be the most useful communication tool ever conceived.

Think about it. Why should everyone get the same recipe recommendations this week? Why should everyone get the same top 5 camera recommendations? In the conversational media future, you won’t. You’ll receive recommendations based on your requirements and your specialized needs. For example, my family has gluten, nut, shellfish and lactose allergies (yes, we’re that family!). Now add in a time and ingredient requirement like “pork for four people that will take 20 minutes” and you have a request even Google struggles to fulfill. Vertically focused, AI-powered brands like Tasted will be able to meet that request because they are domain specific and are able to personalize the request based on conversations with you — not just your clickstream data.
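To make the idea concrete, here is a purely illustrative sketch of that kind of constraint-based filtering. None of these recipe records, field names or functions come from Tasted’s actual system; they are made up for the example:

```python
# Illustrative only: a toy version of constraint-based recipe recommendation.
# The recipe records and field names here are invented for this sketch.
recipes = [
    {"name": "Pork stir-fry", "main": "pork", "serves": 4, "minutes": 20,
     "allergens": set()},
    {"name": "Shrimp pasta", "main": "shrimp", "serves": 4, "minutes": 25,
     "allergens": {"shellfish", "gluten"}},
    {"name": "Pork schnitzel", "main": "pork", "serves": 4, "minutes": 45,
     "allergens": {"gluten"}},
]

def recommend(recipes, allergies, main, serves, max_minutes):
    """Return every recipe that satisfies all of the hard constraints."""
    return [r for r in recipes
            if not (r["allergens"] & allergies)  # no family allergens
            and r["main"] == main                # requested ingredient
            and r["serves"] >= serves            # feeds everyone
            and r["minutes"] <= max_minutes]     # fits the time budget

picks = recommend(recipes,
                  allergies={"gluten", "nut", "shellfish", "lactose"},
                  main="pork", serves=4, max_minutes=20)
print([r["name"] for r in picks])  # -> ['Pork stir-fry']
```

The real work, of course, is in the editorial judgment and the conversation layered on top of a filter like this, but the request “pork for four people in 20 minutes, no gluten” really does reduce to constraints a domain-specific service can satisfy.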

We can make these recipe recommendations because we have a recipe editor, Regan Burns Cafiso, who has worked at the Food Network and Martha Stewart, curating and training our machine learning algorithms to think about recipe suggestions and categorizations. When we make a recommendation, it starts with the logic Regan put into our system. Our development team then works on tools that take Regan’s recommendations and make them scale, customized for each of our users.

Speaking of our development team, we are so fortunate to have hired a fantastic one! Shelby and I initially started working with friends from our CNET Networks days, including Regan as well as Cliff Lyon and Stephen Maggs, who had been working at places like StubHub and other startups after CNET. Soon after, we were able to hire a ridiculously talented team from OpenTable who happened to be based in my hometown of Chattanooga, TN. More on that team and working in Chattanooga in a separate post.

One of the things you might enjoy with Tasted is the ability to cook hands-free. Ever been in the kitchen trying to cook a recipe off your phone, or heaven forbid your laptop? It’s kludgy at best. Cooking recipes off the internet is one of the few times I still use my printer, because it’s easier to read a sheet of paper than a phone or laptop screen.

Not anymore. With Tasted, you can use your iPhone or Android device, or any web browser as a visual companion that moves based on what you tell your voice device. Telling Google Home to “Show me ingredients” or “next step” will now move along your recipe instructions on the screen and keep you and your greasy fingers off the screen. Try it. It feels like magic!
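Under the hood, the idea is simple: the voice device sends a command, and the companion screen renders whatever the shared session says is current. This toy sketch is my own illustration of that pattern, not Tasted’s actual code:

```python
# A toy sketch of voice-to-screen sync -- not Tasted's actual implementation.
# The voice device passes a spoken command in; the visual companion renders
# whatever view the session returns.
class RecipeSession:
    def __init__(self, ingredients, steps):
        self.ingredients = ingredients
        self.steps = steps
        self.index = 0  # which step the screen is currently showing

    def handle(self, command):
        """Map a spoken command to what the companion screen should show."""
        if command == "show me ingredients":
            return {"view": "ingredients", "items": self.ingredients}
        if command == "next step" and self.index < len(self.steps) - 1:
            self.index += 1  # advance, clamping at the final step
        return {"view": "step", "text": self.steps[self.index]}

session = RecipeSession(
    ingredients=["4 pork chops", "2 tbsp oil"],
    steps=["Heat the oil.", "Sear the chops.", "Rest and serve."],
)
print(session.handle("show me ingredients")["items"][0])  # -> 4 pork chops
print(session.handle("next step")["text"])                # -> Sear the chops.
```

Keeping the step index on the server side is what lets any screen, phone, tablet or browser, stay in sync with whatever you last said to the voice device.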

Special thanks to all of the friends and folks who have supported the development of Pylon and Tasted. Dick and Danny from Index for leading our round. Old friends like Kevin Bandy and Neil Ashe who participated in our angel round with a whole bunch of other folks. There’s no way we could have put our heads down for almost a year and invented a multi-platform, multimodal AI service without their belief, advice and support.

If you have an Alexa or Google Home device, please try it and tell me what you think!

Getting Alexa Voice Service working on Matrix Creator and Raspberry Pi 3

The key is following the instructions below if you want to get something up quickly with Alexa and the Creator. Do not follow the Matrix CLI/OS instructions and then try to follow the Alexa AVS sample below unless you really know what you are doing. The CLI/OS instruction videos are great, but they put you in a more advanced position, which can be frustrating.

Follow this process:

  1. Buy a Raspberry Pi 3, an 8 GB SD card and a Matrix Creator. I got the RPI3 and Creator at Newark.
  2. Download (I recommend torrenting) a copy of Raspbian Jessie. Skip NOOBS.
  3. If using a Mac, follow these instructions, or download SD Memory Card Formatter and Etcher, then flash the downloaded Raspbian Jessie .img to the SD card.
  4. Insert the microSD card in the RPI3. Now plug in your USB mouse, keyboard and HDMI monitor.
  5. Plug in the power cord to the RPI3.
  6. Set up SSH on the RPI3.
  7. Now for the most important part: follow these instructions in one session. It will take an hour, so make time: https://github.com/alexa/alexa-avs-sample-app/wiki/Raspberry-Pi#step-1-setting-up-your-pi
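For step 6, one trick worth knowing: recent Raspbian images (2016-11 and later) start the SSH server on first boot if an empty file named `ssh` exists on the boot partition, so you can enable SSH before the Pi ever boots. A sketch, where `BOOT` is an assumption you should point at your SD card’s mounted boot partition (the default below is just a demo directory):

```shell
# Raspbian (2016-11 and later images) enables the SSH server on first boot
# when an empty file named "ssh" is present on the boot partition.
# BOOT is an assumption: set it to the mounted boot partition
# (e.g. /Volumes/boot on a Mac). The default here is only a demo directory.
BOOT="${BOOT:-/tmp/rpi-boot-demo}"
mkdir -p "$BOOT"      # no-op when the real partition is already mounted
touch "$BOOT/ssh"     # the empty marker file is all Raspbian looks for
ls "$BOOT"
```

Eject the card, boot the Pi, and you should be able to `ssh pi@<its-ip-address>` without ever attaching a keyboard.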

You should be good to go. There are much cheaper and easier ways to get AVS running on an RPI3, but if you want to build something with voice and sensor features like I do, this seems like a good way to start.

Next, trying to figure out how to add other apps and build features for the Matrix Creator.

Why AI Might Fear Us

Illustration by Renan Cakirerk

I chuckle every time I see an AI article saying “We may be on the verge of creating a new life form, one that could mark not only an evolutionary breakthrough, but a potential threat to our survival as a species!” Not because it’s impossible that we would create technology that might kill us all (we’re good at that). I laugh because I may know why an artificial intelligence might want to eliminate our species.

We’re jerks. Let me explain.

Over the last year, as one of their advisors, I’ve watched a lot of early user interactions with AI bots thanks to Ben Brown and his team at Howdy. Howdy makes a workplace automation bot that you can train to do repetitive tasks like hold stand-up meetings or ask your team where to get lunch every day (if you train it to do that). Pretty great, right?

Yet I see a lot of first-time users’ interactions with a bot go like this:

It blows my mind how often our conversations with some AI (Slack bots, Siri, Alexa, et al.) devolve into a nasty tirade. We ask a computer to do a task it was not designed or trained to do, it tells us it does not know what we’re asking, and then we immediately go all Anna Wintour on the digital assistant. How did we get so jaded that we’re no longer blown away that you can talk to a freaking computer?

In the grand scheme of things, AI is a mere toddler in terms of technology development. Outside of the original Slackbot, most other Slack bots are only a few months old, having launched since December 2015. Siri launched in 2011, and Alexa came out a little over a year ago. Yet here we are, yelling derogatory questions at them:

And it’s not just that we are verbally abusive to AI. We also act like violent baboons when we interact with AI in environments like virtual reality. I’ve seen this firsthand while doing demos with Will Smith for his new VR company. After showing our demo, people will ask us to show other VR experiences we like. One of our favorites is the awesome “Gourmet Chef” by Owlchemy Labs. The Gourmet Chef experience is set in 2050, where robots have taken all of our jobs and, “for fun,” a bot teaches us how to cook. The game inside the VR experience is to listen to the bot and learn how to cook in VR.

But do you know what half the people do the minute the experience starts? They start breaking things and throwing food at the robot! So here are these investors, lawyers and tech friends (theoretically smart, well-educated people) who within a few minutes abandon the learning part of the game and start going apeshit on the robot… like baboons.

We saw this savage, destructive behavior in literally half of the people we ran through the demos. Will and I would say “oh, you’re one of those people” as someone in the demo went about destroying the virtual kitchen. I remember thinking, “huh, my friend Bob might be a potentially violent guy.” Don’t you think an AI will think the same as it looks back on all of its interactions? Would you blame an artificial intelligence for concluding that at least half of our species is angry, violent and potentially life-threatening, based on millions of these interactions over time? I mean, it would be the rational conclusion.

So, maybe we should dial it down a bit.

What if we act like these digital assistants will develop into really helpful things that might make the world a better place? I’d like to think we can find a little patience and spend some time teaching these AIs how things work, and how we should treat each other. That’s what happened in WarGames, which is the whole reason I think it is so cool that we’re actually getting to build this AI stuff now. You’ve got to think feeding it years of vitriolic diatribes and barbaric encounters can’t be the right history for the AIs to learn from. No wonder one of our older AIs is already getting sassy with the knuckleheads who keep provoking it.

So next time you want to yell at Siri or start firing off expletive-laden DMs at your Slack bot, maybe think twice and be nice.

the tiniest of ideas

I watched the Nick Cave biopic “20,000 Days on Earth” this weekend. The subplot that holds the documentary together is the creative process and what drives Cave to keep writing songs into his 50s. The final scene’s soliloquy struck me as the best advice you could give an entrepreneur or artist who is wavering on whether to pursue their idea:

All of our days are numbered
we cannot afford to be idle
To act on a bad idea is better than to not act at all
because the worth of the idea never becomes apparent until you do it

Sometimes this idea can be the smallest thing in the world
A little flame that you hunch over and cup with your hand
and pray will not be extinguished by all the storms that howl about it

If you can hold on to that flame
great things can be constructed around it
that are massive and powerful and world changing
All held up by the tiniest of ideas

I’ve always gathered business inspiration from artists. Most become accidental businesspeople in their pursuit of the creative muse, fame and fortune. They are always walking the tightrope between creative relevance and commercial success to stay in business. Before they make it big, many are ridiculed or shunned by friends and family for pursuing their “stupid little music” dreams… until they do.

They are not unlike the developers on Product Hunt with all their “dumb apps” built on the tiniest of ideas. Here’s to those who take chances on the tiniest of ideas.

The final scene from “20,000 Days on Earth”: