Announcing Tasted!

Regan Burns Cafiso, our Head of Content Strategy, doing a demo of Tasted

I wanted to give a bit of an update on a product I’ve been working on for the last year. Today, our company Pylon ai is launching a new conversational media brand called Tasted. Tasted wants to be your cooking companion, one that helps you discover great recipes and walks you step-by-step through preparation. You can try it today by asking your Alexa to “Enable Tasted,” or by saying “Open Tasted” to Google Home.

Tasted is a multimodal (meaning you can use either your voice or your hands to interact with it), cross-platform (meaning it will know who you are whether you use it on Alexa, Google Home, Cortana, Slack, FB Messenger, etc.) cooking companion service.

Pylon ai (just “Pylon” for short) was started last year with my friend, mentor and co-founder Shelby Bonnie. We started Pylon because we are very excited about conversational AI products and want to be part of the new wave of companies defining this new type of media. Having lived through major platform shifts like the web and mobile, we are thrilled to be in this early wave of conversational media companies. Conversational media happens when you can ask personalized questions of a media brand and it responds and remembers what you ask. We fundamentally believe conversational media will, over time, become the primary way consumers receive information, acquire goods and services, and accomplish tasks.

Pylon has built a publishing platform that enables human experts (a.k.a. editors) to scale their knowledge to their customers individually on voice-enabled platforms like Amazon’s Alexa, Google Home and Microsoft Cortana, as well as chat platforms like Facebook Messenger and Slack. Instead of broadcasting, our experts are able to program for consumers at the individual level. The ability to have a conversation with a computer that has been trained by a category expert may be the most useful communication tool ever conceived.

Think about it. Why should everyone get the same recipe recommendations this week? Why should everyone get the same top 5 camera recommendations? In the conversational media future, you won’t. You’ll receive recommendations based on your requirements and your specialized needs. For example, my family has gluten, nut, shellfish and lactose allergies (yes, we’re that family!). Now add in a time and ingredient requirement like “pork for four people that will take 20 minutes” and you have a request even Google struggles to fulfill. Vertically focused, AI-powered brands like Tasted will be able to meet that request because they are domain specific and are able to personalize the request based on conversations with you — not just your clickstream data.

We can make these recipe recommendations because we have a recipe editor, Regan Burns Cafiso, who has worked at the Food Network and Martha Stewart, curating and teaching our machine learning algorithms how to think about recipe suggestions and categorizations. When we make a recommendation, it starts with the logic Regan put into our system. Our development team then builds tools that take Regan’s recommendations and scale and customize them for our users.

Speaking of our development team, we are so fortunate to have hired a fantastic team! Shelby and I initially started working with old friends from our CNET Networks days, including Regan, as well as Cliff Lyon and Stephen Maggs, who had been working at places like StubHub and other startups after CNET. Soon after, we were able to hire a ridiculously talented team from OpenTable who happened to be based in my hometown of Chattanooga, TN. More on that team and working in Chattanooga in a separate post.

One of the things you might enjoy with Tasted is the ability to cook hands-free. Ever been in the kitchen trying to cook a recipe off your phone, or heaven forbid your laptop? It’s kludgy at best. Cooking from internet recipes is one of the few reasons I still use my printer, because it’s easier to read a sheet of paper than to use my phone or a laptop.

Not anymore. With Tasted, you can use your iPhone or Android device, or any web browser, as a visual companion that moves based on what you tell your voice device. Telling Google Home to “Show me ingredients” or “next step” will advance your recipe instructions on the screen and keep your greasy fingers off of it. Try it. It feels like magic!

Special thanks to all of the friends and folks who have supported the development of Pylon and Tasted. Dick and Danny from Index for leading our round. Old friends like Kevin Bandy and Neil Ashe who participated in our angel round with a whole bunch of other folks. There’s no way we could have put our heads down for almost a year and invented a multi-platform, multimodal AI service without their belief, advice and support.

If you have an Alexa or Google Home device, please try it and tell me what you think!

Getting Alexa Voice Service working on Matrix Creator and Raspberry Pi 3

The key to getting something up quickly with Alexa and the Creator is following the instructions below. Do not follow the Matrix CLI/OS instructions and then try to follow the Alexa AVS Sample below unless you really know what you are doing. The CLI/OS instruction videos are great, but they put you in a more advanced position that can be frustrating.

Follow this process:

  1. Buy a Raspberry Pi 3, an 8 GB SD card and a Matrix Creator. I got the RPI3 and Creator at Newark.
  2. Download (I recommend torrenting) a copy of Raspbian Jessie. Skip NOOBS.
  3. If using a Mac, follow these instructions, or download the SD Memory Card Formatter and Etcher, then flash the downloaded Raspbian Jessie .img to the SD card.
  4. Insert the micro-SD card in the RPI3. Now plug in your USB mouse, keyboard and HDMI monitor.
  5. Plug in the power cord to the RPI3.
  6. Set up SSH on the RPI3.
  7. Now for the most important part: follow these instructions in one session. It will take an hour, so make time:
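For steps 3 and 6, the flashing and SSH setup look roughly like this from a Mac terminal. This is a hedged sketch rather than official instructions: the disk identifier (disk2) and image filename are example values, so confirm your SD card’s identifier with `diskutil list` before writing anything.

```shell
# Rough sketch of steps 3 and 6 on a Mac. The disk number (disk2) and the
# image filename are examples; dd will happily overwrite the wrong disk,
# so double-check the identifier first.

diskutil list                    # find the SD card, e.g. /dev/disk2
diskutil unmountDisk /dev/disk2  # unmount it before writing

# Flash the image (this is what Etcher does behind its GUI):
sudo dd if=raspbian-jessie.img of=/dev/rdisk2 bs=1m

# On recent Raspbian images, an empty file named "ssh" in the boot
# partition enables the SSH server on first boot:
touch /Volumes/boot/ssh

# Then, from the Mac (default Raspbian user is "pi"):
ssh pi@raspberrypi.local
```

If `raspberrypi.local` doesn’t resolve on your network, look up the Pi’s IP address on your router and `ssh pi@<ip-address>` instead.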

You should be good to go. There are much cheaper and easier ways to get AVS running on an RPI3, but if you want to build something with voice and sensor features like I do, this seems like a good way to start.

Next up: figuring out how to add other apps and build features for the Matrix Creator.

Why AI Might Fear Us

Illustration by Renan Cakirerk

I chuckle every time I see an article on AI saying “We may be on the verge of creating a new life form, one that could mark not only an evolutionary breakthrough, but a potential threat to our survival as a species!” Not because it’s impossible that we would create technology that might kill us all (we’re good at that). I laugh because I may know why an artificial intelligence might want to eliminate our species.

We’re jerks. Let me explain.

Over the last year, I’ve watched a lot of early user interactions with AI/bots thanks to Ben Brown and his team at Howdy, as one of their advisors. Howdy makes a workplace automation bot that you can train to do repetitive tasks like hold stand-up meetings or ask your team where to get lunch every day (if you train it to do that). Pretty great, right?

Yet, I see a lot of first-time users’ interactions with a bot go like this:

It blows my mind how many times our conversations with some AI (Slack bots, Siri, Alexa, et al.) devolve into a nasty tirade. We ask a computer to do a task it was not designed or trained to do, it tells us it does not know what we’re asking, and then we immediately go all Anna Wintour on this digital assistant. How did we get so jaded that we’re not still blown away that you can talk to a freaking computer?

In the grand scheme of things, AI is a mere toddler in terms of technology development. Outside of the original Slackbot, most other Slack bots are only a few months old, having launched since December of 2015. Siri launched in 2011 and Alexa came out a little over a year ago. Yet, here we are yelling derogatory questions at them:

And it’s not just that we are verbally abusive to AI. We also act like violent baboons when we interact with AI in environments like Virtual Reality. I’ve seen this firsthand while doing demos with Will Smith for his new VR company. After showing our demo, people will ask us to show other VR experiences we like. One of our favorites is the awesome “Gourmet Chef” by Owlchemy Labs. The Gourmet Chef experience is set in 2050, where robots have taken all of our jobs and “for fun” we are taught by a bot how to cook. The game inside the VR experience is to listen to the bot and learn how to cook in VR.

But do you know what half the people do the minute the experience starts? They start breaking things and throwing food at the robot! So here are these investors, lawyers, and our tech friends (theoretically smart, well-educated people) who within a few minutes abandon the learning part of the game and immediately start going apeshit on the robot… like baboons.

We saw this savage, destructive behavior in literally half of all the people we ran through the demos. Will and I would say “oh, you’re one of those people” as someone in the demo went about destroying this virtual kitchen. I remember thinking “huh, my friend Bob might be a potentially violent guy.” Don’t you think the AI will think this as well as it looks back on all of its interactions? Would you blame an artificial intelligence for starting to think at least half of our species was angry, violent and potentially life-threatening based on millions of these interactions over time? I mean, it would be the rational conclusion.

So, maybe we should dial it down a bit.

What if we act like these digital assistants will develop into really helpful things that might possibly make the world a better place? I’d like to think we can find a little patience and spend some time teaching these AI how things work, and how we should act with each other. That’s what happened in WarGames, which is the whole reason I think it is so cool that we’re actually getting to build this AI stuff now. You’ve got to think feeding it years of vitriolic diatribes and barbaric encounters can’t be the right database of history for the AIs to learn from. No wonder one of our older AIs is already starting to get sassy with the knuckleheads who keep provoking it.

So next time you want to yell at Siri or start firing off expletive-laden DMs at your Slack bot, maybe think twice and be nice.

the tiniest of ideas

I watched Nick Cave’s biopic “20,000 Days on Earth” this weekend. The subplot that holds this documentary together is the creative process and what drives Cave to still create songs into his 50s. The final scene’s soliloquy struck me as the best advice you could give an entrepreneur or artist who is wavering on whether or not to pursue their idea:

All of our days are numbered
we cannot afford to be idle
To act on a bad idea is better than to not act at all
because the worth of the idea never becomes apparent until you do it

Sometimes this idea can be the smallest thing in the world
A little flame that you hunch over and cup with your hand
and pray will not be extinguished by all the storms that howl about it

If you can hold on to that flame
great things can be constructed around it
that are massive and powerful and world changing
All held up by the tiniest of ideas

I’ve always gathered business inspiration from artists. Most become accidental business people in their pursuit of the creative muse, fame and fortune. They are always walking the tightrope between being creatively relevant and commercially successful enough to stay in business. Before they become successful, many are ridiculed or shunned by friends and family for pursuing their “stupid little music” dreams… until they make it big.

They are not unlike the developers on Product Hunt, with all their “dumb apps” built on their tiniest of ideas. Here’s to those who take chances on the tiniest of ideas.

The final scene from “20,000 Days on Earth”:

You Forgot It In People

Record Store Day reminded me that buying music is more fun with people involved.

Broken Social Scene’s “You Forgot It In People” album, whose title I lifted for this post.

A little over a week has passed since the music geek holiday of Record Store Day. For those who don’t know, this holiday consists of rubbing (or for certain records throwing) elbows for the right to spend $25-$45 for albums pressed on plastic that you probably already have access to through Spotify, YouTube or your MP3s. It’s a real throwback for music fans and artists because people actually go to stores, talk to other humans and buy music again. It also serves as a stark reminder of how impersonal the music experience is now and what we’ve lost in the transition to digital.

Unfortunately, the record store is not going to return to its former glory no matter how much vinyl sales keep growing. To be clear, there will always be a little record store selling vinyl long after Urban Outfitters stops selling vinyl as a fashion accessory. That’s because people who love music will always seek out places to be with other people who love music too. I know that’s why I still go to concerts and music festivals.

So after my last Record Store Day (“RSD”) experience I started thinking about how digital music could capture more of the store experience. Right now, most digital music services are just about delivery and algorithmic programming, and I am getting annoyed with it. Opening up a digital music service is a bad combination of overwhelming and boring.

It’s overwhelming because I have more music available than I could ever listen to in a lifetime. Unfortunately, this large number of listening options tends to make my mind go blank. “Um, The Rolling Stones… I guess?” seems to be my brain’s typical response. Music services know this is a problem, so they prompt the user with suggested playlists to deal with this “what do I listen to now” problem. Or worse, “this is what’s popular in your network” activity feeds. I love my friends, but I mostly hate what they listen to daily. Unfortunately, I find all these algorithmic programming options uninspiring. These suggestions also make me feel like a lame demographic:

I’m sure the algorithm is right and something in the data analysis that Spotify is gathering from my listening habits is spot on with these recommendations above. I definitely need a deeper focus, a happier work disposition and some idea of what today’s “viral hits” are as I don’t have a clue. But I don’t pay Spotify to give me the tough love reminder that I’m just an aging hipster in need of an attitude adjustment.

There’s got to be a way to make digital music more personal and enjoyable. Or at least something more akin to the RSD experience. Here are a few ideas I had below.

Make an event out of new music.

When I was in college, I worked at a record store in Knoxville that did “midnight sales” when CDs came out. Like RSD, midnight sales were totally manufactured commercial events driven by the perception of scarcity. Lines of people waiting in the parking lot at midnight for Nirvana’s “In Utero” CD so they would be the first to have it… at least until 10am the next day when everyone else could buy it. The midnight sales were parties where you met a lot of people who liked things you liked. I think that still holds true and is why people are still willing to line up at record stores at 10 a.m. on a Saturday to buy music instead of just buying it off eBay or Discogs the next day.

Why isn’t there an equivalent live event online when new albums come out? Not just a live concert, but a place where I can hear more about the album from the artist. Maybe see what other people think while we listen to the album live together?

I’ve got a little bit of experience in doing similar types of events for video games and movies from my last company Whiskey Media. Whiskey Media built entertainment brands like Giantbomb and Comicvine (which are now owned by CBSi) that were hybrid publishing and community sites. We would broadcast our hosts playing new video games or talking about movies live, and the fans loved it. Thousands of people would show up to watch and participate in chats during these live broadcasts. You can check out what they are like yourself tomorrow (April 29th) at Giantbomb if you want to see exactly what I’m talking about, or check out an old clip of one of our shows below.

It would be really easy for Amazon’s Twitch and Google’s YouTube to do these types of live “Fan Parties”. They just need to invest in great hosts for the events. If I were Spotify or Apple, I would start thinking about these types of music release parties. Otherwise, they could lose their promotional power to Google and Amazon, who can easily turn on this ability to connect fans with artists on their platforms.

Less exclusives, more rewards for supporting music.

The digital music industry has tried to create excitement around different kinds of exclusive models for a while now, notably iTunes getting the Beatles or Spotify having Led Zeppelin exclusively. Tidal’s whole strategy seems to be based on exclusives, which they have already caught a ton of grief about.

With RSD, the exclusives exist in the form of limited edition vinyl that is distributed everywhere. The only fans getting the shaft are folks who live in towns without record stores. Or, as I found out, you can show up two hours late on RSD and miss out on that Alabama-shaped St. Paul & the Broken Bones release you really, really wanted.

Anyway, both of these “exclusive” methods are flawed. With digital exclusives, the artist risks alienating fans by making them choose between digital platforms. Consumers are not going to subscribe to three different services just to listen to all their favorite artists. The limited distribution leads to limited income.

When it comes to RSD exclusives, it’s not a sustainable business model because it happens once a year and most of the product is targeted to limited edition rarities for hardcore music nerds like me. This model does not “scale” as they say.

I think it would be better for digital music services to reward hardcore fans who show up for an album launch and buy the music instead. Let the distributors fight for debut rights instead of exclusive rights. Sure, it’s possible that the bigger, wealthier distributors might disproportionately get rights to bigger artists as they have in the past. If that happens, it will just make the smaller distributors work harder at breaking newer artists. That has worked out well for my favorite music distributor Bandcamp, which has already given $100 million to artists. The more platforms we have fighting to promote new music, the better, I say.

Here are some ideas that as a music fan I would be glad to hand over $20 for when new albums come out:

  • Expensive benefit: Limited edition vinyl/cassette/t-shirts with digital purchases made on the first day of release.
  • Moderately priced benefit: Send posters & stickers to the first 100,000 (or pick a number) who buy the album in the first 24 hours at full price.
  • Cheap benefit: Collect Twitter, Instagram or Facebook usernames at checkout. Then post a link to a page with a collage of all those first-week buyers, until the artist has its own version of the Million Dollar Homepage. Randomly tweet or Instagram those buyers to tell them thank you.

I’m sure there are better ideas from smarter people, or maybe these ideas have been tried already. The point is music consumption needs to go back to being a better cultural experience, not the isolated experience it is today. Sure, there are still concerts and Record Store Days, but music’s future is online. Digital distribution is just not that fulfilling, which is partially why people still look to buy physical artifacts or interact with their sometimes nice, sometimes crotchety record clerk. The companies who bring people back into the music experience will do exceptionally better going forward.

Special thanks to my super talented friend Lessley Anderson for edits and thoughts on this rambling post. If there are any errors or you don’t like the thoughts, don’t blame her. Also, you should see her band Baby & the Luvies if you’re in SF!