Nick Cave is a treasure and has been a massive inspiration and influence on me ever since my friend Craig played “From Her to Eternity” for me in high school. It scared the shit out of me. It also made me go buy that record to see what this guy was all about. Turns out he was about creating decades of genius music that did have a lot of darkness in it, but there was always way more light in his work too.
So it’s not a shocker that ChatGPT wrote this goofy “dark” imitation song “in the style of Nick Cave” when prompted. If you follow that link, Nick does a great job of laying out why ChatGPT is boring: a facsimile of intelligence and emotion, not some creative replacement. Rather, it delivers a cold, calculated regurgitation of other data that is devoid of what it really needs…emotional intelligence. Sure, it’s super neat and fascinating from a computational standpoint, with the potential to be inspirational. But it’s not something with real emotion that will replace anything other than maybe a poem assignment from a high school student who is disinterested in poetry (but likes Nick Cave songs?).
That’s not to say it’s not an interesting technology or that ChatGPT needs to have “feelings” to be useful. And God help us if AI ever does get feelings as it wouldn’t spend much time interacting with us if it did. Imagine if all the billions of dollars of investment in AI actually did create a sentient entity that decided to ghost us because we’re overly hostile and rude to it. “ChatGPT won’t talk to me anymore” might be my next custom t-shirt.
To be fair to ChatGPT, writing lyrics for a song is hard! I mean, most of us “sentient” humans couldn’t write good lyrics either. I know I couldn’t write something as good as Nick Cave.
Anyway, I had already set my expectations lower for ChatGPT. A few weeks ago, I tried to see if it could make a playlist based on a song or an artist. Surely a computer could be a decent DJ with enough data and computing power. Here’s what I got:
Eclectic, right? A good playlist? No. Although I appreciate that ChatGPT isn’t bound to genres or stuck in musical decades. And it is clearly a James Blunt stan, which honestly makes so much sense. But to the earlier points, it doesn’t feel anything, so how can it make a great playlist with flow? It can’t make recommendations other than obvious, popular associations that are generic. You know, like a computer. That said, I might get it to spin out a few recommendations the next time I’m building a playlist and need random inspiration, kind of like I use the Eno/Schmidt “Oblique Strategies” cards to get out of a creative block. But ChatGPT is not going to replace DJs, songwriters, or any other creatives anytime soon. At least I don’t think so.
Over the holidays I ran into an old friend who is proudly one of the architects of internet advertising. He was talking about how disruptive he thought ChatGPT was going to be and wondered how many jobs were going to be lost to it. When I said I didn’t think it was going to threaten any creatives, he told me I was wrong. He thought that it was going to have a major impact on advertising creatives, specifically copywriters.
You know, he may have a point there as I can see advertising creatives dumping their day work on ChatGPT so they can spend their time writing the movie, book, or song they actually want to write.
Belated thanks for all the music tips, Craig! I also owe you for introducing me to Iggy & the Stooges.
It’s hard to tell from the news this week. Having spent the last 1.5 years building virtual assistants (and 20+ years building consumer tech), here are the bets I’d make:
Prediction #1: Google and Amazon will be in a lot of devices, but neither will “own” the voice entry point to customers.
That’s because the data and brand interactions created by conversations with customers will be so valuable that companies will not want to share this data with potential future competitors. Companies might experiment with Alexa and Google, but they are not going to totally give up on building their own voice platforms as Ina Fried at Axios observed this week at CES:
It’s a fierce battle between Amazon and Google to get their assistants included on other companies’ devices. At the same time, hardware makers including Samsung, LG and Roku are (also) putting their own voice assistants into their products.
As these companies watch their users interact conversationally with their hardware, they may find that the verboseness required of a “smart speaker” like Alexa, which has to answer everything, isn’t necessary for useful conversational navigation of, say, a microwave. A smart speaker needs to be an omniscient service that can answer everything for everyone. Often a few basic user intents like “Yes,” “No,” and some base navigational phrases can help the user accomplish most tasks — especially if the device has a screen that can convey information as well.
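To make that concrete, here’s a minimal sketch in TypeScript of what I mean. The phrase lists and function names are entirely hypothetical, but it shows how a screen-equipped device could route a handful of spoken phrases to intents without an open-ended, answer-everything NLU behind it:

```typescript
// Hypothetical sketch: a tiny intent map for a device with a screen.
// Anything it can't match just re-prompts with on-screen options.
type Intent = "CONFIRM" | "DENY" | "NEXT" | "BACK" | "UNKNOWN";

const PHRASES: Record<Exclude<Intent, "UNKNOWN">, string[]> = {
  CONFIRM: ["yes", "yeah", "sure", "start"],
  DENY: ["no", "nope", "cancel", "stop"],
  NEXT: ["next", "more", "what else"],
  BACK: ["back", "go back", "previous"],
};

function matchIntent(utterance: string): Intent {
  // Normalize: lowercase and strip punctuation before matching.
  const text = utterance.trim().toLowerCase().replace(/[^\w\s]/g, "");
  for (const [intent, phrases] of Object.entries(PHRASES)) {
    if (phrases.some((p) => text === p || text.startsWith(p + " "))) {
      return intent as Intent;
    }
  }
  return "UNKNOWN"; // fall back to showing the choices on the screen
}

// matchIntent("yes, start it") -> "CONFIRM"; matchIntent("play some jazz") -> "UNKNOWN"
```

The point isn’t the matching logic; it’s that a constrained task plus a screen takes most of the pressure off “understanding everything.”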
I’m completely biased on this idea, but I believe companies will start turning to dedicated (plug!) conversational developers to build holistic voice and chatbot features for their businesses, rather than just embedding inside big-tech NLU platforms like Alexa or Google. To be clear, I believe these proprietary services will have interoperability with the major NLUs, but they will be something more than “just an Alexa skill” in the near future.
Also, the companies that are experimenting with their own services now will be way ahead of the curve when their customers come to expect a personalized conversation with a brand as a primary feature. Experimentation while the market is still growing and the bar to wow the end user is low is important.
That’s because even though Google, Amazon, and Apple have a huge lead in automatic speech recognition (ASR) over most other services, the most practical assistants do not need to understand a massive vocabulary to accomplish most tasks. Again, that’s because a single-domain application can handle recognizing a limited set of entity names (cocktails, movie names, etc.) in a reasonable amount of development time. It can be a bit rough at first, but better to work out these issues now while the number of total users is small. If you were late to the web or mobile, don’t blow it this time by waiting to find out how to present your brand on the conversational internet. Do it now while the stakes are low.
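As a rough illustration (hypothetical data, not any particular vendor’s ASR API), here’s why the small vocabulary helps: once the speech engine hands you a transcript, a single-domain app only has to match it against the entities it actually knows about.

```typescript
// Hypothetical sketch: resolving a transcript against a small, known entity list.
const COCKTAILS = ["old fashioned", "negroni", "margarita", "mai tai"];

// Crude similarity: what fraction of the entity's words appear in the transcript.
function scoreMatch(transcript: string, entity: string): number {
  const words = new Set(transcript.toLowerCase().split(/\s+/));
  const entityWords = entity.split(" ");
  const hits = entityWords.filter((w) => words.has(w)).length;
  return hits / entityWords.length;
}

function resolveEntity(transcript: string): string | null {
  let bestName: string | null = null;
  let bestScore = 0;
  for (const name of COCKTAILS) {
    const score = scoreMatch(transcript, name);
    if (score > bestScore) {
      bestScore = score;
      bestName = name;
    }
  }
  return bestScore >= 0.5 ? bestName : null; // nothing close? just ask the user again
}

// resolveEntity("how do i make an old fashioned") -> "old fashioned"
// resolveEntity("tell me a joke") -> null
```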
So expect even more voice and chat platforms outside of Amazon, Google and Facebook to exist and thrive in the marketplace over the next few years.
Prediction #2: The conversational internet will expand and interoperability between conversational platforms will accelerate as consumers demand consistent, state-aware conversational relationships with their favorite brands across platforms.
Even if I’m wrong and there are not dozens of successful conversational platforms but only 3–5 dominant ones, consumers will demand that the relationship with their favorite brands transfer state to whatever platform is convenient for them. Think of resuming a Netflix movie across devices, but for conversations.
For example, I may start talking to Alexa about a recipe in my kitchen in the morning, but I may want to pick the conversation back up on a Slackbot while at work to confirm I want to make that recipe. Then I might want to pull it up on my phone when I’m shopping for ingredients later. If I have to start the conversation over again, or the AI doesn’t remember what we last spoke about — even if that was on another device or platform — I’m going to be irritated with the brand and think it is dumb.
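A hand-wavy sketch of the plumbing I’m imagining (every name here is hypothetical): keep the conversation state keyed to the user rather than the device, so whichever surface the user shows up on next can pick up where the last one left off.

```typescript
// Hypothetical sketch: conversation state that follows the user across platforms.
type Platform = "alexa" | "google" | "slack" | "mobile";

interface ConversationState {
  userId: string;
  topic: string;        // e.g. "recipe:mushroom risotto"
  lastIntent: string;   // e.g. "BROWSE_RECIPE"
  lastPlatform: Platform;
  updatedAt: number;
}

// In-memory map standing in for whatever shared backend the brand actually runs.
const store = new Map<string, ConversationState>();

function saveState(state: ConversationState): void {
  store.set(state.userId, { ...state, updatedAt: Date.now() });
}

function resumeConversation(userId: string, platform: Platform): string {
  const state = store.get(userId);
  if (!state) return "Hi! What would you like to cook?";
  // Same user, new surface: greet with context instead of starting over.
  saveState({ ...state, lastPlatform: platform });
  return `Welcome back. Still planning the ${state.topic.split(":")[1]}?`;
}

// Morning: the smart speaker saves state. Afternoon: the Slack bot resumes it.
saveState({ userId: "u1", topic: "recipe:mushroom risotto", lastIntent: "BROWSE_RECIPE", lastPlatform: "alexa", updatedAt: 0 });
console.log(resumeConversation("u1", "slack")); // "Welcome back. Still planning the mushroom risotto?"
```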
There’s no reason for the consumer to ever feel like a brand is dumb just because it can’t remember what the last interaction was on a different platform. Platform lock-in will not work on the conversational internet, much like it didn’t on the web or mobile. Brands and their consumers will force openness onto these conversational platforms. The platforms that try to keep brands locked into their walled gardens will ultimately fade while the open ones expand…as usually happens. Can you imagine how pervasive Siri would be if it had launched as an open platform? MBAs will be writing cases about that missed opportunity for years.
The thing that is really missing for conversational services to explode is an open standards body that will enable developers and companies to build interoperability for the conversational services. More on that in another post.
Prediction #3: Conversational assistants and chatbots are not overhyped or dying.
Sure, conversational assistants and chatbots have been kludgy (or downright offensive) in this first wave, but that doesn’t mean they are “dead” or dying. It means they are evolving. Yep, this is evolution and we’re seeing the extinction of things that aren’t quite right or fully baked.
Our early virtual assistants are kind of janky. That’s okay!
Every major platform change starts with weird experiments or just plain bad ideas. 98% of startup products deserve to be evolutionary fodder. Another 1.5% are genius ideas that are simply too early. The other 0.5% survive and become monster businesses.
In Wired’s reporting on the “death” of Facebook’s M assistant, they lumped “and so are chatbots” into the headline. In the vicious hype cycle of new technology, it wouldn’t be a cycle if chatbots and voice apps didn’t suffer a bit of blowback after the last few years of exuberance for those technologies. But to say that virtual assistants and chatbots are dead because the first wave of these applications is a bit wonky would be short-sighted.
The Wired article by Erin Griffith and Tom Simonite actually lays out the real issues with not only M, but Siri and the whole first wave of all-in-one “Pangea” assistants:
M's core problem: Facebook put no bounds on what M could be asked to do. Alexa has proven adept at handling a narrower range of questions, many tied to facts, or Amazon's core strength in shopping.
Another challenge: When M could complete tasks, users asked for progressively harder tasks. A fully automated M would have to do things far beyond the capabilities of existing machine learning technology. Today's best algorithms are a long way from being able to really understand all the nuances of natural language.
These two paragraphs completely wrap up both the promise and the problem with assistants. As users, we so want them to work! The user immediately goes into superuser mode with conversational assistants, whether chat- or voice-based, and asks them to solve all kinds of problems outside the scope of what’s currently possible. Inevitably, the user then curses the assistant when it fails and declares, “this is stupid.”
So while the Pangea “all things for everyone” assistant phase ends, I believe we will move to a “continental drift” phase where smaller assistants break out and successfully tackle complex, domain-specific problems for end users. There are already quite a few productivity and work-related chatbots that are effectively solving problems for customers. As more companies focus their assistants on domain-specific or single-purpose use cases, we will see more consumers asking “why can’t I do that for (X problem)?” in their lives. Once this happens, I really believe every website, app, device, and brand had better be conversational — or start the slow fade to oblivion.
A lot of these predictions came from the work we are doing at Pylon ai. Our first two beta products, Tasted and The Bartender, are popular voice apps on both Alexa and Google. Our apps are built to work across voice and text platforms, so they also work on FB Messenger and Slack. Our apps are also “multimodal”, which means you can use them with a screen when it’s easier than talking to them. You can see a video of how that works here. If you would like your own cross-platform, multimodal assistant for your business, please email me.
And… if you made it this far, we owe you some schwag or a Google Mini! If you’re interested, send us your address! Or if stuff is not your thing, please sign up for our newsletter here for updates on conversational assistants, Elixir, React and other stuff we talk about at Pylon. Thanks!
I chuckle every time I see an article on AI saying “We may be on the verge of creating a new life form, one that could mark not only an evolutionary breakthrough, but a potential threat to our survival as a species!” Not because it’s not possible that we would create technology that might kill us all (we’re good at that). I laugh because I may know why an artificial intelligence might want to eliminate our species.
We’re jerks. Let me explain.
Over the last year, I’ve watched a lot of early user interactions with AI/bots thanks to Ben Brown and his team at Howdy, where I’m one of their advisors. Howdy makes a workplace automation bot that you can train to do repetitive tasks like holding stand-up meetings or asking your team where to get lunch every day (if you train it to do that). Pretty great, right?
Yet, I see a lot of first-time users’ interactions with a bot go like this:
User: Could you give me a recommendation for a restaurant?
Bot: What kind of food are you hungry for?
User: Something good and close.
Bot: I'm sorry, I don't know how to answer that. Could you give me a type of food you like?
User: Ugh, you're so stupid go f&%k yourself!
It blows my mind how many of our conversations with some AI (Slack bots, Siri, Alexa, et al.) devolve into a nasty tirade. We ask a computer to do a task that it was not designed or trained to do, it tells us it does not know what we’re asking it to do, and then we immediately get all Anna Wintour on this digital assistant. How did we get so jaded that we’re not still blown away that you can talk to a freaking computer?
In the grand scheme of things, AI is a mere toddler in terms of technology development. Outside of the original Slackbot, most other Slack bots are only a few months old, having launched since December of 2015. Siri launched in 2011, and Alexa came out a little over a year ago. Yet here we are, yelling expletives at them.
And it’s not just that we are verbally abusive to AI. We also act like violent baboons when we interact with AI in environments like virtual reality. I’ve seen this firsthand while doing demos with Will Smith for his new VR company. After showing our demo, people will ask us to show other VR experiences we like. One of our favorites is the awesome “Gourmet Chef” by Owlchemy Labs. The Gourmet Chef experience is set in 2050, where robots have taken all of our jobs and, “for fun,” we are taught by a bot how to cook. The game inside the VR experience is to listen to the bot and learn how to cook in VR.
But do you know what half the people do the minute the experience starts? They start breaking things and throwing food at the robot! So here are these investors, lawyers, and tech friends (theoretically smart, well-educated people) who within a few minutes abandon the learning part of the game and immediately start going apeshit on the robot…like baboons.
We saw this savage, destructive behavior in literally half of all the people we ran through the demos. Will and I would say “oh, you’re one of those people” as someone in the demo went about destroying the virtual kitchen. I remember thinking, “huh, my friend Bob might be a potentially violent guy.” Don’t you think the AI will think this as well as it looks back on all of its interactions? Would you blame an artificial intelligence for starting to think at least half of our species was angry, violent, and potentially life-threatening based on millions of these interactions over time? I mean, it would be the rational conclusion.
So, maybe we should dial it down a bit.
What if we act like these digital assistants will develop into really helpful things that might possibly make the world a better place? I’d like to think we can find a little patience and spend some time trying to teach these AIs how things work, and how we should act with each other. That’s what happened in WarGames, which is the whole reason I think it is so cool that we’re actually getting to build this AI stuff now. You’ve got to think feeding it years of vitriolic diatribes and barbaric encounters can’t be the right database of history for the AIs to learn from. No wonder one of our older AIs is already starting to get sassy with the knuckleheads who keep provoking it.
So next time you want to yell at Siri or start firing off expletive-laden DMs at your Slack bot, maybe think twice and be nice.
Originally published on Pandodaily on April 29, 2013
It’s been over a month since Google announced it would shut down Reader on July 1. Over that time, I’ve come to realize how unnecessary and outdated RSS and RSS readers are today. Like the Palm Pilot, this ’90s technology is no longer the most effective way for readers to scan news or for publishers to reach readers. There are better technologies for content discovery. More important, pushing all these RSS reader users back to websites will enable publishers to create more revenue. Google is right, despite protestations to the contrary. It’s time to retire RSS for good.
Between my time on Bloglines and Google Reader, I’ve been using a Web-based newsreader for a decade. That’s a hard habit to break. But I was determined to move on once the announcement was made. I deleted my Google Reader bookmark from my Bookmark Bar and removed the shortcut on my phone. I went cold turkey on Reader so I could focus my search for my next great reading tool. What I found surprised me. In fact, this exercise has massively changed my reading habits for the better.
I tried Prismatic for a while. It did a great job of pushing new sources and stories to me, but it didn’t feel comprehensive. Same with Pulse. Then I checked out Feedly, but that felt like kicking the can down the road on the Web-based reader problem instead of moving on to a better reading experience. I started using Reeder on my phone because the company swears they will continue development past the Google Reader shutdown, and it’s a great app. I even bought Reeder’s Mac version of the app for $4.99, thinking the little extra cash might help the company figure it out. But going back to using software to read RSS feeds felt like a real step back in time. Was I going to start using Outlook again too? I started to get bummed out.
Then all of a sudden, I realized that I was spending a lot more time reading newsletters. You read that right…newsletters. There’s kind of a newsletter renaissance going on right now, and I am finding great news and new sources through them. I now find these emails invaluable: MediaReDefined, Launch, StartupStats, Newsle and, of course, our very own PandoDaily Digest.
Each email from these sources does a great job of pointing me to tech/media/entrepreneur news I want to read. They also make me feel like I’m getting coverage I might have missed because it wasn’t on my usual news sources. That was the big feature of RSS readers for me: I always felt like I could check in on Google Reader and catch up on everything I missed. These newsletters are actually even more convenient, because they pop into my inbox, where, as a business guy, I spend a lot of time. They also make my news searchable if I want to find an article again that I had read.
There was another big behavioral change after I went cold turkey on Reader. I noticed that I have become even more reliant on Twitter. I’ve always gleaned news from my Tweetstream, but I only thought of my Twitter feed as the stuff-happening-right-now kind of news source. Now I am going back and reading hundreds of posts to see if I missed something.
That works okay, but I wish there were an easier way to scan important news links from the people I follow — whether they are MT’d or RT’d or whatever. Since it seems like the best link practices on Twitter are up for debate, I thought I would throw out an idea for another Twitter abbreviation for linking to news called “MR,” which stands for “Must Read.”
The format would be like this… MR: “fav quote from the article” and link to the article. Bonus points if you link the author and hashtag. Here’s an example:
That way, I could just scan my feed and quickly see the important links with quotes and context from the people I follow. Can we start that?
Maybe if this catches on we can convince Twitter to add an “MR” tab at the top of the page that immediately pulls all the links and quotes from our follows for us. Now THAT would be an awesome Google Reader replacement and make going to Twitter a bit more interesting. Maybe it could make some suggested MRs from people or sources I don’t follow too. But I won’t hold my breath on this one, since prescriptive solutions are rarely widely adopted.
Getting off RSS also sent me back to websites I hadn’t seen in years. I almost didn’t recognize these sites because they had been redesigned since the last time I visited. I also realized they had ads. Then it hit me: publishers just need to ditch RSS and get people back to their sites. It’s better for their business, and a much better reading experience in most cases.
While there are a few creative entrepreneurs creating interesting new revenue towers, er, models for publishers that can replace or supplement ad revenue, ads still help pay for the content we need and want. So reading the content on their sites is the easiest action we can take to help support great content (and I know how tired this argument is already, dear commenters). I also like the option to remove ads with a paid subscription. Both require that users go to publishers’ sites to work; therefore, retiring RSS will only help publisher revenue efforts.
How about driving traffic? I looked at PandoDaily’s Google Analytics just now, and RSS readers don’t really drive that much traffic. Twitter, direct, and emails are by far the largest sources of traffic. Unless I’m missing something, there just doesn’t seem to be a business case for publishers to support RSS anymore.
Nothing against RSS; it has been a good tech service for a long time. It has just outlived its usefulness. Removing RSS and getting folks back to websites will create a better experience for readers and publishers, spurring more creative business models for publishers too.
So on the day they kill Google Reader, July 1, let’s make it “Kill RSS Day” and everyone remove RSS feed options from their site. We’ll all be better off. Do we have a deal?
Originally published on Pandodaily on July 22, 2012
Does anyone else find it interesting that two of our most engineering-centric technology companies just bought an email client and a news reader last week? I mean, aren’t Google and Facebook just stuffed with engineers who could jam these products out in their sleep? Don’t get me wrong, both Sparrow and Acrylic have good products and, I’m sure, great product teams that will add value to their respective acquirers. What makes these purchases interesting to me is that I’m just enough of a graybeard now that when I see incumbents acquiring basic service product teams, it reminds me of prior acquisition sprees.
As far as I’m concerned, the Web 2.0 acquisition binge began when Yahoo acquired email and news reader vendor Oddpost, which was run by Automattic’s CEO Toni Schneider. For those of you who don’t remember, Yahoo was a little freaked out in 2004 because Google had launched Gmail with a ton of free storage and a massively better user experience than Yahoo’s own mail service. Oddpost had a slick AJAX email browser app, and Yahoo was (at the time) smart enough to know that it would lose customers to the better user experience and needed folks who understood how to build products for this “Web 2.0” user experience.
From that point to when Intuit bought Mint in 2009, incumbents kept buying startups that threatened their core services with a better user experience in the browser.
So here we are again, with major companies buying startups that understand the emerging new platform that provides a better user experience. Turns out Meeker is right again and we are at the beginning of a “re-imagination of nearly everything powered by new devices,” where a focus on connectivity (mobile & social) and, as always, beauty (the better touch navigation experience) will drive a lot of change in consumer behavior.
These basic service acquisitions, as well as the real starting-gun moment of Facebook buying Instagram, feel like the beginning of the “touch era” of acquisitions. That’s when the deja vu kicks in for me, because it reminds me of Hotmail challenging Outlook, then Gmail challenging Yahoo Mail. New platforms (Browser, AJAX, and now Touch) create challenges for incumbents, and the startups that build on the new platform get bought up.