Good morning! This is The Neuron, where we serve up delicious AI news with a side of pancakes and a strong cup of coffee.
Today in AI:
- Let's Unpack Two AI Models From The Weekend
- Startup Catches Flak For GPT Experiment
- Around the Horn
- Leo Sends His Regards
Let's Unpack Two AI Models From The Weekend
Meet Claude and VALL-E. Both are generating some buzz.
Claude, you're up first. Claude is a ChatGPT-like model developed by AI lab Anthropic.
Anthropic just loosened their NDA on Claude, so over the weekend, beta testers started sharing what it can do.
Here's what they're saying:
- Claude is more specific and insightful than ChatGPT - While ChatGPT keeps things high-level, Claude isn't afraid to get into details and say interesting things.
- Claude admits when it doesn't know - It still makes some stuff up, but in some cases Claude would rather say it's unsure than confidently guess (like ChatGPT does).
- Claude knows humor - Write a Seinfeld scene? Meme about Fast and Furious? No problem. (h/t to reader Dave Kasten for these links)
Isn't this just another language model? Yes, but it's from Anthropic, which people don't talk about enough. Anthropic is basically tied for 2nd in language AI with Microsoft.
[Image: Language model benchmarking from Stanford]
We'll let you know when you can have your turn having Claude write your standup jokes.
Speaking of Microsoft, they released VALL-E, a text-to-speech model. It can:
- Copy your voice and audio quality using just a 3-second clip
- Inflect tones like "amused" or "sleepy"
- Generate multiple takes of the same script
Anything bad? Just one small thing: the output often has robotic artifacts that give it away.
OpenAI's turn. VALL-E and OpenAI's DALL-E have similar names because they use very similar architectures.
The speculation is that OpenAI could easily copy the approach and 10x the training data: VALL-E was trained on 60K hours of audio, while OpenAI trained its speech recognition model Whisper on 680K hours (roughly 11x as much).
Startup Catches Flak For GPT Experiment
Yikes. This was not a good look.
Koko is a nonprofit that sets up peer mental health support. They tested GPT-3 in that workflow, published a Twitter thread about it, and got more backlash than Kendall Jenner in that Pepsi ad.
What the thread said:
- Some people providing mental health support had GPT help them write a response. They could choose whether to use or edit that draft (a rough sketch of the assist step is after this list).
- Koko said the people receiving support liked those responses more, plus it reduced response times by 50%+.
- But they were also mega-turned off when they found out the responses were AI-generated.
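If you're curious what that assist step might look like under the hood, here's a minimal sketch using the OpenAI completions API as it existed at the time. The prompt, model choice, and function names are our own illustration, not Koko's actual code.

```python
# A minimal sketch (not Koko's real setup) of GPT-assisted drafting with a
# human in the loop, using the pre-1.0 OpenAI Python SDK's completions API.
import openai

openai.api_key = "sk-..."  # your API key (placeholder)

def draft_reply(message: str) -> str:
    """Ask GPT-3 for a first-draft supportive reply. Prompt and model are illustrative."""
    response = openai.Completion.create(
        model="text-davinci-003",  # the GPT-3 completions model of the era
        prompt=(
            "You are helping a trained peer supporter draft a kind, supportive reply.\n"
            f"Message from the person seeking support: {message}\n"
            "Suggested reply:"
        ),
        max_tokens=150,
        temperature=0.7,
    )
    return response.choices[0].text.strip()

def supporter_review(draft: str) -> str:
    """The human supporter stays in the loop: send as-is or type a replacement."""
    print(f"Suggested reply:\n{draft}\n")
    edited = input("Press Enter to send as-is, or type your own reply: ")
    return edited or draft
```

Note the key design point (and the ethical rub): the machine drafts, the supporter decides, and the person on the other end may never know.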
What's the drama? Researchers are required to tell people when they're part of an experiment (it's called "informed consent"). Koko may have just admitted to breaking those rules, or coming way too close.
Koko claims that having GPT assist with the response (like writing an email with an AI assistant) isn't the same. Even if it isn't, the ethics here are a bit shady.
We get it: AI is hot. But we're in Mile 0.5 of a marathon. Providing care using AI is a risky area to play in. Even if you think it's legal, it's not worth the terrible optics.
Around the Horn
- Having trouble keeping track of all these language models? Here's a list of 80+ of them.
- DoNotPay will pay you $1 million to let its AI argue your Supreme Court case.
- Scale AI's Prompt Engineer tests the limits of GPTZero, the tool that flags if text came from an AI model.
- AI researcher Francois Chollet on managing the hype in AI.
- Entrepreneurs: here's your starting point on product ideas built around large language models.
- Neeva (search competitor, like You.com) releases Neeva AI, a ChatGPT-like experience in its search engine.
- CES was last week. Here's a list of 9 AI-driven products that caught people's attention.
- Chat with Benjamin Franklin, Plato, Genghis Khan and more using Historical Figures.
- After NYC schools banned ChatGPT, did others follow suit? TL;DR: Not yet.
- Stylized image generators are so cool. Here's a "timeless" look: [image]