Welcome, humans.
Looks like we spoke too soon about the AI crash yesterday, huh? Missed calling it by ~two hours. TL;DR: lots of bumpy economic news, NVIDIA no longer impresses, and GPT-4.5's "mid" release sorta put a dent in the scaling hypothesis. More on that below.
Last Call: We've teamed up with our pals at DZone on that GenAI survey, and today is your final chance to get it done before it closes! It'll take less than 10 minutes, promise.
Seriously, we timed ourselves filling it out and still had time left to wonder if GPT-4.5's price tag will require a second mortgage or just a car loan (you'll get that joke in a sec).
What's in it for you? Early access to their trend report data (perfect for impressing your boss with industry insights), a free Getting Started with Agentic AI ref card, and a chance to win one of two $125 gift cards.
Check it out here before close of business today. Think of all the things you could buy with that gift card: a fancy mechanical keyboard, 25 cups of overpriced coffee, or approximately 4 minutes of GPT-4.5 compute time!
Here's what you need to know about AI today:
- OpenAI released GPT-4.5 to mixed reactions.
- Meta planned a standalone AI app.
- IBM released an AI family for enterprises.
- Meta unveiled the Aria Gen 2 research glasses.

Was GPT-4.5 so "mid" that it crashed the stock market?

Yesterday, OpenAI released GPT-4.5, its "largest and most knowledgeable model yet," prioritizing emotional intelligence over raw reasoning power (Pro only atm).
You knew things were gonna be rough when OpenAI positioned this release as more about "vibes" than anything else.
AI researcher Gary Marcus, a frequent critic of the current AI hype train, called it a "nothing burger release."
The truth is... somewhere in the middle? Very fitting for a model called "4.5"...
First, the vibe take...
- Sam Altman called GPT-4.5 "the first model that feels like talking to a thoughtful person."
- Ben Hylak declared it "the midjourney-moment for writing."
- Dan Shipper (Every) found it "more extroverted and less neurotic," but still prone to hallucinations.
- Ethan Mollick noted it "can write beautifully" but gets "oddly lazy on complex projects."
And several testers noted it will confidently share opinions rather than deflecting with "As an AI..." responses.
Now, the "nothing burger" take...
- Sam also acknowledged it's "a giant, expensive model" that "won't crush benchmarks."
- Former OpenAI researcher Andrej Karpathy explained that it required roughly 10X more compute than GPT-4 for "diffuse" improvements.
- Gary Marcus called it evidence that "scaling data and compute is not a physical law."
The biggest knock against GPT-4.5? The pricing is prohibitive: $75 per million input tokens and $150 per million output tokens (that's ~10-25X more than competitors).
As one observer perfectly summed up: "Half the TL saying it's bad and too expensive. Half the TL saying it's good and too expensive."
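To put that pricing in perspective, here's a quick back-of-the-envelope sketch at the listed rates ($75 per million input tokens, $150 per million output tokens). The per-request token counts are made up for illustration, not anyone's real traffic:

```python
# Back-of-the-envelope cost at GPT-4.5's listed API rates.
# The per-request token counts are hypothetical, just for illustration.

INPUT_PRICE_PER_M = 75.0     # USD per 1M input tokens (listed rate)
OUTPUT_PRICE_PER_M = 150.0   # USD per 1M output tokens (listed rate)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """USD cost of one API call at the listed GPT-4.5 prices."""
    return (input_tokens * INPUT_PRICE_PER_M + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Hypothetical chatbot call: 2,000 tokens in, 500 tokens out.
cost = request_cost(2_000, 500)
print(f"per request:     ${cost:.3f}")               # ~$0.225
print(f"per 1M requests: ${cost * 1_000_000:,.0f}")  # ~$225,000
```

At those rates, even a modest assistant feature lands in six figures a month at scale, which is exactly the "too expensive" half of the timeline's complaint.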
In fact, GPT-4.5 perfectly encapsulates the AI industry's current dilemma: incredible technological achievements that can't yet justify their astronomical costs.
See, GPT-4.5 is the first major reality check in the AI scaling race, and its marginal improvements suggest we're hitting fundamental limits.
Andrej Karpathy explained it well: "everything is a little bit better and it's awesome," but in ways that are hard to notice. Slightly better word choice, marginally improved understanding, reduced hallucinations, but nothing revolutionary.
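A toy way to see why 10X more compute can feel "diffuse": if loss follows a power-law scaling curve (the usual empirical picture; the constants below are invented, not OpenAI's numbers), each additional 10X of compute shaves off a smaller slice than the last.

```python
# Toy power-law scaling curve: loss(C) = floor + k * C**(-alpha).
# All constants are invented for illustration; they are not OpenAI's numbers.

FLOOR = 1.7    # hypothetical irreducible loss
K = 10.0       # hypothetical scale constant
ALPHA = 0.05   # hypothetical scaling exponent

def loss(compute_flops: float) -> float:
    return FLOOR + K * compute_flops ** (-ALPHA)

prev = None
for exp in range(22, 27):          # compute budgets from 1e22 to 1e26 FLOPs
    cur = loss(10.0 ** exp)
    delta = "" if prev is None else f"  (improvement: {prev - cur:.3f})"
    print(f"1e{exp} FLOPs -> loss {cur:.3f}{delta}")
    prev = cur
# Each extra 10X of compute buys a smaller absolute gain than the one before.
```

The curve keeps bending in the right direction; it just bends less per dollar, which is the tension the rest of this piece is about.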
Meanwhile, the economics are brutal: it cost an estimated ~$500M to train GPT-4.5, and OpenAI plans to burn a lot more than that in 2025. Sam also says the company is "out of GPUs." Hence, Stargate.
While all the new chips and servers will remain valuable for running ChatGPT, a model like GPT-4.5 simply can't achieve mass adoption if its economics don't work at scale.
Our take: Call us conspiratorial, but we don't think it's a coincidence that NVIDIA stock sold off right around the time GPT-4.5 was released...
The question isn't whether GPT-4.5 offers better vibes; it's whether any amount of vibes can justify burning billions on models most people will never use (and by "models," we mean you, GPT-4.5).
For OpenAI, this 'tweener release buys time while they search for a more sustainable way to pay for new GPUs. Why else put out such a womp womp model?
For investors, yesterday's market reaction was about uncertainty. And the truth is, nobody knows what happens next with AI. Sam doesn't know. NVIDIA CEO Jensen Huang doesn't know. And Wall Street CERTAINLY doesn't know.
The only thing everybody DOES know is that the days of blank-check AI funding are numbered. As with everything in AI, it's just a matter of how big that number is...
Goes without saying, but not financial advice!

FROM OUR PARTNERS
This tech company grew 32,481%...

No, it's not Nvidia... It's Mode Mobile, 2023's fastest-growing software company according to Deloitte.2
Their disruptive tech, the EarnPhone and EarnOS, has helped users earn and save an eye-popping $325M+, driving $60M+ in revenue and a massive 45M+ consumer base. And having secured partnerships with Walmart and Best Buy, Mode's not stopping there...
Like Uber turned vehicles into income-generating assets, Mode is turning smartphones into an easy passive income source. The difference is that you have a chance to invest early in Mode's pre-IPO offering3 at just $0.26/share.
They've just been granted the stock ticker $MODE by the Nasdaq1, and the time to invest at their current share price is running out.
Join 33,000+ shareholders and invest at $0.26/share today.
Disclaimers
1 Mode Mobile recently received their ticker reservation with Nasdaq ($MODE), indicating an intent to IPO in the next 24 months. An intent to IPO is no guarantee that an actual IPO will occur.
2 The rankings are based on submitted applications and public company database research, with winners selected based on their fiscal-year revenue growth percentage over a three-year period.
3 A minimum investment of $1,950 is required to receive bonus shares. 100% bonus shares are offered on investments of $9,950+.

Prompt Tip of the Day
Andrej Karpathy released a new video in his "general audience" series on language models and how to use them, with over 15 tips for prompting and best practices when using AI tools.

Treats To Try.

- *Join Fiddler AI and Datastax to build better, safer RAG applications with comprehensive observability tools + LLM monitoring via Fiddler's Trust Model. Register + get the replay here.
- Deep Review finds you the most relevant academic papers by thinking critically (like a researcher).
- Basalt helps you integrate AI into your product in seconds with tools to create, test, deploy, and monitor prompts that actually work in real conditions.
- OpenArt Consistent Characters helps you create characters you can pose, place, and combine in any scene.
- Pinch translates your voice in real-time during video calls so you sound like a native speaker in 30+ languages.
- Quanta gives you instant, automated accounting services instead of making you wait weeks for your accounting data (raised $4.7M).
- Forage Mail cleans up your inbox by filtering out low-priority emails and sending you one digestible summary.
See our top 51 AI Tools for Business here!
*This is sponsored content. Advertise in The Neuron here.

Around the Horn.
- Meta plans to launch a standalone AI app in Q2 2025 to compete with ChatGPT, and is also looking to raise $35B for more data centers in a new financing with Apollo.
- IBM debuted Granite 3.2, a large language model family aimed at practical enterprise problems and focused on real-world utility rather than benchmarks.
- Meta also announced the Aria Gen 2 glasses, an upgraded research device with advanced sensors that lets researchers explore machine perception, contextual AI, and robotics applications.

FROM OUR PARTNERS
Building Reliable AI Agents

AI agents are tricky: bugs, hallucinations, and edge cases can break workflows.
In this exclusive AI Engineering Summit talk, Anita from Vellum unpacks how we got here, how TDD improves reliability, and even demos her SEO agent. Get access here!

Intelligent Insights
- Ethan Mollick boiled the "multiple paths in AI" down to three levers: pre-training (scale), post-training, and reasoning. He also broke down where each major model excels.
- Check out this interview with Nobel economist Daron Acemoglu, who argues we're "driving 200 miles an hour" in the wrong direction by prioritizing automation over tools that could actually enhance human capabilities.
- Ed Zitron wrote the ultimate bear take on the genAI industry that's worth a read.
- Coracle and the University of Hertfordshire are developing an offline AI tutor for UK prisoners that's surprisingly wholesome?

A Cat's Commentary.

