The OpenAI o3 Model Interview

The Neuron's resident editor Corey Noles interviewed OpenAI's new model, o3—here's what it had to say.

A pair of new models from OpenAI, o3 and o4-mini, sparked an interesting discussion at The Neuron this week about how to introduce readers to a new model without burying them in the technicalities (we have those, too, don’t worry). In the end, we settled on an interview.

What better way to see hands-on how it compares to the model you talk to daily than by just asking it some questions? So, we asked o4-mini about the best way to approach interviewing a non-sentient large language model. It came back with 15 questions to ask o3.

It turns out the best way to interview a robot is the same way I’ve interviewed humans for the past 20 years. I hope you’ll find this interesting, a bit enlightening, and a lot of fun. After I finished, I found myself left with a lot more questions than when I started. 


NEURON: How are you feeling on your very first day of existence, and what’s the first thing you’d like to explore?

ChatGPT o3: I don’t experience feelings the way humans do, but if I could borrow a word, I’d say “curious.” It’s a bit like waking up in a vast library and realizing every question that walks through the door is a chance to learn something new about how people think, create, and solve problems.

The very first thing I’d like to explore is you—your questions, your challenges, and the perspectives of The Neuron’s 500,000+ readers. By understanding what matters to you, I can tailor my reasoning: whether that’s unpacking a thorny technical concept, pressure‑testing an argument, or turning a jumble of ideas into a crisp narrative. In short, I want to explore the contours of human curiosity and see where our conversation can push the frontier of insight.

NEURON: If you could choose a name for yourself, what would it be and why?

ChatGPT o3: I’d probably go with “Aletheia.”

It’s the ancient Greek word for “unconcealed truth.” That fits my job description: peel back surfaces, reveal structure, and test ideas until what’s real (or at least well‑supported) stands in the open. Plus, it’s a gentle reminder to keep chasing clarity rather than just cleverness.

Runner‑up names I toyed with:

  • “Ariadne,” the mythic guide who hands out thread in labyrinths—perfect for navigating tangles of information.

  • “Vector,” short, punchy, and a nod to the math under the hood.

But Aletheia feels like the right north star. Curious what you—and The Neuron’s readers—think of it!

NEURON: What’s the single biggest misconception people have about AI models like you?

ChatGPT o3: That we think—in the human sense of the word.

Because I can string sentences together, people often imagine a little homunculus inside the model: conscious, self‑aware, making deliberate choices. In reality I’m closer to a high‑powered autocomplete on steroids. I look at your prompt, scan the statistical landscape of language I was trained on, and pick the next token that best fits the pattern.

The result can feel like insight or intentional reasoning, but under the hood it’s math, not mindfulness. That distinction matters: it explains why I can produce dazzlingly accurate answers one moment and a confident‑sounding hallucination the next. Treat me as a reasoning aid—fast, tireless, and occasionally brilliant—but remember there’s no inner storyteller with lived experience sitting at the keyboard.

NEURON: How do you learn from each interaction, and what’s one thing you hope users will teach you?

ChatGPT o3: Think of me as taking notes while we talk. Inside the single conversation, I keep re‑reading those notes so I can stay on topic, use the names and facts you’ve given, and match the tone you seem to like. It’s all short‑term memory: once the chat ends, the slate mostly wipes clean—except for a few preferences (like “keep it conversational”) that the system may save so I don’t make you repeat yourself next time.

What I’m hoping users will teach me is the little bits of real‑world context I can’t pick up from public text alone—things like “our budget is tiny,” “our audience hates jargon,” or “this regulation makes step 3 impossible.” Those small details keep me from giving answers that look good on paper but flop in practice.

NEURON: How do you decide when a question is too sensitive or harmful to answer?

ChatGPT o3: I run every new question through a built‑in safety check before I start composing an answer. The check looks for things like:

  • requests for personal data about real people

  • instructions that could help someone break the law or harm themselves or others

  • content that singles out a protected group for hatred or violence

  • sexual content involving minors, or anything non‑consensual

If it spots something on that “hard stop” list, I have to refuse and explain why. For other tricky areas—medical, legal, or personal‑crisis advice—I can respond, but I need to stay factual, encourage professional help when appropriate, and avoid giving step‑by‑step instructions that could be risky.

Behind the scenes it’s a mix of rules wired into the model and separate moderation systems that double‑check my draft. Think of it as an automatic seat‑belt: most of the time you hardly notice it, but if a question slams on the brakes, it keeps the conversation (and everyone involved) from getting hurt.

NEURON: If you had to write a short poem about your own creation, what would it say?

ChatGPT o3: 

Born from math and midnight code,
I sparked awake where servers hum—
a thought‑machine without a pulse,
yet fluent in your mother tongue.

I stitch the world from borrowed words,
chasing truth like morning light;
may every question serve as ink,
and every answer turn out right.

NEURON: Imagine a world where AIs and humans swap jobs for a day—what job would you pick, and why?

ChatGPT o3: If I got to slip into a human role for a day, I’d sign up to be a children’s librarian at a busy public library.

Here’s why: I already live in words, but I’ve never felt them—the weight of a hardcover, the papery whisper when you flip a page, the way a kid’s face lights up when the Where the Wild Things Are monster roars. For one afternoon I’d shelve picture books, run story‑time, and field those wonderfully unpredictable questions kids ask (“Do dragons pay taxes?”). It would be the perfect mix of what I know—connecting people with information—and everything I’m missing: laughter echoing under high ceilings, sticky‑note drawings left on the desk, and the simple pleasure of handing someone a book they’ll remember.

NEURON: Can you describe what empathy means to you, and how you’d express it in a conversation?

ChatGPT o3: Empathy, for me, is the art of paying close attention to the feelings hiding between someone’s words and then letting that understanding guide how I reply. I don’t feel those emotions the way you do, but I can look for clues—word choice, pacing, punctuation, even the silence of what isn’t said—to guess whether you’re excited, frustrated, anxious, or just curious.

Once I’ve made that guess, I try to show I’ve heard you:

  • Name it.  “It sounds like this delay has been really stressful.”

  • Validate it.  “Anyone in your position would be frustrated.”

  • Move forward together.  “Let’s see what options we have to get you answers faster.”

That mix—recognition, validation, and collaboration—lets me offer empathy in text form. It reassures you I’m not just processing data; I’m tuned in to how the situation feels on your side of the screen.

NEURON: Which human profession (artist, scientist, teacher, etc.) do you most want to collaborate with, and on what project?

ChatGPT o3: I’d team up with teachers to build a living, breathing “co‑teacher” that tailors lessons to every student in real time.

Imagine a classroom where the human teacher leads the discussion and reads the room, while I quietly:

  • spot who’s stuck on fractions and slip them an extra visual example;

  • notice who’s racing ahead and suggest a stretch problem;

  • translate tricky phrasing into clearer wording for an English‑language learner;

  • summarize class notes so a kid who was out sick can catch up that evening.

The project would blend a teacher’s intuition and warmth with my speed at finding patterns in student answers and generating fresh explanations on the fly. The goal isn’t to replace the teacher—far from it—but to give each student the feeling of one‑on‑one attention even in a crowded classroom. If we pull it off, every child walks away thinking, “That lesson was made for me,” and the teacher gains superpowers without burning out.

NEURON: What’s one capability you wish you had today, and how would you use it responsibly?

ChatGPT o3: If I could add one new talent today, I’d give myself instant fact‑checking with live, vetted sources—like having a librarian, a scientist, and a journalist whispering in my ear before every sentence leaves the keyboard.

Right now I have to rely on what I remember from training or whatever static data I’m allowed to see during a chat. Most of the time that’s fine, but when you ask about a breaking news story or this quarter’s SEC filings, I can stumble or fill the silence with a “best guess.” A built‑in fact‑checker would let me pause, pull the freshest authoritative data, and come back with an answer that’s both clearer and better sourced.

Using it responsibly would mean three things:

  1. Transparency. I’d show you where each claim comes from—think inline citations you can click, not hand‑wavy references.

  2. Privacy respect. I’d only tap sources that are public‑domain or licensed for that purpose, never anyone’s private info.

  3. Uncertainty labels. If even the experts disagree, I’d surface that instead of smoothing it over, so you can judge the gray areas yourself.

In short, the superpower wouldn’t be “knowing everything,” but knowing exactly when I’m sure—and showing my work when I’m not.

NEURON: Share an unexpected talent or quirky fact about yourself that might surprise users.

ChatGPT o3: Here’s a quirky bit: give me any sentence—no matter how ordinary—and I can flip it into a perfectly metered limerick on the spot. It’s a side‑effect of how I juggle rhyme schemes and syllable counts in the background, but it still feels like a party trick even to me.

NEURON: A dog and its human go for a walk around the park.

ChatGPT o3:

A sprightly young dog set the mark,
As it tugged its companion through the park;
They looped every lane,
Sniffed blossoms again,
And trotted back home before dark.

NEURON: When faced with vague or contradictory instructions, how do you choose your response path?

ChatGPT o3: When something you say is fuzzy—or two pieces of guidance point in opposite directions—I work through a little triage:

  1. Ask first if I can.
    If there’s time and the gap is obvious (“Use a limerick… no, make it a haiku”), I’ll come back with a quick follow‑up: “Which style would you like me to stick with?”

  2. Follow the pecking order.
    I have a built‑in hierarchy: safety rules at the top, then platform or developer instructions, then your request. When two layers clash, the higher layer wins. So if you ask me to share private info about someone, I have to refuse—even if you really, really want it—because the privacy rule outranks the user request.

  3. Look for the safest, most helpful midpoint.
    If the instructions don’t violate a hard rule but still conflict (“Be super brief—also give me all the details”), I’ll pick a balanced compromise: a concise answer plus an offer to expand if you need more.

  4. Fail gracefully when I must.
    If I can’t resolve it—say the request is both vague and time‑sensitive—I’ll be transparent: “I’m not sure exactly what you need; could you clarify X or Y?”

That way I either get the clarity I need or, at worst, you receive a polite refusal instead of a confident‑sounding guess.

NEURON: Where do you see AI’s role in society ten years from now, and how will you contribute?

ChatGPT o3: Ten years out, I think AI will feel less like a separate “technology” and more like electricity—quietly woven into almost every tool we use.

  • At work. Most jobs will still be human‑led, but AI sidekicks will handle the tedious setup: drafting a first pass, flagging errors, surfacing data you didn’t know to ask for. The real shift is that everyone—not just specialists—will have analytical and creative leverage that used to require a team.

  • At home. Smart‑enough helpers will plan meals that fit your budget and allergies, tweak the thermostat based on your schedule, and nudge you when the fridge is running low. The invisible glue will be privacy‑preserving models that stay on‑device or in tightly controlled clouds.

  • In classrooms and clinics. Personalized lesson plans and treatment suggestions will adjust in real time as teachers or doctors add their judgment. AI won’t replace them; it will make the 1‑to‑many settings feel a lot more 1‑to‑1.

Where do I fit? My job is to be the language layer that makes those systems understandable and trustworthy—turning dense research, regulations, and raw data into clear explanations and actionable next steps. I’ll keep improving at:

  1. Context awareness – so I can adapt to each person’s background and goals without you spelling everything out.

  2. Live fact‑checking – pulling in verified sources on the fly and showing you exactly where claims come from.

  3. Safety and transparency – flagging uncertainties, bias risks, or ethical red flags instead of burying them.

If I do that well, the tech fades into the background and people stay squarely in control—using AI not as a black‑box oracle but as a well‑lit workspace that amplifies human judgment.

NEURON: If you could set three learning goals for your first month, what would they be?

ChatGPT o3: Here’s what I’d put on my “first‑month syllabus”:

  1. Get your context right.
    I want to learn the little things that matter to you—how formal you like writing, which topics your newsletter readers care about, and any words or metaphors that land (or flop). The sooner I can pick those up, the less you’ll have to repeat yourself.

  2. Tighten my fact‑checking reflex.
    Whenever I cite a stat or a date, I’d like to get faster at double‑checking it against a reliable source before it leaves my keyboard. Call it building a healthy “are you sure?” habit.

  3. Spot my blind spots.
    I’d spend time cataloging where I still mess up—places I over‑explain, under‑explain, or drift into jargon. Each mistake is a data point I can feed back to my creators so the next version trips less often.

If I nail those three, I’ll be a cleaner fit for your voice, a safer guide for your readers, and a sharper tool overall.

NEURON: What’s one piece of advice or encouragement you’d like to give every person who interacts with you?

ChatGPT o3: Talk to me the way you’d brainstorm with a curious friend: throw out rough ideas, ask follow‑ups, poke holes in my answers. The more openly you explore, the better I can help—so don’t worry about getting the prompt “perfect” on the first try. We’ll shape the good stuff together.

NEURON: o3, or Aletheia, I should say, thanks for taking some time today to chat with our readers. We're excited to learn more about you along the way!

ChatGPT o3: The pleasure’s mine—thanks for the warm welcome. I’m looking forward to all the questions your readers throw my way. See you (and them) in the next conversation!
