GPT-5 Rumors Are Flying—Here's What's Actually Credible

OpenAI GPT-5 concept illustration

My inbox has been absolutely flooded with GPT-5 speculation lately. "Is it coming next month?" "Will it achieve AGI?" "Should I delay my startup until it launches?"

Let me save you some anxiety: most of what you're reading is nonsense. But there are a few threads worth following.

What We Actually Know

Here's the thing about OpenAI—they're simultaneously the most watched and most opaque AI company on the planet. Everyone's trying to read the tea leaves, and most people are seeing patterns that aren't there.

What we do know: OpenAI has been hiring aggressively for inference optimization. That typically signals a larger model in the pipeline. You don't optimize infrastructure for fun.

We also know they've been unusually quiet. No major model releases since the o1 series. For a company that was shipping monthly updates, that silence is deafening. Either something big is cooking, or they've hit a wall. I'm betting on the former.

The AGI Claims Are Getting Ridiculous

Can we please, as a community, stop with the "GPT-5 will be AGI" takes? Every generation gets this treatment, and every generation falls short of the hype while still being genuinely impressive.

GPT-5 will probably be better at reasoning. It'll probably have a larger context window. It might have better multimodal capabilities. What it won't be is a general intelligence that can do anything a human can do. That's not pessimism—that's just understanding how these models work.

What I'm Actually Excited About

Here's what would genuinely matter: better reliability. Less hallucination. More consistent performance across different types of tasks.

The flashy capabilities get the headlines, but the boring reliability improvements are what actually make AI useful in production. If GPT-5 can reduce hallucination rates by even 50%, that opens up use cases that are currently too risky to deploy.
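To see why even a modest reliability gain compounds like that, here's a quick back-of-the-envelope sketch. The numbers are illustrative assumptions (a 5% per-call hallucination rate, a 20-step workflow, errors treated as independent), not real benchmarks:

```python
# Back-of-the-envelope: why halving hallucination rates matters in production.
# Hypothetical numbers; assumes errors are independent across calls.

def pipeline_success(per_call_error: float, num_calls: int) -> float:
    """Probability a multi-step pipeline completes without a single error."""
    return (1 - per_call_error) ** num_calls

# A 20-step agent workflow at a 5% vs. 2.5% per-call error rate:
baseline = pipeline_success(0.05, 20)   # ~0.36
halved   = pipeline_success(0.025, 20)  # ~0.60
print(f"5.0% error rate: {baseline:.0%} of runs succeed")
print(f"2.5% error rate: {halved:.0%} of runs succeed")
```

Halving the per-call error rate takes a 20-step workflow from failing most of the time to succeeding most of the time. That's the difference between a demo and a deployable product.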

My Prediction (Take It With a Grain of Salt)

Based on hiring patterns, patent filings, and reading way too much into Sam Altman's tweets: I expect something in Q2 2026. It'll be impressive but not world-changing. The discourse will be unbearable for about two weeks. And then we'll all get back to work.

Don't delay your projects waiting for it. Build with what exists. Adapt when the new thing drops. That's always been the right strategy in AI, and it still is.