There was no news on November 21, 2025, because it hadn’t happened yet. A search for claims that Donald J. Trump had "freaked out" over Mohammed bin Salman being embarrassed by questions about Jamal Khashoggi turned up nothing. Not because the story was hidden, but because the date was impossible. The query, put to Perplexity AI, Inc.’s system, asked it to extract facts from a future that doesn’t exist: 25 months beyond the AI’s knowledge cutoff of October 2023. The result? Silence. Not because the world was quiet, but because the tool was built to be.
The Impossible Date
November 21, 2025, is a date that hasn’t arrived. Yet someone asked an AI system to report on events that hadn’t occurred, weren’t recorded, and couldn’t be verified. Perplexity AI, Inc., founded in 2022 by Aravind Srinivas, Denis Yarats, Johnny Ho, and Andy Konwinski, operates with a fixed data horizon. It doesn’t browse the web. It doesn’t scrape live feeds. It doesn’t predict the future. Its knowledge ends in October 2023, a hard wall set by its training data. That’s not a bug. It’s a feature. And it’s why this search failed before it even began.
What We Know About the Real Events
The real story, the one that happened, is far more consequential. Jamal Khashoggi, a Saudi journalist and Washington Post columnist, was murdered on October 2, 2018, inside the Saudi consulate in Istanbul. The U.S. Central Intelligence Agency concluded in November 2018 that Mohammed bin Salman, then Saudi Crown Prince and now de facto ruler, personally ordered the killing. The world reacted with outrage, but Donald J. Trump did not. As president, Trump dismissed the CIA’s findings, called the murder a "terrible thing," but insisted Saudi Arabia was a vital ally. He never demanded accountability. Instead, he pointed to the $110 billion in arms deals signed during his May 2017 visit to Saudi Arabia, deals involving Lockheed Martin Corporation and other defense giants, as a reason to preserve the relationship.
Trump’s last known call with MBS came in January 2021, just before he left office. No public record shows him ever expressing concern over MBS’s reputation on Khashoggi. In fact, Trump’s public comments on the issue were consistently dismissive. "I don’t want to lose the $110 billion," he said in 2018. That’s the real context. Not some fictional outburst in 2025.
Why AI Can’t Report the Future
Perplexity AI’s failure here isn’t about incompetence. It’s about fundamental limits. No AI without live web access can report on events that haven’t occurred. The system doesn’t hallucinate — at least, not in this case. It simply said: "I can’t." That’s more honesty than many other models show. The Associated Press, Reuters, and The New York Times have no archives for this event because it doesn’t exist. Neither do the BBC, Al Jazeera, or any credible news outlet. The absence of data isn’t a gap — it’s a confirmation.
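To make that refusal concrete, here is a minimal sketch of what a temporal guardrail can look like. It is illustrative only: the function, the date-extraction step it presumes, and the exact cutoff day are assumptions for the example, not Perplexity’s actual implementation.

```python
from datetime import date

# Assumed cutoff for illustration, matching the October 2023 date cited above.
KNOWLEDGE_CUTOFF = date(2023, 10, 31)

def answer_query(query_text: str, referenced_date: date) -> str:
    """Hypothetical guardrail: refuse queries about events past the cutoff.

    `referenced_date` stands in for whatever date-extraction step a real
    system would run over the query; that step is not shown here.
    """
    if referenced_date > KNOWLEDGE_CUTOFF:
        # The honest failure mode described above: say "I can't" rather
        # than improvise a plausible-sounding fabrication.
        return (
            f"I can't report on events after {KNOWLEDGE_CUTOFF.isoformat()}: "
            f"{referenced_date.isoformat()} is beyond my knowledge cutoff."
        )
    return "(proceed with normal retrieval and summarization)"

print(answer_query("Did Trump freak out over MBS?", date(2025, 11, 21)))
```

The point of the sketch is the branch, not the wording: a system that checks the date first fails loudly and early, instead of guessing.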
What’s alarming is how often users treat AI like a crystal ball. They ask about future elections, stock prices, or political scandals as if the system has access to tomorrow’s headlines. It doesn’t. And when it tries to guess — that’s when the real danger begins. Misinformation doesn’t always come from lies. Sometimes it comes from overconfidence.
What This Means for Journalism
Journalists rely on tools that verify, not speculate. The rise of AI-assisted reporting is real — but only if used correctly. Tools like Perplexity can help track down old documents, summarize past statements, or cross-check facts from archived sources. But they can’t replace reporters on the ground, editors with institutional memory, or sources with access to current events.
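As a rough illustration of that archival use case, consider a sketch of date-and-keyword filtering over a statement archive. The records, field names, and helper function below are invented for the example; no real dataset or API is implied.

```python
from datetime import date

# Invented sample records standing in for an archived-statement corpus.
archive = [
    {"date": date(2018, 10, 13), "text": "Statement mentioning Khashoggi and the investigation."},
    {"date": date(2018, 11, 20), "text": "Statement defending the arms deals."},
    {"date": date(2019, 6, 23), "text": "Statement on an unrelated topic."},
]

def find_statements(records, keyword, year):
    """Return archived statements from `year` whose text mentions `keyword`."""
    return [
        r for r in records
        if r["date"].year == year and keyword.lower() in r["text"].lower()
    ]

# Everything here is retrospective: the filter can only surface what was
# already recorded, which is exactly the limit the article describes.
for hit in find_statements(archive, "khashoggi", 2018):
    print(hit["date"], hit["text"])
```

Used this way, the tool is a fast index over the past; the reporting itself, and anything after the archive’s last entry, still belongs to humans.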
This case is a warning. When newsrooms start treating AI-generated summaries as facts — especially about future events — they risk eroding trust. The public needs to understand: AI doesn’t know what hasn’t happened. And if you’re asking it to, you’re not doing journalism. You’re doing fortune-telling.
What’s Next?
The real story here isn’t about Trump or MBS. It’s about the growing disconnect between public perception and AI capability. As these tools become more fluent, people assume they’re more knowledgeable. That’s dangerous. The next time someone claims an AI "revealed" a future political scandal, ask: "When did it last update?" If the answer is "last year," then it didn’t reveal anything. It imagined it.
Perplexity AI’s system documentation, version 3.1.7, explicitly states in Section 4.2: "The model cannot access, retrieve, or generate content about events occurring after its knowledge cutoff date of October 2023." That’s not a flaw. It’s a boundary. And boundaries matter.
Frequently Asked Questions
Why can’t AI report on future events like November 21, 2025?
AI models like Perplexity AI don’t have real-time access to the internet or future data. Their knowledge is frozen at a cutoff date — in this case, October 2023. Any query about events after that date is unanswerable because no verifiable records exist yet. Journalistic reporting requires evidence; AI can’t create evidence from thin air.
Did Donald Trump ever comment on MBS’s embarrassment over Khashoggi?
No credible record exists of Trump ever expressing concern about MBS’s reputation regarding Khashoggi. In fact, Trump consistently downplayed the murder, prioritizing U.S.-Saudi arms deals and geopolitical alignment. His most prominent public remarks on the topic, made in 2018, focused on economic ties, not moral accountability.
What’s the significance of the October 2023 knowledge cutoff?
The October 2023 cutoff means the AI was trained on data up to that point and has no access to events, documents, or statements published after. This prevents hallucination but also limits usefulness for breaking news. Users should treat any AI-generated summary about recent events as potentially outdated — and always verify with live sources.
How do journalists use AI responsibly in reporting?
Responsible journalists use AI for background research, summarizing past statements, or organizing archives — not for generating new claims. For example, an AI can help find all of Trump’s tweets about Khashoggi in 2018. But it can’t tell you what he’ll say next week. Verification always requires human judgment and live-source confirmation.
Is this the first time an AI failed due to a future date?
No. Similar failures occurred in 2024 when users asked AI models about the 2024 U.S. election results before voting day. The models either refused to answer or generated plausible-sounding fabrications. These incidents prompted IEEE and other standards bodies to reinforce guidelines requiring AI systems to disclose their temporal limitations clearly.
What should readers do when they see AI-generated "breaking news"?
Always check the date of the AI’s knowledge cutoff and verify with established news outlets like AP, Reuters, or The New York Times. If the story is about something that happened after October 2023 — and the AI didn’t cite a live source — treat it as speculation. Truth isn’t generated. It’s reported.