OpenAI reveals more details about its agreement with the Pentagon
AND: Perplexity’s new Computer is another bet that users need many AI models

✨TodayOnAI’s Daily Drop
OpenAI reveals more details about its agreement with the Pentagon
Perplexity’s new Computer is another bet that users need many AI models
Anthropic Defies Pentagon AI Demand as Google and OpenAI Employees Rally Support
💬 Let’s Fix This Prompt
🧰 Today’s AI Toolbox Pick
| 📌 The TodayOnAI Brief |
OpenAI

🚀 TodayOnAI Insight: OpenAI has secured a classified AI deployment deal with the U.S. Department of Defense—just days after Anthropic’s negotiations collapsed and the company was labeled a supply-chain risk. CEO Sam Altman admitted the agreement was “definitely rushed,” but framed it as a strategic move to de-escalate tensions between Washington and the AI industry.
🔍 Key Takeaways:
OpenAI will deploy models in classified environments via a cloud API, keeping personnel and infrastructure under its control.
The company reaffirmed red lines: no mass domestic surveillance, autonomous weapons, or high-stakes automated decision systems.
OpenAI claims a “multi-layered” safeguard approach—retaining control of its safety stack and embedding contractual protections.
Critics argue references to Executive Order 12333 could still enable surveillance tied to foreign data collection.
Anthropic’s failed deal triggered a federal phase-out, clearing the path for OpenAI’s rapid agreement.
💡 Why This Stands Out: This episode underscores the growing entanglement between frontier AI labs and national security priorities. OpenAI is betting that tighter deployment control—not just policy language—can balance military partnerships with public trust. The question now: can technical safeguards withstand political pressure when AI becomes strategic infrastructure?
Perplexity

🚀 TodayOnAI Insight: Perplexity has launched Perplexity Computer, a cloud-based agentic system for $200/month subscribers that autonomously executes complex workflows using 19 AI models. The move signals a sharp pivot toward enterprise-grade research and high-value users over mass adoption.
🔍 Key Takeaways:
New “Computer” agent orchestrates 19 models and can spawn subagents to complete multi-step tasks.
Designed for complex research—financial, legal, statistical—outputting finished websites or visualizations.
Cloud-based execution aims to reduce security risks tied to local agent tools.
Exclusive to $200/month Max tier, reflecting a shift toward premium enterprise subscriptions.
Perplexity is doubling down on a “multi-model” strategy, auto-routing queries to specialized models like Gemini Flash, Claude Sonnet 4.5, or GPT-5.1.
💡 Why This Stands Out: Perplexity is repositioning itself from AI search challenger to orchestration layer for frontier models. Rather than competing on raw scale with OpenAI or Google, it’s targeting users making “GDP-moving decisions.” If models are specializing—not commoditizing—owning the routing layer could prove more strategic than owning the model itself.
Anthropic

🚀 TodayOnAI Insight: Anthropic is refusing a Pentagon demand for unrestricted access to its AI, drawing public support from over 360 Google and OpenAI employees. The standoff centers on red lines around mass surveillance and autonomous weapons—turning AI governance into a high-stakes test of industry unity.
🔍 Key Takeaways:
Anthropic rejected U.S. Department of Defense pressure to allow use of its models for domestic surveillance and fully autonomous weaponry.
More than 300 Google and 60 OpenAI employees signed an open letter urging leadership to uphold those same boundaries.
The Pentagon reportedly threatened to label Anthropic a “supply chain risk” or invoke the Defense Production Act to compel compliance.
OpenAI CEO Sam Altman criticized the use of DPA threats; an OpenAI spokesperson affirmed alignment with Anthropic’s red lines.
The military already uses Grok, Gemini, and ChatGPT for unclassified tasks and is negotiating classified access.
💡 Why This Stands Out: This isn’t just a contract dispute—it’s a defining moment for AI industry self-governance. As governments push for deeper model access, companies must decide whether commercial scale outweighs ethical guardrails. If firms fracture under pressure, red lines may become negotiable. If they hold, AI policy could shift from regulation-by-threat to consensus-driven norms.
| 💬 Let’s Fix This Prompt |
✨ See how a simple prompt upgrade can unlock better AI output.
🔹 The Original Prompt
"Generate blog ideas for a tech company."
At first glance, this prompt might seem fine. But it's too broad, and that vagueness limits the quality of the AI-generated results. Let's improve it using prompt engineering best practices.
✅ The Improved Prompt
Generate a list of unique, engaging blog post ideas for a B2B tech company that wants to attract decision-makers in mid-sized companies. Focus on topics related to emerging technology trends, industry insights, and practical solutions their software offers. Include suggested titles and a 1–2 sentence summary for each idea.
💡 Why It's Better
Specific audience: Targets decision-makers in mid-sized companies.
Contextual focus: Emphasizes emerging tech and practical solutions.
Actionable output: Requests summaries and titles to spark execution.
Tone and style: Guides the type of content (insightful, engaging, relevant).
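The same upgrade pattern can be sketched as a small, reusable prompt builder in Python. This is a minimal illustration of the principles above (specific audience, contextual focus, actionable output); the function name and parameters are hypothetical, not part of any real tool or API:

```python
# Illustrative sketch: assemble a specific, actionable blog-idea prompt
# from its parts, so each prompt states audience, focus, and output format.
# The function and parameter names are made up for this example.

def build_blog_idea_prompt(audience: str, topics: list[str], output_format: str) -> str:
    """Combine audience, topic focus, and output instructions into one prompt."""
    topic_list = ", ".join(topics)
    return (
        f"Generate a list of unique, engaging blog post ideas for {audience}. "
        f"Focus on topics related to {topic_list}. "
        f"{output_format}"
    )

prompt = build_blog_idea_prompt(
    audience="a B2B tech company targeting decision-makers in mid-sized companies",
    topics=["emerging technology trends", "industry insights", "practical software solutions"],
    output_format="Include suggested titles and a 1-2 sentence summary for each idea.",
)
print(prompt)
```

Swapping in a different audience or topic list regenerates a prompt that keeps the same level of specificity, which is the real lesson of the upgrade.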
🛠️ Learn how to adapt this prompt for SaaS, AI tools, dev teams & more →
Read the full PromptPilot breakdown
💡 Bonus Tool: Want to generate and master prompts instantly?
👉 Try PromptPilot by TodayOnAI (Free to use)
| 🧠 Smart Picks |