Perplexity's Deep Research is different from its standard search in a way that matters for how you prompt it. Standard Perplexity search takes your query, pulls sources, and synthesizes an answer — fast, roughly like an augmented Google. Deep Research runs a multi-step investigation: it plans a research approach, executes multiple searches, reads documents, synthesizes findings, and produces a structured report. The whole process takes a few minutes.
The prompting that works for standard search doesn't always work for Deep Research. Here's how to use it well.
What Deep Research is doing
When you submit a query to Deep Research, Perplexity runs an internal agentic loop:
- Plans what sub-questions to investigate
- Executes searches for each sub-question
- Reads and extracts information from sources
- Identifies gaps and does follow-up searches
- Synthesizes everything into a structured report
The model's planning step is what separates good results from mediocre ones. If your query is vague, the planning step will interpret it broadly and you'll get a surface-level survey. If your query is specific about what you need, the model plans targeted searches and you get dense, useful output.
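As a mental model, the loop above is a plan → search → fill-gaps → synthesize pipeline. Here's an illustrative Python sketch with stubbed search and synthesis functions — my own stand-ins, not Perplexity's actual internals:

```python
# Illustrative sketch of a Deep Research-style agentic loop.
# plan/search/find_gaps/synthesize are stand-ins for LLM and
# search-API calls, NOT Perplexity's real implementation.

def plan(query):
    # An LLM call would decompose the query into sub-questions.
    return [f"{query}: landscape", f"{query}: limitations", f"{query}: recent news"]

def search(sub_question):
    # A search-API call would return source documents.
    return [f"source about '{sub_question}'"]

def find_gaps(findings):
    # An LLM call would spot unanswered sub-questions.
    return []  # assume no gaps, for the sketch

def synthesize(query, findings):
    # An LLM call would write the structured report.
    return f"Report on '{query}' drawing on {len(findings)} sources."

def deep_research(query):
    findings = []
    for sub_q in plan(query):          # planning step drives everything
        findings.extend(search(sub_q))
    for gap in find_gaps(findings):    # follow-up pass on identified gaps
        findings.extend(search(gap))
    return synthesize(query, findings)

print(deep_research("AI agent market"))
```

The point of the sketch: everything downstream branches off `plan()`, which is why the specificity of your query matters so much.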
Structuring your Deep Research prompt
The most effective Deep Research prompts have three parts: a clear research objective, scope constraints, and an output specification.
Research objective: What question are you actually trying to answer? Not "tell me about X" but "help me understand Y so I can Z."
Scope constraints: What should it focus on? What should it skip? Time range? Geography? Specific industries or use cases?
Output specification: What format do you want? What level of detail? What sections? Should it include source quality assessment? Conflicting viewpoints?
Example — vague vs. structured:
Vague:
Research the AI agent market.
Structured:
Research objective: I'm evaluating whether to build an AI agent orchestration layer as a B2B SaaS product. I need to understand the current competitive landscape and where the real gaps are.
Focus on:
- Commercial AI agent orchestration platforms (LangGraph, CrewAI, AutoGen, Vertex AI Agent Builder, AWS Bedrock Agents) — what each does, pricing, target customer, limitations
- What enterprise buyers say they can't do with current tools (forums, case studies, job postings are useful signals)
- Recent funding rounds and acquisitions in this space (2024-2026)
Skip: academic research, consumer-facing agent tools, general AI assistant products
Output: Structured competitive analysis with a section on market gaps. Include your assessment of which claims in the market have strong evidence vs. weak evidence.
The structured version takes 2 minutes to write and produces a report that's 10x more useful.
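If you write these prompts regularly, a small helper that enforces the three-part structure keeps you honest. This is a minimal sketch — the field names are my own shorthand, not any Perplexity convention:

```python
def build_research_prompt(objective, focus, skip, output_spec):
    """Assemble a Deep Research prompt: objective, scope constraints, output spec."""
    focus_lines = "\n".join(f"- {item}" for item in focus)
    return (
        f"Research objective: {objective}\n\n"
        f"Focus on:\n{focus_lines}\n\n"
        f"Skip: {', '.join(skip)}\n\n"
        f"Output: {output_spec}"
    )

prompt = build_research_prompt(
    objective="Evaluate the competitive landscape for AI agent orchestration.",
    focus=[
        "Commercial platforms: features, pricing, target customer, limitations",
        "Gaps enterprise buyers report in forums and case studies",
    ],
    skip=["academic research", "consumer agent tools"],
    output_spec="Structured competitive analysis with a market-gaps section.",
)
print(prompt)
```

Forcing yourself to fill in all four arguments is the real value — an empty `skip` list is usually a sign you haven't scoped the question yet.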
Using Deep Research for competitive intelligence
This is one of Perplexity's strongest use cases. The model can synthesize pricing pages, user reviews, feature comparisons, and recent news into something you'd otherwise spend hours compiling.
Effective pattern:
Compare [Product A] and [Product B] for [specific use case].
I'm evaluating these for [your context — e.g., "a 50-person engineering team that needs X"].
Cover:
- Core feature differences that matter for [use case]
- Pricing at [your scale]
- What users say about limitations of each (look at G2, Reddit, Hacker News discussions)
- Recent product changes or announcements (last 6 months)
- Anything about their APIs/integrations specifically
Don't include: marketing copy from their own sites presented as fact. Note when something is a vendor claim vs. user-reported.
The instruction to distinguish vendor claims from user reports is worth adding — the model will often parrot marketing language without flagging it as such unless you ask.
Iterating on Deep Research results
Deep Research isn't a one-shot tool. Treat the first report as a draft you iterate on.
After the initial report, ask follow-up Deep Research queries on specific sections:
- "The previous research mentioned [X claim]. Do a deeper search specifically on this — what's the evidence for it and what's the counter-evidence?"
- "The competitive analysis was thin on [specific product]. Run a focused research pass just on them."
- "Find recent primary sources — case studies, post-mortems, technical papers — on [specific aspect]. The previous research cited mostly news articles."
Each iteration narrows and deepens. You're using the first pass to identify where you need more depth.
Getting usable sources, not just citations
Perplexity cites sources, but citation quality varies. Some sources are substantive (research papers, detailed case studies, technical documentation); others are thin (news aggregators, listicles, press releases).
Ask for source quality signals explicitly:
In your report, flag the strength of evidence for key claims:
- [Strong]: backed by primary research, data, or multiple independent sources
- [Limited]: one or two sources, mostly secondary coverage
- [Vendor claim]: from the company's own materials
And for the most important claims, include the source URL directly in the body so I can read the original.
This forces the model to weigh evidence quality, not just citation quantity. The output is more trustworthy, and you know exactly which claims to verify against the originals.
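A side benefit of the bracketed tags: if you feed reports into your own tooling, they're trivial to tally. A quick sketch, assuming the tag scheme from the prompt above:

```python
import re

def evidence_tally(report):
    """Count evidence-quality tags ([Strong], [Limited], [Vendor claim]) in a report."""
    tags = ["Strong", "Limited", "Vendor claim"]
    return {tag: len(re.findall(rf"\[{re.escape(tag)}\]", report)) for tag in tags}

sample = (
    "[Strong] Multiple independent benchmarks show X.\n"
    "[Vendor claim] The company says throughput doubled.\n"
    "[Strong] A peer-reviewed study confirms Y.\n"
)
print(evidence_tally(sample))
```

A report dominated by `Vendor claim` tags tells you at a glance that the research pass leaned on marketing material and needs a follow-up query targeting primary sources.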
Research workflows that work well
Market sizing and trend analysis:
I'm trying to estimate the market size for [X] specifically in [geography/vertical].
Look for:
- Industry analyst estimates (Gartner, IDC, CB Insights, etc.) — note the date and methodology caveats
- Adjacent market sizes that can serve as reference points
- Growth rate data from any credible source
- What practitioners in the field say about market dynamics (not analysts — actual operators)
Format: Lead with the range of estimates and why they vary. Then the supporting data. Then a section on what's uncertain.
Technical landscape research:
I need to understand the current state of [technical area] as of early 2026.
Specifically:
- What are the leading approaches/frameworks/tools and what are their actual tradeoffs?
- What were the significant developments in the last 12 months?
- What are practitioners saying are the unsolved problems?
- Any important papers, posts, or resources I should read?
I have [your background level] in this area, so calibrate the technical depth accordingly.
Due diligence research:
Research [company name] for investment/partnership due diligence.
Cover:
- Business model and how they make money
- Traction signals (funding, customer announcements, hiring, news)
- What customers and users say about them (reviews, social media, forums)
- Competitive position and key differentiators
- Any concerning signals — lawsuits, leadership changes, negative press, regulatory issues
Sources: prioritize third-party coverage, user reviews, and primary data over their own press releases.
When Deep Research isn't the right tool
Deep Research takes 2-5 minutes and produces a long report. Don't use it when:
- You need a quick factual answer — use standard Perplexity search
- You already know the space well and need a specific data point, not a survey
- The topic is very recent (hours ago) — Deep Research may not have indexed it yet
- You need to verify a specific claim — that's a targeted search, not a research task
For quick lookups and factual questions, standard search with the "Focus" mode set appropriately (Web, Academic, YouTube, Reddit) works better and faster.
Combining Deep Research with your own analysis
The output from Deep Research is a starting point, not a conclusion. The model has access to what's published online; it doesn't have access to your internal data, your relationships, your domain expertise.
The best workflow:
- Use Deep Research to get the external landscape — what's publicly known
- Layer in your own data and context — what your customers tell you, what your team knows, what's in your internal docs
- Identify the gaps between what Deep Research found and what you know from experience
- Use targeted follow-up queries to investigate the gaps
The research should make you better at thinking about the problem, not replace your thinking about it.
Prompt templates worth saving
Quick competitive scan:
Quick competitive scan for [your product category]: who are the main players, what do they charge, and what do users complain about most? Focus on the last 12 months. Keep it under 500 words.
Pre-meeting research:
I'm meeting with [Company] next week. Give me a briefing on: what they do, recent news, who the key people are (if public), and anything that would help me understand their current priorities. Focus on 2025-2026. Keep it to a one-page brief.
Technology evaluation:
I'm evaluating [Technology/Library/Tool] for [use case]. Research: maturity level, production adoption, known limitations, who maintains it, and what the community says about its future. Include any comparison with alternatives I should know about.
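If you reuse these often, keeping them as parameterized strings saves retyping. A minimal sketch — the placeholder names are my own shorthand for the bracketed slots above:

```python
# Saved Deep Research templates as parameterized strings.
# Placeholder names ({category}, {company}, {tool}, {use_case}) map to the
# bracketed slots in the templates above.

TEMPLATES = {
    "competitive_scan": (
        "Quick competitive scan for {category}: who are the main players, "
        "what do they charge, and what do users complain about most? "
        "Focus on the last 12 months. Keep it under 500 words."
    ),
    "pre_meeting": (
        "I'm meeting with {company} next week. Give me a briefing on: what "
        "they do, recent news, who the key people are (if public), and "
        "anything that would help me understand their current priorities."
    ),
    "tech_eval": (
        "I'm evaluating {tool} for {use_case}. Research: maturity level, "
        "production adoption, known limitations, who maintains it, and what "
        "the community says about its future."
    ),
}

print(TEMPLATES["tech_eval"].format(tool="DuckDB", use_case="embedded analytics"))
```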
For more on integrating AI into research workflows, the AI research workflows post covers broader approaches beyond Perplexity. And if you're using research outputs to feed into content or analysis pipelines, the working with long documents lesson covers how to handle large source material effectively.