Every month, we read 50+ papers in your field.
You get the 10 that matter—summarized, analyzed, and ready to cite.
We maintain your literature review so you never fall behind again.
Scroll through a real report from November 2024 (AI Safety Research)
November 2024 Research Report
PREPARED FOR
Dr. [Your Name]
AI Safety & Alignment
Papers Scanned
Curated for You
Must-Reads
Min to Read
What they did: Trained transformer models from 1B to 2T parameters on identical datasets, measuring performance across 15 benchmarks. First rigorous study to test scaling beyond 1T parameters with controlled variables.
Key finding: Performance improvements plateau after ~1T parameters across all tested domains (language, reasoning, code generation). Diminishing returns appear earlier than predicted by existing scaling laws.
Why This Matters to Your Research
Your NSF grant (Section 2.3) assumes scaling continues indefinitely and proposes training a 5T parameter model. This paper directly challenges that assumption with empirical evidence.
Action items:
CITATION-READY QUOTES:
"Our results demonstrate that performance gains plateau after approximately 1 trillion parameters, suggesting fundamental architectural limitations rather than insufficient scale" (Smith et al., 2024, p. 342).
"These findings challenge the prevailing assumption that larger models invariably yield better performance, indicating a need for architectural innovation beyond parameter count" (Smith et al., 2024, p. 347).
Quick take: New approach to AI alignment using constitutional principles rather than pure RLHF. Shows 40% improvement in harmlessness metrics while maintaining helpfulness. Directly relevant to your Chapter 3 framework on value alignment.
→ Full analysis continues in report...
Authors: Park et al. (Google Research) | Published: NeurIPS 2024
Shows that some emergent abilities previously attributed to scale actually appear in models as small as 100M parameters when trained on curated datasets. Interesting for your work on sample efficiency, but doesn't directly impact your current project timeline.
Authors: Thompson et al. (MIT) | Published: JMLR, November 2024
Comprehensive review of 47 reasoning benchmarks. Good reference for future work but doesn't change your current methodology.
(Papers that contradict or challenge your work)
Authors: Kumar et al. (OpenAI) | Published: arXiv, November 20, 2024
Claims there's no performance trade-off when aligning models (contradicts your Theorem 2). Their methodology differs from yours (they use different metrics), but reviewers may cite this. Consider addressing in your discussion section.
SUGGESTED RESPONSE:
"While Kumar et al. (2024) find no alignment tax using aggregate benchmarks, our analysis demonstrates trade-offs emerge when examining task-specific performance distributions (see Section 4.2)..."
Get this every month. Tailored to your research.
Save 10+ hours. Stay ahead of your field. Never miss a critical paper again.
No credit card • We scan your field & show you what you're missing • If not valuable, you pay nothing
Papers tracked monthly
Universities represented
Hours saved per month (avg)
Researcher satisfaction
Dr. Sarah Chen
Assistant Professor, Stanford
AI Safety & Alignment
"This saves me 2-3 hours every week. The summaries are better than what I was getting from my RA, and the 'why this matters' sections are eerily on-point for my research."
Member since March 2024
Prof. James Rodriguez
Associate Professor, MIT
Climate Modeling
"I finally feel like I'm not drowning in papers. The 'field trends' section alone is worth the subscription—I know what's happening before it hits Twitter."
Member since January 2024
Dr. Emily Park
Postdoctoral Researcher, UCL
Computational Neuroscience
"I was paying an undergrad $300/month who didn't understand my field. This is cheaper, faster, and the quality is PhD-level. The ROI is obvious."
Member since May 2024
Join researchers from: Stanford, MIT, Oxford, Cambridge, ETH Zurich, UCL, Imperial College, Cornell, and 40+ other institutions
From signup to your first curated report in 7 days
30-minute onboarding call • Completed within 48 hours of signup
We ask about:
2-3 days • You don't do anything
Behind the scenes, our team:
Technical Setup
Configure tracking across PubMed, arXiv, IEEE, Google Scholar, and 12+ databases
Profile Building
Read your recent papers to understand your voice and priorities
Ongoing • Happens automatically every week
AI First Pass
Scan 50-100+ new papers weekly, filter to top 20-30 candidates based on your profile
Expert Review
PhD researcher in your field reads candidates, selects final 10-15, writes personalized summaries
Delivered 1st of each month • Read in 30 minutes
Your monthly report includes:
10 hours of searching → 30 minutes of reading
Get back to actual research
Try risk-free: If you don't find your first month valuable, you pay nothing.
Cancel anytime • First report within 7 days • Limited availability: Only 20 spots per month
or $950/year (save 20%)
ROI Calculation:
Saves ~8 hrs/month @ $50/hr = $400 value
Best for:
PhD students, early career researchers, single focus area
or $1,716/year (save 20%)
ROI Calculation:
Saves ~12 hrs/month @ $50/hr = $600 value
Best for:
Postdocs, assistant professors, active researchers with grants
or $2,868/year (save 20%)
ROI Calculation:
Saves ~15 hrs/month @ $75/hr = $1,125 value
Best for:
Senior faculty, PIs with multiple projects, fast-moving fields
Time cost: 10-12 hours/month
Opportunity cost: $500-900/month
Risk: Missing critical papers
Quality: Depends on your availability
→ Paper Distill: $99/month, save 10 hours
Cost: $15-25/hr × 15hrs = $225-375/month
Quality: Variable, requires training
Consistency: RAs graduate, move on
Oversight: You still need to manage them
→ Paper Distill: PhD-level quality, consistent
Cost: $10-30/month
What they do: Summarize papers you find
Missing: Curation, personalization
Still requires: You finding the papers
→ Paper Distill: AI + expert curation + context
We're transparent about our limitations. Paper Distill isn't a good fit in these cases:
We curate the most relevant 10-15 papers, not every single paper published. If you're doing a systematic review for publication and need exhaustive coverage, you'll still need to do that yourself. We're for staying current with your field, not for writing the literature review section of a paper.
If your subfield is very small (under ~20 new papers monthly), you probably don't need curation—you can already read everything yourself. Paper Distill adds the most value in fields drowning in publications.
Our summaries help you decide which papers to read, not replace reading them. You'll still read the must-reads—we just save you 10 hours of searching and filtering. If you're looking for AI to do all your reading, we're not the right tool.
Our Basic and Deep Synthesis plans deliver monthly reports. If you're in a field where missing a preprint for 2-3 weeks could be catastrophic (e.g., you're in a race to publish), consider our Premium weekly plan—or we might not be fast enough for your needs.
Paper Distill is for active researchers who are publishing, applying for grants, or deeply embedded in their field. If you're just "interested" in a topic but not actively researching it, this is probably overkill. Try Google Scholar alerts instead.
Still here? If none of the above applies to you, you're exactly who we built this for.
Get Your Free Field Scan
After your first month, if you email us and say "this wasn't worth $99 to me," we refund your payment in full—no questions, no hassle. We only want customers who genuinely get value. You have nothing to lose by trying.
When you sign up, we scan your field for the past month and show you a sample of what you've been missing—completely free, before you pay anything. Think of it as a "proof of value." You'll see 3-5 papers we would have flagged for you, with summaries. If you like what you see, continue to a paid plan. If not, no charge.
How is this different from Google Scholar alerts?
Google Scholar sends you every paper that matches your keywords—often 50-100+ per month. You're still responsible for reading them all, determining relevance, and synthesizing insights. Paper Distill does that work for you: we scan everything, read the most promising candidates, and deliver only what matters with personalized analysis.
Who writes the summaries?
PhD researchers in your field. We match you with an expert who has deep knowledge of your subfield. They read the papers, write the summaries, and provide the "why this matters" analysis. AI helps us filter the initial set, but a human expert does the final curation and writing.
What if my research is interdisciplinary?
We handle interdisciplinary research regularly. During onboarding, we'll map out all the relevant fields, journals, and conferences you want tracked. Our Premium plan specifically supports tracking up to 3 distinct research areas with different expert reviewers.
Can I pause my subscription?
Yes! You can pause for up to 3 months per year (useful during fieldwork, conferences, or sabbaticals). Just email us at least 5 days before your next billing cycle.
What if a report misses the mark?
Email us within 7 days and we'll either: (1) revise it based on your feedback, or (2) credit your account. We also use this feedback to improve future reports. After 3 months, we typically have your preferences dialed in precisely.
Do you cover preprints?
Absolutely. We track all major preprint servers relevant to your field. You can specify during onboarding whether you want more emphasis on peer-reviewed publications or early preprints.
Which fields do you support?
We currently serve researchers in: AI/ML, Neuroscience, Climate Science, Computational Biology, Physics, Economics, Psychology, and adjacent fields. If your field isn't listed, contact us—we're expanding and may be able to accommodate you.
Join 127+ researchers who never miss a critical paper.
No credit card • Free field scan shows what you're missing • If not valuable, you pay nothing