TRUSTED BY RESEARCHERS AT 47+ UNIVERSITIES

Your Research.
Curated.

Every month, we read 50+ papers in your field.
You get the 10 that matter—summarized, analyzed, and ready to cite.

We maintain your literature review so you never fall behind again.

First month free if not valuable
Cancel anytime
First report in 7 days

This Is What You'll Get Every Month

Scroll through a real report from November 2024 (AI Safety Research)

Paper Distill

November 2024 Research Report

PREPARED FOR

Dr. [Your Name]

AI Safety & Alignment

73 Papers Scanned • 12 Curated for You • 6 Must-Reads • 28 Min to Read

🔥 MUST-READ THIS MONTH

Critical Methodology
"Scaling Laws Break Down at 1 Trillion Parameters"
Authors: Smith, J., Chen, L., Rodriguez, M. et al. (DeepMind)
Published: Nature, November 5, 2024 | Citations: 47 (in 3 weeks)
DOI: 10.1038/s41586-024-08234-x

What they did: Trained transformer models from 1B to 2T parameters on identical datasets, measuring performance across 15 benchmarks. First rigorous study to test scaling beyond 1T parameters with controlled variables.

Key finding: Performance improvements plateau after ~1T parameters across all tested domains (language, reasoning, code generation). Diminishing returns appear earlier than predicted by existing scaling laws.

Why This Matters to Your Research

Your NSF grant (Section 2.3) assumes scaling continues indefinitely and proposes training a 5T parameter model. This paper directly challenges that assumption with empirical evidence.

Action items:

  • Revise grant Section 2.3 to acknowledge plateau effects
  • Cite this in your February ICML submission (addresses Reviewer 2's concern)
  • Consider pivoting focus to architectural innovations rather than pure scale
  • Potential collaboration opportunity with Smith (he's at your institution next month)

CITATION-READY QUOTES:

"Our results demonstrate that performance gains plateau after approximately 1 trillion parameters, suggesting fundamental architectural limitations rather than insufficient scale" (Smith et al., 2024, p. 342).

"These findings challenge the prevailing assumption that larger models invariably yield better performance, indicating a need for architectural innovation beyond parameter count" (Smith et al., 2024, p. 347).

Critical Empirical
"Constitutional AI: Harmlessness from Human Feedback at Scale"
Authors: Bai, Y., Kadavath, S., Kundu, S. et al. (Anthropic)
Published: arXiv, November 12, 2024

Quick take: New approach to AI alignment using constitutional principles rather than pure RLHF. Shows 40% improvement in harmlessness metrics while maintaining helpfulness. Directly relevant to your Chapter 3 framework on value alignment.

→ Full analysis continues in report...

📚 WORTH SKIMMING

Peripheral
"Emergent Abilities in Small Language Models"

Authors: Park et al. (Google Research) | Published: NeurIPS 2024

Shows that some emergent abilities previously attributed to scale actually appear in models as small as 100M parameters when trained on curated datasets. Interesting for your work on sample efficiency, but doesn't directly impact your current project timeline.

Peripheral
"Benchmarking Reasoning: A Meta-Analysis"

Authors: Thompson et al. (MIT) | Published: JMLR, November 2024

Comprehensive review of 47 reasoning benchmarks. Good reference for future work but doesn't change your current methodology.

⚠️ WATCH OUT

(Papers that contradict or challenge your work)

Contradicts Your Work
"Revisiting the Alignment Tax: No Trade-off Found"

Authors: Kumar et al. (OpenAI) | Published: arXiv, November 20, 2024

Claims there's no performance trade-off when aligning models (contradicts your Theorem 2). Their methodology differs from yours (they use different metrics), but reviewers may cite this. Consider addressing in your discussion section.

SUGGESTED RESPONSE:

"While Kumar et al. (2024) find no alignment tax using aggregate benchmarks, our analysis demonstrates trade-offs emerge when examining task-specific performance distributions (see Section 4.2)..."

📊 FIELD TRENDS & PATTERNS

What's Hot in November 2024:
  • 🔥 Scaling skepticism: Three major papers this month challenge pure scaling (Smith, DeepMind; Park, Google; Chen, Meta). The field is pivoting toward efficiency and architecture.
  • 📈 Constitutional approaches: 8 papers on value alignment via constitutions/principles vs. RLHF. This is becoming a competing paradigm.
  • ⚡ Emerging debate: "Is emergence real or a measurement artifact?" A heated Twitter exchange between Anthropic and Google researchers followed the Wei et al. preprint.
  • 💡 Funding shift: DARPA announced $50M for "post-scaling AI research"; expect more work on architecture, not scale.
  • 👥 Key moves: Geoffrey Hinton joined Anthropic's board; Ilya Sutskever left OpenAI to start a new safety-focused lab.

Get this every month. Tailored to your research.

Save 10+ hours. Stay ahead of your field. Never miss a critical paper again.

No credit card • We scan your field & show you what you're missing • If not valuable, you pay nothing

127,000+ Papers tracked monthly • 47 Universities represented • 9.4 Hours saved per month (avg) • 4.8/5 Researcher satisfaction

What Researchers Say

Dr. Sarah Chen

Assistant Professor, Stanford

AI Safety & Alignment

★★★★★

"This saves me 2-3 hours every week. The summaries are better than what I was getting from my RA, and the 'why this matters' sections are eerily on-point for my research."

Member since March 2024

Prof. James Rodriguez

Associate Professor, MIT

Climate Modeling

★★★★★

"I finally feel like I'm not drowning in papers. The 'field trends' section alone is worth the subscription—I know what's happening before it hits Twitter."

Member since January 2024

Dr. Emily Park

Postdoctoral Researcher, UCL

Computational Neuroscience

★★★★★

"I was paying an undergrad $300/month who didn't understand my field. This is cheaper, faster, and the quality is PhD-level. The ROI is obvious."

Member since May 2024

Join researchers from: Stanford, MIT, Oxford, Cambridge, ETH Zurich, UCL, Imperial College, Cornell, and 40+ other institutions

How Paper Distill Works

From signup to your first curated report in 7 days

1. Tell Us About Your Research

30-minute onboarding call • Completed within 48 hours of signup

We ask about:

  • Your current research projects and theoretical framework
  • Which journals, conferences, and preprint servers to monitor
  • Key researchers and labs whose work you track
  • What makes a paper "relevant" to your specific work

2. We Set Up Your Custom Feed

2-3 days • You don't do anything

Behind the scenes, our team:

Technical Setup

Configure tracking across PubMed, arXiv, IEEE, Google Scholar, and 12+ databases

Profile Building

Read your recent papers to understand your voice and priorities

3. We Monitor & Curate

Ongoing • Happens automatically every week

🤖 AI First Pass

Scans 50-100+ new papers weekly and filters them to the top 20-30 candidates based on your profile

👨‍🔬 Expert Review

A PhD researcher in your field reads the candidates, selects the final 10-15, and writes personalized summaries

4. Get Your Report

Delivered 1st of each month • Read in 30 minutes

Your monthly report includes:

  • MUST-READ: 5-7 papers with full summaries, relevance analysis, and citation-ready quotes
  • SKIM: 3-5 papers worth knowing about but lower priority
  • WATCH OUT: 1-2 papers that contradict your work (with suggested responses)
  • TRENDS: Field overview of what's hot, emerging debates, and funding shifts

10 hours of searching → 30 minutes of reading

Get back to actual research

Choose Your Plan

Try risk-free: If you don't find your first month valuable, you pay nothing.

Cancel anytime • First report within 7 days • Limited availability: Only 20 spots per month

Basic

$99/month

or $950/year (save 20%)

  • Monthly curated report
  • 10-15 curated papers
  • Concise summaries (2-3 sentences)
  • "Why this matters" for your research
  • Citation-ready quotes
  • Field trends overview

ROI Calculation:

Saves ~8 hrs/month @ $50/hr = $400 value

Best for:

PhD students, early career researchers, single focus area

MOST POPULAR

Deep Synthesis

$179/month

or $1,716/year (save 20%)

  • Everything in Basic, PLUS:
  • Detailed summaries (full paragraph each)
  • Quarterly synthesis report (big picture)
  • Zotero library integration
  • "How this relates to your framework" analysis
  • Email support (response within 24hrs)

ROI Calculation:

Saves ~12 hrs/month @ $50/hr = $600 value

Best for:

Postdocs, assistant professors, active researchers with grants

Premium

$299/month

or $2,868/year (save 20%)

  • Everything in Deep Synthesis, PLUS:
  • Weekly updates (not just monthly)
  • Custom tracking (adjust keywords anytime)
  • Track up to 3 distinct research areas
  • Monthly 15-min strategy call
  • Priority support (same-day response)

ROI Calculation:

Saves ~15 hrs/month @ $75/hr = $1,125 value

Best for:

Senior faculty, PIs with multiple projects, fast-moving fields

How Paper Distill Compares

📚 Doing It Yourself

Time cost: 10-12 hours/month

Opportunity cost: $500-900/month

Risk: Missing critical papers

Quality: Depends on your availability

→ Paper Distill: $99/month, save 10 hours

👨‍🎓 Hiring an RA

Cost: $15-25/hr × 15hrs = $225-375/month

Quality: Variable, requires training

Consistency: RAs graduate and move on

Oversight: You still need to manage them

→ Paper Distill: PhD-level quality, consistent

🤖 AI Tools (Elicit, etc.)

Cost: $10-30/month

What they do: Summarize papers you find

Missing: Curation, personalization

Still requires: Finding the papers yourself

→ Paper Distill: AI + expert curation + context

Who This Is NOT For

We're transparent about our limitations. Paper Distill isn't a good fit if:

You need comprehensive systematic reviews

We curate the most relevant 10-15 papers, not every single paper published. If you're doing a systematic review for publication and need exhaustive coverage, you'll still need to do that yourself. We're for staying current with your field, not for compiling exhaustive literature reviews.

Your field publishes fewer than 20 papers/month

If your subfield is very small (under ~20 new papers monthly), you probably don't need curation—you can already read everything yourself. Paper Distill adds the most value in fields drowning in publications.

You want AI to replace reading papers entirely

Our summaries help you decide which papers to read, not replace reading them. You'll still read the must-reads—we just save you 10 hours of searching and filtering. If you're looking for AI to do all your reading, we're not the right tool.

You need daily or real-time updates

Our Basic and Deep Synthesis plans deliver monthly reports. If you're in a field where missing a preprint for 2-3 weeks could be catastrophic (e.g., you're in a race to publish), consider our Premium weekly plan—or we might not be fast enough for your needs.

You're just exploring a new field casually

Paper Distill is for active researchers who are publishing, applying for grants, or deeply embedded in their field. If you're just "interested" in a topic but not actively researching it, this is probably overkill. Try Google Scholar alerts instead.

Still here? If none of the above applies to you, you're exactly who we built this for.

Get Your Free Field Scan

Frequently Asked Questions

What does "if not valuable, you pay nothing" mean exactly?

After your first month, if you email us and say "this wasn't worth $99 to me," we refund your payment in full—no questions, no hassle. We only want customers who genuinely get value. You have nothing to lose by trying.

What is a "free field scan"?

When you sign up, we scan your field for the past month and show you a sample of what you've been missing—completely free, before you pay anything. Think of it as a "proof of value." You'll see 3-5 papers we would have flagged for you, with summaries. If you like what you see, continue to a paid plan. If not, no charge.

How is this different from Google Scholar alerts?

Google Scholar sends you every paper that matches your keywords—often 50-100+ per month. You're still responsible for reading them all, determining relevance, and synthesizing insights. Paper Distill does that work for you: we scan everything, read the most promising candidates, and deliver only what matters with personalized analysis.

Who actually writes the summaries?

PhD researchers in your field. We match you with an expert who has deep knowledge of your subfield. They read the papers, write the summaries, and provide the "why this matters" analysis. AI helps us filter the initial set, but a human expert does the final curation and writing.

What if my field is interdisciplinary?

We handle interdisciplinary research regularly. During onboarding, we'll map out all the relevant fields, journals, and conferences you want tracked. Our Premium plan specifically supports tracking up to 3 distinct research areas with different expert reviewers.

Can I pause my subscription?

Yes! You can pause for up to 3 months per year (useful during fieldwork, conferences, or sabbaticals). Just email us at least 5 days before your next billing cycle.

What if I don't like a month's report?

Email us within 7 days and we'll either: (1) revise it based on your feedback, or (2) credit your account. We also use this feedback to improve future reports. After 3 months, we typically have your preferences dialed in precisely.

Do you cover preprints (arXiv, bioRxiv, etc.)?

Absolutely. We track all major preprint servers relevant to your field. You can specify during onboarding whether you want more emphasis on peer-reviewed publications or early preprints.

Which fields do you cover?

We currently serve researchers in: AI/ML, Neuroscience, Climate Science, Computational Biology, Physics, Economics, Psychology, and adjacent fields. If your field isn't listed, contact us—we're expanding and may be able to accommodate you.

We maintain your literature review.
You never fall behind again.

Join 127+ researchers who never miss a critical paper.

No credit card • Free field scan shows what you're missing • If not valuable, you pay nothing