VideoMaMa: Mask-Guided Video Matting via Generative Prior

Published 20 Jan 2026 in cs.CV and cs.AI | (2601.14255v1)

Abstract: Generalizing video matting models to real-world videos remains a significant challenge due to the scarcity of labeled data. To address this, we present Video Mask-to-Matte Model (VideoMaMa) that converts coarse segmentation masks into pixel accurate alpha mattes, by leveraging pretrained video diffusion models. VideoMaMa demonstrates strong zero-shot generalization to real-world footage, even though it is trained solely on synthetic data. Building on this capability, we develop a scalable pseudo-labeling pipeline for large-scale video matting and construct the Matting Anything in Video (MA-V) dataset, which offers high-quality matting annotations for more than 50K real-world videos spanning diverse scenes and motions. To validate the effectiveness of this dataset, we fine-tune the SAM2 model on MA-V to obtain SAM2-Matte, which outperforms the same model trained on existing matting datasets in terms of robustness on in-the-wild videos. These findings emphasize the importance of large-scale pseudo-labeled video matting and showcase how generative priors and accessible segmentation cues can drive scalable progress in video matting research.
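
The pseudo-labeling pipeline described in the abstract can be summarized in a few lines. This is a minimal sketch based only on the description above; segment_with_sam2, videomama_refine, and passes_quality_filter are hypothetical placeholders, and the paper's actual filtering and dataset-construction details are not specified here.

```python
def build_pseudo_labeled_dataset(videos, segment_with_sam2, videomama_refine,
                                 passes_quality_filter):
    """Sketch of the coarse-mask -> VideoMaMa -> pseudo-matte loop from the abstract."""
    dataset = []
    for video in videos:
        coarse_masks = segment_with_sam2(video)               # coarse per-frame masks
        alpha_mattes = videomama_refine(video, coarse_masks)  # pixel-accurate alpha mattes
        if passes_quality_filter(video, alpha_mattes):        # keep only high-quality labels
            dataset.append({"video": video, "alpha": alpha_mattes})
    return dataset
```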

Summary

  • The paper introduces a novel mask-guided framework that integrates generative priors with temporal modeling to produce accurate, detailed video mattes.
  • The paper leverages diffusion-based generative models to refine boundary details and regularize trimap estimation, minimizing temporal jitter.
  • The paper demonstrates superior performance over U-Net and transformer-based approaches on metrics like MAE and TGE, showing robustness in challenging scenes.

Technical Review of "VideoMaMa: Mask-Guided Video Matting via Generative Prior" (2601.14255)

Overview

This paper introduces VideoMaMa, a mask-guided framework for high-fidelity video matting that leverages generative priors. The method is motivated both by the need for accurate object boundaries and temporal coherence in video matting and by recent progress in generative diffusion models for visual understanding. VideoMaMa couples a mask guidance mechanism with a generative prior, forming a pipeline that produces temporally consistent, highly detailed alpha mattes from standard video input.

Methodological Contributions

Mask-Guided Matting

VideoMaMa initiates the matting process with a mask generator, using semantic segmentation masks (potentially obtained from models such as SAM [sam] or SAM2 [sam2]) to localize objects of interest. This mask serves as an inductive bias for the subsequent generative stages, suppressing spurious background responses and improving the accuracy of initial trimap estimation.
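
The review does not spell out how a coarse mask is turned into an initial trimap. A common construction, shown below as a minimal sketch assuming OpenCV and a binary mask, erodes the mask and its complement to carve out confident foreground, confident background, and an unknown band; it is illustrative and not necessarily the paper's procedure.

```python
import numpy as np
import cv2

def mask_to_trimap(mask: np.ndarray, unknown_width: int = 15) -> np.ndarray:
    """Convert a binary segmentation mask (H, W) in {0, 1} to a trimap.

    Returns an array with 0 = background, 128 = unknown, 255 = foreground.
    """
    mask = (mask > 0).astype(np.uint8)
    kernel = np.ones((unknown_width, unknown_width), np.uint8)
    fg = cv2.erode(mask, kernel)      # confident foreground shrinks inward
    bg = cv2.erode(1 - mask, kernel)  # confident background shrinks outward
    trimap = np.full(mask.shape, 128, dtype=np.uint8)  # everything else is "unknown"
    trimap[fg == 1] = 255
    trimap[bg == 1] = 0
    return trimap
```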

Use of Generative Priors

A core innovation of VideoMaMa is its integration of a generative prior, drawn from a diffusion-based model architecture, to regularize matting inference. The methodology exploits the strong prior knowledge encapsulated in large diffusion models, such as those underlying SDMatte [sdmatte], Stable Diffusion [stablediffusion], and LayerFusion [dalva2024layerfusion], to enhance boundary detail and preserve global scene consistency. The generative prior acts as an implicit regularizer that counteracts local ambiguities, reducing temporal jitter and aiding the recovery of fine structures such as hair, smoke, or semi-transparent regions.
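
The exact conditioning mechanism is not described here, but diffusion-based perception models often inject guidance by concatenating the conditioning signals with the noisy target along the channel dimension. The toy PyTorch module below sketches that pattern for an alpha-matte target conditioned on RGB frames and a coarse mask; it is a stand-in, not VideoMaMa's architecture, and omits the pretrained video diffusion backbone and any latent-space details.

```python
import torch
import torch.nn as nn

class ToyMaskConditionedDenoiser(nn.Module):
    """Toy stand-in for a video diffusion denoiser conditioned on RGB frames
    and a coarse mask via channel concatenation. Illustrative only."""

    def __init__(self, hidden: int = 32):
        super().__init__()
        # input channels: 1 noisy alpha + 3 RGB + 1 mask = 5
        self.net = nn.Sequential(
            nn.Conv3d(5, hidden, kernel_size=3, padding=1),
            nn.SiLU(),
            nn.Conv3d(hidden, 1, kernel_size=3, padding=1),
        )

    def forward(self, noisy_alpha, frames, mask):
        # all tensors: (B, C, T, H, W)
        x = torch.cat([noisy_alpha, frames, mask], dim=1)
        return self.net(x)  # predicted noise (or clean alpha, depending on parameterization)

# One toy denoising call on an 8-frame clip:
denoiser = ToyMaskConditionedDenoiser()
frames = torch.rand(1, 3, 8, 64, 64)
mask = torch.rand(1, 1, 8, 64, 64).round()
noisy_alpha = torch.randn(1, 1, 8, 64, 64)
prediction = denoiser(noisy_alpha, frames, mask)
```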

Temporal Consistency

VideoMaMa's architecture is explicitly designed to maintain both intra-frame fidelity and inter-frame temporal consistency, which remains a persistent challenge in the video matting literature [matanyone, DVM]. By injecting mask guidance at the generative prior stage and combining it with a dedicated temporal modeling module, the pipeline aligns generated mattes across frames, minimizing visible temporal artifacts.

Empirical Results

Quantitative Performance

Strong numerical improvements are reported on standard video matting benchmarks. VideoMaMa consistently outperforms prior approaches on mean absolute error (MAE) and temporal gradient error (TGE), demonstrating superiority over both U-Net-based [modnet, Huang2023] and transformer-based [vmformer] matting backbones. Particularly notable are the improvements obtained in highly challenging test cases with thin structures and rapid object/scene changes.
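
For concreteness, a minimal sketch of the two reported metrics on alpha sequences of shape (T, H, W) is shown below. The temporal term is one plausible formulation, since the review does not define TGE precisely.

```python
import numpy as np

def mae(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean absolute error between predicted and ground-truth alpha sequences
    of shape (T, H, W) with values in [0, 1]."""
    return float(np.mean(np.abs(pred - gt)))

def temporal_gradient_error(pred: np.ndarray, gt: np.ndarray) -> float:
    """One plausible temporal-consistency error: compare frame-to-frame changes
    of the prediction against those of the ground truth."""
    d_pred = pred[1:] - pred[:-1]
    d_gt = gt[1:] - gt[:-1]
    return float(np.mean(np.abs(d_pred - d_gt)))
```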

Generalization

The model achieves robust generalization across diverse scenes, including cases with fast motion and severe occlusion. The reported results indicate the practical effectiveness of generative priors in regularizing matting inference under weak or noisy mask supervision.

Theoretical and Practical Implications

The integration of learned generative priors into the matting process addresses two central weaknesses in previous video matting designs: the reliance on local pixel-level cues and the inability to generalize across long or complex sequences. VideoMaMa demonstrates that mask guidance, when tightly coupled with a potent generative prior, yields both high-resolution detail recovery and long-range temporal coherence.

By supporting interoperability with segmentation models, VideoMaMa also enables modular design: different mask generators can be swapped in depending on upstream task requirements or available computation, increasing deployability in diverse video editing pipelines.

Moreover, the empirical evidence suggests that generative diffusion models, previously proven effective in single-image perception [lotus, depthmaster], can be judiciously repurposed for dense video tasks by integrating explicit guidance and temporal regularization mechanisms.

Limitations and Future Directions

While VideoMaMa outperforms prior works in both fidelity and consistency, certain limitations persist. The method's dependency on mask quality implies possible failure modes when segmentation is severely erroneous or ambiguous. Additionally, the use of generative diffusion priors entails significant computational resources during inference, which could impede real-time applications compared to lighter-weight alternatives [ltxvideo].

Potential avenues for future work include:

  • Optimization of generative prior inference: Employing progressive distillation [v_param] or accelerated sampling procedures to significantly reduce computational overhead.
  • Adaptive mask refinement: Joint training with end-to-end mask refinement, possibly leveraging referential matting approaches [refmatte], to further mitigate sensitivity to segmentation inaccuracies.
  • Foundation model integration: As larger video foundation models (e.g., HunyuanVideo [hunyuanvideo], Wan [wan]) become available, further scaling and generalization can be explored by harnessing such priors for more complex or open-world video matting tasks.
  • Unified dense video prediction: Exploring shared generative priors for multi-task dense video prediction, including depth, normal, and optical flow estimation [normalcrafter, bufferanythime].

Conclusion

VideoMaMa presents a technically sound and empirically validated advance in video matting, demonstrating that mask guidance and diffusion-based generative priors can be combined to achieve superior boundary accuracy, temporal consistency, and robustness to scene complexity. The approach establishes a new state of the art among matting pipelines that leverage both discriminative and generative cues, offering promising directions for the further evolution of video editing and content creation tools powered by deep generative models.

Explain it Like I'm 14

Overview

This document isn’t a typical research paper with experiments and results. It’s a how-to guide for authors who want to write a short “rebuttal” (a response) after reviewers have read their paper. It explains the purpose of the rebuttal and gives strict rules for how it should look and what it should (and shouldn’t) include, especially when written in LaTeX (a common tool for writing scientific papers).

Key Objectives and Questions

The guide aims to answer simple, practical questions:

  • What is a rebuttal for?
  • What are authors allowed to include in it?
  • How long can it be?
  • How should it be formatted (fonts, margins, columns, figures, references)?
  • How to keep the document anonymous and easy for reviewers to read?

Approach: What the Guide Tells You to Do

Think of this like instructions for turning in a neat, one-page letter to your teacher responding to their feedback:

  • Purpose: Use the rebuttal to correct factual mistakes or provide clarifications the reviewers asked for. It’s not a place to add brand-new discoveries, big new experiments, or entire new sections that weren’t in the original paper.
  • Length: At most one page, including everything (text, figures, and references). If it’s longer, it won’t be reviewed.
  • Anonymity: Don’t include anything that reveals who you are. Don’t add external links that reveal identity or bypass the one-page limit.
  • Structure: Sections are optional, but organizing your response makes it easier to read.
  • Formatting:
    • Two-column layout on the page, with specific margins and spacing so everyone’s rebuttal looks consistent.
    • Main text in 10-point Times (Times Roman is also fine), single-spaced.
    • Section headings in 10 or 12-point Times.
    • Paragraphs have a small indent at the start.
    • Figure and table captions use a slightly smaller font.
    • Equations should be numbered so people can refer to them clearly.
    • Figures should be centered, readable when printed, and use font sizes and line widths that match the text.
    • When adding images in LaTeX, the guide suggests using the standard include command and setting width relative to the column width (so it fits nicely).
  • References: List them at the end in a smaller font and cite them with numbers in square brackets in the text (like [12]). The example reference list shows the style.
  • Avoid confusion: If your main paper already has figures or equations named “Figure 1” or “Eq. (1),” try not to reuse those exact labels in the rebuttal, so reviewers don’t mix them up.

Main Takeaways and Why They Matter

Here are the essential rules that matter most:

  • Keep it short: 1 page total. This makes reviews fair and manageable.
  • Stay on-topic: Clarify and correct; don’t add big new content that wasn’t asked for.
  • Be anonymous: Protects fair judging.
  • Look consistent: Same layout and fonts help reviewers read quickly and compare fairly.
  • Be clear: Number equations, use readable figures, and match figure text to the body text size so everything is legible even when printed.

These rules help reviewers find information fast, reduce confusion, and keep the process fair for all authors.

Implications and Impact

When authors follow these guidelines:

  • Reviewers can quickly understand responses and make better decisions.
  • Authors avoid unintentional rule-breaking (like going over length or revealing identity).
  • The whole review process becomes smoother, more consistent, and more equitable.

In short, this guide helps everyone focus on the science—by making the rebuttal short, clear, and easy to read.

Knowledge Gaps

Knowledge gaps, limitations, and open questions

The paper provides formatting guidance for author rebuttals but leaves several practical, procedural, and evidentiary aspects unspecified. Below is a concise list of gaps and questions future conference organizers or tooling designers could address:

  • Ambiguity around content scope: What qualifies as “additional information requested by reviewers,” and where is the boundary between permissible clarifications and impermissible new results?
  • Handling reviewer requests for new experiments: If reviewers ignore the PAMI-TC motion and ask for significant experiments, what is the recommended author strategy and how will ACs handle such cases?
  • Enforcement of overlength and margin tampering: What automated checks (e.g., PDF analyzers, geometry validation) will be used, and what is the threshold for “significantly altered” formatting?
  • Anonymity guidance beyond external links: Are anonymized links (e.g., OpenReview URLs, anonymous GitHub repos) allowed? How should authors reference their own prior work without deanonymization?
  • Submission pipeline details: No instructions for file size limits, naming conventions, allowed PDF versions, or the exact upload platform process (CMT/OpenReview), which affects compliance.
  • Accessibility requirements: No guidance on color contrast, font legibility, alt text, or figure readability for color-blind reviewers and print constraints beyond font resizing.
  • Figure/table limitations: No policy on the number, size, or type (vector vs. raster; color vs. grayscale) of figures/tables allowed within the one-page limit.
  • Two-column float rules: Are double-column figures permitted in rebuttals? Any constraints on placement, float behavior, or captions for multi-column elements?
  • Equation/reference numbering workaround: The text mentions avoiding numbering overlap with the main paper but does not specify the LaTeX workaround (e.g., prefixing counters, package examples).
  • Package compatibility: No guidance on allowed/disallowed LaTeX packages (e.g., geometry, microtype, hyperref) that could alter layout or introduce metadata and potential deanonymization.
  • Language and tone: No recommendations on tone, structure, and prioritization strategies to address multiple reviewers within one page (e.g., templates for mapping comments to responses).
  • Citing new or external work: While references are allowed, there is no policy on citing new papers not in the original submission (risking implicit new claims) and whether long bibliographies are discouraged.
  • Hyperlink policies: Are internal document hyperlinks (e.g., to sections/figures) allowed? Are DOIs/arXiv links permitted if they don’t reveal identity? What about QR codes?
  • Cross-referencing to the main paper: Should authors reference main paper sections/figures by explicit labels (e.g., “Paper Fig. 3”) or replicate content in the rebuttal? Any rules to avoid confusion?
  • Region-dependent page geometry: Margins differ for Letter vs. A4, but there is no directive on how authors should select and verify the correct geometry in LaTeX (e.g., templates for both).
  • Print fidelity specifics: The guidance suggests print readability but lacks criteria for minimum line widths, recommended dpi for raster images, and acceptable compression to avoid artifacts.
  • Metadata hygiene: No instructions to strip PDF/figure metadata (author names, software tags) to maintain anonymity.
  • Consequences for violations: Beyond “overlength responses will not be reviewed,” there is no clarity on other violations (e.g., font changes, external links) and whether they lead to desk rejection or requests to resubmit.
  • Scope and applicability: The document references prior conference practices but does not state which venues this template targets or how it adapts to venue-specific rebuttal policies.
  • Equity across reviewer loads: No guidance on dealing with many comments versus short page limits (e.g., recommending summary tables, grouping, or prioritization frameworks).
  • Visual comparison tables: Allowed in principle, but no rules on inclusion when they implicitly introduce new analyses or reinterpret existing results.
  • Math content bounds: Equations must be numbered, but there’s no recommendation on keeping math minimal or strategies to compress derivations without losing clarity.
  • Example structure: The paper recommends sections but provides no sample structure (e.g., “Key concerns,” “Clarifications,” “Requested details,” “Limitations acknowledged”) for consistent author practice.
  • Time management and deadlines: No mention of rebuttal windows, timezone considerations, or recommended drafting workflows to meet strict submission deadlines.

Glossary

  • anonymity: The practice of concealing authors’ identities during peer review to prevent bias. "the rebuttal must maintain anonymity"
  • arXiv preprint: A publicly accessible, non–peer-reviewed manuscript hosted on arXiv, identified by an arXiv ID. "arXiv preprint (Saleh et al., 21 Jul 2025)"
  • benchmark: A standardized dataset and evaluation protocol used to compare methods fairly. "A perceptually motivated online benchmark for image matting"
  • cref (LaTeX cross-reference command): A LaTeX command (from the cleveref package) that formats cross-references to figures, tables, equations, etc. "as in \cref{fig:onecol}"
  • diffusion models: Generative models that sample data by iteratively denoising from noise using a learned reverse diffusion process. "Progressive distillation for fast sampling of diffusion models"
  • distillation: A technique that transfers knowledge to accelerate or compress models, often by training a student model or fewer sampling steps to mimic a teacher. "Progressive distillation for fast sampling of diffusion models"
  • foundation model: A large, general-purpose model pretrained on broad data and adaptable to many downstream tasks. "Diffusion-based visual foundation model for high-quality dense prediction"
  • graph neural networks: Neural architectures that operate on graph-structured data by propagating and aggregating information over nodes and edges. "Video matting via consistency-regularized graph neural networks"
  • includegraphics (LaTeX command): The LaTeX command used to include external images into documents. "use \verb+\includegraphics+"
  • LaTeX: A document preparation system widely used for typesetting scientific and technical documents. "See \LaTeX\ template for a workaround."
  • latent diffusion models: Diffusion models that perform denoising in a compressed latent space (often learned by a VAE) for efficiency. "High-resolution image synthesis with latent diffusion models"
  • linewidth (LaTeX length): A LaTeX length parameter equal to the current line width, commonly used to size figures or tables. "[width=0.8\linewidth]"
  • monocular depth estimation: Predicting scene depth from a single RGB image without stereo or multi-view input. "Repurposing diffusion-based image generators for monocular depth estimation"
  • PAMI-TC: The IEEE Pattern Analysis and Machine Intelligence Technical Committee, which oversees policies and standards in the community. "Per a passed 2018 PAMI-TC motion"
  • pica: A typographic unit equal to 12 points (about 1/6 inch), used for measuring indentation and layout. "All paragraphs should be indented 1 pica"
  • priors: Pre-existing knowledge or assumptions incorporated into a model; in this context, structural knowledge encoded by pretrained diffusion models. "Unleashing the diffusion priors for 3d geometry estimation from a single image"
  • referring expressions: Natural-language phrases that uniquely identify an object in an image or video for grounding tasks. "Video Object Segmentation with Language Referring Expressions"
  • relighting: Re-rendering an image or video under new lighting conditions while preserving scene content. "Portrait Relighting with Diffusion Model by Learning to Re-render Synthetic Faces"
  • spatio-temporal: Pertaining to both spatial (across image coordinates) and temporal (across time) dimensions jointly. "Deep video matting via spatio-temporal alignment and aggregation"
  • transformers: Neural network architectures based on self-attention mechanisms, effective for sequence and vision tasks. "Scalable diffusion models with transformers"
  • trimap: A three-region guidance map (foreground, background, unknown) used to constrain and guide matting algorithms. "One-trimap video matting"
  • variational Bayes: An approximate Bayesian inference framework that optimizes a tractable variational distribution to approximate a posterior. "Auto-encoding variational bayes"
  • video matting: Estimating per-pixel alpha mattes over time to separate moving foregrounds from backgrounds in videos. "Generative Video Matting"
  • video object segmentation: Separating and tracking specific objects across frames in a video. "Youtube-vos: A large-scale video object segmentation benchmark"
  • zero-shot: Performing a task without task-specific training examples by leveraging generalization from pretraining or priors. "Zero-shot image matting for anything"

Practical Applications

Immediate Applications

Below are deployable, concrete use cases that can be implemented now based on the paper’s guidance on author rebuttal formatting and policy.

  • Rebuttal compliance checker (upload-gate on CMS)
    • Sector: Academic publishing software, conference management systems (CMT, OpenReview).
    • What it does: Automatically validates one-page limit, two-column layout, margins, font sizes (10pt text, 9pt captions/references), equation/figure numbering, and hyperlink/anonymity rules upon rebuttal upload.
    • Tools/products/workflows: PDF parsers (pdfinfo, pdffonts), OCR for embedded text, heuristics for margin/column detection, hyperlink scanning, name/entity masking checks; integrated as a preflight check in submission portals (a minimal sketch appears after this list).
    • Assumptions/dependencies: Access to the PDF at upload; reliable PDF-to-text extraction and layout analysis; configurable rules per venue.
  • Overleaf/VS Code template and linter extension for rebuttals
    • Sector: Software/tools for scientific writing.
    • What it does: Ships a ready-to-use rebuttal template with locked styles; provides inline linting for margin tweaks, font downsizing, and single-page violations; warns against identity-revealing links.
    • Tools/products/workflows: Overleaf template, LaTeX class with enforced lengths, CI lint (GitHub Actions) to compile and verify.
    • Assumptions/dependencies: Authors use LaTeX or Overleaf; minor false positives tolerated.
  • Anonymity and link hygiene scanner
    • Sector: Publishing compliance and research integrity.
    • What it does: Flags personal names, affiliations, lab URLs, DOIs resolving to author pages, and external links that could reveal identity or bypass length limits.
    • Tools/products/workflows: NER models, URL resolver/redirect checker, rules for self-citations and link patterns.
    • Assumptions/dependencies: Whitelists for allowed references; ability to process embedded hyperlinks.
  • Figure legibility and size auditor
    • Sector: Document engineering, graphics/visualization QA.
    • What it does: Checks figure resolution, line widths, font sizes inside figures, and relative width (e.g., 0.8×linewidth) for print readability; suggests rescaling.
    • Tools/products/workflows: Image processing on extracted figures, DPI/contrast checks; preflight report in CI or editor tool.
    • Assumptions/dependencies: Reliable figure extraction from PDFs; thresholds calibrated to common printers.
  • Equation/figure/reference renumbering guard
    • Sector: LaTeX tooling.
    • What it does: Ensures rebuttal numbering does not overlap with the main paper’s numbering; auto-prefixes labels or uses isolated counters.
    • Tools/products/workflows: LaTeX package or macro set; pre-submit compilation test.
    • Assumptions/dependencies: Authors maintain separate projects or provide main paper metadata.
  • Lab-internal rebuttal workflow kit
    • Sector: Academia (research groups, graduate training).
    • What it does: Checklists and response planners that map reviewer comments to concise, factual replies; templates for optional figure/table inclusion without new contributions.
    • Tools/products/workflows: Shared docs, issue trackers, and short sprints culminating in a one-page draft; senior review sign-off.
    • Assumptions/dependencies: Teams adopt standardized internal review; time-constrained cycles.
  • Reviewer/AC reminder module
    • Sector: Conference operations, chair tools.
    • What it does: In-dashboard reminders that reviewers should not request significant new experiments and should not penalize the absence thereof; quick links to policy text.
    • Tools/products/workflows: UI nudges and policy snippets within review forms.
    • Assumptions/dependencies: Platform permits UI customization; PC buy-in.
  • Cross-venue quick-start guides and training material
    • Sector: EdTech for research skills.
    • What it does: Short training modules on rebuttal best practices (focus on factual corrections, no new contributions, clarity, print legibility).
    • Tools/products/workflows: Micro-courses, slides, annotated examples; integrated into grad seminars or institutional workshops.
    • Assumptions/dependencies: Adoption by departments; alignment with venue norms.
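
As referenced in the compliance-checker item above, a minimal sketch of such a preflight check is shown below. It assumes pypdf and covers only page count, page geometry, and hyperlink extraction; margin, column, and font checks would need fuller layout analysis (e.g., pdffonts or pdfminer).

```python
import re
from pypdf import PdfReader

MAX_PAGES = 1
PAGE_SIZES_PT = {"letter": (612, 792), "a4": (595, 842)}  # width x height in points

def check_rebuttal(path: str) -> list[str]:
    """Run a few preflight checks on a rebuttal PDF; return a list of issues."""
    issues = []
    reader = PdfReader(path)

    # 1. One-page limit.
    if len(reader.pages) > MAX_PAGES:
        issues.append(f"{len(reader.pages)} pages (limit is {MAX_PAGES})")

    # 2. Page geometry roughly matches Letter or A4.
    first = reader.pages[0]
    width, height = float(first.mediabox.width), float(first.mediabox.height)
    if not any(abs(width - w) < 5 and abs(height - h) < 5
               for w, h in PAGE_SIZES_PT.values()):
        issues.append(f"unexpected page size {width:.0f} x {height:.0f} pt")

    # 3. Hyperlink / anonymity hygiene: collect URLs from link annotations and text.
    urls = set()
    for page in reader.pages:
        for annot in page.get("/Annots") or []:
            obj = annot.get_object()
            action = obj.get("/A")
            if action is not None:
                uri = action.get_object().get("/URI")
                if uri:
                    urls.add(str(uri))
        urls.update(re.findall(r"https?://\S+", page.extract_text() or ""))
    if urls:
        issues.append(f"external links found (manual anonymity review needed): {sorted(urls)}")

    return issues

print(check_rebuttal("rebuttal.pdf"))
```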

Long-Term Applications

The following rely on further research, integration, or scaling before broad deployment.

  • AI “Rebuttal Co‑Pilot” with policy-aware drafting
    • Sector: AI writing assistants for research.
    • What it does: Ingests reviews, prioritizes issues, drafts a policy-compliant, one-page response; suggests a single clarifying figure/table using existing results; enforces anonymity and formatting constraints.
    • Tools/products/workflows: LLMs with tool-use for PDF constraints, review-summarization, policy classifiers; Overleaf/Docs plugin; CMS integration.
    • Assumptions/dependencies: Secure API access to reviews; robust policy adherence; guardrails to avoid adding unrequested new contributions.
  • Semantic policy enforcement across documents
    • Sector: Publishing compliance, NLP.
    • What it does: Detects whether rebuttals introduce new theorems/algorithms/experiments not requested, by cross-referencing reviewer asks with rebuttal content.
    • Tools/products/workflows: Cross-document retrieval and entailment models; structured “request/response” linking; audit trails for ACs.
    • Assumptions/dependencies: Reliable labeling of reviewer requests; low false-positive/negative rates acceptable to stakeholders.
  • Universal machine-checkable rebuttal schema
    • Sector: Standards and interoperability (publishers, societies).
    • What it does: A JATS-like schema for rebuttal structure and metadata (length, figures, links, anonymization status) enabling automated compliance across venues.
    • Tools/products/workflows: Open standard, validators, reference implementations; adoption by IEEE/ACM/major conferences.
    • Assumptions/dependencies: Community consensus; backward compatibility with LaTeX/PDF pipelines.
  • Automated layout normalization and accessibility enrichers
    • Sector: Document engineering, accessibility.
    • What it does: Transforms non-compliant rebuttals into compliant two-column, correct margins and fonts; inserts alt text, tags, and ensures print contrast.
    • Tools/products/workflows: PDF reflow, LaTeX AST transformers, accessibility checkers; “fix-and-flag” pipeline pre-submission.
    • Assumptions/dependencies: Safe, deterministic transformations; author approval loop; preservation of scientific content.
  • Integrated CMS gatekeeping with explainable feedback
    • Sector: Conference management platforms.
    • What it does: Real-time “traffic-light” indicators during upload with actionable fixes (e.g., “captions at 8pt—raise to 9pt”); simulation of print rendering for reviewers.
    • Tools/products/workflows: On-device or server-side analyzers; UI explanations; batch compliance reports for ACs.
    • Assumptions/dependencies: Low-latency analysis; scalable compute for peak periods.
  • Anonymity assurance via multimodal risk analysis
    • Sector: Trust and safety, integrity.
    • What it does: Uses text, image, and link signals to predict deanonymization risk (e.g., lab-specific plots, watermarks, or URL patterns).
    • Tools/products/workflows: Multimodal classifiers; link graph analysis; reviewer-side redaction recommendations.
    • Assumptions/dependencies: Balanced datasets and privacy guarantees; clear policies for acceptable risk.
  • Meta-evaluation and policy optimization
    • Sector: Research policy, scientometrics.
    • What it does: Studies the impact of rebuttal constraints (length, figures, requests) on decision quality and fairness; informs policy updates.
    • Tools/products/workflows: Instrumented CMS logs, outcome analytics dashboards; A/B tests for policy changes.
    • Assumptions/dependencies: Ethical data use approvals; collaboration with program committees.
  • Generalized “document compliance as a service”
    • Sector: Legal/compliance, grants administration, enterprise documentation.
    • What it does: Adapts the same preflight checks to RFPs, grant proposals, and legal filings with strict format rules.
    • Tools/products/workflows: Configurable rule engines; connectors for common authoring tools; audit certificates.
    • Assumptions/dependencies: Domain-specific rule sets; integrations with agency portals.
  • Cross-reference reconciliation between paper and rebuttal
    • Sector: LaTeX/NLP tooling.
    • What it does: Automatically maps references, figures, and equations from the main paper to rebuttal-safe aliases to avoid overlap confusion.
    • Tools/products/workflows: Project-aware label mapping; citation graph utilities; reviewer-facing crosswalk tables.
    • Assumptions/dependencies: Access to main manuscript artifacts; consistent label naming conventions.

These applications translate the paper’s formatting and policy guidance into practical tooling, workflows, and governance mechanisms that improve efficiency, fairness, and compliance across the peer-review lifecycle.

