DerivKit: stable numerical derivatives bridging Fisher forecasts and MCMC
Abstract: DerivKit is a Python package for derivative-based statistical inference. It implements stable numerical differentiation and derivative assembly utilities for Fisher-matrix forecasting and higher-order likelihood approximations in scientific applications, supporting scalar- and vector-valued models including black-box or tabulated functions where automatic differentiation is impractical or unavailable. These derivatives are used to construct Fisher forecasts, Fisher bias estimates, and non-Gaussian likelihood expansions based on the Derivative Approximation for Likelihoods (DALI). By extending derivative-based inference beyond the Gaussian approximation, DerivKit forms a practical bridge between fast Fisher forecasts and more computationally intensive sampling-based methods such as Markov chain Monte Carlo (MCMC).
Explain it Like I'm 14
Overview: What is this paper about?
This paper introduces DerivKit, a Python toolbox that helps scientists figure out how a model’s outputs change when you tweak its inputs. In math, this “how much it changes” is called a derivative. DerivKit makes it easier to get reliable derivatives even when models are noisy, complicated, or not easy to modify. With these derivatives, scientists can quickly predict how well they can measure certain parameters (using something called Fisher forecasts) and also handle trickier cases where simple bell-curve assumptions break down, without immediately jumping to slow, heavy methods like MCMC (a kind of random sampling). In short, DerivKit is a bridge between fast predictions and more detailed, slower analyses.
Key objectives: What problems does it try to solve?
In short, DerivKit aims to:
- Compute stable, trustworthy derivatives from almost any model, including “black-box” ones you can’t easily change.
- Build useful math tools from those derivatives, like gradients (slopes), Jacobians (slopes for many outputs), and Hessians (how slopes change).
- Create fast forecasts with Fisher matrices and add smarter corrections when things aren’t simple bell-curve shapes (Gaussian), using a method called DALI.
- Provide warnings and checks so users know when derivative estimates might be unreliable.
Methods: How does DerivKit work?
Think of measuring a slope on a curvy hill. You can either:
- Look at points very close together and estimate the slope, or
- Fit a small curve around the point and take the slope of that curve.
DerivKit offers both styles, and it chooses smart settings based on the situation.
- Finite differences (nearby points): The software looks at how a function changes at nearby points to estimate the slope. It uses advanced tricks like Richardson extrapolation and Ridders’ method to combine different step sizes and cancel out errors. This works well when the model is smooth and not too noisy.
- Polynomial fitting (local curve fitting): If the model is noisy or “stiff” (sensitive to small changes), DerivKit fits a small polynomial curve around the point of interest and then takes the slope from the fitted curve. It chooses smart sample locations (Chebyshev points), can adjust the curve’s complexity, and uses regularization (a gentle penalty to avoid overfitting) to keep things stable. It also reports diagnostics and warns you if the fit looks suspicious.
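The first style rests on a classic idea: compute a central difference at several step sizes, then combine them so that the leading error terms cancel. The sketch below is a generic textbook implementation of Richardson extrapolation, not DerivKit’s actual code or API:

```python
import numpy as np

def central_diff(f, x, h):
    """Second-order central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def richardson_derivative(f, x, h=0.1, levels=4):
    """Estimate f'(x) by Richardson extrapolation of central differences.

    Builds a triangular table: each level halves the step size and
    cancels the next even-order error term of the central difference.
    """
    T = np.zeros((levels, levels))
    for i in range(levels):
        T[i, 0] = central_diff(f, x, h / 2**i)
        for j in range(1, i + 1):
            # Central differences have error O(h^2), so the ratio is 4^j.
            T[i, j] = (4**j * T[i, j - 1] - T[i - 1, j - 1]) / (4**j - 1)
    return T[levels - 1, levels - 1]

# Example: the derivative of sin at 0 is cos(0) = 1.
est = richardson_derivative(np.sin, 0.0)
```

With a smooth, noise-free function, a few extrapolation levels reach near machine precision even from a fairly coarse initial step, which is why this family of methods works so well in the low-noise regime the text describes.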
On top of these derivative engines, DerivKit includes:
- CalculusKit: Builds gradients, Jacobians, and Hessians from the numerical derivatives.
- ForecastKit: Creates Fisher matrices (a way to estimate how well you can measure parameters), calculates bias (how wrong results shift when the data are off), and assembles DALI corrections to go beyond simple bell-curve assumptions.
- LikelihoodKit: Provides simple “likelihood” models (Gaussian and Poisson) for testing and examples.
All parts come with tests and diagnostics so users can trust the results.
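To make ForecastKit’s central object concrete: for a Gaussian likelihood with a parameter-independent data covariance, the Fisher matrix is assembled from the model’s Jacobian. The function below is a minimal illustrative sketch of that standard formula, not DerivKit’s API:

```python
import numpy as np

def fisher_matrix(jacobian, cov):
    """Fisher matrix for a Gaussian likelihood with fixed covariance:
    F_ab = sum_ij (dmu_i/dtheta_a) Cinv_ij (dmu_j/dtheta_b).

    jacobian: (n_data, n_params) array of model derivatives.
    cov: (n_data, n_data) data covariance matrix.
    """
    cinv = np.linalg.inv(cov)
    return jacobian.T @ cinv @ jacobian

# Toy example: linear model mu = A + B*x, so the Jacobian is [1, x].
x = np.linspace(0.0, 1.0, 5)
J = np.column_stack([np.ones_like(x), x])
C = 0.04 * np.eye(5)  # independent errors of 0.2 per data point
F = fisher_matrix(J, C)
sigmas = np.sqrt(np.diag(np.linalg.inv(F)))  # marginalized 1-sigma forecasts
```

Inverting the Fisher matrix and reading off the diagonal gives the forecast marginalized uncertainties, which is the “clarity meter” role described above.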
Helpful analogies for key terms:
- Derivative: The slope of a function at a point. Like how steep a hill is right where you’re standing.
- Fisher matrix: A tool to predict how precisely you can measure parameters. Think of it as a “clarity meter” for your measurements.
- Gaussian: A bell-shaped curve describing uncertainties that are symmetric and simple.
- Posterior: Your updated belief about parameters after seeing data.
- MCMC: A slow but reliable way of exploring possible parameter values by random sampling.
- DALI: A method that uses higher-order derivatives to capture non-Gaussian, “not-a-perfect-bell-curve” behavior, improving forecasts beyond simple assumptions.
Main findings: What did they show?
- The adaptive polynomial-fit method gives more accurate and stable derivative estimates when there’s noise, compared to standard fixed-step finite differences. In tests, the adaptive method’s results are closer to the true answer and vary less when the data are noisy.
- DerivKit can produce standard Fisher forecasts, include prior information, and handle uncertainty both in inputs and outputs. It can also estimate how much parameters would shift if your data are biased (Fisher bias).
- Using DALI, DerivKit extends beyond the simple Gaussian assumption. In examples, the DALI-based contours (the shapes showing where parameter values are likely) match the results from MCMC much better than plain Fisher forecasts, but at a fraction of the computational cost.
Why this matters: Reliable derivatives lead to trustworthy forecasts and sensitivity studies. With DerivKit, scientists can explore models quickly and still capture important non-Gaussian effects, without always needing slow, heavyweight sampling.
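The noise finding is easy to reproduce in a toy experiment: a fixed-step central difference amplifies output noise by a factor of 1/(2h), while a local polynomial fit over Chebyshev nodes averages the noise away. The code below is a simplified stand-in for DerivKit’s adaptive method, with illustrative settings (window, degree, ridge strength) chosen for this example rather than taken from the package:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_f(x, sigma=1e-4):
    """A smooth function contaminated with small Gaussian noise."""
    return np.sin(x) + rng.normal(0.0, sigma, size=np.shape(x))

def fd_derivative(f, x0, h=1e-4):
    # Fixed-step central difference: noise is amplified by 1/(2h).
    return (f(x0 + h) - f(x0 - h)) / (2.0 * h)

def polyfit_derivative(f, x0, half_width=0.3, degree=4, n=15, ridge=1e-8):
    # Sample on Chebyshev nodes in [x0 - w, x0 + w], fit a low-degree
    # polynomial with a small ridge penalty, then differentiate the fit.
    k = np.arange(n)
    t = np.cos((2 * k + 1) * np.pi / (2 * n))            # Chebyshev nodes in (-1, 1)
    xs = x0 + half_width * t
    V = np.vander(xs - x0, degree + 1, increasing=True)  # columns 1, dx, dx^2, ...
    A = V.T @ V + ridge * np.eye(degree + 1)
    coeffs = np.linalg.solve(A, V.T @ f(xs))
    return coeffs[1]  # the linear coefficient is d/dx at x0

true = np.cos(1.0)
fd_err = abs(fd_derivative(noisy_f, 1.0) - true)
fit_err = abs(polyfit_derivative(noisy_f, 1.0) - true)
```

Even with noise at the 1e-4 level, the fixed-step estimate is typically off by order one, while the fitted-curve estimate stays close to the true slope, mirroring the paper’s comparison.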
Implications: Why is this useful?
- Saves time: You get better-than-basic forecasts without immediately using slow sampling methods.
- Works with real-world models: Even if your model is noisy, tabulated, or a black box, you can still get solid derivatives and forecasts.
- More accurate planning: Scientists can plan experiments and analyze data with greater confidence, especially in areas like cosmology, climate, and particle physics.
- Open-source and practical: It’s available on GitHub and can be installed with pip, making it easy to try and use in many projects.
Overall, DerivKit helps researchers move smoothly from quick, early predictions to more detailed, careful analysis, making scientific inference more robust and reproducible.
Knowledge Gaps
Unresolved gaps, limitations, and open questions
Based on the paper, the following concrete gaps and open questions remain for DerivKit and its proposed workflows:
- Numerical error control: Lack of formal, per-derivative error bounds or uncertainty quantification (UQ) for derivative estimates; no propagation of derivative UQ into Fisher errors, Fisher bias, or DALI contours.
- Backend selection criteria: No principled, automated procedure (or decision rules) for choosing between finite differences, Richardson/Ridders schemes, and adaptive polynomial fits given observed noise, smoothness, or conditioning.
- Non-smooth/discontinuous models: Unclear detection and handling of kinks, discontinuities, and non-differentiable regions (e.g., subgradient approximations, piecewise fits, or switching strategies), and what accuracy guarantees remain in such cases.
- Mixed partials in multivariate settings: Sampling design, conditioning control, and error analysis for mixed partial derivatives and high-order tensors in dimensions >2 are not specified; scalability of polynomial fits for cross-partials under the curse of dimensionality is unaddressed.
- Boundary and constrained parameters: No methodology for stable differentiation near parameter bounds or under constraints (e.g., positivity), nor guidance on reparameterization and handling Jacobians to preserve fidelity.
- Heteroscedastic and correlated noise: Derivative estimation assumes simple noise models; support for heteroscedastic, correlated, or non-Gaussian noise in outputs (and across vector components) is not documented, nor are strategies using replicate evaluations to estimate noise levels.
- Step/window size adaptation: No principled bias–variance trade-off or sample-budget optimization for step sizes and adaptive-fit window/degree selection; default heuristics are not validated across diverse regimes.
- SPD covariance derivatives: Differentiation of parameter-dependent covariance matrices may violate symmetry/positive-definiteness; no enforced SPD-preserving parameterizations (e.g., via Cholesky/log-Cholesky) or guarantees during numerical differentiation.
- DALI validity and robustness: The order supported, normalization, positivity, and region of validity for DALI expansions are not rigorously characterized; missing strategies for correcting negative densities or ensuring well-behaved credible regions.
- When to escalate to MCMC: No quantitative diagnostic criteria indicating when Fisher/DALI is insufficient and MCMC (or nested sampling) should be invoked; lack of hybrid workflows with automatic escalation based on diagnostics.
- Benchmarks vs autodiff/analytic: No systematic benchmarking against analytic derivatives or automatic differentiation across a suite of models, quantifying accuracy/runtime trade-offs and failure modes.
- Performance and scaling: Missing complexity analysis, caching/batching strategies, and strong/weak scaling results; no GPU/JAX support or guidance for large vector outputs and high-order tensors in high dimensions (memory and time).
- Tabulated/irregular grids: Treatment of derivatives from irregular, sparse, or anisotropic parameter grids (boundary handling, interpolation choices, and error characterization) is not described.
- Stochastic simulators: Strategies for Monte Carlo–noisy models (e.g., common random numbers, control variates, replication budgeting) are not integrated; impact of stochastic noise on derivative bias/variance remains unquantified.
- Prior modeling: Beyond Gaussian priors for Fisher, general non-Gaussian or hierarchical priors and their derivatives (including transformations and prior-induced curvature) are not supported or evaluated, especially within DALI.
- Likelihood breadth: LikelihoodKit is limited to Gaussian/Poisson; extensions to heavy-tailed, mixture, censored, or latent-variable likelihoods (and their derivative structures) are not provided.
- Errors-in-variables (X–Y Fisher): Generalization to non-Gaussian, correlated, or structured uncertainties in inputs x (and joint x–y error models), including identifiability and bias correction, is not developed beyond the illustrative case.
- Fisher degeneracies and regularization: Handling of near-singular Fisher matrices (e.g., regularization, pseudoinverses, prior augmentation) and their uncertainty effects is not specified or validated.
- Automatic scaling/reparameterization: No automated parameter scaling, whitening, or reparameterization to improve numerical conditioning, nor associated Jacobian handling in forecasts/likelihoods.
- Diagnostics calibration: The reported diagnostics/warnings lack quantified false-positive/false-negative rates, threshold rationales, and integration with stopping criteria; reproducibility of diagnostics under randomness/parallelism is not addressed.
- Precision/roundoff robustness: No assessment of rounding errors near machine precision (e.g., catastrophic cancellation), adaptive precision (float64 vs long double), or compensated differencing for extreme scales.
- Hybrid with autodiff: No interface to combine numerical differentiation with autodiff for differentiable subcomponents (e.g., JAX/PyTorch integration), or guidance on partitioning models for hybrid gradients.
- Real-data validations: End-to-end validations on real cosmology or cross-domain datasets (with ground-truth or MCMC baselines) are deferred to future work; generality to non-cosmology domains remains to be demonstrated empirically.
- Vectorization and memory management: Missing API guidance and internal mechanisms for memory-safe, batched/streamed differentiation of large vector outputs and high-order tensors, especially under multiprocessing.
- Parallel reproducibility: Deterministic behavior under multiprocessing (seed handling, ordering, platform differences) is not detailed; CI-tested cross-platform reproducibility is not documented.
- Adaptive experiment design: No active learning/sequential sampling methods to place evaluation points (1D or multi-D) that minimize derivative error under a fixed evaluation budget.
- Hyperparameter tuning: Sensitivity to polynomial degree, ridge regularization, and sampling scale is not quantified; automated tuning (e.g., via cross-validation, AIC/BIC, or Bayesian evidence) is absent.
- Comparative ecosystem positioning: No head-to-head comparison with existing numerical differentiation and Fisher/DALI toolkits in terms of accuracy, robustness, and usability.
- Licensing and archival clarity: The paper mentions both CC BY-4.0 (manuscript) and “MIT (or similar OSI-approved)” for code without a definitive SPDX identifier or archived release/DOI, which may impede reproducible citation and reuse.
Practical Applications
The following applications translate DerivKit’s stable numerical differentiation, Fisher/DALI tooling, and diagnostics into real-world use across industry, academia, policy, and daily life. Each item notes sectors, potential tools/workflows, and assumptions that affect feasibility.
Immediate Applications
- Sensitivity audits for legacy simulations and black-box models
- Sectors: aerospace, automotive, energy, climate, manufacturing, pharmacometrics
- Tool/workflow: “Sensitivity Dashboard” that computes gradients/Jacobians/Hessians via DerivKit for models without analytic/autodiff derivatives; highlights unstable regions with diagnostics
- Assumptions/dependencies: Model evaluations must be callable and locally smooth within a fit window; sufficient evaluation budget to sample nearby parameter points
- Fisher-matrix forecasting for experiment design and resource allocation
- Sectors: cosmology, particle physics, imaging systems, sensor networks
- Tool/workflow: “Pre-Experiment Design Assistant” using ForecastKit to quantify parameter constraints, impact of priors, and X–Y Fisher (uncertainty in inputs and outputs)
- Assumptions/dependencies: Gaussian or near-Gaussian posteriors; linearizable parameter dependence in the region of interest; representative covariance model
- Rapid bias quantification of pipelines using Fisher-bias estimates
- Sectors: scientific data processing, remote sensing, clinical diagnostics
- Tool/workflow: “Bias Monitor” computes delta_nu and parameter shifts caused by known data vector biases; flags where MCMC or deeper validation is needed
- Assumptions/dependencies: Accurate characterization of data bias; reliable covariance estimates
- Non-Gaussian posterior approximation via DALI to triage MCMC
- Sectors: cosmology, pharmacometrics, reliability engineering
- Tool/workflow: “Non-Gaussian Likelihood Explorer” that builds DALI tensors to approximate skew/kurtosis and prioritize regions for focused sampling
- Assumptions/dependencies: Posterior is smooth and unimodal enough for truncated expansions; DALI accuracy degrades with strong multimodality
- Derivatives from tabulated or emulated models (lookup tables, grids)
- Sectors: materials design, power systems, weather/climate, computational chemistry
- Tool/workflow: “Table-to-Gradient” utility that estimates derivatives directly from grids without rewriting the model or retraining surrogates
- Assumptions/dependencies: Grid density adequate for local polynomial fits; interpolation errors manageable; diagnostics used to detect conditioning issues
- Gradient validation for autodiff pipelines and ML surrogates
- Sectors: software/ML, digital twins, optimization
- Tool/workflow: “Gradient Validator” compares autodiff outputs to DerivKit’s robust estimates; detects silent autodiff failures or boundary artifacts
- Assumptions/dependencies: Access to both autodiff and numerical evaluations; consistent parameter scaling; tolerance selection informed by diagnostics
- Parameter-dependent covariance differentiation for robust forecasts
- Sectors: signal processing, epidemiology, finance risk modeling
- Tool/workflow: “Covariance Sensitivity Module” to include covariance derivatives in Fisher/DALI, preventing under/over-confidence in forecasts
- Assumptions/dependencies: Covariance must be evaluable as a function of parameters; differentiable behavior near operating points
- Warm-starting MCMC and nested sampling from derivative-based summaries
- Sectors: any posterior sampling workflow (academia and industry)
- Tool/workflow: Use Fisher means/covariances or DALI expansions to initialize samplers near high-probability regions
- Assumptions/dependencies: Approximations sufficiently accurate locally; samplers configured to escape approximation-induced bias
- A/B testing and count-data analyses with Gaussian/Poisson likelihoods
- Sectors: tech product analytics, marketing, operations
- Tool/workflow: Quick sensitivity and forecast of parameter shifts in conversion rates or event counts using LikelihoodKit + ForecastKit
- Assumptions/dependencies: Likelihood form matches data; independence assumptions; sample sizes sufficient for stable derivatives
- Robotics and control parameter tuning under sensor noise
- Sectors: robotics, industrial automation
- Tool/workflow: “Noise-Robust Tuner” using adaptive polynomial-fit derivatives to update gains/parameters where finite differences fail
- Assumptions/dependencies: Local smoothness despite noise; bounded actuation cost to probe parameter neighborhoods safely
- Finance: Greeks for black-box pricing models
- Sectors: finance (derivatives pricing, risk)
- Tool/workflow: “Greeks Estimator” computes sensitivities (Delta, Gamma, Vega) for proprietary or legacy pricing engines
- Assumptions/dependencies: Pricing outputs locally smooth; evaluation latency acceptable; careful scale selection to handle discontinuities (e.g., barriers)
- Education and training: hands-on inference labs
- Sectors: education
- Tool/workflow: Classroom notebooks demonstrating Fisher, bias, and DALI with real diagnostics; helps teach limits of Gaussian assumptions
- Assumptions/dependencies: Python environment; curated examples with controlled noise
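Among the immediate applications above, the “Bias Monitor” rests on the standard Fisher-bias formula: a systematic offset in the data vector maps to predicted parameter shifts via the inverse Fisher matrix. A minimal sketch with illustrative names, assuming a fixed Gaussian covariance:

```python
import numpy as np

def fisher_bias(jacobian, cov, delta_d):
    """Predicted parameter shifts from a systematic offset delta_d in the
    data vector: delta_theta = F^{-1} b, with b_a = J_a^T Cinv delta_d."""
    cinv = np.linalg.inv(cov)
    F = jacobian.T @ cinv @ jacobian
    b = jacobian.T @ cinv @ delta_d
    return np.linalg.solve(F, b)

# Toy model mu = A + B*x with a constant +0.1 offset in the data vector:
x = np.linspace(0.0, 1.0, 5)
J = np.column_stack([np.ones_like(x), x])
C = 0.04 * np.eye(5)
shift = fisher_bias(J, C, 0.1 * np.ones(5))
# A constant offset is absorbed entirely by the intercept A.
```

In this toy case the predicted shift is exactly [0.1, 0]: the constant offset moves the intercept and leaves the slope untouched, which is the kind of diagnosis a bias monitor would report.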
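The “Gradient Validator” application above can also be sketched generically: compare an analytic or autodiff gradient against central differences and flag components whose relative error exceeds a tolerance. The helper below is hypothetical, not part of DerivKit:

```python
import numpy as np

def validate_gradient(f, grad_f, x, h=1e-5, rtol=1e-4):
    """Compare an analytic/autodiff gradient against central differences.
    Returns (ok, max relative error) at the point x."""
    x = np.asarray(x, dtype=float)
    g_ref = np.asarray(grad_f(x), dtype=float)
    g_num = np.empty_like(g_ref)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g_num[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    denom = np.maximum(np.abs(g_ref), 1.0)  # guard against tiny components
    err = np.max(np.abs(g_num - g_ref) / denom)
    return err <= rtol, err

f = lambda x: x[0]**2 + np.sin(x[1])
grad = lambda x: np.array([2.0 * x[0], np.cos(x[1])])
ok, err = validate_gradient(f, grad, [1.0, 0.5])
```

A deliberately wrong gradient (say, returning 1.0 for the second component) fails the check immediately, which is exactly the “silent autodiff failure” scenario the application targets.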
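The “Greeks Estimator” application reduces to differencing a scalar pricer. The sketch below uses a Black-Scholes call as a stand-in for a proprietary black-box engine; all names and parameter choices here are illustrative:

```python
from math import erf, exp, log, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def bs_call(S, K=100.0, T=1.0, r=0.01, vol=0.2):
    """Stand-in 'black-box' pricer: Black-Scholes call value."""
    d1 = (log(S / K) + (r + 0.5 * vol**2) * T) / (vol * sqrt(T))
    d2 = d1 - vol * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def delta_gamma(price, S, h=1e-2):
    """Delta and Gamma from central differences on any scalar pricer."""
    up, mid, dn = price(S + h), price(S), price(S - h)
    delta = (up - dn) / (2.0 * h)
    gamma = (up - 2.0 * mid + dn) / h**2
    return delta, gamma

d, g = delta_gamma(bs_call, 100.0)
```

Because the pricer is smooth here, plain central differences suffice; near barriers or other discontinuities the step size and method would need the more careful treatment noted in the assumptions above.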
Long-Term Applications
- Adaptive experimental design and active learning integrated with DALI
- Sectors: healthcare trials, cosmology surveys, materials discovery
- Tool/workflow: “Adaptive Design Engine” that iteratively chooses measurements based on derivative-informed uncertainty and non-Gaussianity
- Assumptions/dependencies: Closed-loop measurement capability; reliable real-time model evaluations; governance for adaptive protocols
- Policy-grade scenario analysis with derivative-aware uncertainty
- Sectors: climate policy, energy planning, public health
- Tool/workflow: “Policy Scenario Planner” integrating DALI/Fisher derivatives into decision models to quantify sensitivity to interventions and biases
- Assumptions/dependencies: Validated domain models; stakeholder acceptance of approximation limits; transparency of diagnostics in reports
- Hybrid autodiff–numerical differentiation frameworks
- Sectors: scientific computing, ML systems
- Tool/workflow: Combine autodiff where applicable with DerivKit backends for discontinuities/tabulations; automatic method selection driven by diagnostics
- Assumptions/dependencies: Interfaces between AD and DerivKit; method-selection heuristics; benchmarked error control
- Cloud service for derivative audits and likelihood approximations
- Sectors: software platforms, enterprise analytics
- Tool/workflow: “Derivatives-as-a-Service” API offering stable derivatives, Fisher/DALI outputs, and diagnostic reports at scale
- Assumptions/dependencies: Secure model upload or remote execution; cost controls for high-dimensional evaluations; SLA on numerical stability
- High-dimensional, multi-fidelity inference pipelines
- Sectors: aerospace design, subsurface modeling, CFD
- Tool/workflow: Pipelines that switch fidelity levels and derivative methods based on conditioning; integrate MCMC selectively in complex subspaces
- Assumptions/dependencies: Multi-fidelity models available; orchestration logic; careful scaling of polynomial fits and stencil sizes
- Real-time control with derivative-informed safety envelopes
- Sectors: autonomous vehicles, grid operations, advanced manufacturing
- Tool/workflow: Controllers using on-the-fly derivative estimates for predictive safety margins under non-Gaussian uncertainties
- Assumptions/dependencies: Fast, deterministic evaluations; verified stability of online fitting; certification/regulatory standards
- Standardized derivative diagnostics in regulatory submissions
- Sectors: medical devices, finance, energy
- Tool/workflow: Include DerivKit’s metadata, fit-quality checks, and warnings to document numerical soundness of sensitivity claims
- Assumptions/dependencies: Regulator acceptance; documented thresholds; reproducibility across compute environments
- Integration into probabilistic programming systems
- Sectors: broad scientific/industrial modeling
- Tool/workflow: Backends that provide derivative-informed proposals or surrogates for PPLs (e.g., PyMC/Stan/NumPyro), improving sampler efficiency
- Assumptions/dependencies: API compatibility; rigorous testing for edge cases; community adoption
- Domain-specific wrappers and GUIs
- Sectors: cosmology, pharmacometrics, reliability engineering
- Tool/workflow: “Cosmo-DerivKit,” “PK-DerivKit,” etc., with presets for sector-specific likelihoods, covariances, and diagnostics
- Assumptions/dependencies: Maintained domain libraries; curated examples and defaults; user training
- Hardware-accelerated derivative estimation
- Sectors: HPC, embedded systems
- Tool/workflow: GPU/TPU kernels for adaptive polynomial fits and stencil extrapolation; FPGA for low-latency control
- Assumptions/dependencies: Numerics stable under mixed precision; efficient parallel sampling strategies; portability across accelerators
- Digital twin calibration with non-Gaussian uncertainty quantification
- Sectors: manufacturing, smart cities, energy grids
- Tool/workflow: Twin calibration loop that uses DALI-corrected forecasts to tune parameters and choose measurements
- Assumptions/dependencies: Accurate twin-model structure; streaming data integration; monitoring for approximation breakdowns
- Consumer data science utilities
- Sectors: daily life, personal analytics
- Tool/workflow: Lightweight apps to estimate robust slopes/sensitivities (e.g., fitness progress, budget trends) in noisy time series
- Assumptions/dependencies: Simplified UX for window selection; protection against overfitting; clear caveats on statistical limits
Glossary
- Adaptive polynomial-fit method: A local regression approach that automatically selects sampling scales and polynomial degree to stabilize numerical derivative estimates in noisy settings. "Two variants are available: a fixed-window polynomial fit, and an adaptive polynomial-fit method."
- Automatic differentiation (autodiff): A technique for computing exact derivatives of differentiable programs by systematically applying the chain rule during execution. "Automatic differentiation (autodiff) provides exact derivatives for differentiable programs"
- Chebyshev sampling grids: Sets of sampling points based on Chebyshev nodes that improve polynomial approximation stability and conditioning. "constructs domain-aware Chebyshev sampling grids with automatically chosen scales"
- Covariance matrices: Matrices that encode variances and pairwise covariances of random variables or data components, critical in likelihoods and Fisher analysis. "covariance matrices depend explicitly on model parameters."
- Derivative Approximation for Likelihoods (DALI): A likelihood expansion method using higher-order derivatives to capture leading non-Gaussian features of posteriors beyond Fisher’s Gaussian approximation. "The Derivative Approximation for Likelihoods (DALI) (Sellentin et al., 2014; Sellentin, 2015) extends Fisher formalism by incorporating higher-order derivatives to capture leading non-Gaussian features of the posterior."
- Finite-difference stencils: Discrete patterns of evaluation points used to approximate derivatives via finite differences. "3-, 5-, 7-, and 9-point stencils"
- Fisher bias: The parameter shift or bias predicted by Fisher-based analyses when data or models are biased. "Fisher bias estimates"
- Fisher forecasts: Fast, derivative-based predictions of parameter uncertainties under local Gaussian assumptions. "These derivatives are used to construct Fisher forecasts"
- Fisher information matrix: A matrix of expected second derivatives of the log-likelihood that lower-bounds parameter estimation variance. "forecasting methods based on the Fisher information matrix"
- Gauss-Richardson schemes: Noise-robust derivative estimation techniques combining extrapolation ideas to stabilize finite differences. "noise-robust Gauss-Richardson schemes"
- Gaussian approximation: The assumption that the posterior distribution is Gaussian near a fiducial point. "beyond the Gaussian approximation"
- Gaussian prior: A Gaussian-distributed prior belief over parameters integrated into forecasts or inference. "Standard Fisher contours with and without a Gaussian prior."
- Gaussianity: The property of being Gaussian; deviations indicate non-Gaussian posterior behavior. "When posterior distributions deviate from Gaussianity"
- Hessian: The matrix of second-order partial derivatives of a scalar function with respect to its parameters. "gradients, Jacobians, Hessians, and higher-order derivative tensors"
- Higher-order derivative tensors: Multidimensional arrays of third- and higher-order derivatives used in advanced likelihood expansions like DALI. "higher-order derivative tensors"
- Implicit solvers: Numerical methods that solve equations involving the unknown within the update step, common in simulation workflows. "workflows involving discontinuities or implicit solvers"
- Jacobian: The matrix of first-order partial derivatives of a vector-valued function with respect to its inputs. "gradients, Jacobians, Hessians"
- Log-likelihood: The natural logarithm of the likelihood function; derivatives of it are used in expansions and inference. "higher-order derivatives of the log-likelihood are required"
- Markov chain Monte Carlo (MCMC): Sampling algorithms that construct a Markov chain to draw from complex posterior distributions. "Sampling-based methods such as Markov chain Monte Carlo (MCMC), grid sampling, and nested sampling provide robust posterior estimates"
- Nested sampling: A Bayesian computation technique for efficiently estimating model evidence and sampling posteriors. "Sampling-based methods such as Markov chain Monte Carlo (MCMC), grid sampling, and nested sampling provide robust posterior estimates"
- Non-Gaussian likelihood expansions: Approximations to the likelihood that go beyond Gaussianity by incorporating higher-order derivative information. "non-Gaussian likelihood expansions"
- Poisson likelihood: A likelihood model appropriate for count data that follow a Poisson distribution. "Gaussian and Poisson likelihood models."
- Posterior: The distribution of parameters conditioned on observed data, central to Bayesian inference. "These frameworks assume Gaussian posteriors"
- Richardson extrapolation: A technique that improves numerical estimates by combining results at multiple step sizes to cancel leading errors. "Richardson extrapolation (Richardson, 1911; Richardson & Gaunt, 1927)"
- Ridders' method: An extrapolation-based algorithm for stabilizing and accelerating convergence of numerical derivative estimates. "Ridders' method (Ridders, 1979)"
- Ridge regularization: An L2 penalty applied in polynomial or linear regression to improve conditioning and reduce overfitting. "optional ridge regularization"
- Surrogate models: Simplified models that emulate complex simulations to reduce computational cost in inference workflows. "including surrogate models and emulators."
- Tabulated models: Model outputs provided on discrete parameter grids or lookup tables, often incompatible with autodiff. "not directly applicable to tabulated models"
- X-Y Fisher: A Fisher-analysis variant that incorporates uncertainty in both inputs and outputs. "X-Y Fisher contours accounting for uncertainty in both inputs x and outputs y."