Counting partial Hadamard matrices in the cubic regime
Abstract: We give a precise asymptotic formula for the number of $n\times 4t$ partial Hadamard matrices in the regimes $t/n^3\to\infty$ and $t/n^3\to\Theta$ for sufficiently large fixed $\Theta$. This strengthens earlier results of de~Launey and Levin, who obtained the asymptotic for $t/n^{12}\to\infty$, and of Canfield, who extended this to $t/n^4\to\infty$.
Explain it Like I'm 14
A kid-friendly guide to “Counting partial Hadamard matrices in the cubic regime”
What is this paper about?
This paper studies special ±1 tables called partial Hadamard matrices. In these tables, each row is made of +1s and −1s, and any two different rows “don’t interfere” with each other: if you multiply their matching entries and add them up, you get 0. The big goal is to figure out how many such tables of a given size there are, especially when the number of columns is a multiple of 4.
The author finds a precise, easy-to-use formula that tells how many there are when the number of columns is large enough—specifically when it is at least on the scale of the number of rows cubed. This improves earlier results that needed even larger numbers of columns.
What are the big questions?
- If you choose an n × 4t table of ±1s at random, how likely is it that all the rows are pairwise orthogonal?
- Equivalently, how many partial Hadamard matrices exist for given n (rows) and 4t (columns)?
- How accurate can we be about this count when t grows with n? In particular, what happens when t is about as large as n³?
How did they approach the problem?
The paper uses a clever translation of the table-counting question into a question about a random walk and then analyzes that with “wave math” (Fourier analysis). Here’s the idea in everyday terms:
- Turning tables into a walk:
- Imagine you have a counter for every pair of rows (there are binom(n,2) such pairs).
- Each column contributes +1 or −1 to each counter, depending on whether the two entries in that column match or differ.
- If after all 4t columns all the counters are back at 0, then the rows are pairwise orthogonal.
- So counting partial Hadamard matrices is the same as counting how often this high-dimensional walk returns exactly to the origin (all counters at 0).
- Using waves (Fourier analysis):
- To calculate the chance of returning to the origin, the paper uses Fourier analysis—think of it like breaking the problem into many tiny wave patterns and measuring how they add up.
- The integral (big sum) that comes from this “wave view” has places where it is strongest; those places dominate the final count.
- Focusing on the “core”:
- The integral is split into regions. The most important part is a small, central “core” region where the math looks like a Gaussian (a bell-shaped curve), which is friendly to compute.
- There’s a twist: a cubic “phase” term caused by triples of rows (triangles) introduces oscillations. The author shows how to control this and compute its leading effect.
- The rest of the regions (“off-core” and “residual”) are shown to contribute very little; the paper uses a mix of inequalities and a smart trick comparing ±1 variables to Gaussian (normal) variables to prove these parts are tiny.
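The Fourier-inversion idea can be seen in one dimension with a toy we add for illustration (this is not the paper's computation): the probability that a single ±1 counter returns to 0 after L steps equals an integral of cos(λ)^L over one period, and it matches the direct binomial count. The paper does the analogous computation in many dimensions at once, one coordinate per pair of rows.

```python
import math

def return_prob_fourier(L, grid=4096):
    """(1/2pi) * integral of cos(lam)^L over [-pi, pi], via the midpoint rule."""
    h = 2 * math.pi / grid
    total = sum(math.cos(-math.pi + (k + 0.5) * h) ** L for k in range(grid))
    return total * h / (2 * math.pi)

def return_prob_exact(L):
    # walks with equally many +1 and -1 steps return to the origin
    return math.comb(L, L // 2) / 2 ** L if L % 2 == 0 else 0.0

# The two viewpoints agree (up to floating-point error):
print(abs(return_prob_fourier(12) - return_prob_exact(12)) < 1e-9)  # True
```

Because the integrand is a trigonometric polynomial, the midpoint rule over a full period is essentially exact here; the hard part in the paper is that the high-dimensional analogue also carries the cubic "triangle" phase.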
What did they discover?
- The paper proves a sharp formula for the number of partial Hadamard matrices when t is at least a constant times n³.
- There is a “base scale” that earlier work already identified as the right main term. This paper shows that the count equals that main term times a correction factor of roughly 1 − binom(n,3)/(8t),
with remaining error terms that are much smaller when t ≥ C₀ n³ and n is large.
- What does that mean in plain words?
- The main count is basically the previously identified base-scale term.
- There is a small correction that depends on the number of triangles among the rows (there are binom(n,3) triangles). That correction is about binom(n,3)/(8t).
- If t is way bigger than n³, this correction becomes tiny and the old approximation is very accurate.
- If t is exactly on the order of n³ (say t = Θ·n³), that correction stays visible. In fact, since binom(n,3) ≈ n³/6, the correction is roughly 1/(48Θ): small, but not negligible.
- Why is this important compared to past work?
- Earlier results needed t to be much larger relative to n: first t/n¹² → ∞, then improved to t/n⁴ → ∞.
- This paper reaches the “cubic regime”: it works with t ≥ C₀ n³ and even identifies the leading correction term.
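A back-of-envelope check of the correction's size (our arithmetic, using the binom(n,3)/(8t) correction term quoted in this summary): with t = Θ·n³, the correction is close to 1/(48Θ), since binom(n,3) ≈ n³/6.

```python
import math

def correction(n, t):
    # triangle correction term binom(n,3)/(8t), as quoted in this summary
    return math.comb(n, 3) / (8 * t)

n, Theta = 100, 2
t = Theta * n ** 3
print(round(correction(n, t), 5))  # -> 0.01011
print(round(1 / (48 * Theta), 5))  # -> 0.01042  (the 1/(48*Theta) approximation)
```

The two numbers differ only because binom(n,3) is slightly below n³/6 at finite n; the gap vanishes as n grows.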
Why does this matter?
- It tightens our understanding of how many nearly non-interfering ±1 codewords (rows) we can pack into a matrix. This has connections to signal design, error-correcting codes, and combinatorial designs.
- The famous Hadamard conjecture asks whether a full n×n Hadamard matrix exists whenever n is a multiple of 4. While this paper doesn’t settle that conjecture, its methods and precise counts for rectangular (partial) cases deepen the toolbox and show exactly where the challenges lie.
- The techniques—turning the problem into a random walk, using Fourier analysis, and carefully isolating the main contribution—provide a clean and powerful way to study similar counting problems.
A quick, friendly intuition
- Think of each row as a radio station broadcasting +1 and −1 “signals.” Two rows are orthogonal if, over the columns, their signals cancel out perfectly—no interference.
- The question “how many such tables exist?” becomes “how often do all the pairwise interference counters end at zero after 4t steps?”
- Most of the answer comes from the core of the math where everything behaves like a smooth bell curve. A small extra effect comes from triples of stations (triangles), causing a predictable correction.
- The paper shows that once t is roughly n³ or bigger, we have a very accurate and simple formula for how many such tables exist.
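The pairwise-counter check described above can be sketched in a few lines (our illustration, not code from the paper): two rows are orthogonal exactly when the counter that adds +1 for matching entries and −1 for differing entries ends at 0 after all columns.

```python
from itertools import combinations

def is_partial_hadamard(rows):
    """rows: list of equal-length tuples/lists with entries +1/-1."""
    for r1, r2 in combinations(rows, 2):
        counter = 0
        for a, b in zip(r1, r2):
            counter += 1 if a == b else -1  # +1 on match, -1 on mismatch
        if counter != 0:                    # the walk must return to the origin
            return False
    return True

# Two rows of length 4: orthogonal if they agree in exactly half the columns.
print(is_partial_hadamard([(1, 1, 1, 1), (1, 1, -1, -1)]))   # True
print(is_partial_hadamard([(1, 1, 1, 1), (1, -1, -1, -1)]))  # False
```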
Knowledge Gaps
Knowledge gaps, limitations, and open questions
Below is a consolidated list of what remains missing, uncertain, or unexplored in the paper, formulated as concrete items that future researchers can act on:
- Push below the cubic threshold: Develop techniques to obtain asymptotics for N_{n,4t} when t grows more slowly than n³, by achieving finer control of the cubic phase on the core so that the current loss is reduced.
- Uniform-in-Θ expansion at t ∼ Θn³: Provide a uniform asymptotic expansion valid for all fixed Θ (not just “sufficiently large” Θ), with explicit second- and higher-order terms; make the threshold and its dependence on the parameters explicit.
- Explicit constants and ranges: Replace “sufficiently large Θ” and unspecified constants with explicit values and prove uniform validity ranges in n and t, enabling concrete application/testing.
- Higher-order Edgeworth-type expansion: Compute precise next-order corrections beyond the binom(n,3)/(8t) term, identifying explicit contributions from quartic cycles, quintic cumulants, and the sixth-order remainder; aim for a full series in powers of $1/t$ with coefficients as explicit polynomials in n.
- Sharpen residual bounds: Improve the pointwise contraction of the characteristic function on the far shell to remove the current largeness requirement on Θ; ideally obtain an n-independent contraction without resorting to Gaussian comparison.
- Strengthen the Gaussian comparison inequality: Tighten the comparison bound to a smaller error, uniformly in direction, which would directly lower the residual threshold and potentially the overall exponent.
- Off-core contribution quantification: Replace the current upper estimates on the off-core integral with precise asymptotics to confirm whether off-core regions contribute to higher-order terms in the expansion.
- Bridge the gap between the two thresholds: Derive bounds that close the remaining gap between the residual-negligibility condition and the core-analysis threshold, potentially by combining refined phase analysis with improved pointwise contraction.
- Regime near the Hadamard conjecture (4t comparable to n): Develop techniques that yield nontrivial lower bounds or structural insights for N_{n,4t} when t is of order n (particularly the square case 4t = n), even if only in average or probabilistic senses, to connect the counting framework more directly with the conjecture.
- Alternative analytic methods for the cubic phase: Explore stationary phase/saddle-point methods, Stein’s method, or dependency-graph CLTs tailored to the triangle form to integrate the cubic phase more sharply in high dimensions.
- Exploit finer lattice/graph structure: Use detailed properties of the lattice Λ and the associated graphs (degree sequences, cycle structure) to stratify cells and achieve stronger decay or more accurate local approximations on residual regions.
- Optimize the “core Gaussian mass” estimate: Improve the core-mass bound to tighter constants or dimension-free error forms, clarifying how much mass lies outside the core and whether this affects higher-order corrections.
- Small-n behavior and uniformity: Analyze small-size edge cases to determine whether the large-n expansion remains accurate and to identify any parity or small-size effects that alter correction terms or thresholds.
- Enumeration up to equivalence: Derive asymptotics for the number of partial Hadamard matrices modulo row/column permutations and sign flips, which is often the more natural object in combinatorial design contexts.
- Extensions beyond real ±1 entries: Generalize the framework to complex Hadamard matrices, other alphabets (e.g., roots of unity), or different orthogonality notions, and determine whether the cubic regime and leading corrections persist or change.
- Numerical validation and constant calibration: Perform computational experiments to empirically verify the predicted leading correction in the t ∼ Θn³ regime, estimate practical values of C₀, and test the sharpness of residual/core error bounds.
Practical Applications
Overview
This paper develops precise asymptotics for counting n×(4t) partial Hadamard matrices (binary ±1 matrices with pairwise-orthogonal rows) in the “cubic regime” t ≳ C₀ n³, identifying both the leading scale and the first nonvanishing correction when t/n³ is constant. Beyond advancing the theory, these results have practical implications wherever large, strictly orthogonal ±1 codebooks or designs are useful, and the methods (Fourier-analytic decomposition, cumulant expansion with cubic-phase control, Gaussian comparison, hypercontractivity) offer transferable tools for counting and constructing other high-dimensional combinatorial objects.
Below are actionable applications grouped by time horizon, each noting sector alignments, prospective tools/workflows, and feasibility dependencies.
Immediate Applications
- Software/Communications — fast feasibility checks and capacity planning for binary orthogonal codebooks
- Use case: Given desired number of users/rows n and code length L=4t, quickly estimate the probability a uniformly random n×(4t) ±1 matrix is partial Hadamard, and hence the expected trials needed for random sampling to succeed.
- Why this paper helps: The paper gives P(success) = N_{n,4t}/2^(n·4t) with accurate asymptotics for t ≳ C₀ n³ and the leading-order correction when t/n³ is fixed. This enables sizing L and compute budgets for randomized construction.
- Tools/workflow:
- A small calculator/library function implementing the asymptotic acceptance probability and expected trials for (n, t), incorporating the 1 − (binom(n,3)/(8t)) correction when t ∼ Θ n³.
- Integration into codebook/configuration planning tools in wireless/network simulation suites.
- Assumptions/dependencies:
- Width must be divisible by 4 (L=4t) for n ≥ 3.
- Asymptotics are accurate for large n and t ≥ C₀ n³; for small sizes, rely on fixed-n estimates or exact search.
- Codebooks are strictly ±1; mapping to {0,1} needs re-centering.
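A minimal feasibility check in this spirit (our sketch, by exact enumeration, so it only runs at tiny sizes where the paper's asymptotics do not yet apply): count the partial Hadamard matrices among all n×L ±1 matrices to get the exact acceptance probability for random sampling.

```python
from itertools import combinations, product

def acceptance_probability(n, L):
    """Exact fraction of n x L ±1 matrices with pairwise-orthogonal rows."""
    total = hits = 0
    for flat in product((1, -1), repeat=n * L):
        rows = [flat[i * L:(i + 1) * L] for i in range(n)]
        total += 1
        if all(sum(a * b for a, b in zip(r, s)) == 0
               for r, s in combinations(rows, 2)):
            hits += 1
    return hits / total

# n=2 rows of length 4: the second row must disagree with the first in
# exactly 2 of the 4 columns, so the probability is C(4,2)/2^4 = 3/8.
print(acceptance_probability(2, 4))  # 0.375
```

For realistic n one would replace the enumeration with the paper's asymptotic acceptance rate; the exact version serves only to sanity-check such a calculator at small sizes.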
- Communications (CDMA/5G/6G massive MIMO/Pilot design) — parameter selection for orthogonal signature/pilot sets
- Use case: Choose pilot/beam training sequence lengths that ensure an abundance of strictly orthogonal ±1 sequences for n antennas/users.
- Why this paper helps: It quantifies how many such matrices exist when L scales with n³, providing guardrails for minimal viable L and margins (via the correction term ≈ 1/(48Θ) when t/n³→Θ).
- Tools/workflow:
- Design guidance tables mapping target n to recommended L=4t that yield high success probability via random generation, plus fallback to deterministic constructions if t is small.
- Assumptions/dependencies:
- Real channels may impose additional constraints (constant weight, spectral shaping); orthogonality here is exact in ±1, not necessarily constant-weight.
- Experimental Design/Statistics (A/B/n tests, factorial designs) — generation of orthogonal assignment matrices
- Use case: Construct treatment assignment matrices with orthogonal contrasts to minimize multicollinearity and variance.
- Why this paper helps: Demonstrates that for L=4t with t ≳ C₀ n³ there are many such designs; random generation with accept/reject becomes computationally predictable.
- Tools/workflow:
- A module that samples ±1 matrices and accepts those with pairwise-orthogonal rows; uses the asymptotic acceptance rate to set expected runtime and stop criteria.
- Assumptions/dependencies:
- Exact orthogonality assumed; in practice, near-orthogonality may suffice and relax length constraints.
- Large-n accuracy; for small n, fall back to known constructions/orthogonal arrays.
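The accept/reject module described above can be sketched as follows (our illustration, not a production design tool): sample random ±1 matrices and keep the first one whose rows are pairwise orthogonal. This is practical only when the acceptance rate predicted by the asymptotics is non-negligible.

```python
import random
from itertools import combinations

def sample_orthogonal_design(n, L, max_trials=100000, seed=0):
    """Rejection-sample an n x L ±1 matrix with pairwise-orthogonal rows."""
    rng = random.Random(seed)
    for trial in range(1, max_trials + 1):
        rows = [[rng.choice((1, -1)) for _ in range(L)] for _ in range(n)]
        if all(sum(a * b for a, b in zip(r, s)) == 0
               for r, s in combinations(rows, 2)):
            return rows, trial
    raise RuntimeError("no orthogonal design found; increase L or max_trials")

design, trials = sample_orthogonal_design(n=3, L=8)
print(len(design), len(design[0]))  # 3 8
```

The asymptotic acceptance probability from the paper would be used here to choose `max_trials` and to decide when L is too short for rejection sampling to be viable.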
- Signal Processing/Imaging (Hadamard multiplexing, single-pixel cameras, coded illumination) — non-square pattern banks
- Use case: Build large banks of strictly orthogonal ±1 patterns with more columns than rows for fast multiplexed acquisition.
- Why this paper helps: Confirms abundance and gives length scaling where random generation is likely to succeed.
- Tools/workflow:
- Pattern-bank generator with runtime estimates from the paper’s asymptotics.
- Assumptions/dependencies:
- Hardware may require balanced patterns or specific ordering; conversion from ±1 to physical on/off patterns may need bias adjustment.
- Education/Academia — teaching modules on Fourier-analytic counting and high-dimensional probability
- Use case: Demonstrate modern techniques (Fourier inversion on lattices, cumulant expansions, Gaussian comparison, hypercontractivity) in upper-level probability/combinatorics courses.
- Why this paper helps: Provides a clean, self-contained route from random walks on Z^d to precise counting in a well-motivated problem.
- Tools/workflow:
- Lecture notes and computational notebooks reproducing acceptance-rate predictions and small-scale experiments.
Long-Term Applications
- Combinatorial Design/Coding Theory — improved construction algorithms near the cubic threshold
- Use case: Develop randomized or semi-analytic algorithms that exploit the paper’s primary/residual decomposition and cubic-phase control to construct partial Hadamard matrices more efficiently than naive sampling.
- Potential products:
- Hybrid algorithms that combine Gaussian-guided proposals with accept/reject using fast orthogonality checks; adaptive schemes tuned to the identified core region.
- Assumptions/dependencies:
- Requires engineering of proposal distributions informed by the “core” D and phase structure; further research to guarantee speedups and robust performance at moderate n.
- Communications/Standards (3GPP/6G) — standards guidance for sequence lengths and orthogonal resource planning
- Use case: Codify minimal pilot/spreading lengths that secure a target abundance of orthogonal ±1 sequences as n grows.
- Potential outcomes:
- Reference tables and conservative margins that bake in the first-order correction when t ≈ Θ n³.
- Assumptions/dependencies:
- Must align with other system constraints (spectral masks, PAPR, hardware modulation), and with non-binary or complex sequences if required.
- Quantum Information/Quantum Error Correction — ensembles of orthogonal binary operators and measurement patterns
- Use case: Use large sets of orthogonal ±1 patterns as classical control/measurement masks or for constructing certain stabilizer-like structures where binary orthogonality is relevant.
- Potential outcomes:
- Ensemble-size estimates and randomized synthesis strategies derived from the counting asymptotics.
- Assumptions/dependencies:
- Translation from ±1 orthogonal matrices to quantum-compatible operators is nontrivial; additional algebraic constraints may dominate.
- Cryptography/Randomness Engineering — structured mixing and masking transforms
- Use case: Employ large orthogonal ±1 matrices in masking/whitening layers or in protocol phases that benefit from exact orthogonality.
- Potential outcomes:
- Design spaces and randomization strategies with predictable availability; resilience analyses that leverage abundance at L ∼ n³.
- Assumptions/dependencies:
- Security requires more than orthogonality (nonlinearity, diffusion, side-channel considerations); the counting results are one ingredient.
- General Counting/Inference in High-Dimensional Discrete Systems — method transfer
- Use case: Apply the three-way Fourier/gaussian-comparison/hypercontractive toolkit to count other structures (e.g., orthogonal arrays, conference matrices, low-discrepancy ±1 designs, constrained random walks).
- Potential outcomes:
- New asymptotics and thresholds for existence/abundance in related problems; better randomized algorithms with provable acceptance rates.
- Assumptions/dependencies:
- Success depends on identifying analogous “cubic-phase” bottlenecks and establishing suitable comparison inequalities.
- Large-Scale Experimentation/Operations Research — variance reduction via orthogonal assignments at scale
- Use case: In A/B/n with many arms (large n), choose assignment matrices with exact binary orthogonality to minimize estimator variance and interaction confounding.
- Potential outcomes:
- Workflow templates that scale arms and horizon lengths following L ≈ 4t with t ≳ C₀ n³ for guaranteed availability.
- Assumptions/dependencies:
- In many real settings, near-orthogonality suffices; practical constraints (unequal group sizes, ethics, logistics) may require approximate designs.
Cross-Cutting Assumptions/Dependencies
- Divisibility and regime: Results are stated for widths L=4t and are sharp when t ≥ C₀ n³; the leading correction term is accurate when t/n³ → Θ with Θ sufficiently large. Below the cubic regime, asymptotics remain open.
- Data model: Uniform, independent ±1 entries; additional constraints (e.g., constant weight, sparsity) are not covered and may change thresholds.
- Computational cost: Exact orthogonality checking scales as O(n² t) per sampled matrix; practical generators may need incremental checks or fast transforms.
- Finite-size effects: For small n and t near the threshold, use fixed-n results or empirical calibration; asymptotic formulas may over/underestimate acceptance rates.
- Mapping to 0/1: Converting ±1 designs to 0/1 for certain applications may require re-centering or bias correction to preserve desired properties.
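Two of the dependencies above can be made concrete with short helpers (our code, stdlib only, not from the paper): the O(n²·t) pairwise-orthogonality check via inner products, and the re-centering map between 0/1 and ±1 entries.

```python
from itertools import combinations

def gram_offdiag_zero(rows):
    # rows are pairwise orthogonal <=> every off-diagonal Gram entry is 0;
    # n rows of length L cost O(n^2 * L) inner-product work.
    return all(sum(a * b for a, b in zip(r, s)) == 0
               for r, s in combinations(rows, 2))

def to_pm1(bits):
    """0/1 -> ±1 re-centering: 0 -> -1, 1 -> +1."""
    return [2 * b - 1 for b in bits]

def to_bits(signs):
    """±1 -> 0/1 inverse map."""
    return [(s + 1) // 2 for s in signs]

row = to_pm1([1, 1, 0, 0])
print(row)                                      # [1, 1, -1, -1]
print(gram_offdiag_zero([[1, 1, 1, 1], row]))   # True
print(to_bits(row))                             # [1, 1, 0, 0]
```

Note that orthogonality in ±1 does not survive the naive map to 0/1 (inner products shift), which is exactly why the re-centering step matters for downstream applications.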
By turning the paper’s asymptotics into concrete acceptance-rate estimators, length planners, and generator modules, practitioners can immediately improve feasibility assessments and workflows for building large orthogonal ±1 designs. Longer term, the analytical techniques promise broader impact across counting and randomized construction of complex discrete structures.
Glossary
- Asymptotic formula: An expression that approximates a quantity in a limiting regime (e.g., large parameters), capturing its leading behavior. "We give a precise asymptotic formula for the number of partial Hadamard matrices..."
- Characteristic function: In probability, the Fourier transform of a random variable’s distribution, used to analyze sums and convergence. "On odd cells the characteristic function is bounded by a constant less than one, making the contribution of these cells exponentially small."
- Complete graph: A graph where every pair of distinct vertices is connected by an edge. "triangles of the complete graph on $n$ vertices"
- Cosine-product bound: A specific inequality bounding the modulus of the characteristic function via a product of cosines, used to show contraction. "The key pointwise tool is the following bound from~\cite{DL10}."
- Cumulant expansion: A series expansion of the log of a characteristic function in terms of cumulants, enabling approximation by low-order terms. "For the cumulant expansion we write ..."
- Cubic phase: An oscillatory factor arising from third-order terms (cubic in variables) in the exponent of a Fourier integrand. "The main difficulty on the core is the cubic phase ..."
- Fourier-analytic framework: A method that uses Fourier analysis (e.g., characteristic functions and inversion) to study counting or probability problems. "De~Launey and Levin~\cite{DL10} introduced a Fourier-analytic framework that answers this question for large~$t$."
- Fourier inversion formula: The theorem that recovers a probability (or density) from its characteristic function via an integral over the frequency domain. "the Fourier inversion formula \cite[P3, p.~57]{Spi1976} gives"
- Gaussian characteristic function: The characteristic function associated with a Gaussian distribution, often used as a tractable approximation. "Near the origin, $\psi$ is close to a Gaussian characteristic function."
- Gaussian comparison inequality: A bound comparing a non-Gaussian characteristic function to a Gaussian one to transfer contraction properties. "A Gaussian comparison inequality (Corollary~\ref{cor:weak-comparison}) bounds the distance:"
- Gaussian quadratic integral: A standard integral identity evaluating multivariate Gaussian-type integrals in terms of determinants. "Gaussian quadratic integral"
- Gaussian radial moments: Bounds on integrals of powers of the radius under Gaussian weight, used to control polynomial remainders. "Gaussian radial moments"
- Hadamard conjecture: The assertion that for every $n$ divisible by 4, an $n\times n$ Hadamard matrix exists. "The Hadamard conjecture asks whether, for every positive integer $n$ divisible by~$4$, there exists an $n\times n$ matrix ..."
- Hadamard matrices: Square matrices with entries ±1 whose rows (and columns) are orthogonal; they satisfy $HH^{\mathsf T}=nI$. "for background on Hadamard matrices"
- Hypercontractive inequality: An inequality relating Lp and L2 norms of low-degree polynomials of independent random variables, used to bound higher moments. "Hypercontractive inequality"
- Lattice: Here, a discrete set of frequency points where the integrand’s magnitude is maximal, structuring the Fourier domain. "The integrand is largest near the lattice $\Lambda=\{\lambda\in\mathbb{T}^d:|\psi(\lambda)|=1\}$."
- Partial Hadamard matrix: An $n\times 4t$ ±1 matrix whose rows are pairwise orthogonal (a rectangular generalization of Hadamard matrices). "an $n\times 4t$ matrix with entries in $\{\pm1\}$ is a partial Hadamard matrix if its rows are pairwise orthogonal."
- Primary-secondary decomposition: A split of the Fourier integral into contributions from neighborhoods around lattice points (primary) and the remainder (residual). "Primary-secondary decomposition"
- Principal branch of the logarithm: The standard branch of the complex logarithm (with argument in $(-\pi,\pi]$), required for analytic expansions. "so that the principal branch of the logarithm is well-defined."
- Rademacher signs: Independent random variables taking values ±1 with equal probability, used to model ±1 entries. "in either independent Rademacher signs or independent centered Gaussians."
- Random walk: A sequence of partial sums of random steps; here, a walk on $\mathbb{Z}^d$ induced by pairwise products. "random walk on~$\mathbb{Z}^d$"
- Return to the origin: The event that a random walk is at the starting point after a given number of steps, used to count combinatorial structures. "Counting partial Hadamard matrices therefore reduces to counting returns to the origin of the random walk ..."
- Superlattice: A larger lattice (containing the main lattice) used to tile the frequency domain into structured cells. "The superlattice $\Lambda_0=\{\lambda\in\mathbb{T}^d:\text{every coordinate}\in\{0,\pm\pi/2,\pi\}\}$ tiles the torus into quarter-boxes"
- Torus: The $d$-dimensional torus $\mathbb{T}^d$, interpreted as $[-\pi,\pi]^d$ with identified boundaries, serving as the Fourier domain for lattice-valued variables. "tiles the torus into quarter-boxes"
- Triangle form: A cubic polynomial aggregating products over all triangles in the complete graph, governing the leading non-Gaussian phase. "driven by the triangle form~."