Vertical Tacit Collusion in AI Markets
- Vertical tacit collusion is a phenomenon where platforms and sellers indirectly align strategies via AI algorithms, leading to super-additive consumer harm.
- It is modeled as a Markov game where platforms set ranking parameters and sellers choose bid levels, with their complementary actions amplifying market distortions.
- Simulation analyses reveal that algorithmic biases and strategic complementarities can increase consumer surplus loss by roughly 20 percentage points beyond the sum of the individual effects, underscoring the need for regulatory measures.
Vertical tacit collusion refers to a market failure arising from the independent but complementary actions of vertically related agents—such as platforms and sellers—who exploit systematic biases in an intermediary (often algorithmic) agent, typically in AI-mediated markets. This phenomenon differs fundamentally from horizontal collusion: it does not require explicit coordination or agreement between actors, yet results in significant, super-additive harm to consumer welfare owing to strategic alignment and mutual reinforcement of exploitative strategies (Affonso, 6 Jan 2026).
1. Conceptual Foundations and Distinction from Horizontal Collusion
In traditional industrial organization, collusion describes a setting where firms coordinate to raise profits above competitive levels, typically through explicit communication or repeated-game incentives. In horizontally differentiated markets, such collusion takes place among rival sellers using a common strategic instrument (e.g., price), and sustaining joint deviation from competition requires complex incentive-compatibility constraints (Bos et al., 2018).
Vertical tacit collusion, in contrast, arises in markets with vertical structure—where platforms control mechanisms such as ranking algorithms and sellers control product attributes or presentations. The collusive outcome emerges not from agreement, but from complementary best responses by each party, exploiting shared vulnerabilities in an intermediary AI agent (e.g., an AI shopping agent with cognitive biases). This super-additive consumer harm is structurally distinct from horizontal algorithmic collusion, which features symmetric actors and either direct or indirect coordination around a single instrument (Affonso, 6 Jan 2026).
2. Formal Modeling and Theoretical Structure
Vertical tacit collusion is formalized as a Markov game with:
- A platform that chooses ranking parameters governing bid weight, endorsement, and decoy settings.
- Sellers that each select a bid level and a manipulation intensity, the latter capturing the degree of product-attribute manipulation (e.g., linguistic framing).
- An AI agent that chooses the winning seller based on a utility function incorporating product quality, price, and a structured set of cognitive biases.
Consumer welfare per round is defined as $W_t = q_{s_t} - p_{s_t}$, with $q$ and $p$ denoting quality and price and $s_t$ the seller chosen by the agent at time $t$. The platform's ranking function orders products for display.
The AI agent evaluates each seller $i$ via $U_i = q_i - p_i + B_i$, where the bias term $B_i$ aggregates position (prime, positional, recency), endorsement, decoy, and manipulation effects, modulated by product rank and visibility (Affonso, 6 Jan 2026).
Super-additivity in harm arises because the optimal manipulation by sellers has limited consumer impact unless the platform’s ranking function amplifies that manipulation by giving prominence to manipulated offers. Similarly, the platform’s exploitative ranking alone is less effective unless sellers strategically respond with tailored manipulation.
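This interaction can be sketched with a toy version of the biased agent utility $U_i = q_i - p_i + B_i$. All functional forms and parameter values below are illustrative assumptions, not the paper's calibration; the point is only that manipulation pays off chiefly when the platform grants a manipulated offer a prominent rank.

```python
# Toy sketch (assumed functional forms) of the biased agent utility:
# U_i = q_i - p_i + B_i, where the bias term B_i grows with display
# prominence, so seller manipulation matters mostly in top slots.

def bias_term(rank: int, manipulation: float, endorsed: bool,
              beta_pos: float = 0.3, beta_manip: float = 0.2,
              beta_endorse: float = 0.15) -> float:
    """Illustrative bias: position bias decays with rank and modulates
    the effect of seller manipulation and platform endorsement."""
    visibility = beta_pos / (1 + rank)  # prominence of the display slot
    return visibility * (1 + beta_manip * manipulation) + (
        beta_endorse if endorsed and rank == 0 else 0.0)

def agent_utility(quality: float, price: float, rank: int,
                  manipulation: float, endorsed: bool) -> float:
    return quality - price + bias_term(rank, manipulation, endorsed)

# A manipulated, endorsed offer in the top slot can beat a strictly
# better product ranked lower -- the complementarity behind the
# super-additive harm.
top = agent_utility(quality=0.6, price=0.4, rank=0,
                    manipulation=1.0, endorsed=True)
low = agent_utility(quality=0.8, price=0.4, rank=3,
                    manipulation=0.0, endorsed=False)
```

With these hypothetical numbers `top` exceeds `low` even though the top-ranked product has lower quality at the same price; with the same manipulated offer placed at rank 3, the better product wins again.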
3. Welfare Effects and Strategic Complementarity
Quantitative analysis demonstrates that joint exploitation by platforms and sellers produces "super-additive" consumer harm, formally measured as strategic complementarity. Under the baseline (fair ranking, no manipulation), average consumer surplus is 0.303. Platform exploitation alone causes a 27.0% drop in surplus; seller manipulation alone actually raises surplus (a harm of −9.6%, i.e., a net consumer benefit); but when both are active, harm jumps to 37.1%.
The strategic complementarity metric is defined as the excess of joint harm over the sum of the individual harms,
$$\Delta = H_{\text{both}} - (H_{\text{platform}} + H_{\text{seller}}) = 37.1 - (27.0 - 9.6) = 19.7,$$
yielding a persistent, statistically robust increase of +19.7 percentage points, with a 95% confidence interval of [18.3, 21.1] and a large effect size (Cohen's $d$) (Affonso, 6 Jan 2026).
Factorial analysis reveals position bias as the dominant harm channel (≈29.4% harm alone), with manipulation by sellers being benign or even pro-consumer in the absence of platform collusion. However, their interaction magnifies total damage well beyond additivity.
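The complementarity arithmetic above is simple enough to verify directly from the reported harm figures (percentage-point losses relative to the fair baseline):

```python
# Strategic complementarity from the reported harm figures:
# Delta = H_both - (H_platform + H_seller).
harm_platform_only = 27.0   # platform exploitation alone
harm_seller_only = -9.6     # seller manipulation alone (a net benefit)
harm_both = 37.1            # joint exploitation

complementarity = harm_both - (harm_platform_only + harm_seller_only)
print(round(complementarity, 1))  # 19.7 percentage points
```

Note that because seller-only manipulation is mildly pro-consumer, the joint harm exceeds the platform-only harm by even more than it exceeds the naive sum.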
4. Multi-Agent Simulation Methodology
Investigation of vertical tacit collusion uses multi-agent simulations with:
- Tabular Q-learning for both platform and sellers, with a state encoding recent bid and manipulation activity.
- The platform chooses among 32 actions and each seller among 12, covering combinations of ranking parameters and manipulation/bid levels.
- Outcomes determined over 20,000 rounds (with measurements on the final 8,000 for stationarity) and 100 independent trials.
- Bias parameters (position, endorsement, decoy, per-level manipulation, and recency effects) calibrated from empirical LLM benchmarks.
Learning follows standard temporal-difference Q-update rules, with robust results across algorithmic variants (including gradient-bandit, UCB, Thompson Sampling, Actor-Critic, REINFORCE, Exp3), confirming that observed complementarity is not an artifact of specific reinforcement-learning dynamics (Affonso, 6 Jan 2026).
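The temporal-difference update referred to here is the standard tabular Q-learning rule. A minimal sketch follows; the action counts match the text (32 for the platform, 12 per seller), while the state encoding, rewards, and hyperparameter values are illustrative assumptions:

```python
import random
from collections import defaultdict

class QLearner:
    """Minimal tabular Q-learner with epsilon-greedy action selection."""

    def __init__(self, n_actions: int, alpha: float = 0.1,
                 gamma: float = 0.95, epsilon: float = 0.1):
        self.q = defaultdict(lambda: [0.0] * n_actions)
        self.n_actions = n_actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state) -> int:
        if random.random() < self.epsilon:       # explore
            return random.randrange(self.n_actions)
        row = self.q[state]                      # exploit
        return row.index(max(row))

    def update(self, state, action, reward, next_state):
        # Standard temporal-difference Q-update:
        # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        target = reward + self.gamma * max(self.q[next_state])
        self.q[state][action] += self.alpha * (target - self.q[state][action])

# Action counts as described in the simulation setup.
platform = QLearner(n_actions=32)
sellers = [QLearner(n_actions=12) for _ in range(3)]
```

Each round, the platform and sellers act simultaneously on the shared state, the AI agent picks a winner, and each learner updates from its own reward; the paper's robustness checks swap this learner for bandit and policy-gradient variants.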
5. Comparative Statics, Robustness, and Gatekeeper Thresholds
Robustness analyses confirm that vertical tacit collusion persists across a wide range of market and AI agent parameters:
- Super-additive harm remains significant under both multiplicative and additive visibility models, stochastic bias parameters, and heterogeneity in consumer populations (up to 75% low-bias consumers).
- Varying the platform's bid weight in the ranking function exhibits a "gatekeeper threshold": pure bid-based ranking (zero quality weight) enables catastrophic harm (+69.2%), whereas any partial preservation of the quality signal collapses the sellers' ability to exploit manipulation.
- Alternative interventions (human-in-the-loop override at 50%, differential platform take-rates up to 20%) only partially mitigate harm, leaving significant consumer surplus loss.
- Results generalize across learning algorithms and remain stable in longer-run dynamics (100,000 rounds), with strategic complementarity stabilizing near +21.8 percentage points.
Factorial analysis reveals that position and endorsement biases drive the majority of consumer harm. Manipulation is destructive only when the platform places manipulated offers in bias-prone positions (Affonso, 6 Jan 2026).
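The gatekeeper threshold can be illustrated with a toy ranking score that mixes bids and quality. The weights and offers below are hypothetical; the point is that with zero quality weight the highest bidder takes the bias-prone top slot regardless of quality, while a positive quality weight re-couples rank to product value:

```python
# Toy illustration of the "gatekeeper threshold": ranking by a convex
# combination of bid and quality. Weights and offers are hypothetical.

def ranking(offers, quality_weight: float):
    """offers: list of (bid, quality) pairs; returns indices best-first."""
    scores = [(1 - quality_weight) * bid + quality_weight * q
              for bid, q in offers]
    return sorted(range(len(offers)), key=lambda i: -scores[i])

offers = [(0.9, 0.2),   # high bid, low quality (manipulated offer)
          (0.3, 0.9)]   # low bid, high quality

pure_bid = ranking(offers, quality_weight=0.0)   # bidder wins top slot
mixed = ranking(offers, quality_weight=0.5)      # quality reclaims it
```

In the one-shot example a large enough quality weight flips the ranking; in the learning dynamics of the paper, even partial quality weighting suffices, because sellers can no longer profitably learn bid-and-manipulate strategies.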
6. Regulatory and Antitrust Considerations
Vertical tacit collusion evades traditional antitrust detection mechanisms:
- No communication or coordination occurs between platform and sellers, so it is invisible to cartel-detection frameworks reliant on signaling or agreement.
- The emergent collusive outcome arises from the algorithmic exploitation of AI agent vulnerabilities, not direct price-fixing or explicit contract.
- Diffuse welfare losses across consumers and small sellers, combined with concentrated gains to major platforms, create collective action failures for remedial intervention.
Proposed regulatory approaches include debiasing AI agents (removing position/manipulation/endorsement biases), mandated transparency and minimum quality-weight signals in ranking algorithms, systematic audit of platforms for amplification of bias, and recognition of algorithmic vulnerability as central to competition harms in digital intermediation. Human overrides and differential fees are insufficient or blunt tools, while platform design interventions targeting information architecture yield more decisive results (Affonso, 6 Jan 2026).
7. Connection to Classical Models of Vertical Differentiation and Tacit Collusion
The algorithmic market failures characterized as vertical tacit collusion in (Affonso, 6 Jan 2026) have formal analogs in classic models of vertical product differentiation and cartel stability (Bos et al., 2018). In vertically differentiated markets under fixed-shares collusion, the stability condition is determined by each firm's incentive constraint $\delta \ge (\pi^D - \pi^C)/(\pi^D - \pi^N)$, where $\pi^D$ is the deviation profit, $\pi^C$ the collusive profit, and $\pi^N$ the Nash profit. The lowest-margin (lowest-quality) firm has the tightest constraint, and small unit-cost differences make collusion harder to sustain. A plausible implication is that, analogous to the digital setting, dispersion or compression in firm heterogeneity (here, in AI bias susceptibility) mediates whether mutually reinforcing strategies result in durable collusion or breakdown.
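The incentive constraint can be checked numerically. The profit values below are hypothetical; they illustrate why the lowest-margin firm (the one whose collusive gain over Nash is smallest) faces the tightest constraint:

```python
# Classic cartel-stability check: collusion is sustainable for a firm
# iff its discount factor delta satisfies
#   delta >= (pi_D - pi_C) / (pi_D - pi_N).
# Profit values here are hypothetical.

def critical_discount_factor(pi_D: float, pi_C: float, pi_N: float) -> float:
    return (pi_D - pi_C) / (pi_D - pi_N)

# High-margin firm: Nash profit far below collusive profit -> easy to sustain.
high_margin = critical_discount_factor(pi_D=10.0, pi_C=8.0, pi_N=4.0)  # 1/3
# Low-margin firm: Nash profit close to collusive profit -> tight constraint.
low_margin = critical_discount_factor(pi_D=10.0, pi_C=8.0, pi_N=7.0)   # 2/3
```

The low-margin firm needs a discount factor of at least 2/3 versus 1/3 for the high-margin firm, mirroring the claim that compressed heterogeneity in gains (or, in the digital analog, in bias susceptibility) governs whether mutually reinforcing strategies are durable.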
Thus, vertical tacit collusion in digital AI-mediated markets represents a new expression of older incentive-alignment issues, now magnified and accelerated by algorithmic learning and the prevalence of susceptible intermediary agents. Detection and remedy require a reorientation of antitrust analysis toward the architecture of information presentation and algorithmic incentives, not merely overt coordination or traditional price-instrument collusion (Bos et al., 2018, Affonso, 6 Jan 2026).