Pricing ethics framing and human–algorithm interaction

Determine how explicitly framing pricing decisions as ethical or unethical, relative to the neutral framing used in the study, affects (i) the adoption of a self-learned Q-learning pricing algorithm that implements a win-stay–lose-shift collusive strategy and (ii) the resulting market prices. The setting is the study's indefinitely repeated Bertrand duopoly laboratory experiment with two firms, perfectly inelastic demand from 60 consumers, the integer price grid {0, 1, 2, 3, 4, 5}, and a continuation probability of 0.95.
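
To make the market environment concrete, the sketch below simulates a generic win-stay–lose-shift pricing rule in the stated Bertrand duopoly (two firms, 60 inelastic consumers, price grid {0,...,5}, continuation probability 0.95). It is an illustration under assumptions, not the paper's learned Q-learning policy: marginal cost is assumed to be zero, ties are assumed to split demand equally, and the function names (profits, wsls, simulate) and the specific stay/shift rule are hypothetical.

```python
import random

PRICES = list(range(6))   # integer price grid {0, 1, 2, 3, 4, 5}
DEMAND = 60               # perfectly inelastic demand: 60 consumers
DELTA = 0.95              # continuation probability of the repeated game

def profits(p1, p2):
    """Bertrand sharing rule: the lower price serves all consumers; ties split demand.
    Marginal cost is assumed to be zero."""
    if p1 < p2:
        return p1 * DEMAND, 0
    if p2 < p1:
        return 0, p2 * DEMAND
    return p1 * DEMAND / 2, p2 * DEMAND / 2

def wsls(own_prev, rival_prev):
    """Generic win-stay-lose-shift rule (illustrative, not the learned policy):
    stay at the previous price if the rival did not undercut ('win'),
    otherwise shift down to the rival's price ('lose')."""
    return own_prev if rival_prev >= own_prev else rival_prev

def simulate(rival_policy, seed=0):
    """Play the repeated game until the (1 - DELTA) termination draw ends it."""
    rng = random.Random(seed)
    p_a, p_r = max(PRICES), max(PRICES)   # both firms start at the collusive price 5
    path = []
    while True:
        path.append((p_a, p_r, *profits(p_a, p_r)))
        if rng.random() > DELTA:          # the game ends with probability 0.05 each period
            return path
        p_a, p_r = wsls(p_a, p_r), rival_policy(p_r, p_a)

# A rival that always matches last period's price sustains collusion at price 5.
print(simulate(lambda own_prev, other_prev: other_prev)[:3])
# A rival that permanently undercuts to 4 triggers the 'lose-shift': the WSLS
# firm drops to 4 in the next period and both firms then stay there.
print(simulate(lambda own_prev, other_prev: 4)[:3])
```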

Background

The experiment intentionally used neutrally framed instructions and did not label pricing choices as ethical or unethical, in order to avoid moral priming effects. Prior work indicates that delegating decisions to machines can increase unethical behavior under explicit moral framing; such framing could therefore plausibly influence algorithm adoption and pricing behavior in strategic settings.

Because participants chose whether to adopt a collusive algorithm and, in one treatment, could override its recommendations, moral framing may shape both delegation rates and pricing outcomes. The authors note that the interaction between moral framing and adoption/pricing decisions has not been examined in their setting and identify it as an open avenue for future research.

References

We do not label pricing choices as (un)ethical. Related work shows that machine delegation can raise unethical behavior under explicit moral framing \citep{kobis2025delegation}. How such framing would interact with adoption and pricing in our setting is an open question for future research.

Delegate Pricing Decisions to an Algorithm? Experimental Evidence (2510.27636 - Normann et al., 31 Oct 2025) in Section 2.4 (Procedures), footnote on instruction framing