Artificial intelligence is revolutionizing the way companies interact with consumers, but a groundbreaking new study warns that this technological leap could usher in an era of invisible, personalized pricing that undermines consumer fairness on an unprecedented scale. Researchers Dr. Miroslava Marinova from the University of East London and Dr. Christian Bergqvist from the University of Copenhagen have co-authored eye-opening research that delves into how algorithmic pricing, powered by AI, is pushing markets into a world where the price you pay for the same product may differ vastly from that of your neighbor, without your knowledge.
Traditional pricing mechanisms have long relied on broad market factors—demand, production costs, and competitive pressures—to set prices visible to all consumers. These systems treat customers uniformly, generally offering the same price to anyone purchasing at the same time. However, this model is rapidly evolving. AI-driven pricing strategies utilize vast troves of consumer data, including browsing histories, spending patterns, and even geographic location, to finely tune quotes to individuals’ predicted willingness to pay. The result is a dynamic landscape where the price is not set by supply and demand alone but intricately personalized, effectively eliminating transparent, standardized pricing.
This shift in pricing methodology, known as algorithmic personalized pricing, leverages complex machine learning models that analyze the minutiae of consumer behavior for the express purpose of maximizing profits. Such AI systems deploy predictive analytics and real-time data processing to identify how much a particular customer might tolerate paying before deciding to purchase or seeking alternatives. While businesses have historically experimented with some forms of price differentiation—think student discounts or loyalty rewards—AI magnifies these practices, scaling them across millions of transactions simultaneously and invisibly.
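The mechanics can be illustrated with a deliberately simplified sketch. The model below is hypothetical and not drawn from the paper: it predicts a customer's willingness to pay (WTP) from a few invented behavioral signals, then quotes a price just below that estimate. All feature names, weights, and the pricing margin are illustrative assumptions.

```python
# Hypothetical sketch of personalized pricing (NOT the authors' model):
# a toy linear model estimates willingness to pay from invented
# behavioral features, and the quote is set slightly below it.

def predict_wtp(features: dict[str, float]) -> float:
    """Toy linear willingness-to-pay estimate over illustrative signals."""
    base_price = 50.0
    weights = {
        "past_spend_avg": 0.10,      # heavier spenders tolerate more
        "visits_before_buy": -0.50,  # comparison shoppers tolerate less
        "premium_device": 5.0,       # crude (and ethically fraught) income proxy
    }
    return base_price + sum(w * features.get(k, 0.0) for k, w in weights.items())

def personalized_quote(features: dict[str, float], margin: float = 0.95) -> float:
    """Quote just below estimated WTP to keep purchase probability high."""
    return round(predict_wtp(features) * margin, 2)

frugal = {"past_spend_avg": 20.0, "visits_before_buy": 8.0, "premium_device": 0.0}
affluent = {"past_spend_avg": 200.0, "visits_before_buy": 1.0, "premium_device": 1.0}
print(personalized_quote(frugal), personalized_quote(affluent))
```

Two shoppers viewing the same product at the same moment receive different quotes, and neither sees the other's price — exactly the invisibility the researchers flag.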
The ethical implications of this trend toward individualized pricing are profound. The research highlights that consumer backlash is often triggered not merely by higher prices but by the discovery of unequal treatment without transparency or justification. This sense of unfairness erodes trust between buyer and seller, potentially altering purchasing behaviors and damaging brand reputations. When consumers uncover that they are essentially penalized for their browsing habits or demographic profile, questions of discrimination, exploitation, and erosion of market fairness come sharply into focus.
Dr. Marinova emphasizes that it is the invisible nature of such personalized pricing mechanisms that makes fairness the central concern. Unlike openly advertised price differences, AI’s adjustments occur behind the scenes, meaning customers often remain completely unaware they are subjected to discriminatory pricing structures. This secrecy poses significant challenges for regulators and competition watchdogs aiming to safeguard consumer rights and ensure fair market practices.
One particularly troubling dimension revealed by the study is the role of market dominance in amplifying abuses. In highly competitive markets, consumers theoretically can switch to cheaper alternatives if they believe a particular vendor is overcharging them. However, when a firm holds a dominant position, algorithmic pricing can transform into an exploitative tool reinforcing power imbalances. The study argues that under EU and UK competition law, such undisclosed, unjustified personal pricing could be considered an abuse of dominance, potentially warranting regulatory intervention.
Although the focus of this research is on EU legislation, its findings resonate strongly in the UK and likely beyond, given the global reach of AI technologies in commerce. The UK government has begun exploring whether competition authorities like the Competition and Markets Authority should be endowed with stronger investigative powers to oversee algorithms operating at the intersection of competition and consumer protection. As AI pricing strategies evolve and their deployment broadens, these debates will intensify.
Technically, AI-enabled price discrimination relies on sophisticated algorithmic models incorporating elasticity of demand estimates, consumer segmentation, and even psychological profiling. By utilizing machine learning techniques such as reinforcement learning and neural networks, these systems dynamically react to market trends and individual consumer signals to fine-tune prices in milliseconds. This technical sophistication both enhances precision and poses unique challenges for transparency and accountability because the underlying models are often proprietary and difficult to audit.
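The role of demand elasticity can be made concrete with a standard textbook result (this is the classical inverse-elasticity markup rule, not a formula from the paper): for constant-elasticity demand q = A·p^(−e) and unit cost c, the profit-maximizing price is p* = c·e/(e − 1). Segmenting consumers by estimated elasticity therefore yields different "optimal" prices for the identical product; the segment labels and numbers below are illustrative only.

```python
# Illustrative sketch: the classical monopoly markup rule for
# constant-elasticity demand. Less elastic (more "captive") segments
# receive sharply higher profit-maximizing prices for the same good.

def optimal_price(cost: float, elasticity: float) -> float:
    """p* = c * e / (e - 1); requires elastic demand (e > 1) for a finite optimum."""
    if elasticity <= 1:
        raise ValueError("demand must be elastic (e > 1)")
    return cost * elasticity / (elasticity - 1)

cost = 10.0  # illustrative unit cost
segments = {"price-sensitive": 4.0, "typical": 2.5, "captive": 1.5}
for name, e in segments.items():
    print(f"{name} (e={e}): {optimal_price(cost, e):.2f}")
```

Even in this crude sketch, the "captive" segment pays more than double the "price-sensitive" one, which is why the study ties personalized pricing to exploitation risk when consumers cannot easily switch.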
The paper by Marinova and Bergqvist serves as an urgent call to action for regulators, academics, and policymakers alike. The legal frameworks currently addressing abuse of dominance possess the foundational tools necessary to tackle such AI-driven practices, yet they have not fully evolved to address the opacity and complexity introduced by these technologies. A shift from theoretical legal deliberation to practical enforcement strategies is imperative as algorithmic pricing becomes a standard feature of digital marketplaces.
In addition to legal scrutiny, the research implicitly points to the need for enhanced technical auditability of AI pricing systems. Transparent algorithmic governance would require firms to disclose, at least in summary form, the methods behind their price-setting mechanisms and provide affected consumers with understandable explanations of their price offers. Without such measures, the credibility of fair market competition risks rapid deterioration.
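One hypothetical form such auditability could take (not a mechanism proposed in the paper) is an outcome-based check: compare the quotes a firm issued to different consumer groups for the same product and flag gaps above a policy threshold. The group labels, sample quotes, and 10% threshold below are all illustrative assumptions.

```python
# Hypothetical audit sketch: flag large gaps between group-average
# quotes for the same product. Groups, figures, and the threshold
# are illustrative, not drawn from the study.
from statistics import mean

def price_disparity(quotes_by_group: dict[str, list[float]]) -> float:
    """Relative gap between the highest and lowest group-average quote."""
    averages = {g: mean(q) for g, q in quotes_by_group.items()}
    lo, hi = min(averages.values()), max(averages.values())
    return (hi - lo) / lo

quotes = {
    "group_a": [19.99, 20.49, 19.79],
    "group_b": [24.99, 25.49, 24.79],
}
gap = price_disparity(quotes)
print(f"average-price gap: {gap:.1%}")
if gap > 0.10:  # illustrative policy threshold
    print("disparity exceeds threshold; firm should explain the gap")
```

Outcome checks like this do not require access to a proprietary model, which matters given that, as the preceding paragraph notes, the underlying systems are often impossible to audit directly.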
Consumer advocacy groups may also play a pivotal role in demanding clearer regulations and protections. As prices increasingly become personalized and hidden, public awareness campaigns are crucial for educating consumers about AI-driven pricing, equipping them to make informed choices and advocate for fairness. The interplay between consumer empowerment, regulatory oversight, and corporate responsibility will shape how personalized pricing develops in coming years.
This transformative juncture in market dynamics raises fundamental questions about the societal values we want to preserve in the age of AI. Do convenience and efficiency justify the potential for exploitation? What boundaries should be set to protect vulnerable consumers from covert price manipulation? As AI weaves itself deeper into economic transactions, these questions must be confronted head-on to ensure that technological progress does not come at the expense of equity and trust.
Ultimately, Marinova and Bergqvist’s research underlines that AI-enabled price discrimination is not merely a futuristic possibility but an imminent reality demanding immediate and thoughtful discourse. The balance between innovation, profitability, and fairness hinges on adopting clear, enforceable rules that maintain market integrity while harnessing AI’s benefits. Regulators must prompt a transparent dialogue and deploy effective measures before personalized pricing becomes an unchecked force reshaping everyday commerce.
Subject of Research: AI-enabled price discrimination and competition law
Article Title: AI-enabled price discrimination as an exploitative abuse of dominance under EU competition law
News Publication Date: 24-Mar-2026
Web References: Journal of Competition Law & Economics, DOI: 10.1093/joclec/nhag006
References: Marinova, M. and Bergqvist, C. (2026), AI-enabled price discrimination as an exploitative abuse of dominance under EU competition law, Journal of Competition Law & Economics
Keywords: artificial intelligence, algorithmic pricing, personalized pricing, price discrimination, competition law, abuse of dominance, consumer fairness, machine learning, market transparency, regulatory challenges

