What Is Algorithmic Pricing?
In basic terms, algorithmic pricing is the practice of using a specially designed mathematical model to determine optimal prices. This model might only consider your own internal inputs, resources, and constraints, or it might include external factors such as competitor prices, market conditions, or other public data.
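As a minimal, hypothetical sketch of the internal-inputs-only case, the toy model below picks the margin-maximizing price under an assumed linear demand curve, subject to a cost floor and a competitor-price cap. All names, parameters, and numbers are illustrative and not drawn from any real pricing product.

```python
# Toy internal pricing model (illustrative only): choose the price that
# maximizes expected margin given an assumed linear demand estimate,
# never pricing below cost and never above the competitor's price.

def optimal_price(unit_cost, demand_intercept, demand_slope, competitor_price):
    """Grid-search candidate prices and return the margin-maximizing one."""
    best_price, best_margin = unit_cost, 0.0
    price = unit_cost
    while price <= competitor_price:
        quantity = max(0.0, demand_intercept - demand_slope * price)
        margin = (price - unit_cost) * quantity
        if margin > best_margin:
            best_price, best_margin = price, margin
        price += 0.01
    return round(best_price, 2)

# With cost $10, demand Q = 100 - 2p, and a rival at $40, the margin
# (p - 10)(100 - 2p) peaks at p = 30.
print(optimal_price(unit_cost=10.0, demand_intercept=100.0,
                    demand_slope=2.0, competitor_price=40.0))
```

A production model would, of course, estimate the demand curve from data and handle many more constraints; the point here is only the shape of the optimization.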
Historically, these models were built largely in-house by the finance or pricing teams of large companies. Major tech firms such as Amazon employ PhD-level economists embedded in product divisions to build dynamic pricing models.
As these models have become more common, companies have emerged offering them as a product to firms without the internal capacity to build advanced models.
Today, pricing decisions in industries ranging from e-commerce and travel to ride-sharing and hospitality are increasingly automated, data-driven, and continuously adjusted in real time. The core objective remains consistent: maximize revenue, margin, or market share by responding quickly to changes in demand, competition, and cost structures. But as algorithms become more adaptive and autonomous, learning from historical data and user behavior, they introduce new layers of complexity, making it harder to pinpoint the rationale behind a given price point.
Understanding how these algorithms are designed, what data they rely on, and how they interact with one another in the marketplace is now essential for business decision-makers and competition regulators alike.
The Antitrust Concerns Surrounding Algorithmic Pricing
Most companies, even those far smaller than Amazon, have implemented some version of a pricing model, if only a basic one (it can be as straightforward as an Excel spreadsheet). But the rise of Software as a Service (SaaS) platforms and Application Programming Interfaces (APIs) has democratized access to advanced modeling technology.
However, these models are often opaque, with limited visibility into how they determine prices and little or no incentive for providers to offer transparency to anyone beyond their immediate clients. This can create the opportunity for a new take on a classic antitrust issue: price-fixing or collusion.
In the classic hub-and-spoke model of price-fixing, companies (the spokes) share sensitive pricing data with a central actor (the hub), who then sets the collusive prices based on inputs from all the conspirators. In today's digital equivalent, the algorithmic pricing software becomes the hub, taking in confidential data from participating firms to generate optimized prices. By design, such software can produce price outputs that, while not explicitly coordinated, converge on the highest price the market will bear, closely resembling collusive behavior.
The concern is not only theoretical: when multiple competitors rely on the same algorithm or vendor, and the model is designed to maximize pricing efficiency, it takes very little for that system to begin aligning prices across the market in ways that reduce competition and harm consumers.
The Role of an Economic Expert
Economists can determine whether collusion is occurring in a market without any insight into the pricing model itself; establishing that, under normal competitive conditions, prices would be more dispersed is relatively straightforward in theory. However, as recent court decisions have highlighted, such as U.S. v. Topkins (2015) and Duffy v. Yardi Systems (2023-24), clearly explaining how the modeling software may have served as the hub, or central coordinating mechanism, is critical in persuading courts that an algorithmic pricing platform facilitated collusion. In these cases, pricing algorithms were found to synchronize competitor pricing, prompting greater scrutiny from regulators and the judiciary.
While disciplines such as computer science can describe how these programs work technically, economists with hands-on experience in building or auditing econometric models for financial/pricing optimization are uniquely positioned to interpret why certain modeling decisions were made and what those choices mean for price outcomes.
Understanding both the market dynamics in which the models operate and the modeling architecture itself makes data scientists with a background in economics particularly well suited to translate technical mechanisms into economic implications, an essential skill when communicating with non-technical stakeholders, regulators, or judges.
In cases hinging on proving that the pricing model in use by the defendants facilitated collusion, these skills are critical. The dual expertise plays a vital role in bridging the gap between statistical output and economic interpretation.
Unlike traditional academic experts, those with both consulting and technical experience offer a crucial edge: they can link model mechanics to real-world business incentives, highlighting how algorithmic rules might suppress competition. They're also equipped to run counterfactual simulations, testing how prices would shift without the model or under different input assumptions. This rare combination of modeling acumen, strategic insight, and courtroom-ready communication makes their expertise especially valuable in algorithmic antitrust cases.
Analytical Tools and Methodologies Used by Experts

In antitrust cases, economists typically use defendant data to estimate the but-for price (the price that would have prevailed absent the alleged conspiracy) in order to assess the damages incurred. In a case where a dynamic pricing model or other pricing optimization algorithm was used, an economist will still opine on prices in the but-for world and the consequent damages. However, there is also the matter of proving that the pricing model itself is optimizing collusive pricing, and the methods used to show this may differ from the but-for models used for damages.
For example, a standard damages model might incorporate cost inputs, macroeconomic variables, and other market drivers to explain pricing patterns over time. The model would include an indicator for the period of the conspiracy designed to pick up the effects of that conspiracy. There may also be a competitive benchmark market that can be used as a comparator in the model to understand what prices would have been but for the conspiracy. These models yield estimates of how much the conspiracy inflated prices over time and, thus, the total conspiracy overcharges.
An expert explaining the function of a pricing algorithm, on the other hand, will likely break down the code behind the model to understand how it operates, what parameters it is optimizing, what variables it uses, and other elements of how it is designed to function. The expert's task thus shifts from market behavior to model behavior, which involves reverse-engineering the algorithm. The expert may then run simulations to demonstrate that, absent confidential elements of the other defendants' pricing strategies, the model would not have arrived at the prices that were selected and would instead have produced materially different pricing outputs.
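In miniature, such a counterfactual simulation might look like the sketch below: run a stylized recommendation rule with and without access to rivals' confidential prices and compare the outputs. The rule, function names, and numbers are invented for illustration and do not describe any actual product.

```python
# Stylized counterfactual: does the recommendation change when the model
# loses access to rivals' confidential prices? (All values hypothetical.)

def recommend_price(own_cost, own_demand_price, rival_prices=None):
    """Hypothetical rule: with rival data, anchor to the rivals' average
    and never undercut the pool; without it, price off own data alone."""
    baseline = max(own_cost * 1.2, own_demand_price)   # own-data price
    if rival_prices:
        anchor = sum(rival_prices) / len(rival_prices)
        return max(baseline, anchor)
    return baseline

with_pool    = recommend_price(100.0, 130.0, rival_prices=[150.0, 160.0, 155.0])
without_pool = recommend_price(100.0, 130.0)
print(with_pool, without_pool)   # 155.0 130.0
```

A gap of this kind between the two runs is the sort of evidence an expert would use to argue that the confidential inputs, not independent optimization, drove the observed prices.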
The expert may also scrutinize the training dataset to determine whether confidential, competitively sensitive data was embedded in the algorithm's decision-making, enabling it to internalize rivals' strategies, and to show that absent this data, the model would recommend different prices. The presence of such confidential information helps support a finding that the model "learned" to collude.
Beyond traditional econometric analysis and code-level inspection, experts may also deploy advanced computational methods to assess the likelihood and mechanics of algorithmic coordination. Tools such as agent-based modeling can simulate how autonomous pricing agents interact under different rule sets, helping illustrate whether algorithmic coordination could emerge even without explicit agreements (Calvano et al., 2020). Network analysis can be used to detect shared platforms, consultants, or data vendors acting as coordination hubs, especially when multiple firms rely on the same pricing engine or API provider (Ezrachi & Stucke, 2016). Sensitivity testing on assumptions related to shared vendors or algorithmic parameters can demonstrate how subtle inputs influence convergence toward collusive outcomes. These methods allow experts to go beyond traditional econometrics and articulate algorithmic effects in dynamic, system-level terms.
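In the spirit of the agent-based work cited above, a toy simulation can show how two autonomous agents following simple reactive rules, with no agreement between them, drift toward the monopoly price. The update rule here is deliberately simplistic and purely illustrative; the Q-learning agents studied in the literature are far more sophisticated.

```python
# Toy agent-based simulation: two pricing agents follow a
# "raise if matched, match if undercut" rule. Without any explicit
# agreement, prices ratchet upward toward the monopoly cap.

MONOPOLY_PRICE = 10.0   # hypothetical cap the market will bear

def step(own, rival):
    """One pricing update for an agent observing its rival's last price."""
    if rival >= own:                        # rival matched or exceeded us: try a small raise
        return min(own + 0.5, MONOPOLY_PRICE)
    return rival                            # we were undercut: match the rival

p1, p2 = 4.0, 4.0                           # start near a competitive level
for _ in range(50):
    p1, p2 = step(p1, p2), step(p2, p1)     # simultaneous updates
print(p1, p2)   # both converge to 10.0
```

The instructive point is that neither agent was told to collude; the supra-competitive outcome emerges from the reaction rules alone, which is exactly the dynamic the agent-based literature examines.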
Notable Antitrust Cases Involving Algorithmic Pricing
A series of cases have been filed alleging that companies used shared pricing software to collude and set prices anti-competitively in a given industry. The most prominent of these is the RealPage litigation (In re RealPage, Inc. Rental Software Antitrust Litigation, 2023), where plaintiffs allege that landlords in Washington, DC and other U.S. cities collusively set rents using RealPage's centralized pricing algorithm. Similar allegations were made in the Las Vegas Hotels case (Richard Gibson and Heriberto Valiente v. MGM Resorts International et al., 2023), involving hotel room prices in Las Vegas, and the ongoing European Commission inquiry into airline ticket distribution platforms (EC Investigation into Airline Ticket Distribution Services), where pricing algorithms are suspected of harmonizing prices across competitors.
In all of these cases, the pricing software is alleged to serve as the hub in the pricing conspiracy, using confidential information from all defendants to optimize prices for maximum profit across the conspirators. This model of coordination does not rely on direct communication among firms but instead exploits a centralized algorithm that enables parallel conduct through shared incentives and confidential data inputs.
While the use of pricing software as the hub in price-fixing cases is relatively new, at least one past case can be considered a precursor to this new wave of antitrust litigation: the Airline Tariff Publishing Co. (ATPCO) case in the 1990s. Airlines were accused of using a shared electronic fare-publishing system to post future ticket prices, to which competitors would immediately react by adjusting their own fares and posting them to the same system. Through this continuous back-and-forth, collusive prices were established. The case was settled by consent decree and thus provides no case-law precedent, but it remains of interest, if only to show that using technology to collude is not new: wherever there is opportunity and incentive, companies will take advantage of it.
A recurring challenge in such cases is proving the intent of the algorithm. Unlike human conspirators, software does not explicitly "agree" to collude. Plaintiffs must therefore show that the algorithm's structure, design objectives, and data access were knowingly configured to produce anticompetitive outcomes. This includes demonstrating that absent confidential competitor data or without centralized optimization objectives, the same prices would not have emerged.
Courts have been cautious and often skeptical of such arguments, largely due to the technical complexity involved and the lack of precedent. Judges are not always easily convinced that algorithmic similarity or parallel pricing necessarily equates to unlawful coordination, especially in markets where prices are naturally volatile or highly responsive to shared market conditions. Establishing a causal connection between the algorithm's design and collusive outcomes remains a key hurdle for plaintiffs and a central battleground in upcoming litigation.
Key Considerations for a Case Involving Algorithmic Pricing
In cases involving algorithmic pricing, antitrust economists are typically retained to assess liability, quantify harm, and calculate damages, roles that remain essential. However, it is equally critical to retain an additional expert with deep technical knowledge who can break down the models used in the pricing software and explain not just how the code works but what the likely economic or business reasons are behind the various decision points in the code.
Crucially, this expert should be able to demonstrate how the model would perform under a variety of data conditions, using simulations and synthetic datasets to show that the prices set could not have been achieved without collusion.
In any case where new or emerging technologies, such as AI-based pricing engines or real-time data pooling, are central to the allegations, it is vital to engage experts capable of explaining that technology clearly and thoroughly to jurors and judges with limited technical fluency.
While traditional economists remain indispensable, an economist-programmer hybrid (someone who builds, tests, or audits such models) can provide invaluable insights. They strengthen the narrative by showing how the tools used may reinforce or camouflage collusive outcomes, especially in markets characterized by high-frequency pricing and shared platforms.

Takeaways
Algorithmic pricing creates both opportunities and risks. Used competitively, such tools can enhance efficiency, enable dynamic responsiveness to demand, and support better consumer targeting. But the same pricing software can serve as a central hub for collusion, wherein potential conspirators feed in confidential information and the algorithm optimizes to find the highest collusive price the market will bear. In such antitrust cases, economists with experience in pricing models, machine learning, and AI can be valuable experts. Their role extends beyond traditional damages analysis: they must also decode the design logic, demonstrate economic outcomes through simulations, and explain how algorithmic behavior could mimic or facilitate coordinated effects.
As regulators and courts adapt to the complexities of algorithmic coordination, early engagement of multidisciplinary experts who combine economics, computer science, and market design can prove decisive.