AI already searches, compares, and recommends. It is now becoming able to buy, book, and pay. Instead of browsing websites and completing purchases yourself, AI can find the best option, compare prices, and guide a buyer to a decision. Within limits set by you, it can even complete the purchase.
This new model of AI-powered buying, known as agentic commerce, signals a potentially revolutionary shift and raises challenging regulatory and legal questions. Businesses may no longer compete mainly for human attention; they may need to compete for the attention of AI agents.
Emerging AI commerce could become a platform for transatlantic cooperation, or fuel additional European concerns about “digital sovereignty.” American companies such as OpenAI are developing the models that power AI agents, while Google and Meta are building tools that let AI search for products, compare options, and move close to completing transactions.
If AI commerce takes hold, the risk is not simply that Europe falls behind. It is that decisions about what gets bought and how transactions happen are shaped by systems built elsewhere, even when the payment itself runs through European infrastructure.
But Europe brings important capabilities. It has built strong systems in areas such as instant payments and open banking (Pay-by-Bank), and European payment networks and banks are working with Mastercard and Visa to allow AI agents to operate safely within existing payment systems. European payment providers such as Nexi are also working with Google Cloud to build the infrastructure that allows AI agents to execute secure, authorized payments.
Our financial systems were not designed for AI: they assume a person is making each payment. In Europe, strong customer authentication rules require the user to actively approve transactions, in contrast to the more flexible, largely risk-based approaches common in the US.
That works when a person clicks a button. It is harder to apply when AI acts within rules the user has already set. If an AI agent is allowed to act, what counts as approval? Does the user approve each payment, or grant permission once? And if something goes wrong, who is responsible? These questions are simple to ask, but regulators have so far offered no clear answers.
Companies are not waiting. They are building solutions to make AI-driven payments safe. Mastercard is working on systems that can record what a user has authorized an AI agent to do and verify each step of a transaction. Visa is testing frameworks that allow AI agents to operate within secure payment environments.
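The core idea behind such recorded mandates can be sketched in a few lines: the user grants a scoped permission once, and every agent-initiated payment is then checked against it before it proceeds. The sketch below is purely illustrative; the field names and limits are hypothetical and do not reflect any scheme's actual specification.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Mandate:
    """A hypothetical record of what the user has pre-approved the agent to do."""
    max_per_payment: float   # e.g. the agent may spend up to 50 EUR per purchase
    allowed_categories: set  # e.g. {"groceries", "travel"}
    valid_until: date        # the mandate expires and must be renewed by the user

def authorize(mandate: Mandate, amount: float, category: str, today: date) -> bool:
    """Approve a payment only if it stays inside what the user pre-approved."""
    return (
        amount <= mandate.max_per_payment
        and category in mandate.allowed_categories
        and today <= mandate.valid_until
    )

m = Mandate(max_per_payment=50.0,
            allowed_categories={"groceries"},
            valid_until=date(2026, 12, 31))
assert authorize(m, 30.0, "groceries", date(2026, 6, 1))        # inside the mandate
assert not authorize(m, 30.0, "electronics", date(2026, 6, 1))  # outside its scope
```

The design choice this illustrates is the shift from per-transaction approval to a verifiable, bounded grant: consent is captured once, in machine-checkable form, and enforced on every step the agent takes.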
These approaches use tools such as tokenization (replacing sensitive data with a secure digital token) and verification to make payments secure and traceable, so every action can be checked. This is as much about trust as it is about technology.
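Tokenization, mentioned above, can be shown with a toy example: the sensitive card number stays in a vault held by the issuer or network, while the merchant and the AI agent only ever handle a meaningless token. This is a minimal sketch of the concept, not any provider's actual API.

```python
import secrets

class TokenVault:
    """Toy token vault: maps random tokens to the sensitive data they replace."""

    def __init__(self):
        self._vault = {}  # token -> real card number, never exposed outside

    def tokenize(self, card_number: str) -> str:
        token = "tok_" + secrets.token_hex(8)  # random, so it reveals nothing
        self._vault[token] = card_number
        return token

    def detokenize(self, token: str) -> str:
        # Only the vault holder can map the token back to the real number.
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
assert token != "4111111111111111"                    # the agent sees only the token
assert vault.detokenize(token) == "4111111111111111"  # the network can still settle
```

Because each action references a token rather than the underlying credential, every step can be logged and checked without spreading sensitive data across the systems involved.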
Although AI is becoming the layer that shapes decisions — determining what options are shown, compared, and selected — it does not yet fully control payments. Purchases still happen through existing platforms and systems. Early experiments, such as Walmart’s attempt to enable checkout inside an AI chat, show both the direction of travel and the limits of current models.
If AI begins to handle both decisions and transactions, it will become a powerful new layer in the economy. AI commerce does not remove payments; it changes when the choice is made. Today, different payment methods compete to be used at checkout, whether cards, wallets, or instant account-to-account transfers. With AI, that choice can happen earlier: the agent decides what to buy and how to pay, and the payment system then simply processes the transaction.
Over time, this could shift the advantage toward payment options that are cheapest, fastest, and easiest for AI to use, whether that is cards, pay-by-bank, or even a digital euro if it is introduced.
When a user no longer actively approves each action, a potentially dangerous gap opens between how Europe’s payment systems work today and how they may need to work in the future. If that gap is not addressed, innovation will move elsewhere, and Europe will follow rather than lead.
But if Europe works with industry, it has a chance to shape how this new model develops. That means setting clear rules on how AI can act, how consent is given, and who is responsible when something goes wrong, focusing on outcomes like trust and accountability rather than prescribing how the technology should be built.
AI is moving from answering questions to taking action. The real question is no longer whether AI can help people shop. It is who will control the systems that decide what gets bought and how, and where those systems are built.
Padraig Nolan is a Fellow with the Tech Policy Program at the Center for European Policy Analysis. He serves as Chief Operating Officer of ETPPA, a prominent EU fintech association. He is also an advisory board member of the Lisbon-based Europe Startup Nations Alliance. Padraig holds a bachelor’s degree in law and economics (University of Galway) and a master’s degree in European law (Utrecht University).
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions expressed on Bandwidth are those of the author alone and may not represent those of the institutions they represent or the Center for European Policy Analysis. CEPA maintains a strict intellectual independence policy across all its projects and publications.