Time and again, leading scientists, technologists and philosophers have made spectacularly poor predictions about the direction of innovation. Just ten years before Enrico Fermi's team completed construction of the first fission reactor in Chicago, Albert Einstein declared, "There is not the slightest indication that nuclear energy will ever be obtainable." Soon after, the consensus turned to fears of imminent nuclear annihilation.

Similarly, today some experts warn that an artificial general intelligence (AGI) doomsday is imminent, while others respond that large language models (LLMs) have already reached the peak of their powers. Given that our leading scientists and technologists are usually so wrong about technological developments, it is difficult to argue with David Collingridge's influential thesis that attempting to predict the risks posed by new technologies is a fool's errand. And if the experts fare so poorly, what chance do our policymakers have of effectively regulating emerging technology?

Risks from artificial intelligence (AI)

We should heed Collingridge's warning that technology evolves in uncertain ways. However, there is one class of AI risk that is generally knowable in advance: the risks stemming from misalignment between a company's economic incentives to profit from its proprietary AI model in a particular way and society's interests in how the AI model should be monetized and deployed.

The surest way to ignore such misalignment is to focus exclusively on technical questions about AI model capabilities, divorced from the socio-economic environment in which these models will operate and be designed for profit.

Focusing on the economic risks posed by AI is not just about preventing "monopoly", "self-preferencing" or "Big Tech dominance". It is about ensuring that the economic environment facilitating innovation is not incentivizing hard-to-predict technological risks as companies "move fast and break things" in a race for profit or market dominance.

Degrading quality for more profit

It is instructive to consider how the algorithmic technologies that underpin the older aggregator platforms (such as Amazon, Google and Facebook, among others), initially deployed to benefit users, were eventually reprogrammed to increase profits for the platform.

The problems fostered by social media, search and recommendation algorithms were never an engineering issue, but one of financial incentives (profit growth) not aligning with the safe, effective and equitable deployment of those algorithms. As the saying goes: history doesn't necessarily repeat itself but it does rhyme.

To understand how platforms allocate value to themselves and what we can do about it, we investigated the role of algorithms, and the unique informational set-up of digital markets, in extracting so-called economic rents from users and producers on platforms. In economic theory, rents are "super-normal profits" (profits that exceed what would be obtained in a competitive market) and reflect control over some scarce resource.

Importantly, rents are a pure return to ownership or some degree of monopoly power, rather than a return earned from producing something in a competitive market (such as many producers making and selling cars). For digital platforms, extracting digital rents usually entails degrading the quality of information shown to the user, on the basis of "owning" access to a mass of customers.
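The rent definition above can be made concrete with a little arithmetic. Here is a minimal sketch, using invented figures purely for illustration, of rent as the profit earned above a competitive-market benchmark:

```python
def economic_rent(actual_profit: float, competitive_profit: float) -> float:
    """Rent = "super-normal profit": profit in excess of what a
    competitive market would allow. Negative excess means no rent."""
    return max(actual_profit - competitive_profit, 0.0)

# A hypothetical firm earning 50m where competition would cap profit at 20m:
# the 30m excess is the rent attributable to control of a scarce resource.
rent = economic_rent(actual_profit=50.0, competitive_profit=20.0)
print(rent)  # 30.0
```

A firm earning only the competitive benchmark, by contrast, extracts no rent at all.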

For example, Amazon's millions of users rely on its product search algorithms to show them the best products available for sale, since they are unable to inspect each product individually. These algorithms save everyone time and money: by helping users navigate through thousands of products to find the ones with the highest quality and lowest price, and by expanding market access for suppliers through Amazon's delivery infrastructure and immense customer network. These platforms made markets more efficient and delivered enormous value both to users and to product suppliers. But over time, a misalignment between the initial promise of providing user value and the need to expand profit margins as growth slows has driven bad platform behaviour. Amazon's advertising business is a case in point.

Amazon's advertising

In our research on Amazon, we found that users still click on the product results at the top of the page even when they are no longer the best results but instead paid advertising placements. Amazon exploits the habitual trust users place in its algorithms and instead allocates user attention and clicks to inferior-quality, sponsored information from which it profits immensely.

We found that, relative to Amazon's own quality, price and popularity-optimizing algorithms, the most-clicked sponsored products (ads) were on average 17 percent more expensive and ranked 33 percent lower. And because product suppliers must now pay for the product ranking they previously earned through product quality and reputation, their profits decline as Amazon's grow, and prices rise as some of the cost is passed on to customers.

Amazon is the most striking example of a company that has moved away from its original "virtuous" mission ("to become the most customer-centric company on Earth") towards an extractive business model. But it is far from alone.

Google, Meta and virtually all other major online aggregators have, over time, come to prioritize their economic interests over their original promises to their users and to their suppliers of content, products and applications. Science fiction author and activist Cory Doctorow calls this the "enshittification" of Big Tech platforms.

But not all rents are bad. According to the economist Joseph Schumpeter, the rents a firm receives from innovating can be beneficial for society. Big Tech's platforms advanced on the back of highly innovative, transformative algorithmic breakthroughs. The current market leaders in AI are doing the same.

So while Schumpeterian rents are real and justified, over time, and under outside financial pressure, the market leaders began to use their increasing algorithmic market power to capture a greater share of the value created by the ecosystem of advertisers, suppliers and users in order to maintain profit growth.

User preferences were downgraded in algorithmic importance in favour of more profitable content. For social media platforms, this meant addictive content that increased time spent on the platform, at whatever cost to user health. Meanwhile, the ultimate suppliers of value to their platforms – the content creators, website owners and merchants – have had to hand over more of their returns to the platform owner. In the process, profits and profit margins have become concentrated in the hands of a few platforms, making it harder for outside companies to innovate.

A platform that compels its ecosystem of companies to pay ever-increasing fees (with nothing of proportionate value in return on either side of the platform) cannot be justified. It is a red light that the platform holds a degree of market power that it is using to extract unearned rents. Amazon's most recent quarterly disclosures (Q4 2023), for example, show year-on-year growth in online sales of 9 percent, but growth in fees of 20 percent (third-party seller services) and 27 percent (advertising sales).
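The arithmetic behind that red light is simple: when fee revenue grows faster than the sales it is levied on, the platform's effective take-rate on its ecosystem is rising. A hedged sketch, echoing the percentages in the disclosure cited above but with invented absolute figures:

```python
def yoy_growth(previous: float, current: float) -> float:
    """Year-on-year growth as a percentage."""
    return (current - previous) / previous * 100.0

# Invented base-year revenues of 100 each, grown by the disclosed rates.
online_sales = yoy_growth(previous=100.0, current=109.0)  # ~9 percent
seller_fees = yoy_growth(previous=100.0, current=120.0)   # ~20 percent
ad_sales = yoy_growth(previous=100.0, current=127.0)      # ~27 percent

# Fees outpacing the underlying sales implies a rising effective take-rate.
print(online_sales, seller_fees, ad_sales)
```

The gap between the first number and the other two, not any single growth rate in isolation, is what signals rent extraction.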

In the context of risk and innovation, it is important to remember that this rent-extracting deployment of algorithmic technologies by Big Tech is not an unknowable risk of the kind identified by Collingridge. It is a predictable economic risk. The story of making profits through the exploitation of scarce resources under one's control is as old as commerce itself.

Technical safeguards on algorithms, as well as more detailed disclosure about how platforms were monetizing their algorithms, may have prevented such behaviour from occurring. Algorithms have become market gatekeepers and price allocators, and are now becoming producers and arbiters of knowledge.

Risks posed by the next generation of AI

The limits we place on algorithms and AI models will be instrumental in directing economic activity and human attention towards productive ends. But how much greater are the risks for the next generation of AI systems? They will shape not only the information we are shown, but also how we think and express ourselves. Centralizing the power of AI in the hands of a few profit-driven entities, which may face future economic incentives for bad behaviour, is surely a bad idea.

Thankfully, society is not helpless in shaping the economic risks that invariably arise after each new innovation. Risks posed by the economic environment in which innovation occurs are not immutable. Market structure is shaped by regulators and by a platform's algorithmic institutions (particularly its algorithms that make market-like allocations). Together, these factors influence how strong the network effects and economies of scale are in a market, including the rewards to market dominance.

Technological mandates such as interoperability, which refers to the ability of different digital systems to work together seamlessly, or "side-loading", the practice of installing apps from sources other than a platform's official store, have shaped the fluidity of user mobility within and between markets and, in turn, durably altered how far any dominant entity can exploit its own ecosystem of users. The internet protocols helped keep the internet open rather than closed. Open-source software enabled it to escape from under the thumb of the dominant monopolies of the PC era.

What role might interoperability and open source play in keeping the AI industry a more competitive and inclusive market?

Disclosure is another powerful market-shaping tool. Disclosures can require technology companies to provide transparent information and explanations about their products and monetization strategies. Mandatory disclosure of ad load and other operating metrics might, for example, have helped to prevent Facebook from exploiting its users' privacy in order to maximize advertising dollars from each user's data. But the lack of data portability, and the inability to independently audit Facebook's algorithms, meant that Facebook continued to benefit from its surveillance system for longer than it should have.

Today, OpenAI and other leading AI model providers refuse to disclose their training data sets, while questions arise about copyright infringement and who should have the right to profit from AI-aided creative works. Disclosures and open technological standards are key steps towards ensuring that the benefits from these emerging AI platforms are shared as widely as possible.

Market structure, and its impact on "who gets what and why", evolves alongside the technological basis on which companies are allowed to compete in a market. So perhaps it is time to turn our regulatory gaze away from attempting to predict the specific risks that might arise as specific technologies develop. After all, even Einstein couldn't do that.

Instead, we should try to recalibrate the economic incentives underpinning today's innovations, away from risky uses of AI technology and towards open, accountable AI algorithms that support and disseminate value equitably. The sooner we acknowledge that technological risks are frequently an outgrowth of misaligned economic incentives, the sooner we can work to avoid repeating the mistakes of the past.

We are not opposed to platforms providing advertising services. An appropriate amount of advertising space can genuinely help lesser-known businesses or products with competitive offerings gain traction in a fair manner. But when advertising almost completely displaces top-ranked organic product results, advertising becomes a rent-extraction tool for the platform.