Strong standards are vital to the growth of the digital economy. But strong standards can also lead to a world with no choice and all-powerful companies or governments. What is the best way to develop standards? How do we balance the tensions in strong standards? And where does the power in the process sit?
This short issues paper aims to stimulate discussion on IT standards and inform the decisions of four different groups around standards – consumers, businesses, technology companies and governments.
Without IT standards, software, hardware and networks wouldn’t easily link together. We would see higher prices, with fragmented markets, fewer economies of scale and more transaction costs. Indeed, the internet, and all the innovation around it, would not exist as we know it.
However, strong standards bring their own challenges. They can become entrenched and freeze innovation. They also confer substantial power and can lead to dominant businesses and reduced choice.
So, standards which effectively balance this tension are crucial to the success of the digital economy, but developing and adopting standards is a complex process, with many competing interests and practical challenges.
Panel one: What are standards?
Standards typically have one of four aims:
There are two different types of standard:
We see many examples of IT standards. Many standards are highly technical and enable interoperability between software, hardware and networks. There are also many quality standards related to the development and management of IT. They are rarely de jure, though. Instead, we see many ‘standards wars’ as products and services vie to become de facto standards in the marketplace.
In many cases, standards are incorporated into the products and services that we buy, so although they are vital to making things work, we don’t notice them.
However, consumers can have a powerful influence on the development of market standards in technology. When we decide which smartphone to buy, or whether to join a social network, we are taking sides in a war to become the leading standard in a particular area. So, ‘people power’ can decide the winners and losers in a standards war.
One of the reasons for this powerful influence is the presence of economic network effects in many technology markets and the way that they drive the adoption of de facto standards.
Panel two: Network effects and internet business models
Network effects mean that a product or service has greater value as it has more users, compatible features or complementary products. The telephone is a classic example of network effects – having a telephone in isolation has little value. But if lots of people have one, it becomes far more valuable, encouraging even more people to join the network.
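The intuition behind this is often illustrated with Metcalfe's law, which values a network in proportion to the number of possible connections between its users. A minimal sketch follows; the quadratic growth is an illustrative model of how value scales, not a precise valuation:

```python
def connections(users: int) -> int:
    """Number of possible pairwise links in a network of `users` people --
    the basis of Metcalfe's law, under which value grows roughly as n*(n-1)/2."""
    return users * (users - 1) // 2

# One telephone is worthless in isolation; each new user adds value for everyone.
for n in (1, 2, 10, 100):
    print(n, connections(n))
```

The key point is that value grows much faster than the user count itself: doubling the users roughly quadruples the number of possible connections, which is why growth can suddenly accelerate once a network reaches critical mass.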
Network effects have been central to the success of many of the internet business models we see today.
Social media websites, for example, are heavily driven by direct network effects (the number of users). For most users of a social network, the value of joining the network is likely to be limited when there are few other users on it. But once friends and others start joining, users can build up their contact lists and will get greater value from the network.
Secondary network effects (complementary products or compatible features) are also influential. An important part of the battle between smartphone platforms, for example, is the size and quality of the associated application stores.
Therefore, strong networks provide substantial value to users and they will influence choices to adopt technology and create de facto standards.
However, network effects are unpredictable, as networks can gain sudden momentum and experience very fast growth in users. This can often lead to a ‘winner takes all’ situation, as most users want to be on dominant networks, and smaller networks will often struggle to survive.
Furthermore, once a network of products or services becomes a dominant standard, it can freeze the status quo and discourage further innovation. This can lock in poor quality products and services, as shifting the entire network to a new platform may be too expensive to justify the effort.
Panel three: The QWERTY keyboard
The standard QWERTY keyboard was developed for the typewriters of the late 19th century. The keys of these machines often stuck, especially where they were close together. The QWERTY keyboard kept commonly used keys apart from each other and minimised keys sticking. As a result, it gained dominance and all typists learned to type that way.
As typewriters improved, potentially more efficient keyboard configurations were developed. Few people switched from the QWERTY keyboard, though, and it has been argued by academics such as Paul David that this is a classic example of technology lock-in.
If everyone uses the same configuration, people only need to build up knowledge of one keyboard. Training and other supporting material is only required for that keyboard. Therefore, the cost of everyone moving to a new configuration would be prohibitively high. Users would also not want to move in isolation, for fear of being stranded on a new platform with limited support and skills. As a result, the keyboard is ‘locked in’, despite being inferior.
Others see QWERTY lock-in as a bit of an urban myth. Stan Liebowitz and Stephen Margolis, for example, argue that the benefits of a different keyboard configuration were simply not substantial enough and the network would have shifted had the benefits of a better configuration been greater.
Regardless of the specific case, though, it’s clear that the economics of the network as a whole, and its development path, are highly influential on how a network develops in the future.
So, when do consumers influence the outcome of standards wars? How do we maximise the benefits of strong networks through the development of good standards? And how do we avoid locking in poor standards?
Panel four: XBRL
The data standard for financial reporting, XBRL, was first developed in the early 2000s, with many business benefits suggested. It was argued that standardising financial data would enable easier comparison and analysis, for example, and result in a wide variety of efficiencies in financial reporting processes. It would also lead to greater transparency and more integrated reporting.
However, even back in 2004, ICAEW cautioned about the likely widespread voluntary adoption of XBRL, observing in its Digital Reporting report, 'there is no "invisible hand" at work here…There is no inevitability about this at all, except in so far as it may be determined by the very visible pressures such as those exerted by regulators.'
This has been reflected in subsequent developments. Most businesses have not been sufficiently persuaded by the business case for the broad adoption of XBRL.
In contrast, regulators have seen extensive benefits from XBRL. Greater automation of processes has increased efficiency. Furthermore, they have been able to undertake far more sophisticated analysis of the filings, given the consistency of the data. Therefore, adoption in practice has been driven almost exclusively from a regulatory viewpoint.
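The analytical benefit of consistent tagging can be sketched as follows. The tag names and figures here are hypothetical, but the point is general: once every filer uses the same tags, comparison across filings becomes a simple lookup rather than a manual exercise:

```python
# Hypothetical filings using a shared, XBRL-style tag vocabulary.
# Company names, tags and figures are illustrative only.
filings = {
    "Company A": {"Revenue": 120_000, "ProfitBeforeTax": 15_000},
    "Company B": {"Revenue": 340_000, "ProfitBeforeTax": 51_000},
}

# Because every filer uses identical tags, a regulator can compute
# a ratio across all filings automatically, with no manual re-keying.
margins = {
    name: data["ProfitBeforeTax"] / data["Revenue"]
    for name, data in filings.items()
}
print(margins)  # {'Company A': 0.125, 'Company B': 0.15}
```

Without a common standard, each of these figures would sit in differently structured documents and would need to be located and re-keyed by hand before any comparison could begin.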
There may also be institutional factors which shape decisions to adopt standards. Smaller organisations, for example, are less likely to see the voluntary adoption of standards as a particular priority. Instead, they will often be locked into the decisions of larger companies, and forced to adopt standards based on supply chain pressures.
Panel five: The UK government’s organisational standard for cyber security
In 2013, the UK government launched an initiative to support a single organisational standard for cyber security.
The market was blighted by a number of different standards, which was confusing for business. By endorsing a single standard, the government aimed to create greater clarity and encourage adoption, which would improve security standards and increase confidence in the digital economy.
Rather than using regulation, though, the government is using market forces to drive implementation, for example: using contracts to push the standard down the supply chain; encouraging the insurance market to offer reduced premiums where the standard is adopted; and establishing a ‘badge’ scheme to demonstrate compliance and provide a market differentiator.
SMEs are a particular target of the government strategy, as raising everyone to a basic level of security would have significant benefits.
However, this is challenging in practice. In the absence of compelling regulation or supply chain requirements, SMEs may also struggle to define a business case to adopt a standard, or put sufficient priority on it. Consequently, the success or otherwise of the government initiative will only be seen over time.
So, when do businesses drive the success of standards through their adoption decisions? And what are the particular challenges for smaller businesses?
Early engineering societies developed technical standards on the basis of co-operation between experts.
Standardisation today, by contrast, is often a highly competitive process. Most technology companies, from start-up to global giant, will have a strategy around standards and how they use them to build commercial advantage. In many cases, this may go to the heart of the business model.
There is a trade-off, though, between competition and collaboration in standards. While the process can be highly competitive, most companies would prefer to be seen to be co-operating with others, rather than building a monopoly or dominant market position.
Furthermore, no company is big enough to work in isolation, and any standardisation strategy will build alliances as well as compete with other companies.
Panel six: Internet browser wars
In the mid-1990s, as the popularity of the internet and the World Wide Web started to take off, browser technology became the subject of a major standards war.
First to gain advantage was the small start-up company Netscape, which quickly gained 80% market share. However, Microsoft recognised the potential value of being the standard way of accessing the internet and launched its Internet Explorer browser to compete with Netscape.
While using the same technology, Microsoft had a key strategic advantage – it already dominated the PC market and got manufacturers to bundle its browser into their standard desktop package. As a result, while Microsoft had a highly competitive strategy against Netscape, it made effective use of its supplier relationships to win the war, as market dominance quickly switched to Microsoft.
This supply chain strategy proved problematic, though, as Microsoft underwent many years of competition scrutiny as to whether it had abused its position as the standard in desktop software to become the standard in browser software.
Microsoft also became a victim itself of a further standards war as new competitors emerged, such as Google’s Chrome browser, which took significant market positions.
The timing of standardisation will often influence the level of collaboration. For example, there may be common interest across an industry to collaborate early in a product development cycle to focus research and avoid wasted resources on many competing products and standards. This can then provide greater stability for the growth of markets and development of associated products.
The level of collaboration will also depend on wider strategic decisions on whether to prioritise widespread adoption over short-term revenue, a common trade-off in technology markets.
Businesses can focus on gaining a high market share of users quickly at the expense of short-term revenue. This may lead to a strategy which is more collaborative and based around open standards or open source software to encourage adoption.
Alternatively, businesses can prioritise revenue and control over their products at the expense of short-term market dominance. In these cases, businesses may place less emphasis on collaboration and take a more proprietary approach.
Panel seven: Open standards and competitive strategy
An open standard is one that is freely available for others to use. Similarly, open source software is typically developed by volunteers and made freely available to other users.
While it may be expected that technology businesses would prefer highly proprietary solutions to open ones, that is not always the case.
IBM, for example, strongly supported the open source operating system Linux, despite having its own competing and proprietary operating systems. Linux represented a better way of competing with Microsoft, which was dominating the server operating system market at that time. Therefore, working with the open source community was a better strategy than competing head on.
Likewise, Google has taken an open approach to its platform, Android, enabling others to adopt and develop it freely. This has enabled Google to gain substantial market share. While they are not making any money from licensing Android and have little control over how it is used by others, they are building up a dominant position in the smartphone market which they hope to exploit in other ways.
So, when is it best for the market for technology companies to collaborate in standards? When is it best for the market for them to compete? And how do strategies around standards best support wider competitive strategies?
In most cases, IT standards are developed as a result of market pressures and the work of technical experts. This is typically through formal standard-setting committees, such as the International Organization for Standardization (ISO), which balance representation from national standards bodies and other experts or interested parties. Increasingly, we also see ad hoc groups of companies working together to develop specific standards.
But governments sometimes have a keen interest in the outcome of standards processes and can get involved, for example:
Panel eight: Intervention in EU mobile standards
Communications technology in Europe has historically been dominated by national standards, leading to fragmented markets and difficulties in pan-European communications. Therefore, when a new generation of communications technology was being developed in the 1980s, the European authorities decided to select a standard which would operate across all of Europe.
As a result, the European Telecommunications Standards Institute (ETSI) endorsed and promoted the 2G mobile phone standard, GSM. This was supported by the European Commission, which then persuaded manufacturers and telecoms operators to develop products and services based on it.
Network effects led to GSM quickly dominating the marketplace in Europe. In contrast, the US government had taken a non-interventionist approach, so there was no leading standard and the US market remained fragmented. With GSM increasingly dominant across the world, European handset manufacturers such as Nokia and Ericsson had first mover advantage which they went on to exploit. Therefore, government intervention in this case is seen by most commentators to have been a success.
But government intervention has its risks and can lock in poor quality standards. Indeed, it is argued by some commentators that although the European approach was successful with 2G standards, the US ultimately ended up on a better technology path by allowing greater innovation.
Intervention can sometimes be wholly or partly ineffectual and waste resources. It can also slow the process and introduce new political negotiation. As a result, an industry-driven approach often dominates in practice.
Panel nine: Setting internet standards
The development of internet standards has been very different to traditional approaches to standard setting. Rather than using formal, balanced committees, the standards process has been driven far more informally by the technical community.
Oversight over many internet standards, including the core communication standard, TCP/IP, sits with the IETF (Internet Engineering Task Force). This is not democratically elected; rather it is a group of experienced technicians, divided into specialist areas. IETF meetings are open to anyone who is interested and the steering groups are akin to ‘councils of elders’. The development philosophy (‘rough consensus and running code’) is also practical and focused on implementation.
By contrast, the more formal ISO developed its seven-layer OSI model for network communications in the 1970s. This gained some momentum for adoption through the 1980s and 1990s as governments endorsed it and tried to stimulate markets for products based on the protocols. However, OSI was ultimately superseded by TCP/IP.
The practical emphasis was an important element of this success. The IETF represented those working in the field, many of whom disliked the bureaucratic, slow and political process of ISO.
However, the IETF has seen many battles over its future direction and as the internet continues to expand massively, it is under pressure from many sides.
Governments from some countries in Africa and Asia would also like to see internet standards move under the more formal control of bodies such as the International Telecommunication Union or ISO, which would ensure greater representation in the process from other countries.
So, when should governments get involved? What should they best do and refrain from doing? And when should they leave standards entirely to the market?
We welcome feedback from all interested parties on the questions raised in this issues paper.
We recognise that the answers will vary according to the context and there will be no one-size-fits-all answers. But building greater understanding around what works in different circumstances can help to improve decisions by consumers, businesses, technology companies and governments about the development and adoption of new standards.
To encourage learning and discussion of these issues, our next stage of work will build more detailed case studies of specific standards through interviews and roundtable discussions. If you are interested in participating in our research, please get in contact.