Barbara van Schewick posted a really thoughtful analysis of how application-specific vs. application-agnostic discrimination directly affects innovation, looking at an actual example of a Silicon Valley startup. I think her points are right on, and I strongly support the rationale for resisting “application-specific” discrimination.

In fact, Barbara’s point is the key to the whole debate. The future of the internet requires that applications be able to be invented by anyone and made available to everyone, and information shared on the net by anyone to be accessible to anyone. That property is “under fire” today by Internet access providers, by nation states, and by others who wish to limit the Internet’s reach and capabilities. I wholeheartedly support her points and her proposal.

I think it’s important to remind ourselves of a further point that is quite obvious to those who have participated in the Internet on a technical level, from its original design to the present, so I thought I’d write a bit more, focusing on the fact that the Internet was designed in a way that makes application-specific discrimination difficult. Barbara knows this, since her work is derived from that observation, but policymakers not steeped in the Internet’s design may not. As we pursue the process of making such rules, it is important to remember that such rules are synergistic with the Internet’s own code, reinforcing the fundamental strengths of the Internet.

So I ask the question here: what do we need from the “law” when the “code” was designed to do most of the job? Well, the issue here is the evolution of the Internet’s “code” – the implementation of the original architecture. The Internet’s code will continue to make application discrimination difficult as long as a key aspect of its original design is preserved – that the transport portion of the network need not know the meaning of the bits being transported on any link. We struggled to make all our design decisions so that would remain true. Barbara has made the case that this design choice is probably the most direct contribution to the success of the Internet as a platform for innovation.

My experience with both startups and large companies deciding to invest in building on general purpose platforms reinforces her point. Open platforms really stimulate innovation when it is clear that there is no risk of the platform being used as a point where the platform vendor can create uncertainty that affects a product’s ability to reach the market. This is especially true for network communications platforms, but was also true for operating systems platforms like Microsoft DOS and Windows, and hardware platforms like the Apple II and Macintosh in their early days. In their later days, there is a tendency for the entities that control the platform’s evolution to begin to compete with the innovators who have succeeded on the platform, and also to try to limit the opportunities of new entrants to the platform.

What makes the Internet different from an operating system, however, is that the Internet is not itself a product – it is a set of agreements and protocols that create a universal “utility” that runs on top of the broadest possible set of communications transport technologies, uniting them into a global framework that provides the ability for any application, product or service that involves communications to reach anyone on the globe who can gain access to a local part of the Internet.

The Internet is not owned by anyone (though the ISOC and ICANN and others play important governance roles). Its growth is participatory – anyone can extend it and get the benefits in exchange for contributing to extending it. So controlling the Internet as a whole is incredibly hard. However, those who control certain parts of the Internet can exert control in limited ways. In particular, given that local authorities tend to restrict the right to deploy fiber, and countries tend to restrict the right to transmit or receive radio signals, the first or last mile of the Internet is often a de facto monopoly, controlled by a small number of large companies. Those companies have both the incentive and the ability to engage in certain kinds of discrimination.

However, a key part of the Internet’s design, worth repeating over and over, is that the role of the network is solely to deliver bits from one user of the Internet to another. The meaning of those bits never, under any circumstances, needs to be known to anyone other than the source or the destination for the bits to be delivered properly. In fact, it is part of the specification of the Internet that the source’s bits are to be delivered to the destination unmodified and with “best efforts” (a technical term that doesn’t matter for this post).

In the early days of the Internet design, my officemate at MIT, Steven T. Kent, who is now far better known as one of the premier experts on secure and trustworthy systems, described how the Internet could in fact be designed so that all of the bits delivered from source to destination were encrypted with keys unknown to the intermediate nodes, and we jointly proposed that this be strongly promoted for all users of the Internet. While this proposal was not accepted, because encryption was thought to be too expensive to require for every use, the design of TCP and all of the other standard protocols has carefully preserved the distinctions needed so that end-to-end encryption can be used. That forces the design not to depend in any way on the content: since encryption means that no one other than the source or destination can possibly understand the meaning of the bits, the network must be able to do a perfectly correct job without knowing it.
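
To see why encrypted payloads force application agnosticism, here is a minimal sketch (in Python, using the third-party cryptography package; everything about it is illustrative, not Kent’s original proposal): only the endpoints hold the key, so anything in the middle sees opaque bytes that it can nonetheless still forward.

    # A minimal sketch of the end-to-end encryption idea: only the two
    # endpoints hold the key, so intermediate routers see opaque bytes.
    # Requires the third-party "cryptography" package; the names here are
    # illustrative, not the original 1970s proposal.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # shared only by source and destination
    endpoint = Fernet(key)

    payload = b"any application data at all"
    ciphertext = endpoint.encrypt(payload)

    # A router on the path sees only ciphertext; it can still forward the
    # packet because delivery depends on the header, never on these bytes.
    assert endpoint.decrypt(ciphertext) == payload
    print(len(ciphertext), "opaque bytes on the wire")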

Similarly, while recommendations were made for standard “port numbers” to be associated with some common applications, the designers of the Internet recognized that they should not assign any semantic meaning to those port numbers that the network would require to do its job of delivering packets. Instead, we created a notion of labeling packets in their header for various options and handling types, including any prioritization that might be needed to do the job. This separation of functions in the design meant that the information needed for network delivery was always separate from application information.

Why did we do this? We did not do it to prevent some future operator from controlling the network, but for a far more important reason – we were certain that we could not predict what applications would be invented. So it was prudent to design the network layer to carry any kind of application, without having to change the network to provide the facilities needed (including prioritization, which would be specified by the application code running at the endpoints controlled by the users).
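
To make this concrete, here is a minimal sketch (purely illustrative Python, not real router code, with a packet fabricated for the example) of how little of a packet delivery ever depends on: the fixed IP header carries the addresses and the handling label, and everything after it is opaque.

    # A minimal sketch of the point above: everything a forwarder needs
    # sits in the fixed IPv4 header, so the payload can be anything,
    # including an application invented tomorrow.
    import socket
    import struct

    def forwarding_fields(packet: bytes):
        """Extract the only fields delivery depends on from an IPv4 header."""
        (ver_ihl, tos, _total, _ident, _frag,
         ttl, proto, _cksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", packet[:20])
        return {
            "src": socket.inet_ntoa(src),
            "dst": socket.inet_ntoa(dst),
            "dscp": tos >> 2,          # the "special handling" label
            "ttl": ttl,
        }

    # Fabricate a packet whose payload is deliberately meaningless bytes:
    header = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 25, 1, 0, 64, 17, 0,
                         socket.inet_aton("10.0.0.1"), socket.inet_aton("10.0.0.2"))
    print(forwarding_fields(header + b"\xde\xad\xbe\xef\x00"))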

So here’s a point that Barbara’s latest post, and in fact much of the policy debate at the FCC and elsewhere, tends to leave implicit: the Internet’s design requires that the network be application agnostic as a matter of “code”. More importantly, because applications don’t have to tell the network of their existence, the network can’t be application specific if it follows the Internet standards.

So why are we talking about this question at all, in the context of rules about the Open Internet at the FCC? Well, it turns out that there are technologies out there that try to guess what applications generated particular packets, usually implemented in relatively expensive add-on hardware that inspects every packet flowing through the network. Generically called “deep packet inspection” technologies and “smart firewall” technologies, they look at various properties of the packets between a source and destination, including the user data contents and port numbers, and make an inference about what the packet means. Statistically, given current popular applications, they can be pretty good at this. But they would be completely stymied by a new application they have never seen before, and also by encrypted data.
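
For illustration only, here is a minimal sketch of the kind of guessing such gear does. The rules below are invented for this post, not any vendor’s actual heuristics, but the failure mode is exactly as described: a novel protocol or an encrypted payload defeats the inference.

    # A minimal sketch of why this inference is statistical guesswork:
    # port numbers and payload "signatures" are only conventions.
    KNOWN_PORTS = {80: "web", 25: "email", 5060: "voip"}

    def guess_application(dst_port: int, payload: bytes) -> str:
        if payload.startswith(b"GET ") or payload.startswith(b"POST "):
            return "web"                      # looks like HTTP... maybe
        return KNOWN_PORTS.get(dst_port, "unknown")

    print(guess_application(80, b"GET /index.html"))    # "web" -- a lucky guess
    print(guess_application(80, b"\x8f\x02\xa1..."))    # encrypted bytes on port
                                                        # 80: still guesses "web"
    print(guess_application(49152, b"novel-protocol"))  # new app: "unknown"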

What’s most interesting about these technologies is that they are inherently unreliable, given the open design of the Internet, but they can nonetheless be attractive to someone who wants to limit applications to a small, known set. An access network that wants to charge extra for certain applications might be quite happy to block or exclude any applications whose packets its deep packet inspection or smart firewall technologies cannot understand.

The support for such an idea is growing – allowing only very narrow sets of traffic through, and blocking everything else, including, by necessity, any novel or innovative applications. The gear to scan and block packets is now cheap enough, and the returns from charging application innovators for access to customers are thought to be incredibly high by many of those operators, who want a “piece of the pie”.

So here’s the thing: on the Internet, merely requiring those who offer Internet service to implement the Internet design as it was intended – without trying to assign meaning to the data content of the packets – would automatically make that service application agnostic.

In particular: We don’t need a complex rule defining “applications” in order to implement an application agnostic Internet. We have the basis of that rule – it’s in the “code” of the Internet. What we need from the “law” is merely a rule that says a network operator is not supposed to make routing decisions, packet delivery decisions, etc. based on the contents of the packet. Only the source and destination addresses and the labels put on the packet to tell the network about special handling, priority, etc. need to be understood by the network transport, and that is how things should stay, if we believe Barbara is correct that only application-agnostic discrimination makes sense.

In other words, the rule would simply embody a statement of the “hourglass model” – that IP datagrams consist of an envelope that contains the information needed by the transport layer to deliver the packets, and that the contents of that envelope – the data itself – are private and to be delivered unchanged and unread to the destination. The other part of the hourglass model is that port numbers do not affect delivery – they merely tell the recipient which process is to receive the datagram, and have no other inherent meaning to the transport.
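
The receiving half of that division of labor is visible in ordinary socket code. A minimal sketch (the port number 9999 and the loopback address are arbitrary choices for the example): the port selects which local socket gets the datagram, while the network delivered it by address alone.

    # A minimal sketch of the hourglass: the port in a UDP datagram selects
    # which local process (socket) receives the data; the network in
    # between never needed it.
    import socket

    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 9999))   # "which process" -- local meaning only

    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"hello", ("127.0.0.1", 9999))

    data, addr = receiver.recvfrom(1024)
    print(data, "- delivered by address alone; the port chose this socket")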

Such a rule would reinforce the actual Internet “code”, because that original design is under attack by access providers who claim that discrimination against applications is important. A major claim that has been made is that “network management” and “congestion control” require application specific controls. That claim is false, but it is justified by complex hand-waving references to piracy and application-specific “hogging”. Upon examination, there is nothing specific about the applications that hog or the technologies used by pirates. Implementing policies that eliminate hogging or detect piracy doesn’t require changes to the transport layer of the Internet.
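
As a thought experiment, here is a minimal sketch of such a policy (the quota and addresses are invented for illustration): limiting heavy users needs only the header’s source address, never the payload or the application.

    # A minimal sketch of an application-agnostic "anti-hogging" policy:
    # allocate capacity per source address, never per application.
    from collections import defaultdict

    QUOTA = 100  # packets per interval, per source -- an arbitrary number

    usage = defaultdict(int)

    def admit(src_addr: str) -> bool:
        """Decide using only the header's source address, not the payload."""
        usage[src_addr] += 1
        return usage[src_addr] <= QUOTA

    for _ in range(150):
        admit("10.0.0.1")
    print(admit("10.0.0.1"))  # False: the heavy user is limited...
    print(admit("10.0.0.2"))  # True: ...without knowing what apps either runs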

There has been a long tradition in the IETF of preserving the application-agnostic nature of the Internet transport layer. It is often invoked by the shorthand phrase “violating the end-to-end argument”. That phrase was meaningful in the “old days”, but to some extent the reasons why it was important have been lost on the younger members of the IETF community, many of whom were not even born when the Internet was designed. They need reminding, too – there is a temptation for vendors of equipment to throw application-specific “features” into the network transport, for representatives of network operators to seek a handle to control competition against non-Internet providers, and so on, as well as a constant tension driven by smart engineers wanting to make the Internet faster, better, and cheaper by questioning every aspect of the design. (This design tradition pushed designers to implement functions outside the network transport layer whenever possible, and to put only the most general and simple elements into the network to achieve the necessary goal. For example, network congestion control is managed by having the routers merely detect and signal the existence of congestion back to the edges of the network, where the sources can decide to re-route traffic and the traffic engineers can decide to modify the network’s hardware connectivity. This decision means that the only function needed in the network transport itself is application-agnostic: congestion detection and signaling.)
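
To make that last parenthetical concrete, here is a minimal sketch of the division of labor, loosely in the spirit of ECN; the queue threshold and the rate-halving rule are invented for illustration, not a real TCP/ECN implementation. The router only detects and marks; the sending endpoint decides the response.

    # The only congestion function inside the network: detect and signal.
    QUEUE_LIMIT = 10

    def router_forward(queue_depth: int, packet: dict) -> dict:
        if queue_depth > QUEUE_LIMIT:
            packet["congestion_experienced"] = True   # mark; don't drop or inspect
        return packet

    def sender_react(packet: dict, send_rate: float) -> float:
        """Edge policy: the endpoint, not the network, chooses the response."""
        if packet.get("congestion_experienced"):
            return send_rate / 2                      # e.g., multiplicative decrease
        return send_rate

    rate = 100.0
    echoed = router_forward(queue_depth=15, packet={"dst": "10.0.0.2"})
    print(sender_react(echoed, rate))   # 50.0 -- the sender slowed itself down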

So I really value Barbara’s contribution, reminding us of two things:

  • Application specific discrimination harms everyone who uses the Internet, because it destroys the generativity of the Internet, and
  • The Internet’s design needs a little help these days, from the law, to reinforce what the original code was designed to do

The law needs to be worked out in synergy with the fundamental design notion of the Internet, and I believe it can be a small bit of law at this point in time, because the Internet is supposed to be that way, by design. If the Internet heads in a bad direction through new designs, perhaps we might then want to seek more protections, because this generativity is so important to the world.

Note: My personal view is that the reason this has become such an issue is that policymakers are trying to generalize from the Internet to a broad “network neutrality” for networks that are not the Internet – networks that don’t have the key design elements the Internet has had from the start. For example, the telephone network’s design did not originally separate content from control – it used “in-band signaling” to control the routing of phone calls. To implement “neutrality” in the phone network would require actually struggling with the fact that certain sounds (2600 Hz, for example) were required to be observed by the network – in the user’s space. (This also led to security problems, but it was done to make “interconnect” between phone switches easy.)


FCC Chair Genachowski released a statement today announcing that he will put forward to his fellow commissioners a proposal, “Preserving a Free and Open Internet”, and it has already been reported on the NYTimes online.

It is possible to read Genachowski’s statement very, very carefully, and see a distinction between Internet and Broadband that *might* be there (or it might not). If it comes through in the rules, I will be happy, because the FCC at least got started on the right foot. For a while there it looked as if it was going to have two left feet (Broadband and InternetBroadband), and we know where that leads.

I’m sure many will write that there are zillions of “loopholes” in that ruling allowing for all kinds of bad behavior by Broadband providers in the services they offer to customers. And I would tend to agree with those concerns. There is nothing that preserving a free and open internet can do, nothing at all, for many of the problems of telecom and information services in the broad sense of the term Broadband. But when those broadband providers offer a connection to the Internet, that is, when they offer the ability to access all of the services that are made available anywhere in the world from any other user or company that connects to the Internet, the rules the FCC proposes will apply.

This is a good thing, in my opinion. It’s not world peace. But it’s a good step forward.

But I have some concerns – mostly not ones about details and loopholes. The NYTimes writers did not read Genachowski’s statement carefully, starting with the headline on the piece: “F.C.C. Chairman Outlines Broadband Framework” (fact checking, please?). A large part of this confusion arises because most people in the policy space conflate Broadband with Internet – as if they were equivalent terms. Apples are not appleseeds. Genachowski, perhaps because he read the comments I and others have provided to the FCC very closely (I hope), uses both words, and so far he gets them right. However, I would think that an agency full of lawyers would know that defining terms is critical to making sense in an architecture or in a regulation. I’m not a lawyer, but I’ve been a systems architect for nearly 40 years, and slang and ambiguity will crash a system just as they will crash a law. (All you digerati who conflate bandwidth with information delivery rate, thinking it’s cool to be wrong, can tune out here.)

This may not be the best regulatory framework for the Internet ever proposed, but at least Genachowski (unlike most of the folks in DC) uses the term Internet and the term Broadband correctly – in his statement. It sure would help if he wrote out two clear definitions and made this point crystal clear, as follows:

Broadband is a category of general-purpose “wide area network” data networking systems that provides many services to each customer over a single, digital, switched high-bitrate medium like fiber, coaxial cable, twisted copper pairs, or wideband digital radio links. There are two kinds of broadband systems available today – fixed and mobile – where mobile services are characterized by the ability of the user to move from place to place. Examples of broadband wide area networks are:

  • Fixed: Verizon FiOS, Comcast’s hybrid-fiber-coax systems, fiber PON systems, and fixed digital wireless,
  • Mobile: wireless LTE, wireless HSPA, and WiMax.

The Internet is an evolving family of protocols and interconnected networks – spanning the planet’s surface, its sea floor, and some number of miles into space – based on the transport of IP Datagrams over many diverse underlying networks built from many diverse networking technologies.

The Internet can be used over Broadband facilities, but it is the same Internet that is used over many other facilities. When a Broadband service provider provides “Internet access” to its users, it is important that this access service preserve the *essential* characteristic of the Internet – its openness and generality.

There are lots of details to sort out before it’s clear that Genachowski’s principles are even a good first step. I don’t love the weaknesses of Genachowski’s proposal, especially the exemption of Internet over Mobile Broadband from these principles. There I think he’s confused the high degree of innovation in wireless technology with some idea that the Internet over that technology will have to be different. We didn’t need to change the Internet when we moved it onto Fixed Broadband platforms. Why should we have to change anything about it as we move to run Internet connections over mobile broadband? The Internet was designed to work over any kind of network, and it works over mobile broadband platforms today! The exemption is an invitation for the mobile industry to establish “facts on the ground” that falsely prove the Internet “doesn’t work”. I’ve seen that already, where bugs in operators’ HSPA deployments of Internet services have been used to argue that the Internet “doesn’t work” on their networks. It would be wonderful if the FCC had a technical capability to investigate these questions honestly, rather than outsourcing its technical analysis to engineers who are told that an “open mobile Internet” is “not in your employer’s or its customers’ best interest” (that’s a quote from an executive at a telecom equipment company).

However, there is some danger that when the actual proposed rules are written, the focus on the Internet will be lost. Certainly there are lots of advocates out there, even some who claim to support Neutrality, who would abandon any effort to keep the Internet open and neutral in order to gain some other political goal that sounds like “net neutrality”. For example, there are lots of worries about so-called “loopholes” in Genachowski’s rules that would allow Broadband service providers to sell different-speed Internet access products to different customers, where the faster speeds cost more money. I don’t understand this worry at all. It used to cost more to have a DSL phone line than to have an ISDN phone line, and more for ISDN than an ordinary POTS line. Each of these could be used to place a call to an Internet Service Provider, but the faster ones cost more. Other suggested loopholes include the idea that implementing an ability for users to pay to have some of their packets prioritized over others would be a huge problem – but to the extent that this facility already exists in the Internet as a standard, such a rule would be attempting to redesign how the Internet works.
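
For the record, the standard facility alluded to here already exists at the endpoints. A minimal sketch (Linux-specific; the DSCP value and the destination address are illustrative) of an application asking for special handling by setting bits in its own packets’ headers:

    # An endpoint can already request special handling by setting the
    # DSCP/ToS byte in the IP headers of its own packets. Works on Linux;
    # 0xb8 (Expedited Forwarding) is just an example value.
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0xb8)   # request priority
    sock.sendto(b"time-sensitive data", ("192.0.2.1", 5004))  # illustrative addr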

The key distinction here is that letting the users, through their applications, choose what their packets do, and where their packets go, and specify whatever priorities they need is quite different from having one link in the chain – the last mile provider – be able to control what applications the user can effectively run, or constrain how well they run. It’s this edge-based freedom to choose to whom you speak, with what services you connect, how you speak, and what you can collaborate on that constitutes the freedom of the Internet, due to its openness. The last mile access provider (Verizon, ATT, Cogent on the consumer side, or various large interconnects, such as Level 3, that support business users like eBay or small businesses) should have no say in deciding what Internet applications do inside their packets.

The main risk here is that a provider offers something it calls the Internet, but that is in fact highly non-standard and not a standard Internet connection at all. That would be clear interference with the Internet, and if providers are required to be transparent, their deviations will be apparent, and relief can be sought (either by the user or by appeal to the regulator). By distinguishing the Internet from other Broadband offerings, we can at least clarify the discussion, and if there are specific guidelines keeping the Internet open – by making sure it is the same Internet all over the US and the world – then we have a basis for sensibly understanding what will destroy its value.

The key benefit of framing things this way is that the Internet connection sold to the user must provide the standard, commonly understood, consensus definition of the Internet service – the ability to address IP datagrams to any destination on the collective Internet, and have them delivered without the access provider interfering with that delivery for its own purposes, or doing something other than deliver the datagrams as requested according to the standard understanding of how the Internet works.

Will this make Television better? Maybe Television will be revolutionized when it is delivered over an Internet connection, compared to television delivered over the same Broadband provider’s network as a “special service”. It certainly will allow more people to distribute programs, whether from their homes or at scale. I don’t know for sure, but I think this is exactly what will happen. I *do* know how that will be prevented, though. It will be prevented by allowing the TV provider to select which parts of the Internet you can access over its Internet service, or by the TV provider providing a defective or overpriced Internet service, while at the same time maintaining a de facto (or even state-enforced) monopoly over the right to string broadband cables to your home.

However, I prefer to be hopeful, rather than discouraged, that at least the FCC has made the baby step of recognizing the Internet as an independent thing, one worth protecting, and not merely a product of Comcast, Verizon, or ATT that is some kind of narrow, walled “Internet clone”. In other words, the kind of thing AOL claimed was “better than the Internet” because it was selective – selecting cheap or profitable content, and leaving the creativity outside Fortress AOL. Those companies didn’t create the Internet – the Internet was and is created by its users (including user organizations like Google, Apple, eBay, Amazon, banks, small businesses, and national/state governmental agencies, and individual users like all of the users of email, blogging tools, Skype, Facebook, and Twitter). The access providers whine about not being able to “monetize” the Internet like they “monetize” their Cable TV offerings, not mentioning that they *pay* for Cable TV shows, but pay nothing for the Internet that its users create for each other, share with each other, and sell to each other. The Internet has created a huge new economy, so I think if the Broadband guys merely connect us into it and keep growing their capacity and speeds, they will do a fine business, too, from what we are willing to pay them to do so.

Updated: 12/2/2010 to add a small clarification to the comments on prioritization.
