Ars Technica reported that GoGo, the primary "airplane Internet access provider", is breaking HTTPS security with a fake certificate in order to prevent access to YouTube over HTTPS when using GoGo to access the Internet. Many are already pointing out that this damages Internet security as a whole by exploiting a known security "hole" in HTTPS deployment (not in the actual protocol itself).
I have a very strong reaction to this for two reasons: a) because most individuals believe that content on the Internet should not be subject to surveillance by Internet access providers, and b) because GoGo's "rationale" for solving their problem in this way is false. It's not the only way to solve their problem, and in fact it is not even the best known way. The best known and tested way to deal with their underlying business problem doesn't require content-type detection at all!
Fixing the surveillance problem
This story highlights a known weakness of the HTTPS design that most non-security people don't understand. It is alluded to briefly in passing, but to be more specific: HTTPS security is "one way". Only the destination you reach can decrypt your messages, so your browser must reliably tell you whether the destination really is the site you intend to talk to. If that destination is an impostor, your security is thin.
The thin reed your security depends on has two parts:
I point these out because the news stories (including the Ars Technica one) almost never highlight these weaknesses. They have been known for years, and bad guys have been attacking them for years, so they should be fixed, which would leave GoGo looking for a less intrusive solution anyway. A commercial company depending on a security hole is bad practice, at the very least.
So for one thing: we desperately need a better solution for browser-to-service integrity than HTTPS provides. The problem is not the encryption but the terrible weakness of the certificate-based authentication. (DNSSEC doesn't fix this, though it fixes another hole in HTTPS.) Good solutions are known, but there is a hurdle to be overcome, and the hurdle is high because users and service providers don't want to change away from HTTPS. [Of course, no one has decided to make this a cause and supply the engineering and evangelism needed to roll out a new standard, not even Google.]
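To make the certificate weakness concrete, here is a small sketch in Python. It shows that a standard TLS client context trusts every certificate authority in the system trust store equally; any single CA in that store can vouch for any hostname, which is exactly the hole an access provider exploits when it injects its own CA and mints a "valid" certificate for a site it wants to intercept. (This is an illustration of the general mechanism, not GoGo's specific implementation.)

```python
import ssl

# A default client context trusts *every* CA certificate in the
# system trust store. Any one of those CAs can vouch for any hostname.
ctx = ssl.create_default_context()

# Verification is on by default: the certificate chain must validate...
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True

# ...and the certificate's name must match the host we asked for.
print(ctx.check_hostname)  # True

# The weakness: trust is all-or-nothing per CA. If an intermediary can
# get its own CA certificate into this store (for example via a
# captive-portal install prompt), every check above still passes for a
# forged certificate, and the browser shows a normal padlock.
```

The browser's padlock therefore asserts only that *some* trusted CA signed the certificate, not that the legitimate site operator did.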
Fixing GoGo’s Overload Problem (caused by streaming or anything else)
I understand GoGo’s concern about streaming – they don’t have enough capacity for bulk data transfer to airplanes, whether streaming or not. Their service gets disrupted by video-rate streaming, making everyone unhappy.
But it is the wrong solution (not just because of my main point), and there's a technically better one. I use GoGo a lot, and I've discovered that their system architecture suffers from "bufferbloat" (the same problem that caused Comcast to deploy Sandvine DPI gear to detect and attack BitTorrent with "forged TCP" packet attacks, which jump-started the political net neutrality movement by outraging the Internet user community). Why does that matter? Well, if GoGo eliminated bufferbloat, streaming to the airplane would no longer break other users' connections; it simply would not work at video rates, with no effort on GoGo's part other than fixing the bufferbloat problem. [The reason is simple: solutions to bufferbloat eliminate excess queueing delay in the network, thereby creating a "fair" distribution of capacity among flows. That means email and web surfing would get their fair share, and would not be disrupted by user attempts to stream YouTube or Netflix. At the same time, YouTube and Netflix connections would get their fair share, which is *not enough* to sustain video rates - though lower-quality video might be acceptable, if those services would recode their video to a low bitrate for this limited-rate access.]
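The arithmetic behind that bracketed point is worth spelling out. With per-flow fair queueing, each active flow converges on an equal share of the bottleneck, no matter how aggressively it sends. A quick back-of-the-envelope sketch (the link capacity and flow count here are hypothetical numbers, not GoGo's actual figures):

```python
# Hypothetical numbers: a 3 Mbit/s air-to-ground link shared by 30 flows.
LINK_CAPACITY_KBPS = 3000
ACTIVE_FLOWS = 30

# Per-flow fair queueing gives every flow an equal share of the
# bottleneck, regardless of how much each flow tries to send.
fair_share_kbps = LINK_CAPACITY_KBPS / ACTIVE_FLOWS
print(fair_share_kbps)  # 100.0 kbit/s per flow

# Web pages and email fit comfortably inside that share; even
# standard-definition video (roughly 1000 kbit/s and up) does not,
# so a stream stalls on its own without disrupting anyone else.
SD_VIDEO_KBPS = 1000
print(fair_share_kbps >= SD_VIDEO_KBPS)  # False
```

In other words, fairness alone starves video streams of the rate they need, with no content inspection anywhere.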
It should be noted that once the FCC slapped Comcast's hands over its content/destination discrimination via the Sandvine attacks, Comcast's engineering team invested a lot of time in getting a "bufferbloat fix" into the DOCSIS 3 modem deployment standards suggested by CableLabs. Rich Woundy, an SVP of Engineering there, personally made this his issue to solve. (However, the rate of deployment of their fixes to the field left something to be desired, and most other access providers have not deployed these fixes with any urgency, probably because an alternate, partial solution, urging customers to buy "fatter pipes", is more directly profitable.) GoGo could pursue this approach (and they don't have the ability to sell "fatter pipes" anyway).
What could be done, which would be a lot better
I’ve mentioned two problems. One for the Internet community, and one for GoGo and other Internet Access Providers who suffer from bufferbloat, but don’t understand it can be fixed. Here’s what can be done.
There are off-the-shelf solutions for GoGo to deploy today. I use a router software package called CeroWRT that embodies a buffer-management mechanism called FQ-CoDel, based on some brilliant recent work in router queue management. FQ-CoDel can also drive ECN signalling, a feature of IP that signals congestion. The combination of the two has been demonstrated in actual real-world experiments to eliminate bufferbloat, solving the problem GoGo currently experiences. GoGo should perhaps hire Dave Täht, one of the ringleaders of CeroWRT development. That improvement would certainly make my life, and others', on airlines using GoGo better! Some days GoGo just really sucks, appearing to be completely unresponsive, and when I check why, I find multi-second ping times but no lost packets – the defining signature of a bad bufferbloat state – which can be caused by many things other than streaming from YouTube or Netflix. There's no need for the service to be unresponsive when the connection is up; the problem is that when queues grow to multiple seconds of latency, nothing works well, especially the Web browsing experience. This problem is caused by GoGo's bufferbloat, not by the users.
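That "defining signature" is easy to state as a check: latency has blown up into the seconds while almost nothing is actually being dropped, which points at oversized queues rather than a broken or congested-to-loss link. A minimal sketch of such a heuristic (the thresholds are illustrative assumptions, not a standard):

```python
def looks_bufferbloated(rtt_ms_samples, loss_fraction,
                        rtt_threshold_ms=1000.0, loss_threshold=0.01):
    """Heuristic for the bufferbloat signature described above:
    median round-trip time in the seconds, yet essentially no packet
    loss. Threshold values are illustrative, not authoritative."""
    if not rtt_ms_samples:
        return False
    median = sorted(rtt_ms_samples)[len(rtt_ms_samples) // 2]
    return median > rtt_threshold_ms and loss_fraction < loss_threshold

# Illustrative ping samples (milliseconds), not real GoGo measurements:
print(looks_bufferbloated([2400, 3100, 2800, 2950], 0.00))  # True
print(looks_bufferbloated([80, 95, 70, 110], 0.00))         # False: healthy
print(looks_bufferbloated([2400, 3100, 2800], 0.20))        # False: real loss
```

Anyone stuck on a sluggish in-flight connection can run `ping` for a minute and apply the same test by eye.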
PS: I'd hate to see GoGo brought before the FCC in another en banc hearing on Broadband Network Management where their hacking into customers' secure connections is part of the evidence that would have to be considered – that sounds like a hearing GoGo would not enjoy. I would be available as an expert to testify, as I did in the prior hearing regarding Comcast. I could explain why breaking encryption, when they have no need to do so, is not required to manage their network. But I'd rather they just solve the problem with good engineering. Perhaps their engineers have more sway, and their managers are less cocky than Comcast's PR people were. Calling all users who upload too much data "pirates" is a PR move that made many of us even less inclined to help them than Comcast's DPI-based attacks already had.
It's remarkable to me that there are now two powerful agencies fighting to "govern" the Internet – the ITU and the FCC. On any given day, it's hard to tell whether they are on the same side or different sides. The ITU process apparently began in earnest with the World Summit on the Information Society (WSIS) meetings, where the concept of "Internet Governance" became an urgent goal. The FCC process began when incumbent Internet Access Providers (IAPs) argued that "Net Neutrality" was a stalking horse for government control and definition of the Internet, followed by calls for regulatory definition of the Internet as a "Broadband Service" through the regulation of "Broadband". I recently signed a letter referring to the inconsistencies between the two efforts, which threaten, when combined, to destroy the whole idea of a "network of networks", replacing it with a "vertically integrated service" concept, in the quest to "govern" something. But both efforts seem to have the Internet wrong in different ways.
As I’ve noted, the Internet is not Broadband, in my post What the Internet Is, and Should Continue to Be. But the FCC wants to view it as a “service” because for historical reasons, the FCC bureaucracy is organized around the idea that every activity or application of any sort in communications is a “service” that stands on its own. The Internet is a unification of all communications capabilities, so it just does not fit into the vertical integration idea that the FCC promotes by the structure of its regulations. (a reading of the enabling legislation does not require organization around “discrete services”, by the way)
The ITU also seems to be focused on "defining the Internet" as a "service", but they focus on trans-national issues as well. In the US, the FCC does not deal with international communications – that is the province of the US State Department, which deals only with governments and quasi-governmental organizations, not with citizens of any country. Certain powerful multinational communications carriers are granted a seat at the table, though, largely because a number of countries, including the US, have privatized their communications transport industries.
What is "defining the Internet" about? Well, largely it is about creating something to "govern" by inventing a governable entity, based on a lot of discussions with "stakeholders". [Note: I am not considered a stakeholder because I represent myself. The technical language that defines a stakeholder in both the FCC and ITU is someone who "represents an interest", where an interest is a governmental agency, a corporation, or an organized interest group dedicated to influencing legislation in the interests of its members. The Internet users themselves are not interests.] And they are trying to define it as a service that is provided by a "provider" who owns or otherwise controls the medium. In other words, the assumption is that the Internet is a "vertically integrated" concept, that starts with applications, and is supported by a variety of gear that the "service providers" pay for, and resell to users in the form of services. This gives them a thing to "govern".
This is attractive to bureaucrats who seek power and control over communications activities, whether the bureaucrats are in governments, international quasi-governmental agencies, or corporations. The move is to define the activity, and then limit the activity to a particular physical resource (wires, fiber, switches, gateways, spectrum property rights, …), and then control from the bottom. This paradigm of “governance” by creating property rights in physical media and then controlling all services built on that property is extremely attractive, and has reached full flower in the POTS and radio communications arenas.
But as I began, the question is: does the Internet need governance? By design and history, the answer is no.
The Internet was designed as a “network of networks” that could easily extend across all networks, merely by finding a way to transport Internet Protocol Datagrams (IP datagrams, or IP packets) across each network, whereupon a gateway (switch or router that understands IP datagram addressing) then can forward the IP datagram towards the eventual destination. Since all destinations and sources have IP addresses, the Internet Protocol and the gateways provide sufficient glue to create a universally connected network of networks.
This design avoids the need for any governance whatsoever in the delivery of packets. Further, the design is such that the content and intent of the datagrams need not play any role whatever in the gateways’ function. Only the IP Datagram “header” is used to make decisions about where the datagrams go. Part of routing the datagrams is the ability of the gateways to decide what route to take to deliver the datagram to the intended destination. But again, no global “governor” is needed to carry out the function efficiently – as the network of networks grows, a distributed algorithm for routing is both more resilient and more effective at getting packets to where they need to be.
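The point that gateways consult only the datagram header, never the payload, can be sketched in a few lines. Here is a toy forwarding function using longest-prefix match on the destination address alone; the routing table entries are made-up examples, and real routers implement this in hardware, but the principle is the same:

```python
import ipaddress

# A toy gateway's routing table: (prefix, outgoing interface).
# These routes are hypothetical examples, not a real configuration.
ROUTES = [
    (ipaddress.ip_network("10.0.0.0/8"),  "if-corp"),
    (ipaddress.ip_network("10.1.0.0/16"), "if-lab"),
    (ipaddress.ip_network("0.0.0.0/0"),   "if-upstream"),  # default route
]

def next_hop(dst: str) -> str:
    """Forwarding decision from the header's destination address ONLY.
    The payload never enters into it. Longest matching prefix wins."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, iface) for net, iface in ROUTES if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.1.2.3"))  # if-lab (the more specific /16 wins)
print(next_hop("10.9.9.9"))  # if-corp
print(next_hop("8.8.8.8"))   # if-upstream
```

Notice that nothing in `next_hop` could even express "is this YouTube traffic?" – content-blindness is structural, not a policy bolted on afterward.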
Since content plays no role in Internet delivery, encryption of each datagram’s content may be used to further protect and to authenticate content against forgery. A key part of the Internet’s design was and is the ability to carry encrypted content for this reason – it prevents malicious tampering and reading of datagrams, up to the strength of the encryption algorithm and the key management maintained by the source and destination.
It is this ability of the Internet to be a universal network of networks that does not depend on applications that has led to its ability to serve as the “lingua franca” that spans international and corporate boundaries, facilitating any application that wants to use it, and also incorporating any underlying technology for communications – starting with dedicated digital circuits and voice-grade switched lines using acoustic encoding (so-called “dialup”), and now including fiber, cable TV coax, wideband radio, mesh networks, etc.
So the Internet is not an application or “service” in the sense that the ITU and FCC would like to define it. It is not “Facebook + Google + Instagram + The Cloud + email + Twitter + Amazon + iTunes + Alibaba” – an amalgam of current popular services that happen to exploit the universal openness of the Internet.
Nor is it Broadband or LTE or GSM.
This is why the “network neutrality” discussion, framed as “who will govern the Internet” is wrongheaded from the beginning.
As I've noted, the Internet "needs a little help from the Law". But the key point here is that law is not the same thing as governance. Laws are rules that humans (and "persons" like corporations) must obey, or be punished. Not all laws come from governments. There is a whole body of "common law" that is generally accepted, transcending government. One such law is that you cannot steal a package that you've agreed to transport from point (a) to point (b). That is true whether or not there is a "contract". It's just not done, and courts in any jurisdiction, no matter what the government, will hold to that principle.
So reading and benefiting from a private communications that you happen to be carrying as part of the Internet should be covered by this standard principle. We don’t need the Internet to be “owned” as a whole, or “governed” as a whole to prevent that or to discipline those who might do so.
Similarly, discriminating at a hotel based on the color of some guest’s skin is equally noxious. There are those who think all laws should be based on absolute property rights who struggle to find such ideas acceptable – usually by defining people as non-persons due to their forbears’ genetic makeup. But in a modern society, we know that there is no basis for such discrimination.
There is a tendency to blame the Internet for the kinds of communications that go over it – and to try to hold the Internet liable. But the criminal behavior that happens over the Internet is not caused by the Internet transport of packets. Again the idea that the Internet is somehow a service is based on a fundamental confusion. Should we blame the English language (or the Pashto language) because people can conspire to commit crimes by speaking in English (Pashto)? Should we blame a culture’s literature and newspapers for the behavior of individuals who belong to that culture?
Trying to conceive of the Internet in terms of “governance” reflects a peculiar redefinition of what the Internet is about.
The Internet is a form of universal glue. It’s built by those who use it, and based on a design concept that allows a network of networks to scale to any size on any technology that can carry IP Datagrams.
What the Internet needs, however, is some help from the law. The help is required largely because governments create or subsidize monopolies. Examples include radio spectrum rights (you cannot get the right to operate a transmitter in the US or a receiver in countries like the UK without a very restrictive license that bars most modern communications techniques other than those of a small set of “incumbent” providers), and local “franchise rights” created and maintained by local and national governments (RCN was not allowed to build out Fiber in Philadelphia, the corporate home of Comcast, by a mayoral decision based on the claim that it would “cost jobs”).
The Internet can run fine across these monopoly platforms, but the temptation of some of the monopolies is to claim the right to muck with Internet packets – and this is not a theoretical claim. It is at the core of behavior that has been documented, including products from Phorm, NebuAd, Sandvine, Ellacoya and others that are designed to read all IP datagrams to analyze content, modify content, act as a "man-in-the-middle" to control connections unbeknownst to the endpoints, etc. Those companies are doing great business selling access providers the tools to exploit what is inside IP datagrams, in most cases without disclosure; when it is disclosed, there is only a mention of the possibility in a Terms of Service, and perhaps an obscure "opt-out" mechanism offered once the exploitation is discovered.
If the state grants such monopolies, the state must be responsible to police those monopolies’ actions. And that’s one place where the Internet needs a little help from the law.
One could argue that the Constitutional Protections in the US for Free Speech and Free Assembly only protect against “government action” to block free speech – that companies who interfere with speech and assembly on the Internet do so privately, and therefore outside the purview of the US Constitution. But that is clearly wrong for a simple reason: the government created the monopolies! So the government is responsible for the curtailment of free speech and free assembly by its monopolies. That includes monopolies at the level of Towns, Cities, States, and other jurisdictions in the US – the constitutional rights bar those governments from mucking with speech and assembly rights. And this principle goes beyond America – many (if not most) countries guarantee freedom of speech and assembly, and most, if not all, countries grant monopoly rights to communications carriers.
This could be easily solved with a simple law: any company that handles Internet datagrams may not read or modify the content, nor infer intent or meaning for the purpose of deciding what datagrams to deliver or to not deliver.
That’s a pretty simple principle, and it happens also to be the design principle behind the Internet, and what has made it work.
If a Cable TV company chooses not to offer Internet service at all, that’s fine. Let them. They then would not have a “franchise” right for Internet service, and someone else who chooses to offer full Internet service could enter the market, which is awesomely large! There’s no risk there.
Similarly, if a wireless company chooses not to offer Internet service at all, great! Again, they would be wasting the value of their monopoly spectrum, and someone would find a way to offer Internet service.
But the law would merely exist to protect the simple rules about IP datagrams – no peeking, no changing, no routing some but not others based on content.
We might not even need the law if governments would get out of the business of granting monopolies, as I have argued is possible (and needed) in the case of digital networked radio technologies. The argument that spectrum rights are needed for radio communications to function is technically wrong in a fundamental way. In fact, we would have vast improvements in wireless capability if we were to take advantage of the ability of digital techniques (modulation, sensing, cooperation, and interoperation as a network of networks) for radio. The major block here is that the current incumbents control the regulatory framework, because they like monopolies given out by the government. However, setting aside the question of whether monopolies are necessary: the law can easily say that monopolists who offer Internet access and transport must not peek, modify, or discriminate, as a condition of holding the monopoly.
We might not need the law if we could adopt a universal framework for encryption and secure routing among all the glue parts of the Internet, such that there would be no ability to peek, modify, or discriminate. I find this less likely to happen, because it would require a significant effort on the part of vendors and applications that use the Internet to ensure that all these parts get built and implemented widely.
To me, this NYT article suggests that Susan Crawford "doesn't get it" about the Internet, in an amazingly extreme way (an "epic fail"). She focuses on a specific hardware technology (fiber), when in fact the whole point of the Internet was to focus on interoperability among ALL transport technologies and all end-to-end communications. She focuses on municipal ownership, as if local towns are somehow the most "fair" – these are the towns that sold exclusive franchises in the first place, and continue to maintain them (viz. Philadelphia). And these are the governments that are often least tolerant of those who think and act differently or are different (consider Salem in U.S. colonial times, or many Southern towns in times up to the present).
So I think it’s worth it to ask two questions:
Reframing the issue using a near-non sequitur is a classic Bernays-style approach to manufacturing public opinion. Bernays is famous for calling Lucky Strikes "torches of freedom" and organizing a movement themed around women's rights to promote smoking. We've already seen the industry-driven diversion of attention from High-Speed Internet Access to "Broadband" by a rhetorical device of never mentioning "Internet" as an architecture. Now we see the issue being renamed as "building Fiber" – with no real connection to opening up access to the Internet at the edge, or access to continue to create the Internet, as all of us do, whether we label ourselves providers or not.
Why the new story? Cui bono? Does the Obama Administration and its policy folks (Crawford and Werbach wrote the original plan with Larry Summers) really just “not get” the Internet? Or is this a somewhat more deliberate political move – perhaps trying to get some political win while ignoring the biggest issue and allowing the Internet as a concept to die?
Further, when one looks around the world, citizens are fighting to keep their countries’ Internet access as open as possible against government intervention. Yet in our country, we focus on public works projects to deploy a particular kind of cable (call it Broadband or now Fiber). What’s one got to do with the other? What about the next technology down the road that opens up communications? Must we depend on our cities and towns to innovate?
This is not a partisan issue. All US citizens deserve the freedom to speak and the freedom to assemble. We take that for granted, whether we are demonstrating support for a particular meaning of the Second Amendment or discussing the relative merits of different companies' products and services in a free market. We don't spend our time discussing the architecture of public meeting rooms or the dimensions of town squares, or whether every town should have a shopping mall, as if they somehow substitute for those freedoms. (Note, I am not talking about the Bill of Rights here. This is not about government's power, but about the concentration of cultural and commercial power to limit communications being renamed as a discussion about the management of town water and sewer systems.)
The Wire Next Time
By SUSAN CRAWFORD
Apr 27 2014
CAMBRIDGE, Mass. — Last week's proposal by the Federal Communications Commission to allow Internet service providers to charge different rates to different online content companies — effectively ending the government's commitment to net neutrality — set off a flurry of protest.

The uproar is appropriate: In bowing before an onslaught of corporate lobbying, the commission has chosen short-term political expediency over the long-term interest of the country.

But if this is the end of net neutrality as we know it, it is not the end of the line for fair and equitable Internet access. Indeed, the commission's decision frees Americans to focus on a real long-term solution: supporting open municipal-level fiber networks.

Such networks typically provide a superior and less expensive option to wholly private networks operated by Internet service providers like Comcast and Time Warner.
Occasionally, people ask my perspective on the Internet, since I often object to confusing it with things like the telephone or Cable TV. Recently I composed a response that captures my perspective, as one of the participants in its genesis and as an advocate for sustaining its fundamental initial design principles. I hope these words clarify what I believe many of those who continue to create the Internet are doing, even though most of them are not aware of it. I also hope many will see their interest in keeping the core principles of the Internet alive.
The Internet must be fit to be the best medium of discourse and intercourse [not just one of many media, and not just limited to democratic discourse among humans]. It must be fit to be the best medium for commercial intercourse as well, though that might be subsumed as a proper subset of discourse and intercourse.
Which implies interoperability and non-balkanization of the medium, of course. But it also implies flexibility and evolvability – which *must* be permissionless and as capable as possible of adapting to as-yet-unforeseen uses and incorporating as-yet-unforeseen technologies.
I've used the notion of a major language of inter-cultural interaction, like English, Chinese, or Arabic, as an explicit predecessor and model for the Internet's elements – its protocols and subject matter, its mechanism of self-extension, and its role as a "universal solvent".
We create English or Chinese or Arabic merely by using it well. We build laws in those frameworks, protocols of all sorts in those frameworks, etc.
But those frameworks are inadequate to include all subjects and practices of discourse and intercourse in our modern digital world. Those languages are mature, but they have run into a limit: they cannot be a framework for all forms of digital information. One cannot encode a photograph for transmission in English. So we invented the Internet – a set of protocols that are extraordinarily simple and extraordinarily independent of medium, while extensible and infinitely complex – and one can encode that photograph in the framework we have built beginning with the Internet's IP datagrams, addressing scheme, and agreed-upon mechanics.
The Internet and its protocols are sufficient to support an evolving and ultimately ramifying set of protocols and intercourse forms – ones that have *real* impact beyond any jurisdiction or "standards body".
The key is that the Internet is created by its users, because its users are free to create it. There is no “governor” who has the power to say “no” – you cannot technically communicate that way or about that.
And the other key is that we (the ones who began it, and the ones who now add to it every day, making it better) have proven that we don’t need a system that draws boundaries, says no, and proscribes evolution in order to have a system that flourishes.
It just works.
This is a shock to those who seem to think that one needs to hand all the keys to a powerful company like the old AT&T or to a powerful central “coordinating body” like the ITU, in order for it not to fall apart.
The Internet has proven that the “Tower of Babel” is not inevitable (and it never was), because communications is an increasing returns system – you can’t opt out and hope to improve your lot. Also because “assembly” (that is, group-forming) is an increasing returns system. Whether economically or culturally, the joint creation of systems of discourse and intercourse *by the users* of those systems creates coherence while also supporting innovation.
The problem (if we have any) is those who are either blind to that, or willfully reject what has been shown now for at least 30 years – that the Internet works.
Also there is too much (mis)use of the Fallacy of Composition, which has allowed the Internet to be represented as merely what happens when you have packets rather than circuits, or merely what happens when you choose to adopt certain formats and bit layouts. That's what the "OSI model" is often taken to mean: a specific design document that sits sterile on a shelf, ignoring the dynamic and actual phenomenon of the Internet. A thing is not what it is, at the moment, made of. A river is not the water molecules that currently sit in it. This is why neither the owners of the fibers and switches nor the IETF can make the Internet safe or secure – that idea is just another Fallacy of Composition. [Footnote: many instances of the "end-to-end argument" are arguments based on a Fallacy of Composition.]
The Internet is not the wires. It’s not the wires and the fibers. It’s never been the same thing as “Broadband”, though there has been an active effort to confuse the two. It’s not the packets. It’s not the W3C standards document or the IETF’s meetings. It’s NONE of these things – because those things are merely epiphenomena that enable the Internet itself.
The Internet is an abstract noun, not a physical thing. It is not a frequency band or a “service” that should be regulated by one of the service-specific offices of the FCC. It is not a “product” that is “provided” by a provider.
But the Internet is itself, and it includes and is defined by those who have used it, those who are using it and those who will use it.
Bob Frankston suggested I post some of my recent remarks and references to Orbital Angular Momentum and its potential value in increasing the capacity of “spectrum” (the word I think misleads everyone, but that’s a subject I’ve talked about a lot). So I did put up a brief page here, with some references also. I believe my comments made it onto Dave Farber’s Interesting People list, though I do not follow that list.
This is part of my general quest to get the wireless community to stop thinking about radio as "colors" and "rays" and "channels", since these are poor approximations to the physics of EM waves, based on Marconi's techniques for CW multiplexing of Morse code. I wish the regulators were far more cognizant of the fundamental physics and our modern Computational Radio toolkit – the path I started following back in 1994, when I got fascinated with wavelet analysis and UWB Impulse Radio and their bases in physics and connection to information theory.
What do we teach when we teach science in school? And really, why do we teach science that way?
I’ve personally never been quite sure whether I’m more of a scientist, engineer, or mathematician. The public lumps these all together for some reason, perhaps because they all appear to deal with concepts that are expressed in terms that must be learned carefully, because they certainly are not wired into our brains, and they aren’t the basis of popular culture either, unlike literature, art, history, sports, etc.
The science we teach is pretty old. Mostly 19th-century ideas about the world around us are taught as "facts", with little but anecdotal data to support them. We teach science via an ontology that replays the history of science, so the newest and most powerful scientific understandings are viewed as "too advanced". If it weren't so widely discredited, we would be teaching K-12 kids about phlogiston first, and general chemical oxidation reactions second, just as one example of that.
We don’t teach kids ancient Greek and French cave drawing first, do we?
I've heard lots of reasons for teaching "old" science, and old models and definitions taught as facts, over the years, and the reasons never really made a lot of sense. They certainly don't make sense today. If we rethink the whole enterprise of what we teach as science, we have a real opportunity – perhaps a bigger opportunity than the big postwar drive toward science surrounding the Kennedy space program.
What got me bugged about this issue was reading a really fascinating book I’ve recommended to many people recently: The Blind Spot: Science and the Crisis of Uncertainty by William Byers. In it, as part of Byers’ main theme, he points out that the 20th century was marked by a profound re-learning of much we thought certain scientifically. Yet our school systems teach much of that old science as “certain” today.
Two deep questions about physics will suffice to make my point:
1) Is the Universe’s geometry Euclidean? We teach Euclidean geometry under the unadorned name “Geometry” as if it were foundationally correct, and we teach physics (including elementary astrophysics and cosmology) as if the universe were Euclidean. But by the end of the 20th century it had become pretty clear that the universe’s geometry is non-Euclidean. That’s what Einstein proposed in the first decade of the 20th century, and more than 100 years later, it’s pretty clear that he was right in rejecting Euclid. Many (but not necessarily all) of the theorems taught in high school geometry can be derived without using the parallel postulate in their proofs – and non-Euclidean geometry need *only* be complicated to people who have learned Euclidean geometry first. That postulate, however, is almost certainly *scientifically* wrong. So do we teach geometry according to Euclid because it’s always been taught that way? If so, we should put a caveat in the front of the book saying that we are teaching something that kids should forget when they learn more about physics. [Euclidean geometry is a consistent formal theory, but teaching it first is profoundly misleading if our universe is not that way.]
It would be quite easy to start with a course in the Geometry that actually matches the Universe as we know it. Just as rigorous, it still generates the same answers for normal problems, and conceptually it would be much easier to understand cosmology and astrophysics as they are today.
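To make the point concrete (a sketch, not a curriculum proposal): on the surface of a sphere of radius $R$ – a geometry kids can hold in their hands – the angle sum of a triangle exceeds the Euclidean 180°, by an amount proportional to the triangle’s area:

```latex
% Girard's theorem: the angle sum of a triangle drawn on a sphere of
% radius R exceeds \pi by the triangle's area divided by R^2.
\alpha + \beta + \gamma = \pi + \frac{A}{R^2}
% Example: a triangle running from the North Pole down two meridians
% 90 degrees apart to the equator has three right angles -- 270 degrees.
```

A ball and some string are enough to check the example; nothing about it requires learning Euclid first.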
2) Is there nothing? One of the most painful parts of physics teaching these days is the conceit that a true “vacuum” exists or that experiments can be perfectly isolated… Teaching kids the idea that they can create conditions of complete vacuum – no fields, no matter/energy, … except for, say, a couple of billiard balls representing masses – has a real downside: we have no evidence that such a vacuum exists or can exist. My own sense is that we got to this place by inventing the “number” zero and generalizing. However, the number zero and the null set are *far more complicated* than almost any other concepts we shove down kids’ throats without explaining them. Worse yet is the idea of “nothing”. When we reason about “nothing” or include it in our models, it creates havoc – just think about this: is “no goats” the same as “no apples”? “No goats” is a much simpler concept than “nothing”, because it implies goats exist somewhere. “Nothing” implies the absence of all real categories of things, and of categories of things that cannot be real – and there are clearly far more of those hypothetical things than there are kinds of real things. This is far from science – more like mysticism.
We use “nothing” and “zero” in *very* sophisticated ways, and in actual science, we have to be *very careful* lest we trip up and misuse these concepts. It’s very hard to differentiate “nothing” from “I haven’t found any way to measure all aspects of reality yet, but maybe there is something there that we haven’t managed to notice.”
We have no scientific evidence for “nothing” being real. (and some experimental evidence that the “quantum vacuum” is far from “nothing”, which would call almost all we teach about physics into question…)
Kids tend to be very concrete, of course. If you only provide them with examples of things where everything is simple and fits the “theory du jour”, most kids will assume that their authoritative teachers and textbooks are telling them everything they need to know. But in fact, schools teach a lot of science that is just plain known to be wrong, and even some big howlers at that. It’s not just oversimplification – we teach things we now know to be wrong.
The law of conservation of mass and the law of conservation of energy are known to be wrong. And now that we know how to measure mass and energy precisely, we don’t need nuclear reactions to show it. That doesn’t mean we can’t talk about the “engineering” concept of conservation of mass, which holds in most fields of engineering that don’t involve high speeds or nuclear reactions – but it’s not science, it’s engineering. (And don’t get me started on “equilibrium” thinking – the universe is far from equilibrium, as Prigogine showed us – it’s what he calls a dissipative system. Dissipative systems are a far better way to think about biology, work, what the politicians call “energy”, … so we should teach kids science around dissipative systems, not around equilibrium states.)
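The point can even be made with ordinary chemistry, via $E = mc^2$. Using the commonly quoted heat of combustion of methane – roughly 890 kJ per mole – the products weigh less than the reactants by:

```latex
% Mass change in an ordinary chemical reaction, via E = mc^2.
% Burning one mole of methane releases about 8.9 x 10^5 J of energy.
\Delta m = \frac{E}{c^2}
         \approx \frac{8.9 \times 10^{5}\,\mathrm{J}}
                      {\left(3.0 \times 10^{8}\,\mathrm{m/s}\right)^{2}}
         \approx 1.0 \times 10^{-11}\,\mathrm{kg}
```

About ten nanograms per mole – far too small for a classroom balance, which is exactly why “conservation of mass” works fine as engineering. But it is not zero.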
The universe is not laid out in a 3D grid with Cartesian coordinates, either. There is no “0” point on the axes – no center of the universe, no “omniscient point of view.”
I picked physics. But the same holds for chemistry, biology, …
We teach the notion of “species” in biology as a primary concept. But practicing biologists know that “species” is a pretty useless framework for describing biological reality – species are not stable (we’ve demonstrated that both Darwinian and Lamarckian evolution are real and common phenomena, once we started looking deeply at more and more of the world), they are not well-defined, and they represent only one perspective on life. When we stop making “species” a central concept, we can see that many kinds of distinctions are more biologically relevant than gamete compatibility.
We teach “genetic determinism” in K-12 biology – but in fact, we know that developmental processes (particularly epigenetic processes of methylation, etc.) play huge roles in all aspects of biology. Is it just the huge egos of Watson and Crick (the same egos that cut Rosalind Franklin out of the Nobel prize) that wanted to call DNA the sole, fundamental “code of life”, inspiring popular culture to invent paranoias about “cloning individuals”? Or was it just an overzealous media that ignored and continues to ignore the evidence that evolution passes information on through means other than DNA? And that DNA is pretty slippery and unreliable stuff (not at all like a computer program)?
We teach that there are “hardwired” parts of physiology and anatomy in every species. Yet we see that “hardwired” things are built by processes that many times don’t work very well. So “hardwired” is actually not hardwired at all. We see only what we want to believe, and science often is the result of removing those blinders by holding our “proof” up to scrutiny.
One great example of completely ignoring what we started learning around the time of the Civil War from James Clerk Maxwell, and on up through the 20th century, is the way we teach K-12 kids about electromagnetism and light. Here we fail our kids enormously: we teach kids the idea that light travels in “rays” from a source like the sun, and can be “bent” by lenses and prisms, while we’ve known for centuries that there is no such thing as a light “ray” at all. What we know is that light is something novel involving electrical and magnetic interactions that behaves like a combination of “waves” and “particles” – but *never* as rays. The experiments that would show this are easy for kids to do, and effective models of both exist (wave tanks, and “billiard tables” that vary the speed of “photons” in different areas by having them climb slopes). Rays are a “calculational” device, not a scientific way to understand light, yet we teach optics as if light were “rays” and not something else.
(and of course there’s a great opportunity to combine the physics of light with biology and cognitive psychology in exploring how our eyes and mind *actually* perceive things – dispelling many “false” facts about “seeing is believing” and replacing them with a better understanding of understanding: “believing is seeing”, a much more scientifically sound notion).
To me, as Byers suggests, we need to embrace *un*-certainty, while learning to understand the cumulative process of discovery and exploration(!) that is Science. This can’t be taught in a way that can be measured by multiple choice tests about *facts*. Here’s a good test question for a science test, if you want to see if a kid gets what science is:
You can even make this “multiple choice”, of course. Instead we have questions like:
Why do all the lawyers in DC assume that “spectrum is like real estate”? I suspect it is because their experience of science in school was to learn a bunch of definitions and metaphors, dumbed down and essentially false, confusing science, engineering, and math thinking with memorizing a list of concepts that can be regurgitated on tests, and memorizing algorithms for solving simple mathematical problems posed as physics and chemistry.
Since more than half of the MIT undergraduates I’ve met in my life are confused about what science is, and they are supposedly the ones most educated in science in the country, I suspect it is an endemic problem… What I do know is that many, many people want to see the US get ahead in STEM. Why not engage that energy? Why not endeavor to teach our kids real science – today’s science, developed without teaching facts that are known not to be facts? We should discuss the way we struggle with uncovering what we know, and the challenges of testing what we think is true (not some sterile “hypothesis” made up so that all kids can pass a test on a “method” that misses the point). Science is always subject to revision, but is nonetheless built on hundreds of years of human experience… humanity’s most profound collective, constructive effort.
My view is that there is a *huge* opportunity available to the first culture/country/whatever that actually sets out to create a *real* science curriculum. That is *certainly* not what I see the politicians doing. Sadly, neither are most science educators willing to do that, though most scientists would probably agree with most of what I’ve just said. The idea of teaching current science is perceived as “too high risk”. Good lord, is teaching a zillion things that “t’aint so” a “low risk”???
Yeah, I know that this makes me sound like a lunatic who doesn’t understand the value of teaching irrelevant, but difficult and formal texts for “critical thinking” purposes. But is it teaching critical thinking to *uncritically* teach a curriculum that makes students learn stuff we know not to be true, then regurgitate it on SATs? That smacks of obscurantism, not at all what science is about!
PS: for the record, I did fantastically on my Physics and Bio SATs and APs, despite my knowing that many of the questions I was being asked were based on things already disproved. But fortunately, I had learned what kind of answers they, the College Board, wanted on the test. So I put myself in the frame of mind of the “science educator community” and spouted the answers they wanted. Got excellent scores. But it’s bass-ackwards to require a student to regurgitate stuff they will have to “un learn” if they go on, as I did, to a career in STEM, and it probably means that most people who pursue non-STEM careers end up learning a lot of things that make it much harder for them to engage with real scientific knowledge when the chance comes up. People accept whatever “just so story” they feel comfortable with about the world, because they have been taught nothing about how discovery and exploration actually work.
Barbara van Schewick posted a really thoughtful analysis about how application-specific vs. application-agnostic discrimination directly affects innovation, looking at an actual example of a Silicon Valley startup. I think her points are right on, and I strongly support the rationale for resisting “application-specific” discrimination.
In fact, Barbara’s point is the key to the whole debate. The future of the internet requires that applications be able to be invented by anyone and made available to everyone, and information shared on the net by anyone to be accessible to anyone. That property is “under fire” today by Internet access providers, by nation states, and by others who wish to limit the Internet’s reach and capabilities. I wholeheartedly support her points and her proposal.
I think it’s important to remind us of a further point that is quite obvious to those who have participated in the Internet on a technical level, from its original design to the present, so I thought I’d write a bit more, focusing on the fact that the Internet was designed in a way that makes application-specific discrimination difficult. Barbara knows this, since her work has been derived from that observation, but policymakers not steeped in the Internet’s design may not. As we pursue the process of making such rules, it is important to remind all of us that such rules are synergistic with the Internet’s own code, reinforcing the fundamental strengths of the Internet.
So I ask the question here: what do we need from the “law” when the “code” was designed to do most of the job? Well, the issue here is about the evolution of the Internet’s “code” – the implementation of the original architecture. The Internet’s code will continue to make application-specific discrimination difficult as long as a key aspect of its original design is preserved: the transport portion of the network need not know the meaning of the bits being transported on any link. We struggled to make all our design decisions so that this would remain true. Barbara has made the case that this design choice is probably the most direct contribution to the success of the Internet as a platform for innovation.
My experience with both startups and large companies deciding to invest in building on general purpose platforms reinforces her point. Open platforms really stimulate innovation when it is clear that there is no risk of the platform being used as a point where the platform vendor can create uncertainty that affects a product’s ability to reach the market. This is especially true for network communications platforms, but was also true for operating systems platforms like Microsoft DOS and Windows, and hardware platforms like the Apple II and Macintosh in their early days. In their later days, there is a tendency for the entities that control the platform’s evolution to begin to compete with the innovators who have succeeded on the platform, and also to try to limit the opportunities of new entrants to the platform.
What makes the Internet different from an operating system, however, is that the Internet is not itself a product – it is a set of agreements and protocols that create a universal “utility” that runs on top of the broadest possible set of communications transport technologies, uniting them into a global framework that provides the ability for any application, product or service that involves communications to reach anyone on the globe who can gain access to a local part of the Internet.
The Internet is not owned by anyone (though the ISOC and ICANN and others play important governance roles). Its growth is participatory – anyone can extend it and get the benefits in exchange for contributing to extending it. So controlling the Internet as a whole is incredibly hard. However, certain areas of the Internet can be controlled in limited ways. In particular, given that local authorities tend to restrict the right to deploy fiber, and countries tend to restrict the right to transmit or receive radio signals, the first or last mile of the Internet is often a de facto monopoly, controlled by a small number of large companies. Those companies have incentives and the ability to do certain kinds of discrimination.
However, a key part of the Internet’s design, worth repeating over and over, is that the role of the network is solely to deliver bits from one user of the Internet to another. The meaning of those bits never, under any circumstances, needs to be known to anyone other than the source or the destination for the bits to be delivered properly. In fact, it is part of the specification of the Internet that the source’s bits are to be delivered to the destination unmodified and with “best efforts” (a technical term that doesn’t matter for this post).
In the early days of the Internet design, my officemate at MIT, Steven T. Kent, who is now far better known as one of the premier experts on secure and trustworthy systems, described how the Internet could in fact be designed so that all of the bits delivered from source to destination were encrypted with keys unknown to the intermediate nodes, and we jointly proposed that this be strongly promoted for all users of the Internet. While this proposal was not accepted – encryption was thought to be too expensive to require for every use – the protocol design of TCP and all the other standard protocols has carefully preserved the distinctions needed so that end-to-end encryption can be used. That forces the design not to depend in any way on the content: since encryption means that no one other than the source or destination can possibly understand the meaning of the bits, the network must be able to do a perfectly correct job without knowing it.
Similarly, while recommendations were made for standard “port numbers” to be associated with some common applications, the designers of the Internet recognized that they should not assign any semantic meaning to those port numbers that the network would require to do its job of delivering packets. Instead, we created a notion of labeling packets in their header for various options and handling types, including any prioritization that might be needed to do the job. This separation of functions in the design meant that the information needed for network delivery was always separate from application information.
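That separation of functions can be illustrated with a small sketch (in Python, and emphatically not real router code – the packet below is made up for the example): everything a forwarding element needs sits in the fixed IPv4 header, and the payload bytes are never examined.

```python
import struct

def forwarding_fields(packet: bytes) -> dict:
    """Extract the only fields a network element needs to deliver an
    IPv4 packet. The payload past the header is never looked at."""
    # The first 20 bytes are the fixed IPv4 header.
    (version_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, cksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", packet[:20])
    return {
        "dst": ".".join(str(b) for b in dst),  # where to deliver it
        "src": ".".join(str(b) for b in src),  # where it came from
        "tos": tos,   # the handling/priority label set by the endpoint
        "ttl": ttl,   # hop limit
    }

# An invented packet: header says 10.0.0.1 -> 192.0.2.7; the 8 payload
# bytes are opaque to the network and could belong to any application.
pkt = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 28, 0, 0, 64, 17, 0,
                  bytes([10, 0, 0, 1]), bytes([192, 0, 2, 7])) + b"\x00" * 8
print(forwarding_fields(pkt))
```

Any application, invented today or in thirty years, produces packets the network can deliver by reading exactly these fields.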
Why did we do this? We did not do it to prevent some future operator from controlling the network, but for a far more important reason – we were certain that we could not predict what applications would be invented. So it was prudent to make the network layer be able to run any kind of application, without having to change the network to provide the facilities needed (including prioritization, which would be specified by the application code running at the endpoints controlled by the users).
So here’s a point that tends to get lost in much of the policy debate at the FCC and elsewhere: the Internet’s design requires that the network be application agnostic as a matter of “code”. More importantly, because applications don’t have to tell the network of their existence, a network that follows the Internet standards cannot be application specific.
So why are we talking about this question at all, in the context of rules about the Open Internet at FCC? Well, it turns out that there are technologies out there that try to guess what applications generated particular packets, usually by relatively expensive add-on hardware that inspects every packet flowing through the network. Generically called “deep packet inspection” technologies and “smart firewall” technologies, they look at various properties of the packets between a source and destination, including the user data contents and port numbers, and make an inference about what the packet means. Statistically, given current popular applications, they can be pretty good at this. But they would be completely stymied by a new application they have never seen before, and also by encrypted data.
What’s most interesting about these technologies is that they are inherently unreliable, given the open design of the Internet, but they can be attractive for someone who wants to limit applications to a small known set, anyway. An access network that wants to charge extra for certain applications might be quite happy to block or to exclude any applications that generate packets its deep packet inspection technologies or smart firewall technologies cannot understand.
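A toy sketch of how such a classifier works makes the unreliability plain. The port table and payload signature below are illustrative inventions, not any real product’s rules:

```python
# A toy "deep packet inspection" classifier of the kind described above.
# Real products use far larger rule sets, but the failure mode is the same.
WELL_KNOWN_PORTS = {80: "web", 25: "email", 6881: "bittorrent"}

def classify(dst_port: int, payload: bytes) -> str:
    """Guess the application behind a packet from port number and
    payload bytes -- a statistical inference, not ground truth."""
    if payload[:4] == b"GET " or payload[:5] == b"POST ":
        return "web"                       # payload signature matched
    if dst_port in WELL_KNOWN_PORTS:
        return WELL_KNOWN_PORTS[dst_port]  # guess from the port convention
    return "unknown"                       # novel or encrypted traffic

# Familiar traffic classifies; encrypted bytes on an unlisted port do not.
print(classify(80, b"GET / HTTP/1.1"))    # a recognizable web request
print(classify(40000, b"\x8f\xa2\x17"))   # opaque bytes: no inference possible
```

A brand-new application, or any application running over encryption, lands in the “unknown” bucket – which is precisely what makes blocking “unknown” traffic so hostile to innovation.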
The support for such an idea is growing – allowing only very narrow sets of traffic through, and blocking everything else, including, by necessity, any novel or innovative applications. The gear to scan and block packets is now cheap enough, and the returns from charging application innovators for access to customers are thought to be incredibly high by many of those operators, who want a “piece of the pie”.
So here’s the thing: on the Internet, merely requiring those who offer Internet service to implement the Internet design as it was intended – without trying to assign meaning to the data content of the packets – would automatically be application agnostic.
In particular: We don’t need a complex rule defining “applications” in order to implement an application agnostic Internet. We have the basis of that rule – it’s in the “code” of the Internet. What we need from the “law” is merely a rule that says a network operator is not supposed to make routing decisions, packet delivery decisions, etc. based on contents of the packet. Only the source and destination addresses and the labels on the packet put there to tell the network about special handling, priority, etc. need to be understood by the network transport, and that is how things should stay, if we believe that Barbara is correct that only application-agnostic discrimination makes sense.
In other words, the rule would simply embody a statement of the “hourglass model” – that IP datagrams consist of an envelope that contains the information needed by the transport layer to deliver the packets, and that the contents of that envelope – the data itself, are private and to be delivered unchanged and unread to the destination. The other part of the hourglass model is that port numbers do not affect delivery – they merely tell the recipient which process is to receive the datagram, and have no other inherent meaning to the transport.
Such a rule would reinforce the actual Internet “code”, because that original design is under attack by access providers who claim that discrimination against applications is important. A major claim that has been made is that “network management” and “congestion control” require application-specific controls. That claim is false, but it is justified by complex hand-waving references to piracy and application-specific “hogging”. Upon examination, there is nothing specific about the applications that hog or the technologies used by pirates. Implementing policies that eliminate hogging or detect piracy does not require changes to the transport layer of the Internet.
There has been a long tradition in the IETF of preserving the application-agnostic nature of the Internet transport layer. It is often invoked by the shorthand phrase “violating the end-to-end argument”. That phrase was meaningful in the “old days”, but to some extent the reasons why it was important have been lost on the younger members of the IETF community, many of whom were not even born when the Internet was designed. They need reminding, too – there is a constant temptation to throw application-specific “features” into the network transport: from vendors of equipment, from representatives of network operators wanting to gain a handle to control competition against non-Internet providers, and from smart engineers wanting to make the Internet faster, better, and cheaper by questioning every aspect of the design. (This design tradition pushed designers to implement functions outside the network transport layer whenever possible, and to put only the most general and simple elements into the network to achieve the necessary goal. For example, network congestion control is managed by having the routers merely detect and signal the existence of congestion back to the edges of the network, where the sources can decide to re-route traffic and the traffic engineers can decide to modify the network’s hardware connectivity. This decision means that the only function needed in the network transport itself is application-agnostic: congestion detection and signalling.)
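The congestion-control example in that parenthetical can be sketched in a few lines. The router’s only job is to raise a flag; the additive-increase/multiplicative-decrease response (greatly simplified here from what TCP actually does) lives entirely at the sending endpoint:

```python
def adjust_rate(rate: float, congestion_signaled: bool) -> float:
    """End-host reaction to a congestion signal, AIMD-style.

    The network transport only detects and signals congestion; deciding
    how to respond is left to the edges -- an application-agnostic split.
    """
    if congestion_signaled:
        return rate / 2.0   # multiplicative decrease: back off sharply
    return rate + 1.0       # additive increase: probe for more capacity

# One round of each: the sender backs off when the network signals a
# full queue, then cautiously ramps back up.
rate = 8.0
rate = adjust_rate(rate, congestion_signaled=True)   # -> 4.0
rate = adjust_rate(rate, congestion_signaled=False)  # -> 5.0
print(rate)
```

Note that nothing in this loop depends on what application generated the traffic, which is exactly the point.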
So I really value Barbara’s contribution, reminding us of two things:
The law needs to be worked out in synergy with the fundamental design notion of the Internet, and I believe it can be a small bit of law at this point in time, because the Internet is supposed to be that way, by design. If the Internet heads in a bad direction through new designs, perhaps we might want to seek more protections for this generativity, which is so important to the world.
Note: My personal view is that the reason that this has become such an issue is that policymakers are trying to generalize from the Internet to a broad “network neutrality” for networks that are not the Internet, that don’t have the key design elements that the Internet has had from the start. For example, the telephone network’s design did not originally separate content from control – it used “in band signalling” to control routing of phone calls. To implement “neutrality” in the phone network would require actually struggling with the fact that certain sounds (2600 Hz, e.g.) were required to be observed by the network – in the user’s space. (this also led to security problems, but it was done to make “interconnect” between phone switches easy).
FCC Chair Genachowski released a statement today to announce that he will put forward a proposal to his fellow commissioners “Preserving a Free and Open Internet”, and it has already been reported on the NYTimes online.
It is possible to read Genachowski’s statement very, very carefully, and see a distinction between Internet and Broadband that *might* be there (or it might not). If it comes through in the rules, I will be happy, because the FCC at least got started on the right foot. For a while there it looked as if it was going to have two left feet (Broadband and Internet-as-Broadband), and we know where that leads.
I’m sure many will write that there are zillions of “loopholes” in that ruling allowing for all kinds of bad behavior by Broadband providers in the services they offer to customers. And I would tend to agree with those concerns. There is nothing that preserving a free and open internet can do, nothing at all, for many of the problems of telecom and information services in the broad sense of the term Broadband. But when those broadband providers offer a connection to the Internet, that is, when they offer the ability to access all of the services that are made available anywhere in the world from any other user or company that connects to the Internet, the rules the FCC proposes will apply.
This is a good thing, in my opinion. It’s not world peace. But it’s a good step forward.
But I have some concerns – mostly not the ones about details and loopholes. If you read the NYTimes article, they did not read Genachowski’s statement carefully, starting with the headline on the piece: “F.C.C. Chairman Outlines Broadband Framework” (fact checking, please?). A large part of this confusion arises because most people in the policy space confuse Broadband with Internet – as if they were equivalent terms. Apples are not appleseeds. Genachowski, so far, perhaps because he read the comments I and others have provided to the FCC very closely (I hope), uses both words, and he gets them right so far. However, I would think that an agency full of lawyers would know that defining terms is critical to making sense in an architecture or in a regulation. I’m not a lawyer, but I’ve been a systems architect for nearly 40 years, and slang and ambiguity will crash a system just as they will crash a law. (all you digerati who conflate bandwidth with information delivery rate, thinking it’s cool to be wrong, can tune out here).
This may not be the best regulatory framework for the Internet ever proposed, but at least Genachowski (unlike most of the folks in DC) uses the term Internet and the term Broadband right – in his statement. It sure would be easy if he wrote two clear definitions and made this point crystal clear, as follows:
Broadband is a category of general-purpose “wide area network” data networking systems that provides many services to each customer over a single, digital, switched high-bitrate medium such as fiber, coaxial cable, twisted copper pairs, or wideband digital radio links. There are two kinds of broadband systems available today – fixed and mobile – where mobile services are characterized by the ability of the user to move from place to place. Examples of broadband wide area networks are:
- Fixed: Verizon FiOS, Comcast’s hybrid-fiber-coax systems, fiber PON systems, and fixed digital wireless,
- Mobile: wireless LTE, wireless HSPA, and WiMax.
The Internet is an evolving family of protocols and interconnected networks that span the planet’s surface, its sea floor, and some number of miles into space, that are based on the transport of IP Datagrams over many diverse underlying networks based on many diverse networking technologies.
The Internet can be used over Broadband facilities, but it is the same Internet that is used over many other facilities. When a Broadband service provider provides “Internet access” to its users, it is important that this access service preserve the *essential* characteristic of the Internet – its openness and generality.
There are lots of details to sort out before it’s clear that Genachowski’s principles are even a good first step. I don’t love the weaknesses of Genachowski’s proposal, especially the exemption of Internet over Mobile Broadband from these principles. There I think he’s confused the high degree of innovation in wireless technology with some idea that the Internet over that technology will have to be different. We didn’t need to change the Internet when we moved it onto Fixed Broadband platforms. Why should we have to change anything about it as we move to run Internet connections over mobile broadband? The Internet was designed to work over any kind of network, and it works over mobile broadband platforms today! The exception is an invitation for the mobile industry to establish “facts on the ground” that falsely prove the Internet “doesn’t work”. I’ve seen that already, where bugs in HSPA deployment of Internet services by operators have been used to argue that the Internet “doesn’t work” on their networks. It would be wonderful if the FCC had a technical capability to investigate these questions honestly, rather than outsourcing its technical sources to engineers who are told that an “open mobile Internet” is “not in your employer’s or its customers’ best interest” (that’s a quote from an executive at a telecom equipment company).
However, there is some danger that when the actual proposed rules are written, the focus on the Internet will be lost. Certainly there are lots of advocates out there, even some who claim to support Neutrality, who would abandon any effort to keep the Internet open and neutral in order to gain some other political goal that sounds like “net neutrality”. For example, there are lots of worries about so-called “loopholes” in Genachowski’s rules that would allow Broadband service providers to sell different-speed Internet access products to different customers, where the faster speeds cost more money. I don’t understand this worry at all. It used to cost more to have a DSL phone line than an ISDN phone line, and more for ISDN than an ordinary POTS line. Each of these could be used to place a call to an Internet Service Provider, but the faster ones cost more. Other suggested loopholes include the idea that implementing an ability for users to pay to have some of their packets prioritized over others would be a huge problem – but to the extent that this facility already exists in the Internet as a standard, such a rule would be attempting to redesign how the Internet works.
The key distinction here is that letting the users, through their applications, choose what their packets do, where their packets go, and what priorities they need is quite different from having one link in the chain, the last-mile provider, be able to control what applications the user can effectively run, or constrain how well they run. It’s this edge-based freedom to choose to whom you speak, with what services you connect, how you speak, and what you can collaborate on that constitutes the freedom of the Internet, due to its openness. The last-mile access provider (Verizon, ATT, Cogent on the consumer side, or various large interconnects, such as Level 3, that support business users like eBay or small businesses) should have no say in deciding what Internet applications do inside their packets.
The main risk here is that a provider offers something it calls the Internet which in fact is highly non-standard and is not a standard Internet connection at all. That would be clear interference with the Internet, and if providers are required to be transparent, their deviations will be apparent, and relief can be sought (either by the user directly or by appeal to the regulator). By distinguishing the Internet from other Broadband offerings, we can at least clarify the discussion, and if there are specific guidelines keeping the Internet open – by making sure it is the same Internet all over the US and the world – then we have a basis for sensibly understanding what would destroy its value.
The key benefit of framing things this way is that the Internet connection sold to the user must provide the standard, commonly understood, consensus definition of the Internet service – the ability to address IP datagrams to any destination on the collective Internet, and have them delivered without the access provider interfering with that delivery for its own purposes, or doing anything other than delivering each datagram as requested, according to the standard understanding of how the Internet works.
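That consensus service model is simple enough to sketch in a few lines: an endpoint addresses a datagram to any destination, and the network’s only job is to deliver it unmodified. In this illustrative sketch the two loopback sockets stand in for any two hosts on the Internet, and the payload is arbitrary by design – the network neither knows nor cares what application produced it:

```python
import socket

# The "destination" endpoint: any host, any port. Binding to port 0 lets
# the OS pick a free port, which we then learn and address datagrams to.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
dest = receiver.getsockname()

# The "sender" endpoint addresses a datagram to that destination.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"any payload, any application", dest)

# The datagram arrives exactly as sent; no intermediary inspected or
# altered it based on what application produced it.
payload, addr = receiver.recvfrom(2048)

sender.close()
receiver.close()
```

Everything above the IP layer – what the bytes mean, which application sent them – is the business of the endpoints alone, which is exactly the property the framing in this paragraph is meant to protect.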
Will this make Television better? Maybe Television will be revolutionized when it is delivered over the Internet connection, compared to television delivered over the same Broadband provider’s Broadband network as a “special service”. It certainly will allow more people to distribute programs, whether from their homes or at scale. I don’t know for sure, but I think this is exactly what will happen. I *do* know how that will be prevented, though. It will be prevented by allowing the TV provider to select which parts of the Internet you can access over its Internet service, or by the TV provider offering a defective or overpriced Internet service, while at the same time maintaining a de facto (or even state-enforced) monopoly over access to string broadband cables to your home.
However, I prefer to be hopeful, rather than discouraged, that at least the FCC has made the baby step of recognizing the Internet as an independent thing, one worth protecting, and not merely a product of Comcast, Verizon, or ATT that is some kind of narrow, walled “Internet clone”. In other words, the kind of thing AOL claimed was “better than the Internet” because it was selective – selecting cheap or profitable content, and leaving the creativity outside Fortress AOL. Those companies didn’t create the Internet – the Internet was and is created by its users (including user organizations like Google, Apple, eBay, Amazon, banks, small businesses, and national/state governmental agencies, and individual users like all of the users of email, blogging tools, Skype, Facebook, and Twitter). The access providers whine about not being able to “monetize” the Internet like they “monetize” their Cable TV offerings, not mentioning that they *pay* for Cable TV shows, but pay nothing for the Internet that its users create for each other, share with each other, and sell to each other. The Internet has created a huge new economy, so I think if the Broadband guys merely connect us into it and keep growing their capacity and speeds, they will do a fine business, too, from what we are willing to pay them to do so.
Updated: 12/2/2010 to add a small clarification to the comments on prioritization.
Today a relatively large and diverse group of advocates for the Open Internet filed a statement with the FCC under their notice of proposed rulemaking entitled Further Inquiry into Two Under-developed Issues in the Open Internet Proceeding. Here’s why I signed on early to support this statement, and why I hope you will support it, too. Other advocates may have other reasons, but we share a common ground that led us to sign a joint declaration despite detailed differences.
I’ve stated often and repeatedly that the Internet was created to solve a very specific design challenge – creating a way to allow all computer-mediated communication to interoperate in any way that made sense, no matter what type of computer or what medium of communications (even homing pigeons have been discussed as potential transport media). The Open Internet was designed as the one communications framework to rule them all. Very much as vocalizations evolved into a universal human communications framework, or ink on paper evolved into a universal repository for human knowledge. That’s what we tried to create when we designed the Internet protocols and the resulting thing we call the Internet. The Internet is not the fiber, not the copper, not the switches and not the cellular networks that bear its signals. It is universal, and in order to be universal, it must be open.
However, the FCC historically organizes itself around “services”, which are tightly bound to particular technologies. Satellite systems are not “radio” and telephony over radio is not the same service as telephony over wires. While this structure has been made to work, it cannot work for the Internet, because the Internet is the first communications framework defined deliberately without reference to a particular technological medium or low-level transport. (The Internet architecture was designed from the very beginning to be able to absorb technological change in its implementation and also in its uses, so unanticipated innovations in technology and in new applications would not make its fundamental design obsolete.)
So it is historic and critical that this little spark of activity at the FCC, and the two companies, Google and Verizon, finally recognize the existence of “the Open Internet” as a living entity that is distinct from all of the services and the Bureaus, all of the underlying technologies, and all of the services into which the FCC historically has partitioned little fiefdoms of control, which it then can sell and regulate.
The Internet really is “one ring to rule them all” – a framework unto itself, one that cannot be measured against its “wirelessness” or its “terrestriality”. It has been a clean slate, not in terms of starting from scratch, but as a framework that collects and connects, rather than dividing, partitioning, disconnecting, and controlling.
It was carefully organized to incorporate innovations in transport of information, along with innovations in uses of such transport.
And having been recognized as something that is not merely a “threat to telephone monopolies” or a “threat to Cable TV” but as a thing unto itself, by the FCC, by Verizon, and by Google, we have an opportunity to fan that spark and encourage it.
What would happen if the FCC were to begin to recognize that all communications are to a large extent interchangeable? That one does not need to impose emergency services on AM and FM Radio Broadcasters that are architecturally distinct and unconnected to 911 on telephone systems and E911 on cellular phone systems? That there really is a competition between cellular and landline audio communications, and that cable-TV based video conferencing is just another form of person-to-person communications that can link a smartphone to a living room TV or a laptop?
This recognition, this spark, could be the beginning of rethinking the FCC’s mission to be a mission of unifying and improving the way we in America communicate digitally, to embrace innovation rather than stagnation, to embrace enhancing our civic culture rather than dividing and selling it to corporations in pieces that don’t play well together. It starts with the recognition that there is a value to an Open Internet (what we call the Public Internet, sometimes, today). And that that Open Internet is not “owned” by anyone, but is instead collectively created by many innovators at the edges, contributing services, content, communicating among themselves, and sharing a common culture across traditional jurisdictions, language boundaries, etc. In other words, the Open Internet is not a closed “service platform” or a “walled garden”, but an open interchange that crosses cultures, languages, and other traditional barriers. It would be sad if ATT, Verizon, Comcast, Google, or any other corporation were deemed to have the right to “own” your participation in the Internet, or to decide which tiny subset of content, which tiny part of the world you are paying to communicate with.
What would have happened if, to use the English language or to read books in English, you had needed to get an account with the corporate owner of the English language? When you say that ATT’s “culture” is distinct from Verizon’s “culture” – as if a “culture” were a “bookstore” that chooses which books to carry – that is the result. And until now, that is what the FCC has said the Internet was – whatever ATT offers to its customers over its pipes may not be whatever Verizon offers to its customers over its pipes. The Internet is not a private bookstore, but until this proceeding, the FCC had not acknowledged that the Internet was anything more than a minor service type, subject to modification and interpretation by those who merely provide an access connection, yet invest little or nothing in creating the rich culture and conversation that we users around the world construct for ourselves. That is what “open” means – the Internet fostering connection among us all.
This statement to the FCC doesn’t say this quite so explicitly. But it celebrates the spark that the FCC has ignited. Let’s keep that spark burning, kindle it, and recognize the gift of fire that is the Open Internet.
Due to a complex double failure, old posts were lost about 9 months ago, and while I’ve reconstructed a couple from network caches, I’ve given up on efforts to reconstruct old blog entries from remnants.
However, I will be posting new material, largely focused on ideas I’m exploring outside of “work” (I’m now at SAP), that some might find interesting, and perhaps a few postings on Internet and radio architecture and policy matters.