Chapter 4: Languages
Jon Postel was one of the engineers working on the ARPANET, the precursor to the internet. He wanted to make sure that the packets—or “datagrams”—being shuttled around the network were delivered in the most efficient way. He came to realise that a lax approach to errors was crucial to effective packet switching.
If a node on the network receives a datagram that has errors but is still understandable, then the packet should be processed anyway. Conversely, every node on the network should attempt to send well‐formed packets. This line of thinking was enshrined in the Robustness Principle, also known as Postel’s Law:
Be conservative in what you send; be liberal in what you accept.
If that sounds familiar, it’s because that’s the way that web browsers deal with HTML and CSS. Even if there are errors in the HTML or CSS, the browser will still attempt to process the information, skipping over any pieces that it can’t parse.
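As a sketch of that forgiveness (this markup is deliberately broken): a browser given the following will still render all of the text, quietly recovering from the unclosed tag and skipping only the declaration it cannot parse.

```html
<!-- The first paragraph never gets its closing tag,
     yet a browser renders both paragraphs anyway. -->
<p>This paragraph is missing its closing tag
<p>This paragraph still appears.</p>

<style>
  p {
    colr: red;   /* misspelt property: this declaration is skipped… */
    color: blue; /* …but this valid one still applies */
  }
</style>
```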
HTML and CSS are both examples of declarative languages. An author writing in a declarative language describes a desired outcome without providing step‐by‐step instructions to the computer processing the file. With HTML, you can describe the nature of the content—paragraphs, headings, form fields, etc.—without having to explain exactly what the browser should do with that information. With CSS, you can describe the desired appearance of the content—colours, borders, etc.—without having to write a program to apply those styles.
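A single CSS rule illustrates the point: you declare the outcome, and the browser works out how to achieve it (the class name here is illustrative).

```css
/* Describe what a warning should look like,
   not the steps for painting it to the screen. */
.warning {
  color: red;
  border: 1px dotted red;
}
```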
Most programming languages are not declarative; they are imperative. Perl, Java, C++… these are all examples of imperative languages. If you’re writing in one of those languages, you must provide precise instructions to the computer interpreting your code.
Imperative languages provide you with more power and precision than declarative languages. That comes at a price. Imperative languages tend to be harder to learn than declarative languages. It’s also harder to apply Postel’s Law to imperative languages. If you make a single mistake—one misplaced comma or semi‐colon—the entire program may fail. A misspelt tag in HTML or a missing curly brace in CSS can also cause headaches, but imperative programs must be well‐formed or they won’t run at all.
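A small sketch of that fragility in JavaScript (the function and variable names are invented for illustration): a single misspelt identifier aborts everything that follows it.

```javascript
// One misplaced identifier is fatal: execution stops at the error
// and nothing after it runs.
function fragile() {
  const results = [];
  results.push('first step ran');
  misspelledVariable.doSomething(); // a single typo…
  results.push('this line is never reached');
  return results;
}

let outcome;
try {
  fragile();
} catch (err) {
  // …and the whole routine fails, not just the broken line.
  outcome = 'whole routine failed: ' + err.name;
}
console.log(outcome); // "whole routine failed: ReferenceError"
```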
Imperative languages such as PHP, Ruby, and Python can be found on the servers powering the World Wide Web, reading and writing database records, processing input, and running complex algorithms. You can choose just about any programming language you want when writing server‐side code. Unlike the unknowability of the end user’s web browser, you have control over your server’s capabilities.
The idea of executing a program from within a web page is as old as the web itself. Here’s an early email to the www‐talk mailing list:
I would like to know, whether anybody has extended WWW such, that it is possible to start arbitrary programs by hitting a button in a WWW browser.
Tim Berners‐Lee, creator of the World Wide Web, responded:
Very good question. The problem is that of programming language. You need something really powerful, but at the same time ubiquitous. Remember a facet of the web is universal readership. There is no universal interpreted programming language.
That was in 1992. The universal interpreted programming language finally arrived in 1996. It was written in ten days by a programmer at Netscape named Brendan Eich.
Patterns of progress
Swapping out images when someone hovers their cursor over a link might not seem like a sensible use of a brand new programming language, but back in the nineties there was no other way of creating hover effects. Today the same effect is handled by the :hover pseudo‐class in CSS. You can validate form fields using the TYPE attribute in HTML.
That’s a pattern that repeats again and again: a solution is created in an imperative language and if it’s popular enough, it migrates to a declarative language over time. When a feature is available in a declarative language, not only is it easier to write, it’s also more robust.
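The hover effect is a neat miniature of that migration. What once required an imperative image‐swapping script now takes one declarative rule (the selector and file name here are illustrative).

```css
/* Once a JavaScript image rollover; now a single declarative rule.
   If a browser doesn't understand it, the link still works. */
a.nav-link:hover {
  background-image: url("button-over.png");
}
```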
Ethan Zuckerman wrote the code for the first pop‐up ad. Twenty years later, he wrote:
I wrote the code to launch the window and run an ad in it. I’m sorry.
Pop‐up (and pop‐under) windows became so intolerable that browsers had to provide people with a means to block them.
Web designers would do well to remember what the advertising industry chose to ignore: on the web, the user has the final say.
Whatever its exact meaning, the term Web 2.0 captured a mood and a feeling. Everything was going to be different now. The old ways of thinking about building for the web could be cast aside. Treating the web as a limitless collection of hyperlinked documents was passé. The age of web apps was at hand.
In the 1964 Supreme Court case Jacobellis v. Ohio, Justice Potter Stewart provided this definition of obscenity:
I know it when I see it.
The same could be said for Web 2.0, or for the term “web app.” We can all point to examples of web apps, but it’s trickier to provide a definition for the term. Web apps allow people to create, edit, and delete content. But these tasks were common long before web apps arrived. People could fill in forms and submit them to a web server for processing. Ajax removed the necessity for that round trip to the server.
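The shape of that Ajax exchange can be sketched in a few lines (the endpoint and the stubbed transport below are invented for illustration): only data travels over the network, and the page itself stays put.

```javascript
// Ajax in its modern form: send data, receive data, no page reload.
// The URL is a hypothetical endpoint, not a real service.
async function postComment(text, fetchFn) {
  const response = await fetchFn('/comments', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text }),
  });
  return response.json(); // the browser never leaves the page
}

// A stubbed transport shows the shape of the exchange without a server.
const fakeFetch = async (url, opts) => ({
  json: async () => ({ saved: JSON.parse(opts.body).text }),
});

postComment('Hello', fakeFetch).then(result => console.log(result.saved)); // "Hello"
```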
Perhaps the definition of a web app requires some circular reasoning: a web app is a website that requires JavaScript to work.
HTML’s loose error‐handling allowed it to grow in power over time. It also ensured that the language was easy to learn. Even if you made a mistake, the browser’s implementation of Postel’s Law ensured that you’d still get a result. Surprisingly, there was an attempt to remove this superpower from HTML.
After the standardisation of HTML version 4 in 1999, the World Wide Web Consortium published XHTML 1.0. This reformulated HTML according to the rules of the XML data format. Whereas HTML can have uppercase or lowercase tag names and attributes, XML requires them to be all lowercase. There were some other differences: all attributes had to be quoted, and standalone elements like BR required a closing slash.
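The difference was purely syntactic. Here is the same content written twice: first as forgiving HTML, then as XHTML 1.0 would demand it.

```html
<!-- Acceptable HTML: uppercase tags, an unquoted attribute, a bare BR -->
<P ALIGN=center>One line<BR>another line

<!-- The XHTML 1.0 equivalent: lowercase, quoted, and self-closed -->
<p align="center">One line<br />another line</p>
```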
XHTML 1.0 didn’t add any new features to the language. It was simply a stricter way of writing markup. XHTML 2.0 was a different proposition. Not only would it remove established elements like IMG, it would also implement XML’s draconian error‐handling model. If there is a single error in an XML document—one unquoted attribute or missing closing slash—then the parser should stop immediately and refuse to render anything.
XHTML 2.0 died on the vine. Its theoretical purity was roundly rejected by the people who actually made websites for a living. Web designers rightly refused to publish in a format that would fail completely instead of trying to recover from an error.
Many of the problems that can befall code on the web—a file failing to download, a feature going unsupported, a single typo halting execution—would also affect HTML and CSS files, but because of Postel’s Law, they can recover gracefully.
Web designers who ignored the message of John Allsopp’s A Dao of Web Design made the mistake of treating the web like print. The history of print has much to offer—hierarchy, typography, colour theory—but the web is a fundamentally different medium. The history of software development also has much to offer—architecture, testing, process—but again, the web remains its own medium.
It’s tempting to apply the knowledge and lessons from another medium to the web. But it is more structurally honest to uncover the web’s own unique strengths and weaknesses.
The language we use can subtly influence our thinking. In his book Metaphors We Live By, George Lakoff highlights the dangers of political language. Obvious examples are “friendly fire” and “collateral damage”, but a more insidious example is “tax relief”—before a debate has even begun, taxation has been framed as something requiring relief.
On the face of it, the term “web platform” seems harmless. Describing the web as a platform puts it on par with other software environments. Flash was a platform. Android is a platform. iOS is a platform. But the web is not a platform. The whole point of the web is that it is cross‐platform.
A platform provides a controlled runtime environment for software. As long as the user has that runtime environment, you can be confident that they will get exactly what you’ve designed. If you build an iOS app and someone has an iOS device, you know that they will get 100% of your software. But if you build an iOS app and someone has an Android device, they will get 0% of your software. You can’t install an iOS app on an Android device. It’s all or nothing.
The web isn’t as binary as that. If you build something using web technologies, and someone visits with a web browser, you can’t be sure how many of the web technologies will be supported. It probably won’t be 100%. But it’s also unlikely to be 0%. Some people will visit with iOS devices. Others will visit with Android devices. Some people will get 80% or 90% of what you’ve designed. Others will get just 20%, 30%, or 50%. The web isn’t a platform. It’s a continuum.
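That continuum can even be expressed in code. A sketch of capability checking (the helper function and the stand‐in visitor objects are invented; the feature names are real Web APIs): rather than assuming all or nothing, you test what each visitor actually has.

```javascript
// Each visitor's browser may implement a different subset of features.
// In a real page you would pass `window`; the objects at the bottom are
// hypothetical stand-ins modelling visitors at different points on the
// continuum.
function supportLevel(globalObj) {
  const checks = [
    ['fetch', typeof globalObj.fetch === 'function'],
    ['IntersectionObserver', typeof globalObj.IntersectionObserver === 'function'],
    ['localStorage', typeof globalObj.localStorage === 'object' && globalObj.localStorage !== null],
  ];
  const supported = checks.filter(([, ok]) => ok).map(([name]) => name);
  return {
    supported,
    percentage: Math.round((supported.length / checks.length) * 100),
  };
}

const olderBrowser = supportLevel({ localStorage: {} });
const newerBrowser = supportLevel({ localStorage: {}, fetch: () => {} });
console.log(olderBrowser.percentage, newerBrowser.percentage); // 33 67
```

Neither visitor gets 100%, and neither gets 0% — each gets whatever their browser supports, which is exactly the point.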
Thinking of the web as a platform is a category error. A platform like Flash, iOS, or Android provides stability and certainty, but only under a very specific set of circumstances—your software must be accessed with the right platform‐specific runtime environment. The web provides no such certainty, but it also doesn’t restrict the possible runtime environments.
Platforms are controlled and predictable. The World Wide Web is chaotic and unpredictable.
The web is a hot mess.