What we didn't learn from the past

Authors
  • Chris Armstrong

I started writing this some time ago, but the subject was still bouncing around in my head.

The HN debate around this article on Plan 9 namespaces was interesting to me, because I saw the parallels with the way older technologies are spruiked as having lessons for us that we forgot when designing modern software.

These stories have a way of capturing a nostalgic part of the zeitgeist, running on a formula that roughly translates to:

there was this fork in the road with technology. We could have taken the simpler path to richer and more reusable software, but we went down the road of complicated and unmaintained hacks...

This carries an implicit message: how nice it would be if we had taken the right turn; shame on us, we (as software developers) never learn; if only we could turn back and start again.

I think this wistful perspective on software hides a conceited idea of old technologies, because it only focuses on the lessons we (think we) should have learnt. It narrowly prioritises simplicity: as humans we look first to simple narratives to explain the difficult and non-obvious. It ignores why technology evolves a certain way, and more specifically, the dangers of overly generic and austere interfaces in constructing complex software systems.

What we forgot

In the case of Plan 9 namespaces, it's the way they build on Plan 9's idea of "everything is a file": each process gets its own filesystem namespace, a simple primitive that naturally enables a whole set of use cases through composition, like remote desktop, seamlessly grafting network resources into your workspace, or easy containerisation.

The benefit of composability is two-fold: not only is there less API surface to learn, but we can also reuse more, reducing the duplication of core functionality and the amount of code to maintain (and compile, test and deploy).

There is a similar argument being made with HTMX: that we forgot what REST and Hypermedia are, and how they enabled us to build a simpler web that is easier to maintain through primitives in HTML and HTTP and the ecosystem of technology used to make it fast and scalable.1

The idea is that React and similar frameworks are overkill for what most applications need, and not only is all this JavaScript making pages slow, it's bogging down developers with complex build tooling and bloated design patterns. To make matters worse, developers are doubling down on React by introducing more components (and failure modes) with Server-Side Rendering (SSR) and React Server Components, instead of questioning the necessity of all this complexity to serve a web page2.

Recovering the past

Everything is a file

In the case of Plan 9, a lot of emphasis is put on what we have adapted from research operating systems like it, such as Unicode and procfs, and this is a good way of thinking about it. Plan 9 was an experiment and an avenue of research, and while it didn't itself become the baseline for the technology we have today, we learned a lot from it and recreated some of those technologies where they made sense, in ways that suited today's technology and ways of working.

Because it was useful, I think as developers we can come unstuck thinking that Plan 9 was a failure and that we need to go back: it's easy to look at projects like it, with their rich legacy of ideas, and wonder what else we should have adapted (and hopefully ponder why we didn't).

For namespaces and "everything-is-a-file", the composable interface starts to break down when we try to cram the interface we need through the interface we are given (such as a file handle), and we lose the strictness that a more holistically designed interface gives us.

When everything is an instance of the same abstraction, our interfaces rely on convention. So if everything is a file, we need to know which paths to open and how to structure the records we write to those files to enable the right behaviour.3 Developers still need to build libraries and layers of code above all that to make the underlying interface easier to use, more understandable and "documented".
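To make that concrete, here is a minimal sketch (TypeScript on Node, purely illustrative) of programming against a file-shaped interface; the path and command format are hypothetical, loosely modelled on Plan 9-style window control files, and every rule about them lives in convention rather than in types:

```typescript
// A hedged sketch: the control-file path and the "resize" command syntax are
// invented for illustration, in the style of Plan 9 control files.
import { open } from "node:fs/promises";

async function resizeWindow(width: number, height: number): Promise<void> {
  // Nothing in the type system says this path exists, that it accepts a
  // "resize" command, or that the arguments are space-separated decimals --
  // you have to know the convention (or read the man page).
  const ctl = await open("/mnt/wsys/wctl", "w");
  try {
    await ctl.write(`resize -dx ${width} -dy ${height}\n`);
  } finally {
    await ctl.close();
  }
}

resizeWindow(800, 600).catch(console.error);
```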

As a convention, it's already a miracle that the Linux filesystem layout has held together: could you imagine if you had to program against loosely defined rules that varied a little between operating systems, each with slightly different names and things appearing in different places, because some application developers thought moving things around or tweaking flags here and there made sense?4

Developers deal with this by introducing well-specified, strongly typed interfaces at the language layer, which work well with their development tooling and leave no ambiguity for the other developers on their team. Across network boundaries, API primitives like HTTP methods and JSON provide reusability, while OpenAPI and JSON-Schema let us be exact about how they should be used and what they do.5
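As a contrast, here is a hedged sketch of the same operation behind a typed, self-describing contract; the ResizeRequest type, the schema and the URL are invented for illustration, not taken from any particular API:

```typescript
// The constraints now live in the type system and in a machine-readable
// schema (JSON Schema here), rather than in folklore about a file's format.
export interface ResizeRequest {
  width: number;  // pixels
  height: number; // pixels
}

export const resizeRequestSchema = {
  type: "object",
  required: ["width", "height"],
  properties: {
    width: { type: "integer", minimum: 1 },
    height: { type: "integer", minimum: 1 },
  },
  additionalProperties: false,
} as const;

export async function resizeWindow(req: ResizeRequest): Promise<void> {
  // An HTTP method plus a schema-validated JSON body: tooling, not
  // convention, tells the next developer what is allowed here.
  await fetch("https://example.invalid/windows/current/resize", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(req),
  });
}
```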

Everything is an interactive control

With HTMX, it's the idea that hypermedia was forgotten (and the evolution of the web with it), in favour of rich but complex thick-client applications enabled by JavaScript. Developers mistakenly tried to decouple the server and the client through JSON or GraphQL APIs, but made the web slower and more complex instead.

For all but the most sophisticated applications, the idea is that we should first build straightforward web pages, then progressively enhance them with Web 2.0-style interactivity, where the server returns HTML partials directly (i.e. the client and server are coupled) and part of the page is replaced.

Because HTML's interactivity sort of stopped somewhere in the 90s (only forms can submit data, and only via GET or POST; hyperlinks replace the whole page, etc.), HTMX acts as a straightforward JavaScript library built around a new set of attributes that can be attached to any element to specify its behaviour.

These attributes, prefixed with hx-, specify which HTML element gets replaced, how it is replaced (e.g. the entire body or just part of the DOM), how the browser history is affected, which HTTP method to use, what event triggers the interactivity, and so on.

HTMX's generality and the low-level nature of its primitives also have the same problem as "everything-as-a-file": they can be applied anywhere, are exceedingly flexible, and usually more than a couple need to be combined to achieve common patterns.

For example:

  • routing requires hx-target to specify which HTML element is swapped out with new content, and probably hx-push-url to update the browser history6.
  • updating a component in place involves the same again, plus hx-swap (are you swapping content out or appending?) and hx-select (if your server returns more than you need and you want to pick out a child element). You may also need hx-ext (do you want a DOM-morphing extension to preserve state?).
  • hx- attributes can be inherited7, which can complicate reasoning about them and how they apply (and what needs to be overridden)
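Putting a few of those together, here is a hedged sketch of the markup a simple "load a detail panel" interaction ends up carrying. The endpoint and element ids are hypothetical; the hx- attributes themselves are real HTMX attributes:

```typescript
// A sketch only: the fragment sits in a template string purely to keep the
// example in one language; in practice it would live in your page or template.
//
// hx-get:      the HTTP method and URL for the request
// hx-target:   which element receives the returned fragment
// hx-swap:     how the fragment is inserted (replace children, append, ...)
// hx-push-url: whether to push a new entry onto the browser history
// hx-trigger:  which DOM event fires the request
export const contactPanel = /* html */ `
  <button
    hx-get="/contacts/42"
    hx-target="#contact-detail"
    hx-swap="innerHTML"
    hx-push-url="true"
    hx-trigger="click"
  >
    View contact
  </button>
  <div id="contact-detail"></div>
`;
```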

The attractiveness of such a model is clear - no more useState() and useEffect() hooks sprinkled through my code. I think they're onto something, but (as always) the devil is in the practice of it.

The first thing to notice is that the server is now doing more of the heavy lifting. The pitch is that you can use any language on the server, but now we're restricted to whatever that language offers for managing our HTML rendering.

First consider how your server is going to render these "partial" HTML fragments. The stock answer tends to be HTML templating frameworks, which are glorified string concatenators with slightly friendlier syntax (think modern incarnations like LiquidJS or Handlebars, but PHP and JSP are similar precursors). These can be manageable for relatively simple requirements, but can quickly degenerate as your system grows in complexity.

Even if your templating supports some crude partial imports, you'll usually have two sets of template files (the full-page ones and the partials), which you'd better co-locate if you want to find them again in your codebase. Your server is now also checking headers to see if it should render a full or partial HTML page, or adding new endpoints for partials. Those templates probably aren't unit-testable either (you'd want to simulate the DOM to do that with confidence - may as well check everything in the browser!).8
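A hedged sketch of what that looks like on the server (Express and TypeScript here; the route, data and markup are hypothetical, and the "templating" is deliberately nothing fancier than string interpolation):

```typescript
import express from "express";

const app = express();

// Two renderers for the same content: the fragment HTMX swaps in, and the
// full page for direct navigation. Note this is unescaped string
// interpolation -- exactly the glorified concatenation described above.
const renderContactPartial = (id: string) =>
  `<div id="contact-detail"><h2>Contact ${id}</h2></div>`;

const renderFullPage = (id: string) =>
  `<!doctype html>
   <html><body><main>${renderContactPartial(id)}</main></body></html>`;

app.get("/contacts/:id", (req, res) => {
  // HTMX marks its requests with an HX-Request header, so the endpoint has
  // to decide whether to return a fragment or the whole page.
  if (req.get("HX-Request") === "true") {
    res.send(renderContactPartial(req.params.id));
  } else {
    res.send(renderFullPage(req.params.id));
  }
});

app.listen(3000);
```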

Developers then, like now, specialised in either the frontend or the backend, so it was the rare few who understood how this heterogeneous stack fitted together and how things should be organised. That wasn't so bad for smaller projects where most people worked across the stack and understood all of it, but anyone who was there circa 2010 will remember how difficult it was to maintain order in large projects with dozens of developers, and how difficult refactors could be when your pages and their parts were distributed across multiple files not linked together in any obvious way (there is no typechecker for string concatenation).

Reasoning about the past (future)

As annoying as React might be, it introduced many developers to new9 ideas that improved our ability to more safely deliver interactive HTML:

  1. Our HTML could be properly componentised, and the different ways it was generated were co-located
  2. HTML was not expressed as a type-unsafe string, but as a data structure (e.g. through JSX) we could test against
  3. By expressing our HTML as a data structure, we could generalise state changes by simply re-rendering each state and letting an engine update the DOM for us
  4. Updating multiple parts of the page at the same time in response to the same data change is naturally expressed in the language and can be unit tested
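To ground points 2 to 4, a minimal sketch; the component and props are invented, but the moving parts (JSX, a render-to-string call) are ordinary React:

```tsx
import * as React from "react";
import { renderToStaticMarkup } from "react-dom/server";

// The JSX below compiles to a plain data structure, not a string, so the
// markup is a value we can render, diff and test.
function ContactCard({ name, online }: { name: string; online: boolean }) {
  // The heading and the badge are both derived from the same props: change
  // the state, re-render, and the engine works out which DOM nodes to touch.
  return (
    <section>
      <h2>{name}</h2>
      <span>{online ? "online" : "offline"}</span>
    </section>
  );
}

// Unit-testable without a browser: render the value and assert on it.
console.assert(
  renderToStaticMarkup(<ContactCard name="Ada" online={false} />).includes("offline")
);
```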

Although what gets adopted by developers is typically social and mimetic10, React promised (and delivered on) freeing a generation of developers who were still living with the pain of trying to develop interactive applications with DOM-manipulators like jQuery. Finally JavaScript and HTML could be mixed together in a coherent & testable way, which was attractive enough to senior developers (so anyone arriving later in the industry was swept along with their choices).

HTMX at least tries to improve on the horrible situation where JavaScript and HTML and other code would be mixed in the same file, but it pretty much leaves it to developers to figure out the server side, where the state of the art for the past 25 years11 has been String.concat() (or, if you're lucky, high-quality string interpolation). The exception is JavaScript/TypeScript, which has a rich ecosystem of server-side rendering libraries designed with rich component and update models.

The irony is that most boring, safe, server-side languages have always had rich component-based libraries for GUIs (C++, Python, Java/Kotlin, Swift, Ruby, PHP), but this thinking never translated to their use on the web.12

Where this leaves us

I'm actually ambivalent about the legacy of dead-ends in software research. I've always been fascinated by what never caught on, and what the lessons of that history were.13

I think many of the lessons of old operating systems, lost frameworks, forgotten languages and deprecated networking paradigms have carried forward, and are occasionally reborn in other ways. I think that we also hold on too much to bad ways of working and coding14 without reflection.

But I also think we need to be careful about relitigating the arguments from old technology, without considering the social, business or technical reasons why they didn't work at the time, and why they may (still) not work now.

Footnotes

  1. The essential idea behind web scalability is that the server is much faster when it is simply serving up HTML that can be cached on CDNs, and that the selection and querying of the data to display happens in a database right next to the server.

  2. Notice the shift in language between "web application" and "page" - what was once thought of as a relatively static page, enhanced with hyperlinks and simple forms, has evolved into an application with rich functionality and interactivity.

  3. For example, in Plan 9, the graphical user interface is exposed as a set of special driver files which are polled (for events) or written to (to create windows); however, without a client-side library, it relies on convention, and (probably) a lot of runtime data validation.

  4. Of course I'm kidding: systems programmers targeting cross-UNIX compatibility do this all the time with POSIX.

  5. I agree that a lot of the web still leaves much to convention, but it's usually solidified at the language layer, where static validation or a testable specification is possible. Plan 9 never did the same thing for filesystem layouts, but if it had, we might have come up with another strong API convention for transparent cross-process and cross-network communication.

  6. HTMX covers this obvious combination of routing & history stack with hx-boost

  7. this was covered in this critique

  8. Tongue-in-cheek, but in-browser testing is the most "bang-for-buck" way to validate your frontend works correctly

  9. Old ideas really - Elm was doing this around the same time, and the virtual DOM originated in an older internal Facebook framework, which was inspired by an older spec for JS that went nowhere

  10. i.e. developers choose what other developers use and talk about (e.g. Facebook's developers on Twitter circa 2015), not what is "objectively" better for their requirements from a smörgåsbord of options

  11. JSP was born in 1999, and server-side rendering using templates hasn't got much better in newer frameworks.

  12. Not that a straight lift of Java Swing or Qt is the right idea (some of us still remember JavaServer Faces). I don't think all server-side languages are this bad: there are pockets of other languages' ecosystems which have tried to do better. For example, the Rust ecosystem is sufficiently outward-looking to borrow ideas from other languages (this article has good coverage of "inspired" designs). Another to look at is the OCaml ecosystem (see the overkill type-safety of tyxml, or the simpler dream-html, which links to embedded HTML DSLs in other language ecosystems).

  13. I haven't discussed BeOS, OpenSTEP, OLE, Smalltalk, or even my own minor involvement with the now defunct EtoileOS.

  14. I'll spare you a diatribe on Object-Oriented Programming; enough has been written about it since it began