A Brief Summary of Late-90s Web Technology

[Diagram: Essential Architecture of Sidewalk (archive link)]

Sidewalk was a system designed to provide personalized local information to users in several major US cities and select international markets (Sydney actually launched). This is a summary of how it was built.

Development for Sidewalk can reasonably be considered to have begun on August 24, 1995. That was the day I accepted the job as development manager; I was the first development hire, and the date is easy to remember because it was Windows 95 launch day :). The project's code name was "Cityscape," but I'll call it Sidewalk throughout, even though that name was chosen much later.

When I started there were some really great prototypes. I'll call them code-free because they were intended to give you a sense of the experience we wanted to create without actually building the technology; IIRC they were done with Macromedia tools (Director, or possibly an early Flash). The vision was really compelling. I don't want to dwell too much on the details of the experience, because that would be an article in and of itself, but there were many technical decisions to be made based on what was desired. In considering them, it's helpful to remember that it was 1995, and a lot of what might seem obvious now was much less obvious then.

Question: Where to store the data?

A great deal of thought went into this, and while we settled on what might now be considered the obvious choice, SQL Server, it was anything but obvious to us. For context, consider that this project was happening in the Consumer Division: we were quite used to shipping CDs full of content with custom software and had little experience with production databases at all. MS SQL Server was hardly a symbol of excellence and reliability at the time; generally it barely hobbled along.

We considered the FoxPro database engine (and although we did not go with FoxPro, that team probably provided the most value of any in terms of education and helping us understand the considerations), the Access database engine (JET Red), and its less well known variant (JET Blue), which ultimately powered Exchange Server. JET Blue was not ready for additional customers, and none of the others were multi-threaded at the time, which made SQL Server the obvious choice. We also knew we were going to want a replication story, and SQL Server promised one. So that's what we went with.

Question: What form should the delivered content take?

Again we explored a lot of choices. This was the world of walled-garden online services, and one not-so-crazy option was to incorporate all of this into the budding MSN service and deliver the content with custom MSN applications like the rest of MSN. We looked at many MSN options, and we looked at the Blackbird online system, which, again IIRC, by then looked more likely to wind down than not. We also looked very seriously at generating PDF dynamically to get the typesetting and layout quality we wanted, but that proved very uneconomical. Ultimately we made what would now be considered the obvious choice: HTML. But it was not so obvious then: general font support was a long way off, many browsers didn't support tables, much less anything you would consider general CSS layout, and JavaScript was, well, weak. Still, that seemed like the most likely way to succeed, so that's what we did.

Question: How to compute the HTML?

It was soon clear that we would want personalized content, and that meant dynamically rendering literally every page in the system. This was simply not done in 1995/1996. Virtually everything but the search engines was pre-built HTML, and most of it was script-free. Most of it would barely be recognizable as web pages now. There were scripting languages used to generate web pages, but even ASP was not yet a real thing, and they all seemed far too slow for what we were going to need. The only thing that had a chance of being fast enough was native code via ISAPI, so that's what we selected. It's possible this choice was popular because the dev manager had just left the Visual C++ team and was not scared of native languages :D

So with these big rocks in place, now what?

Sourcing Information

We needed a source of information about local businesses, and we needed to ensure the listings were geocoded, or at least geocodable. It seemed like ABI had the only decent source of this information at the time, or at least the only one you could reasonably buy. It would turn out that most of the geocoding was crap, and virtually everything had to be recoded from street addresses using map data from Navteq to get anything that looked normal and aligned with the maps we could make.

We needed the ability to attach editorials to any business, as this was to be a deeply editorial product, as well as to write standalone stories. These needs kickstarted our "Entity Editor," aka Entitor, which was to become the chief content creation tool. It ultimately subsumed the other page layout tool we were using, "Shadow" (it was overshadowed by Entitor?). In any case, data was considered THE asset from the beginning, and we went through many rounds of Entitor implementations to create what we needed. We started with simple Access applications and ended with a full-on VB client, and then VB plus MFC extensions.

Among the things Entitor had to do was let users create story fragments for use in various places in the product. Local HTML controls were not a thing in 1995/1996 (and by now it was well into 1996), but RTF controls were, so we had a system for losslessly converting our HTML subset ("CSML," for Cityscape Markup Language) into RTF and back. This is one of the few bits of code I contributed directly. Entitor was well set up for creating new entity subtypes from the master entity table, and there were a lot of subtypes…
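The conversion had to be lossless so editors could round-trip fragments without damage. Here is a toy sketch of that idea in modern Python, handling only a single bold tag; the real CSML subset and the real converter were far richer, and everything here (tag names, control words) is purely illustrative:

```python
# Illustrative sketch, not the original code: round-trip a tiny markup
# subset through RTF-style control words without losing information.

def csml_to_rtf(text: str) -> str:
    """Map a bold-only CSML fragment to RTF control words."""
    return text.replace("<b>", r"{\b ").replace("</b>", "}")

def rtf_to_csml(text: str) -> str:
    """Invert the mapping exactly, so a round trip is lossless."""
    return text.replace(r"{\b ", "<b>").replace("}", "</b>")

fragment = "Try the <b>gnocchi</b> at lunch"
assert rtf_to_csml(csml_to_rtf(fragment)) == fragment
```

A real converter needs a proper parser and an escape scheme for characters that collide with either syntax; the point is only that the mapping runs in both directions with no loss.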


Which brings us to schema. Early on we had little idea how important this was going to be, but it turns out that managing your schema is of cosmic importance. We were fortunate to have some expertise on the team early on, and there was enough interest that we soon had a team affectionately called the Schema Nazis (akin to the Soup Nazi, not the other kind). These people kept things consistent and encouraged people to use structures the way they were intended rather than rolling their own. As a consequence, you could reasonably understand the schema pretty much throughout. Upgrading schema had notoriously been a problem for other products, so we decided to do it so often that we could do it blindfolded. Every release included a schema upgrade step whether we needed one or not, just for the practice. This proved invaluable.
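The "upgrade on every release, needed or not" discipline can be sketched as an ordered migration list applied from the stored schema version upward; a database that is already current is a no-op. The step contents and table names below are invented for illustration (modern Python, not the original tooling):

```python
# Minimal sketch of a versioned schema-upgrade step. Each release ships
# the full ordered list; only steps newer than the stored version run.

MIGRATIONS = [
    (1, "CREATE TABLE Business (Id INT PRIMARY KEY)"),          # invented
    (2, "CREATE TABLE Neighbourhood (Id INT PRIMARY KEY)"),     # invented
    (3, "CREATE TABLE BusinessNeighbourhood (BusinessId INT, "
        "NeighbourhoodId INT)"),                                # invented
]

def upgrade(current_version: int, run_ddl) -> int:
    """Apply every migration newer than current_version, in order."""
    for version, ddl in MIGRATIONS:
        if version > current_version:
            run_ddl(ddl)
            current_version = version
    return current_version

applied = []
new_version = upgrade(1, applied.append)
# Steps 2 and 3 ran; upgrading an up-to-date database runs nothing.
```

Because the step runs every release, the upgrade path itself stays tested, which is exactly the practice described above.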


Unfortunately zip codes are too big (or sometimes too small) to use for finding things, and people don't think in zip codes anyway. They want dinner in "Hell's Kitchen" or "Belltown" or something like that. So one of the unexpected needs was the ability to draw polygon-based neighborhoods, durably store them, and classify businesses as belonging to them or not. Of course, polygon testing against these weird boundaries (!) would have been far too slow to do per search, so instead we recomputed membership from time to time. In the schema, the test was just a simple join from Business to BusinessNeighbourhood (you could be in more than one, of course, because non-overlapping neighborhoods would have been far too reasonable for the real world).
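The offline classification step amounts to a point-in-polygon test run in bulk, with the results stored in a join table so page queries never touch the geometry. A minimal sketch with made-up coordinates, using the standard even-odd (ray casting) test:

```python
# Sketch of neighbourhood membership: run the geometry test offline and
# materialize rows for a BusinessNeighbourhood-style join table.

def in_polygon(x, y, polygon):
    """Even-odd rule: count edge crossings of a ray going right from (x, y)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge spans the ray's y
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

belltown = [(0, 0), (4, 0), (4, 3), (0, 3)]      # hypothetical boundary
businesses = {"diner": (2, 1), "marina": (9, 9)}  # hypothetical geocodes

# Offline pass; a business may land in several overlapping neighbourhoods.
membership = [name for name, (x, y) in businesses.items()
              if in_polygon(x, y, belltown)]
```

At query time the expensive part is gone: it really is just a join against the precomputed membership rows.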


A brief word on maps. We had software from Encarta that would help draw our maps (onto in-memory bitmaps), and we had licensed a third-party GIF compression library (I don't remember who made it, but they were famous for this sort of thing at the time). Interestingly, none of this code was thread-safe, and we needed to use it from many threads, so we did a cute thing. We compiled the code into a DLL we could load (still not thread-safe), made 10 exact binary copies of that DLL (X1, X2, X3, etc.), and then arranged for any given thread to have its one-and-only copy that it alone would use. In this arrangement there were no shared globals and no shared statics. Problem finessed. :)
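The same finesse can be sketched in modern terms: give each thread its own private copy of the non-thread-safe component so there is simply nothing shared to contend over. The renderer class below is a stand-in for the Encarta and GIF code, not the real thing:

```python
# Sketch of the per-thread-copy trick using thread-local storage.
import threading

class NonThreadSafeRenderer:
    """Stands in for a library with internal globals (hypothetical)."""
    def __init__(self):
        self.scratch = []            # would be a shared global in the DLL
    def render(self, tile):
        self.scratch.append(tile)    # safe only because this copy is private
        return f"map:{tile}"

_local = threading.local()

def renderer_for_this_thread():
    # Lazily bind one private "copy of the DLL" to each thread.
    if not hasattr(_local, "renderer"):
        _local.renderer = NonThreadSafeRenderer()
    return _local.renderer

results = []
def worker(tile):
    results.append(renderer_for_this_thread().render(tile))

threads = [threading.Thread(target=worker, args=(t,)) for t in range(3)]
for t in threads: t.start()
for t in threads: t.join()
```

The DLL version achieved the same isolation at the loader level: ten binary copies meant ten separate sets of globals, with no library changes required.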

The fact that we had maps at all was pretty amazing.


In addition to the great content about the businesses, the so-called "evergreen" content (which was really only mostly evergreen, but whatever), we had regular articles on a schedule. For example, Jan Evan (whom we got from Seattle Weekly) wrote Jan's Tips, and its friendly name had to redirect to the latest edition. It turned out this was essential to our publication strategy; in some sense it defined it. On the production web servers, content that was not yet published would return an error as though it didn't exist (even though it did), and on a regular schedule the friendly-name redirection would snap to new IDs for all the well-known articles. So you could write articles arbitrarily far in the future, see them on the preview server (which ignored the time rules), and let the update system fire on a schedule to make your content visible. Generally the only thing that had to happen at publication time was the quick flip to the "on" state, though there was drama in that system for some time. More on that later.
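The mechanics reduce to a friendly name resolving to the newest article whose publish time has passed, with the preview server ignoring the time rule. A toy sketch, with invented names and dates:

```python
# Sketch of time-gated publication plus friendly-name redirection.

ARTICLES = {
    "jans-tips": [          # (publish_day, article_id), written far ahead
        (10, "jt-0110"),
        (17, "jt-0117"),
        (24, "jt-0124"),
    ],
}

def resolve(name, today, preview=False):
    """Return the ID a friendly name redirects to, or None (acts like 404)."""
    visible = [(day, aid) for day, aid in ARTICLES[name]
               if preview or day <= today]
    if not visible:
        return None  # unpublished content behaves as if it did not exist
    return max(visible)[1]  # snap to the newest published edition
```

Production resolves to the latest published edition, preview sees the future, and "publishing" is nothing more than the clock moving past a stored timestamp.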


Personalization was at the center of everything. The idea was that by giving us a few preferences, you'd let us use the newfangled "cookie" feature to assign you an ID (we could even let you recover your lost cookie state with a few questions or an email address; wow, right?). Based on your ID we could then preferentially show you, say, "Italian" restaurants when offering suggestions, rather than anything else. And of course we could show you advertisements you were more likely to be interested in. Ads were rendered right in the page (because CDNs and third-party ad servers were a million years away). Ads really were content.
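One plausible reading of "preferably ... rather than anything else" is re-ranking rather than filtering: preferred results float to the top, but nothing disappears. A minimal sketch of that idea, with all data invented:

```python
# Sketch of cookie-keyed, preference-driven suggestions (invented data).

PROFILES = {"cookie-123": {"cuisine": "Italian"}}  # keyed by cookie ID
RESTAURANTS = [("Thai Palace", "Thai"), ("Trattoria Roma", "Italian"),
               ("Pho 99", "Vietnamese"), ("Luigi's", "Italian")]

def suggest(cookie_id):
    """Re-rank, never filter: preferred cuisine first, stable otherwise."""
    prefs = PROFILES.get(cookie_id, {})
    preferred = prefs.get("cuisine")
    return [name for name, cuisine in
            sorted(RESTAURANTS, key=lambda r: r[1] != preferred)]
```

An unknown cookie simply gets the default ordering, which is also the graceful behavior for a user who declined to state preferences.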

To do all this we had a variety of page templates and data sources, and we could use them in various ways. One of the most interesting things we built was a system called "Schematizer," which basically let you specify the columns you needed (by ID), and it would automatically figure out what to join to get all those columns to you. Over the course of many years this system turned into "Method and apparatus for operating on data with a conceptual data manipulation language," which you can read about if you dare. Interestingly, it is indistinguishable from GraphQL except in syntax details; conceptually they are the same thing. So you might say we used something like what you would now call GraphQL to read our database and form content.

It turns out that reading your content through something like this makes your life hella simpler when you upgrade the schema: most of your code doesn't care.
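A toy version of the Schematizer idea: name the columns you want and let the system derive the tables and join predicates. The schema below is invented and vastly simpler than the real thing (which, among other things, had to route joins through link tables):

```python
# Sketch: infer FROM and WHERE clauses from a requested column list.

COLUMNS = {                      # column -> owning table (invented schema)
    "BusinessName": "Business",
    "Phone": "Business",
    "NeighbourhoodName": "Neighbourhood",
}
LINKS = {                        # known foreign-key links between tables
    frozenset(("Business", "Neighbourhood")):
        "Business.NeighbourhoodId = Neighbourhood.Id",
}

def build_query(wanted):
    """Derive the table list and join predicates from the columns alone."""
    tables = sorted({COLUMNS[c] for c in wanted})
    query = "SELECT " + ", ".join(wanted) + " FROM " + ", ".join(tables)
    predicates = [LINKS[frozenset((a, b))]
                  for i, a in enumerate(tables)
                  for b in tables[i + 1:]
                  if frozenset((a, b)) in LINKS]
    if predicates:
        query += " WHERE " + " AND ".join(predicates)
    return query
```

Because callers name columns rather than tables, a schema upgrade that moves a column only requires updating the mapping, not every query in the product, which is the "most of your code doesn't care" property.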

Users' personal details were stored in a single R/W production database (not shown in the diagram) that was accessible to all servers.

To put this all in perspective, it was already cool that we actually looked at the domain name you had used to decide which content to show you, because any server could serve every city. More newfangled HTTP-header rocket science. The fine-grained personalization and browser detection we did had never been seen before.
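Serving every city from any server comes down to dispatching on the request's Host header. A sketch of that dispatch; the domain names and the default city are assumptions, not the real configuration:

```python
# Sketch: choose the city's content from the Host header of the request.

CITY_BY_HOST = {                      # hypothetical domains
    "seattle.sidewalk.com": "Seattle",
    "sydney.sidewalk.com": "Sydney",
}

def city_for_request(headers):
    """Normalize the Host header (case, optional port) and look it up."""
    host = headers.get("Host", "").lower().split(":")[0]
    return CITY_BY_HOST.get(host, "Seattle")  # assumed fallback city
```

One server fleet, one codebase, many cities: only the lookup table differs per domain.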

Newsletters, Privacy, and the Early Days of Ads

It started small, but newsletters turned out to be a hugely popular feature. Based on your preferences we could email you updates with deep links. You know you have a winner of a newsletter when customers complain if they don’t get it.

Though they were wildly successful, they illustrate some of the problems of being in this space in 1997. People simply didn’t understand their value.

Consider this real example. People told us about their commute so they would know which freeway to take. They even told us when to send the email. Now, we knew their browsing history on our site, and by 1997 we had car dealerships as sponsors, so we could tell who was reading about cars, even which car. Now imagine the following conversation that never happened: "I know a person. I can't tell you their name, but they were looking for the cars you sell, and they are going to be driving right past your dealership at about 5:25pm today (we know this). What will you pay me to include a coupon for your business in the email I'm about to send them?"

Now why did we never make that sale? Sure, it seems like such a lead would be worth a fortune, but how would you even begin to sell such a thing in any kind of volume? Advertisers were barely coming to grips with impressions, and click-through rate was totally new. Our notions of what might be considered good or bad uses of data were immature, to say the least. In principle we could have made that sale in 1997. In practice… hell, nobody knew if it was even ethical, much less how to sell it. So it remained a thought experiment. "The dealer gets no data unless they walk in, right? Surely that's OK?" It was all moot anyway.


We had so much to learn. It would not be an exaggeration to say this project was saved by some excellent ops people, who gave us the technology and discipline we lacked to make the whole thing actually stand up and work reliably.

Many of the systems I have just outlined really had only very loose contracts, and it was hard to know how they all depended on each other. Our ops folks helped us get that right. Publication, which had been based on SQL Server replication, was far too unreliable, so we changed to a much simpler strategy: we would bulk-copy (in fact, at first, backup and restore) database images to the production databases on a schedule. Three servers were more than enough to take the load; we could run with one down. Eventually we upgraded to four so that we could have an outage and maintenance at the same time and still have two servers with which to serve the world. Yes, four. Crazy, right? Four machines of 1996 vintage, with 64MB on the big ones. We had performance up to 22 pages per second per server, even though we were computing all the content.

So full backup/restore, then bulk copy, and then what you might call "minimal bulk copy" were the order of the day. With that tech we went from a lot of drama (and a guy in the data center pushing the button every few hours) to something fully automated, over the course of a few months.

Soon we were building a custom setup program to deploy the various server types and upgrade them. Ultimately we got it to the point where we could put all the needed software on one CD; you could then put that CD in any box and it would "know" what had to happen on that box and do it. That was a long way from our original many-page operational playbooks, though at least those were fairly well controlled. PowerShell-like automation of installations on Windows was a long way away, so we wrote custom tools against the native APIs wherever possible, so that the UI could be as simple as possible. Also not shown on the diagram: the monitoring service that was soon in place.


And so, with all that context, I'll describe the diagram to you, left to right.

Editors on MS corpnet use Entitor (mostly) to create content and edit businesses.

Editors can use the preview HTML server to see their content in context; content fragments are displayed in real time by conversion to RTF.

Other bulk processing tools (such as the regular update from ABI) work on the content staging database, which is authoritative.

Content is regularly copied from the staging database to the production databases. Those databases are not on corpnet, so administration was "tricky": we basically had to go on premises to get at anything on the interior networks, and that level of isolation was the order of the day to protect corpnet assets.

The production web servers are paired, one each to a database; we tried cross-connecting, but that made things worse. Thread load on those servers was tuned regularly to maximize throughput, though it turned out the most productive settings had too much variability (but that was much later).

Not shown in the diagram: the personalization database, which was really very lightly loaded, and the monitoring systems.

To a first approximation, exactly six servers ran all of this on the end-user side. Crazytown.
