My History of Visual Studio

[This was originally posted 10/5/2009; Last week was the 25th anniversary of Visual C++ and so I thought this was a good time to dust this off. I’ve combined all the parts into one posting.]

Part 1

I wrote in the teaser that there is no one “History of Visual Studio”, there are as many histories as there were people involved. If I may repurpose a famous quote, “There are eight million stories in the Naked City…” This is one of them.

Visual Studio’s history probably begins in 1975 with Bill and Paul’s decision to create Altair BASIC. Maybe you could start before that but I think that’s as far back as you could go and still say it’s about Visual Studio at all — it’s at that time that Microsoft decided it was going to have software development products and not, say, toaster automation.

Old timers like me used MS-BASIC on many different classic microcomputers; for me it was mostly the Commodore PET (I started on the PET 2001 model, whose name is especially ironic considering how unrepresentative it is of a 2001-era computer, but I digress). You could have used MS BASIC on an Apple, or a TRS-80. Many of us cut our teeth on those computers, and the idea of rich immediate feedback, a sort of personal programming experience, was branded on us.

Those of us that really wanted to know what made a computer tick (literally) would spend countless hours tearing apart the software that made them run; I’m fond of saying that Bill Gates taught me 6502 assembly language by proxy and so was training me for my job at Microsoft years later.

The 80s were a time of genetic diversity in computers, and so in programming tools as well. Lots of people were making programming languages and I’m going to avoid dropping names to stay out of trouble with trademark holders but I think you can remember many different first class systems for the computers of the time. I can think of at least 5 off the top of my head.

If you were creating tools for the PC, which was starting to be the dominant machine by the middle of the 80s, your life was especially difficult. The x86 instruction set wasn’t terribly fun; the machine architecture with its 64k memory segments was enough to give you a migraine. These challenges, and the need to milk every bit of performance out of processors, resulted in very bizarre PC-specific language constructs.

It’s 1988 and I landed at Microsoft fresh out of college. I had my own Pascal compiler under my belt (built with 4 friends in school), and an assembler and linker to boot. But I was not ready for:

char _near * far pascal GetString(char far * far * lplpch);

For starters, what the heck was a Pascal keyword doing in my C programming language? The only thing Visual about this thing was that the nears and fars were making me wish for bifocals (recently granted, and I was better off without them).

I need to rewind a bit.

The compiler that I was using to do my job in 1988 was Microsoft C 5.1 — a great little language and maybe one of our more successful releases. Microsoft had enjoyed considerable success in the languages space leading up to that time but in recent years, and for some time to come, a certain company, whose name starts with a B, was eating our lunch. Great tools, great prices, it was very motivational in Buildings 3 & 4 on the Redmond campus.

So the C product was already on its 5th major release. The BASIC compiler “BC6” had just shipped and Quick Basic “QB4” was going out the door.

You may have noticed I’m not mentioning C++ yet. We’re still a goodly distance from that part of the story.

So where am I? Ah yes, 1988. The project I’d been hired to work on was cancelled after a few months (I expect I’m in good company on that score), that project by the way was a cute variant on the C language designed for incremental compilation — it was called, cough, C#. Strangely, through the lens of 2009, it looks remarkably like what you would get if you tried to make C.Net.

The mainstream project at that time was C6.0, it featured a bunch of cool things, new compiler optimizations, better debugging, and more. Its primary target was an operating system called OS/2 — you may have heard of it — but it also had to run well on DOS. A tall order that.

I was working on the programming environment, in particular on the source browser, largely because I had worked on the source browser for the C# system and people liked it and many of its ideas had found their way into the C6 product already. I suppose nobody will be surprised that one of the first things I had to do was improve the performance of the thing.

Anyway the programming environment, arguably the first IDE we ever shipped, was called “PWB” or programmer’s workbench. It was all character mode graphics but it used that funky Character Windows thing that was prevalent in the flagship applications Microsoft was shipping at the time. CW, or just COW as we lovingly called it, was a delightful environment that provided all kinds of automatic code swapping (but not data) to allow you to have more than 640k of total code while still running on a regular PC. Its swapping system bears a lot of resemblance to what was in the real Windows of that era (2.4.x I think it was).

Now the thing about having only 640k of memory and trying to have an IDE is that you can’t actually expect the thing to be loaded and running while you’re doing something like a build, or debugging, or basically anything other than editing, because you simply don’t have the memory. So this beauty used some very slick tricks. To do a build, for instance, it would first figure out what build steps were needed, write them to a file, and then exit, leaving only a small stub to execute those steps; once the steps had run, the last of them restored the product to exactly where it had been when it exited, creating the illusion that it had been resident the whole time, which it was not.

Debugging used even more sleight of hand.

I think the Codeview debugger may be one of the first, and possibly the most important, DOS-extended applications ever written (because of its influence). You see, the debugger is in a difficult position: it needs to have symbols and other supporting data active at the same time as your program is running, and it doesn’t want to disturb that program very much if it can avoid it. This is quite a challenge given how tight memory is, but there was a bit of an escape clause. Even in, say, 1989 you could use features of your 386 processor (if you had one) to get access to memory above the 640k mark. These kinds of shenanigans were commonly called “using a DOS extender” and I think the Codeview debugger probably had one of the first ever written, and I think a bunch of that code later inspired (who knows) the extension code in another product you may be familiar with: Windows 3.0. But that is really another story.

All right, so: an optionally DOS-extended character-mode debugger, a character-mode editor, and a build system that makes the product exit to do anything. Get all the bugs out of it and presto, you have the first MS IDE.

Lots of languages built on PWB, it was designed to support a variety of them. I know the Browser formats supported Basic, Pascal, FORTRAN, COBOL, and C style symbols out of the box. Most of those actually saw the light of day at one time or another.

That was spring of 1990 and that was C6.0. That IDE was the basis for the compiled languages for some time.

However, things were not standing still.

C++ was taking the world by storm, and having a high quality optimizing C compiler was old news, but lucky for us we had not been idle during this time. While some of us had been busy getting C6.0 ready others had been busily working on a C++ front end for our compilation system. I think mostly everyone was sure we could finish up that C++ compiler in no time at all (I say mostly because there were people who knew better).

Wow, were we wrong. I mean, seriously, majorly, what-were-we-THINKING wrong.

It turns out C++ is a hard language to compile; heck, it’s a hard language to even understand. I remember one particular conversation about how tricky pointers-to-members are, with eye-popping results when it was pointed out that one of those things could point to a member defined in a virtual base… C++ is like that: a lot of things seem easy until you combine them with other things, and then they get hard.

Meanwhile we were struggling to get the needed features into the debugger stack, it turns out that creating a C++ expression evaluator is no easy feat either. Expression evaluators are lovely things — they are frequently called upon to evaluate expressions that would be illegal in any actual compilation context (e.g. find me the value of a static variable that is currently out of scope, or the value of a global defined in another module). Expression evaluators have to do all these things while still retaining the feel of the language and remaining responsive.

Did I mention all this could be slow?

I was working on a new development environment, targeting the new 3.0 release of Windows — another project that never saw the light of day — but we were having fits trying to get windows.h to compile in anything like a decent amount of time.

That’s when the precompiled headers miracle happened.

I call it that because the .pch literally revolutionized the way we built things with our tools. Other systems had used similar schemes in the past but ours had some very clever notions, the most important of which was that, since it was snapshot based, it guaranteed that the net effect of the headers up to the PCH point was absolutely identical in every compiland. That meant that, for instance, you could exactly share the debugging and browsing information as well as all the compiler internal state. The fact that when you #include something you may or may not get the same net effect in one file as in another is the bane of your existence as a tools person and this was immediate relief! And it was fast!!!

I’m not name-dropping, but suffice to say I know the person who did this work very well, and it was one of those weekend miracle deals that you read about: it couldn’t be done, can’t be done, oh wait, there it is.

Meanwhile, yet another team was working on a little something called Quick C for Windows, which turned out to be hugely important. A lot of groundbreaking work went into that product; it was the first IDE for Windows with real debugging. But I’d have to say it was incomplete, and I’ll talk more about why that is hard in just a second.

Meanwhile that other company was not standing still and they were delivering great C++ compilers. It was 1992 before we had anything at all. In those months we delivered C7 on top of PWB again (PWB 2.0) and, blink-and-you-missed-it, we delivered Quick C for Windows (QCW).

My project was cancelled. Again. It’s a good thing I had my fingers in a lot of pots 🙂

By the way, the C7 product was, by mass, I believe, the heaviest thing we ever shipped. I don’t think we ever tried to deliver that many books ever again.

So we shipped a bookshelf and now things were getting interesting.

We couldn’t ship PWB again, we needed a graphical development environment, our basis for this was going to be QCW and work was already underway to generalize it but oh-my-goodness there was a lot of work there. Also, we were missing some critical C++ language features; the competition was not standing still. We had a very limited class library that shipped with C7, MFC 1.0. We needed an answer there, too. And did I mention that we needed a speed boost?

Any of these bits of work would be daunting, but let me talk about just a few of them. First, debugging.

Debugging 16 bit Windows (it would be Win3.1 by the time we were done) is nothing short of a miracle. Win16 is cooperatively scheduled; it has no “threads” per se, there is just one thread of execution. Now think about that: it means you can’t ever actually stop a process, because if you do, the entire world stops. So if you’re trying to write a GUI debugger, actually stopping the debuggee is sort of counterproductive. Instead, you have to make it LOOK like you stopped the debuggee when actually you didn’t. You let it keep running, only it isn’t running any of the user’s code; it is running some (hopefully) fairly innocuous debugger code that keeps it dispatching messages, lets the debugger run, and doesn’t actually proceed with the user’s whatever-it-was-they-were-doing.

A tiny part of this miracle is that when the debuggee “stops” you have to, on the fly, subclass each and every one of its windows and replace its window proc with something that draws white if asked, queues up important messages to be delivered later, and mostly does a lot of default processing that is hopefully not too awful. That’s quite a trick of course when any running process could be trying to send messages to the thing for say DDE or OLE or something. It’s the miracle of what we call “soft mode debugging” and that’s what we delivered.

Meanwhile, the tools… Well there was this thing called Windows NT going on, maybe you’ve heard of it, we didn’t want to produce all different kinds of binaries for hosting in different environments so we needed to change our dos extension technology to be something that allowed us to run Windows NT character mode binaries on DOS. That’s exciting. And it had to be regular vanilla DOS or DOS as running inside of Windows. Double the excitement. But sure, we did that too (with quite a bit of help from a 3rd party that again I’m not naming because I don’t want to go there).

And the tools were too slow. Yet another effort went into putting codeview information into .pdb files to alleviate the painful de-duplication of debug info that was the cvpack step; those steps were then fused directly into the linker so that we didn’t write out the uncompressed stuff only to read it back in and compress it, so we could write it out again. Add that to some practical compiler changes and we were, for the first time in a very long time, the fastest C++ compiler out there (at least according to our own internal labs, YMMV).

Meanwhile MFC was coming along very nicely thank you very much. And there was a designer and a couple of critical wizards and wow this thing was starting to feel like VB: draw, click, wire done.

I really should say something about VB.

The code name for Visual Basic 1.0 was Thunder. I thought it was arrogant when I first heard it. I thought their “Feel the Thunder” attitude was just some cocky boy swagger. I was wrong.

There was a reason every product wanted to be like Visual Basic, changing their names to Visual <whatever> and trying to get that feel for their programmers: it was just that good. Not everyone had that hot-while-you-type interpreter action going on in their environment, but boy was everyone trying to recreate the key elements in their space. We certainly were. By the time we were done Visual Basic had profoundly affected the design experience and the framework. It wasn’t VB, but it was Visual: it was Visual C++ (that’s C8 and MFC2 if you’re keeping score).

We did a very fun follow-on release where we got the 16 bit tools working under Windows NT in 16 bit mode (kudos to their compat people, we were not an easy application to port) and we added more OLE and ODBC support. People were really liking this thing and for the first time since I had been at Microsoft I felt like we had the edge against our competitors in the C/C++ tools space. We were still missing language features but what we had was pretty darn cool.

While that was going on, a few of our number were doing something else that was pretty darn cool. They were porting all this stuff to 32 bits and getting it to run natively on Windows NT. That would be around the Windows NT 3.5 time-frame. That and getting a Japanese version out and generally fixing all of our bad “that doesn’t work internationally” practices. Very important, totally underappreciated work that was.

So Visual C++ “1.1” the NT version was available at about the same time as 1.5, as was the J version.

I guess I would be remiss if I just left it at that. Internally some people thought “1.1” wasn’t a real product it was “just a port.” These are people who clearly know nothing about development tools. The 1.1 system included a totally new 32 bit compiler back end. We all know just how portable back-ends are, especially against totally diverse machine architectures. Oh and it also included a totally new debugger back end with a totally different theory of operation — because of course Windows NT *does* have threads that you can stop. Oh and of course the memory model for the whole program was different — far and near were gone. But ya, other than those few minor things, it was just a port.

That was 1993 and we called the product “Barracuda” — I had not much to do with it personally but those guys deserve a salute in the History.

Things are about to get really exciting though. The most important release of the C++ toolset in my memory is VC++ 1.0 “Caviar”; without it I think our tools would have died. But almost as important is the next release, which I’ll write about in the next installment: VC++ 2.0 “Dolphin”, which truly integrated things for the first time.

[I would really be tickled if other people would write their own “My History of VS”, either in the comments or on their blog or anywhere they like]

Part 2

Visual C++ 2.0, “Dolphin” was a very ambitious release. We were really happy with VC1 but there were quite a few things that were entirely unsatisfactory. One of them, maybe the most important, was the fact that window management in the thing was just a nightmare. VC1 used the standard MDI interface for all the windows, including the tool windows like registers, watch, output, and so forth — that was just not adequate because you found key display windows were swimming in a sea of documents many of which had been opened by the debugger and so forth. It was quite bad.

But let’s talk about some of the other things that had been going on before we get into that.

The BASIC compiler team (the non-Visual Basic?) delivered another solid release, but that didn’t integrate with the VC shell; it was for making standalone executables à la the other command line tools. On the other hand the FORTRAN team delivered the first Fortran Powerstation release, a little miracle in its own right. It used a lot of DOS-extension technique and some slick remote debugging tricks to debug from the GUI, tweaked the visual project system, and yowsa, you get a modern looking (at that time) Fortran release. Pretty impressive stuff. COBOL was still humming along at a regular cadence, and let’s not forget MASM, the ultimate (literally) release I think, and the little known other assembler ML that I think had only one release, but it was quite a feat in and of itself. Try rationalizing all the ad hoc parsing rules for MASM and ML is what comes out. But maybe we didn’t really need another macro assembler very badly by then; it’s late 1993 now, remember.

Speaking of 1993, there were new versions of Office out there and they had a slick feature — docking toolbars. We decided that was the answer to the windowing problems we had in VC — go one up on docking toolbars and allow docking windows. Now you could dock your window to the side, bottom, top… really slick.

I say “we decided” but I should probably say that the foundation of this ambitious effort didn’t actually come directly from the VC team, it was the App Studio team that suggested it and built the initial scaffolding. And the ambition was even bigger.

Previously our IDE was a suite of integrated applications but we wanted to change that. The shell’s plans were to allow us to host all our features in one application that included resource editing (forms too), plus text, project, debug, build, and help all in one application. And — wait for it — it had to be modular so that other languages like Fortran, Basic, and COBOL could plug in. This shell provided a core set of services those packages could use, like menus, toolbars, toolwindows, and so forth. Does this sound at all familiar VS Extenders? More on that later.

But wait, that’s not all!

We also wanted to deliver tools for the Macintosh, anybody remember WLM? I wrote some Mac apps with it myself! PowerPC, 680x0 compilers and debuggers, no problem. Remote debugging to the rescue! Oh and let’s not forget support for the MIPS and Alpha as well.

Do you think we’re nuts yet? Because we aren’t done.

While we’re at it, the Visual C++ codebase, which was written in C (remember the C++ compiler wasn’t yet done when it started and it began life as Quick C for Windows), had to be first ported to C++ (which helpfully found lots of bugs) and then turned from a regular windows application into an MFC library that could be used by the new shell. And the whole product had to basically keep working while this was going on because VC2 had to be able to build and debug itself.

Oh and naturally we’re delivering new libraries, and a new MFC (MFC 3.0) plus since we’re going to touch all that code anyway may as well port it all to 32 bits (we had Barracuda to start with but many of the other pieces that were merging in were only 16 bit).

I’m still not done yet.

We also had some big themes around “edit and go”, “simplifying OLE”, and “beyond files” — meaning we wanted the build cycle to be tight and easy, and fast, we wanted to make all those new OLE features like embedding and whatnot simpler, and we wanted you to be able to navigate your codebase as much as possible without worrying about what files you had put it in. I added more browsing features, many C++ oriented ones at that time; we did some cool debugging across OLE RPC channels, and more. I remember a lot of debugging features because I was close to them. Things like data tips, and autoexpand, and the automatic watch window — I was the lead debugger developer during this time.

Think that’s enough for one release? We’re still not done!

More language features hit the tools, just in case you think those guys were slacking off. Namespaces were the first, I think, but this version also lands the major language features: Templates and Exception Handling. Hugely valuable, and EH was fully integrated into the Windows NT SEH model, which was no mean feat!

We’re going to have one worldwide binary, the same bits ship in Japan, Europe and the US except for localized resources — the J version and the English version are the same! That had never been done before to my knowledge.

And, the cherry on top, there was a little operating system release coming out that you may have heard of, we called it “Chicago”, but it hit the market as Windows 95. We had to be the main SDK for that operating system, which meant fixing all kinds of bugs because Chicago was much less forgiving than Windows NT.

Wow, that seems totally nuts but we did that, and more. We did only minor things to the 16 bit tools from that point forward, servicing VC 1.52 (1.52a, b, and c) but we did wicked demos for the 32 bit suite, so many platforms, and a truly unified IDE, all the tools in one place.

We caught some breaks in the industry at that time as well: 32 bit was hot, 1.52 was good enough, and that other tool vendor was spending a lot of time thinking about (ironically) OS/2, which, being who we were, we thought was a losing strategy. But wow, the Delphi demos! Amazing stuff! It’s a good thing the VB team was getting busy too: 32 bit tools, and an OCX story to complement their VBX story, a great ecosystem.

I think Visual C++ 1.0 may have been the most technically challenging IDE we ever delivered because of the amazingly weird things we had to do to make it work on that operating system. But I think Visual C++ 2.0 was just as important, if not more so, because of how much we moved our own technology forward and the sheer massive volume of deliverables. Both were a lot of long hours; and there were subscription releases 2.1 and 2.2 yet to come.

Part 3

I was going to go forward again in this installment but I got some requests to talk about some older things again before I did that. You might be getting bored of the distant past so I’ll try to keep that part short.

There were some other key pieces of technology floating around back in the early 90s: One of them was this thing called Codeview for Windows (CVW). This thing was substantially written by my racquetball partner (no name dropping, but I haven’t forgotten) and it was the “hard mode” debugger for windows. Remember how I said you couldn’t debug Win16 the normal way because you can’t stop it? Well you can if you have a debugger that has a whole alternate UI based on “COW” that runs on its own monochrome character-mode-only monitor (usually a Hercules Graphics Card, remember those?)

You thought multi-mon was a new feature but no, in this flavor it’s very, very, old. 🙂

This baby could give you true symbolic debugging for all kinds of Windows programs for which you had source code and debug info, but if you wanted even lower level stuff you could always hook up a serial cable to a dumb terminal and use wdeb386. Wdeb386 is your buddy!

CVW was the main debugger that we used to debug “Sequoia”, which I alluded to but didn’t go into the details of earlier. “Sequoia” (that’s a lot of vowels, huh) was a before-its-time IDE that we worked on around the time of PWB 2.0. Ultimately it was cancelled because it was too ahead of its time I think, but, as I’m fond of telling my wife, being ahead of your time is just one of the more creative ways of being fundamentally wrong. That quip didn’t make me feel any better then, either.

Anywho, Sequoia was interesting for lots of reasons but one of the most interesting things about it was that its commanding was entirely based on the same BASIC engine that drives VB. Everything, everywhere, was a basic command in disguise so you could record anything. It had a flexible editor that used a piece-table structure to save space and had cool font support, coloring, and outlining. It had a source code debugger with the beginnings of a soft-mode solution, a graphical build environment, and graphical program visualizations. It was pretty slick. Of course it was seriously incomplete and it had a tight dependence on some technologies (especially that BASIC) that were not going to be available on the 32 bit OS anytime soon, and we needed those to ship. We also needed all hands on deck for VC++ 1.0 “Caviar” and so Sequoia got scrapped.

A lot of the Sequoia ideas came back later in different versions of the IDE, some of them are just happening now in VS2010 but I guess that’s par for the course in cancelled projects. Perhaps, when viewed as a research project, it wasn’t so dumb. It wasn’t that many people and it wasn’t for that long, and it was quite an education. Unlike QCW, it was written in C++, with its own framework, so it provided a great test case for the C++ compiler (C7) and it influenced both COM and MFC.

I planted a Giant Sequoia in my backyard in its honor (I called it Sequoia 0.1) and, true to its name, it’s quite big after 15 years.

Another funny story about Sequoia is the series of codename changes as we cut features before finally cancelling it altogether: it went from “Sequoia”, to “Bonsai”, to “Bonfire”. One of the best things about that project was how nice the specifications were; I still have some. Thanks for those, you know who you are :)

Speaking of codenames, did you know that the various milestones of C7 had interesting codenames? There were far too many of them of course because that was a hard release to get under control: John, Paul, George, and Ringo were 4 of them; We were still in need of Beatles after that so we added Stu and then “The Long and Winding Road” (though I bet the memos all said LWR) and finally shipped with “Let It Be.”

Ok let’s get back to the more recent past.

After “Dolphin” we had achieved a pretty significant miracle I think. The shell actually worked as intended and we shipped other languages that lit up the splash screen (and more) — multi-language project creation, browsing, and debugging all in the same core bits. Basically all of the compiled languages could participate but we still had a lot of ambition.

The next release began as “Olympus” Visual C++ 3.0 — but we soon renumbered it to 4.0 to align it with MFC so that we didn’t have to say VC++ 1.0 with MFC 2.0, then VC++ 2.0 with MFC 3.0… This time it was going to be VC++ 4.0 with MFC 4.0. In retrospect this was highly stupid, especially because MFC ended up locking down on version 4.2 when backwards compatibility became a more valuable feature than anything else we could add — and so it stayed for a very long time.

I handed off responsibility for the debugging components in Olympus and with a few close friends we put together some very cool improvements for the tool chain. Incremental Linking, Incremental Compilation, and Minimal Rebuild (I was mostly involved with the latter). We had done something like this before in the original 1988 C# — it had all of those features — so we had some idea how we would want to do them in the mainstream tools, and actually ship them this time.

In case you don’t know what those things are let me quickly describe them. Incremental Linking is where you make a modest change to some small fraction of your source files, creating a few new .obj files, and instead of totally relinking the resulting executable you “simply” patch it in place. Sounds not so hard: the thing is already logically organized, the linker knows where it put stuff, you just leave a little room for additions in the .exe and away you go, right? Err, not so much. What are you going to do about all the debug information? That has to be patched too: new offsets, new line numbers, etc. What about the browser info? You could easily spend as much time on the other collateral pieces figuring out what to patch as you would have just redoing the whole thing. Some cleverness is required.

Incremental Compilation is where you make a minor change in one or more source files and rather than regenerating the entire .obj file you “simply” recompile only those portions (usually methods) that have changed and leave the other object code in place, thereby giving the back end much less work and saving you parsing time. This is especially tricky because it has the same “what about the auxiliary files?” problem as incremental linking, and you already have the PCH helping you do this fast, so the .obj files need to be largish to come out ahead.

And last, but not least, Minimal Rebuild. This is motivated by the kinds of edits that Class Wizard made. This guy notices that you’ve changed a popular .h file, and you’ve changed a single .cpp file (usually adding a method in both the code and header file) and although many files depend on the .h file only one file actually depends on the specific change you made, usually the one you modified. The others could be skipped. This has great potential to save compilation time and paid off big in larger projects but it too has complications associated with the auxiliary files — the debug information has to be patched in files you didn’t touch and so forth. We actually started with a version of this feature that had Class Wizard telling us what files had been changed for Class Wizard edits and then generalized it so that we could determine what was different by looking at the debug information for the previous version of a build and then the new version of a build and from there conclude what we could skip.

The slick thing about Minimal Rebuild is that it turns out that keeping the full dependency tree for a large project around is just too expensive. We kept a set of approximately correct dependencies with a Bloom Filter data structure and by tolerating a small false positive rate we were able to realize the bulk of the savings. Using the existing debugger information to do the change analysis was probably the most clever bit of all.

While all of this was going on, an equal effort was going into the compiler back-end: they wanted to unify their system on a tuple representation so that they could use more of the same optimizations in more places and generally get better code quality. Such is the way of compilers, huge efforts to squeeze out maybe another 5%. 10% improvements are a thing of dreams.

Olympus was getting quite large and it now used enough memory that it was affecting the compiler’s ability to get its job done. If there isn’t enough RAM (we’re talking about say 4M here) to hold the IDE and the PCH in the disk cache then performance suffers. I went on a working set jihad and got the IDE’s working set (while building) under 64k (it was 13 pages). Back then you did really insane stuff to squeeze out every page — I even turned off the caret flashing in the output window when I found it caused extra code to run during the build!

Other things were going on in the industry but one thing was really on our minds often, mentioned at many offsite retreats — C++ programming is too hard. We really could use a different language/system that would make this a lot easier. Delphi was being fairly successful. Visual Basic was still strong. But now it’s 1995… The web is coming, and with it, Java.

Part 4

We’re up to the part of the story where I was off working in MSN, which means all I can give you are the first- and second-hand stories that I’ve heard over the years. So I guess that makes them second- and third-hand to you. This would be a great time to watch the Channel Nine Documentary for more authoritative information :)

In the last part I wrote a little bit about what was happening in the VC++ IDE — “edit and continue” for C++ programmers — pretty amazing stuff.

At about this time the Internet happened. I mention that like it’s in passing but, even looking at the phenomenon super-narrowly, just from the perspective of how it affected our programming tools it had some pretty big effects.

Of course probably the most major one was the creation of Visual Interdev to help people create great web applications. You know that shell is going to be central to the story in a way nobody expected at the time (ok, maybe the VID guys expected it, but nobody else 🙂) but also script and dynamic languages made a huge resurgence. Perl, Python — remember web sites in Perl? And of course jscript, at first as a client-side language in the browser, but then more generally, and its twin in our world: vbscript.

But for the web server needs and the browser needs, we might not have driven these languages within Microsoft as hard as we did. They helped our tools grow up in unexpected ways; for instance, I think in VC5 you could finally call the editor “grown up” — it had originally been fairly underpowered as programmer’s editors go — but in VC5 full macro support was added (not just little recordings) using the vbscript engine. It still needed polish but you could no longer argue that it was lacking in raw potential/power. Like other MS applications the macro language encompassed not just the editor but virtually all aspects of the product. I remember making a command line debugger interface in an editor window with macros as an example for a friend.

Speaking of BASIC, during all this time, our friends in VB weren’t exactly idle either. In the VB5 IDE, Intellisense makes its appearance.

Can you imagine programming without Intellisense? Well I guess you probably all can because many of you did without it in the past, but certainly a good amount of cursing happens in my office when I find myself without it. It isn’t just that it saves you typing — it fundamentally changes the way you work. For many applications Intellisense makes API documentation obsolete, because the #1 use for that stuff was to get method signatures; it is easily the #1 way people get “help”, blowing away the F1 key for usage. But it’s more important than that… it goes as far as freeing you to use better naming techniques, especially in frameworks. Maybe Ken Thompson would not have left off the ‘e’ in ‘creat()’ if he’d had Intellisense. And what about what it did for Office? The Visual Basic for Applications (VBA) engine got Intellisense too, and that meant Office programmers got much easier automation scripting.

At any rate, Intellisense was a total game-changer and was soon in the entire Visual Suite. And by soon I mean what you might call Visual Studio 6.0 — by an amazing coincidence the language version numbers had synchronized or been forced to synchronize (VJ and VID).

Now VS6 — and more importantly its parts, VB6 and VC6 — is widely considered to be the best we ever made. Not universally, but certainly there are factions that think we should have stopped right there, or stayed on that track. I think this is true because they represent the last of that evolutionary line, for both languages. The IDE that would become Visual Studio became available in its first incarnation at that time, as I have alluded to in the past, and it was Visual Interdev. In some ways, from a consolidation perspective, it looked like we had taken a step backwards — there were now four major shells rather than three. The respective teams polished their products to a bright shine and Intellisense bloomed like daisies across the suite.

For those of you that just lost me on shell count: I went to four with the addition of the VID shell (used by VJ++); add VB and VC++ and that makes three. What is this fourth shell you speak of, Rico?

The problem is that in this entire history I’ve been completely silent on another suite member — Visual FoxPro which also had its own shell. Now there’s a reason for this — and that is that I know painfully little about that product other than when I did a study of database technologies for Sidewalk in 1995 I was stunned by how many advanced features were already in the FoxPro product. Who remembers “Rushmore” — I remember getting some pretty slick demos! Little known fact: it was the FoxPro team that taught me the basics of database programming — thank you, you know who you are 🙂

But, with apologies, I must go on, paying only very little homage to that shell. It would be great if someone else posted a My History that included more FoxPro stuff. Or any other My Histories for that matter…

I feel like I should mention Office a little bit more at this point. I’ve hardly mentioned Office but it’s important to note that Office, and the needs of Office Programmability, were often key forces in language and tools design, to say nothing of the needs of the Office programmers themselves — which frequently pushed our tools to the breaking-point. IMHO, probably more than any other single force, it was Office that was driving BASIC — from as early on as the Embedded Basic which then went on to power VB, and then later VBA, Office was a huge influence. I think specifically it was Office that drove how application automation had to work and that in turn drove OLE Automation generally and that in turn drove BASIC. Those of you who have ever had to program with VARIANTs a lot can probably thank or curse this dependency cycle.

One other big thing to mention and then I’ll wrap up for this installment. Another notable change in this time period was the maturation of the data access layers. I think this, too, was partially internet-needs driven but also I think the technologies were just ripe. OLEDB was born during these years and with it the BASIC access mechanism — ADO. These things seem pretty mundane on the surface but they offered much richer data access than ever before and inspired the first of increasingly popular and powerful datagrid mechanisms for both display and editing of data. When compared to the clumsy DDX/DDV type things we had been doing it was a breath of fresh air for data programming and forms over data. All of this would go on to influence many different types of designers and data management services, but I think the spark for that was here, in say 1997.

Visual Studio 6.0 — synchronized across the versions and starting to show some (distant) signs of consolidation between its various parts appeared in 1998. It was a great release, as I wrote above, it’s still the favorite release of many. After that, the developer division “went dark” for about 4 years, they were working on what would become “Visual Studio .NET”.

Part 5

The years 1998 to 2002 were very busy ones in the Developer Division. I’ve previously written about “Dolphin” and I tried to give a sense of exactly how much changed during the Dolphin release and the sheer volume of work it required, but I think this period dwarfs that one in a number of ways.

Visual C++ 2.0 “Dolphin” was, by definition, “just” a C++ product. Putting aside the fact that it was designed to allow extensions for multiple languages, the scope of the system was limited to only one part of the Developer Division. In contrast, Visual Studio .NET, which spans the four years we’re talking about, and maybe a little more if you count the foundational parts that were already in VS6, was an effort that required the engagement of the entire division. Arguably it was the first time we ever attempted such a thing in the developer space.

Just thinking about it from the perspective of the number of people involved is telling — VC2 was the work of 100ish people. VS.NET required more like 20 times that, for more than twice as long.

What kinds of things happened? Well for starters the entire managed stack of what we’d call the .NET Framework had to be invented. Not completely from scratch but pretty close. That’s not just the runtime and the framework but also:

It’s quite an eye-opening experience to consider that all of these things were required to succeed and you can sort of understand why it took so long — some of the things on that list really can’t be started until others of them are substantially done, but of course the more complex later projects are likely to highlight problems in the earlier stages. With so many people involved just the communication overhead could be daunting, and the tasks above are hardly easy in the first place.

Well I’d like this to not read quite so much like a marketing bullet list so allow me to recall a few stories about all this from my own perspective — remember I was still in MSN here.

When I first heard about the next version of COM+, which became the .NET Framework, it was in the context of a pitch from the developer division to my MSN group asking that maybe we should consider moving some properties to ASP.NET to give feedback. There was a lot of information to disseminate and many things were still preliminary but I remember that there was one thing that I fundamentally did not “get” about the whole thing. It was because they were still calling it COM+ at that time and I had assumed that they were trying to come up with a new COM+ framework that was backwards compatible with the old COM+ stuff like what we used in the DTC. When they told me that they were trying to do a compacting memory scheme in that world I thought they were completely nuts, at minimum you’d have to have proxy objects for everything so that you could move the real objects without anybody knowing. We were maybe 45 minutes into the meeting before I realized the magnitude of what they were proposing — they wanted an entirely new object model with an entirely different memory management strategy in all new languages. Wow. Just, wow.

The next demo hit close to home. It was a managed client for a web service. They just pointed the tool at a web service and instantly had Intellisense over it using VB.NET. It had some rough spots but definitely another wow. The next was a quick demo where they imported a COM component, the kind we used on our web pages for ad delivery and so forth, and then started calling that with Intellisense support sweet-as-you-please using nothing but the component’s TLB.

I was so excited I got myself an early drop of the thing and started writing benchmarks. I wrote applications that did the kind of string manipulations that our web sites usually did and sent them feedback based on my results. A lot of it made a difference. I had a sabbatical coming up and I ended up spending most of my six weeks reading the base documentation for what they had built, partly to provide feedback, but even more because I thought it was going to be really important to learn it. It would turn into my next job two years later.

What about some of those other items on the list? There’s some pretty meaty stuff there; I’d like to pick off a few and talk about them a little bit, and I’m so fond of debuggers; let’s start there.

It turns out that debugging managed code can be pretty tricky. I alluded to the fact that it’s a soft-mode debugger (like VC1). I say this because it has the key property of soft-mode debuggers which is that the debuggee isn’t really stopped when you stop it. Both the VC1 debugger and the .NET managed debugger share this but they accomplish it totally differently and for different reasons. In managed debugging “your” code really does stop normally, but there is a debugger helper thread in the debuggee that provides access to key structures and otherwise relays important information back to the debugger, so technically the debuggee isn’t completely stopped.

So far so good, but if that were the extent of the situation, then this wouldn’t be a very interesting discussion — maybe you could call that “nearly hard” mode or something — the real situation is a lot more complicated. One complexity is that if the user tries to stop the debuggee because it’s, say, in an infinite loop, it’s possible that the debuggee is in the middle of some runtime call and not directly executing the code the user wrote. It could be in the middle of a garbage collection for instance. If that happens you don’t really want to stop the program right away do you? If you did, you’d find that your universe looks wrong — some of the objects have been moved, some have not, some pointers may still need to be corrected. In short, the world is not in a good state and there are lots of these temporarily-bad states that could be visible to a debugger. Ouch. So this is another typical soft-mode problem: when you try to stop, you kind of have to skid, get the debuggee somewhere sensible, then stop (or pretend to stop).
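The "skid" can be sketched with a toy. This is purely illustrative — the real CLR uses a helper thread and runtime suspension machinery, not a polled flag — but it shows the essential behavior: the debuggee honors a stop request only at points where its invariants hold.

```python
import threading

stop_requested = threading.Event()        # the debugger's "please stop"
stopped_at_safe_point = threading.Event() # the debuggee's "ok, parked"

def debuggee():
    step = 0
    while True:
        step += 1
        # Simulate phases where internal invariants are temporarily broken,
        # like the middle of a compacting garbage collection: every tenth
        # step the heap is "inconsistent" and stopping there would be bad.
        heap_consistent = (step % 10 != 0)
        if stop_requested.is_set() and heap_consistent:
            # Skid complete: we park only where a debugger can safely look.
            stopped_at_safe_point.set()
            return

t = threading.Thread(target=debuggee)
t.start()
stop_requested.set()          # debugger asks the program to stop...
stopped_at_safe_point.wait()  # ...and it stops a little later, at a safe point
t.join()
assert stopped_at_safe_point.is_set()
```

The gap between `stop_requested` and `stopped_at_safe_point` is exactly the skid described above.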

If that wasn’t bad enough, managed debugging has to work in a hybrid program that’s partly managed and partly unmanaged, maybe with some threads having different combinations. So if you try to stop one thread that’s unmanaged you should be able to do the usual hard-mode thing, but if you then try to inspect a different thread that is managed, well, that could cause you problems unless you’re treating each thread as it needs to be treated for the kind of code it’s running at the time it’s stopped… Oh my…

Add to this fun the fact that you often have to actually run managed code to do normal debugger things like evaluate properties and you find that our poor debugger folks had a few things on their minds while they were integrating all of this. It’s so easy to assume those call stacks and parameter values are easy 🙂

Let’s pick a couple more of the more technically interesting problems from that bulleted list I started with. One very interesting one is the Winforms designer. Now this particular designer is interesting (perhaps unique at the time) because it needs to provide a full-fidelity visualization of the form you are authoring, including, for instance, controls on that form you have written yourself in the very same project. These are what I like to call the “first party” controls. It’s easy to show that in general the only way you can provide that kind of fidelity is to actually run the code in the designer. So now we have to take the code you are writing, compile it on the sneak, load it within the IDE itself (using the very same framework of course) and then you can see your whole form, panels and all, just as it will appear in your application. Wrap it with handles and rulers and so forth to allow direct manipulation and you have yourself one very slick designer! Not easy to get that right.
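The trick of running the user's own code inside the designer can be illustrated with a toy analogue — here Python's dynamic loading stands in for compiling an assembly and loading it into the IDE process, and the `FancyPanel` control is invented for the example:

```python
import types

# Source the user is editing in the IDE -- a "first party" control
# defined in the very same project being designed.
user_source = '''
class FancyPanel:
    def render(self):
        return "[FancyPanel: 2 buttons, 1 grid]"
'''

def load_into_designer(source):
    # Compile the user's code on the sneak and load it into the host
    # process itself, the way the Winforms designer hosts your controls.
    module = types.ModuleType("designer_sandbox")
    exec(compile(source, "<user project>", "exec"), module.__dict__)
    return module

module = load_into_designer(user_source)
control = module.FancyPanel()   # instantiate the user's actual class...
surface = control.render()      # ...and run it for a full-fidelity view
assert surface == "[FancyPanel: 2 buttons, 1 grid]"
```

The full-fidelity part follows directly: because the designer executes the real control code rather than approximating it, what you see on the design surface is what the application will actually draw.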

What about all that new IDE integration and extensibility? Well to make that all happen you have to painstakingly go over all of the extensibility features that are in each of the existing shells, formalize them with clean COM contracts — usually after conducting personal interviews to find out what the existing informal contracts “really mean” — and then fit those into an all new framework of interfaces and components that is itself extensible by 3rd parties. Naturally this is a totally thankless job and it’s as likely as not that everyone involved will say you did it wrong no matter how careful you are. It’s kind of like tax-assessment: you know you have it perfectly fair when everyone hates it equally. But in the end it was super-successful; there are literally dozens (if not hundreds) of language extensions available for Visual Studio — you really could do this even if you didn’t work in Redmond!

I could keep writing about how technically impressive Visual Studio .NET was, maybe I could even win a debate with the thesis “Visual Studio .NET was the most technically impressive release ever” but the fact of the matter is a lot of people didn’t like it; history isn’t all sunshine and roses after all. I think I’d be remiss if I didn’t talk about at least some of the sore points so I’m going to hit one squarely on the head.

A lot of VB programmers did not want VB.NET at all and liked VS.NET just as much (i.e. they didn’t like it at all).

Why? Well the answer to that question is probably a whole book right there but let me boldly make some guesses.

First, VB.NET was not the language they wanted. The runtime changes presented a challenge, there was Winforms to learn for instance, but I think those might have been more acceptable if the language itself had been more VB-ish. Traditionally there had always been a compiled version of BASIC and an interpreted version at the same time. VB.NET decidedly had that compiled-language feel and that didn’t sit well with those that wanted an interpreted feel. I think a language that was more like “Iron Basic” (e.g. like our IronPython language but Basic instead of Python, still targeting .NET) would have been well received. It scripts like a dream, it has direct access to .NET objects, you can run it in a little immediate window if you like, and change anything at all about your program on the fly. I suspect we would have loved to deliver such a language but during that time we simply didn’t yet know how to do so.

Second, VS.NET was not the IDE they wanted. They were used to something smaller, tailor-made for VB that had genetically evolved for VB users, and this wasn’t it. In the initial version of the integrated shell, Edit and Continue wasn’t working for the .NET languages leading to the astonishing first-time situation that the C++ system had edit and continue and the RAD programming languages did not!

I think the net of all this was that there were divided loyalties among that generation of VS users — I think that still persists actually. It was impossible to not acknowledge VS.NET as a great technical achievement but it was also impossible to say that every customer was pleased with the direction.

Whatever else you say though, the 2002 offering became the new foundation for tools innovation at Microsoft and it began a new era in IDE development here — one in which a key distinguishing factor was the presence of rich graphical designers for virtually every development task. It was in this release that simply throwing up a text editor and calling it good stopped being enough. Arguably, even now, Visual Studio’s bevy of designers is a key aspect of its success.

Not long after Visual Studio .NET I returned to the Developer Division, I’ll pick up the story at the “Whidbey” release in the next part.

Part 7

[I know I promised to talk about “Whidbey” in this installment but I realized I needed a bridge to get there or else I’d totally skip over “Everett” — so this is that bridge.]

In MSN the arrival of the .NET Framework and Visual Studio .NET was like a breath of fresh air. My team was working on (among other things) a COM based object model for a content management system. I once estimated that fully half the code was associated with the implementation of IUnknown in direct or aggregated cases (let’s hear it for punkOuter) or else tricky ref count management — to say nothing of the similar code that was in every client.
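To see why lifetime management could be half the code, here's a sketch of the discipline IUnknown imposes. It's a toy (none of the real COM signatures, no interfaces, no aggregation), meant only to show the bookkeeping that every object and every client had to get exactly right:

```python
class ComStyleObject:
    """Toy sketch of manual COM-style reference counting."""

    def __init__(self):
        self.refs = 1       # the creator holds the first reference
        self.alive = True

    def add_ref(self):
        # Every consumer that stores a pointer must remember to do this...
        self.refs += 1
        return self.refs

    def release(self):
        # ...and to undo it exactly once, on every exit path.
        self.refs -= 1
        if self.refs == 0:
            self.alive = False   # the object destroys itself
        return self.refs

obj = ComStyleObject()
obj.add_ref()                # a second consumer takes a reference
assert obj.release() == 1    # first consumer done
assert obj.release() == 0    # second consumer done; object goes away
assert not obj.alive
# Forget a release and you leak; release twice and you crash.
```

A garbage-collected runtime makes this entire category of code, on both the object side and the client side, simply vanish — which is the rewrite savings described next.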

Using the old code as a reference, just one developer was able to re-implement the entirety of the interfaces — save one class which we chose to wrap with COM interop instead — in about 3 months. While it’s true that I have a high opinion of her (you know who you are) I think this was a great example of the .NET Framework shining in its strong areas.

The resulting libraries integrated seamlessly with ASP.NET (naturally) and were maybe ¼ the size with superior performance. One should never discount the cost of all those interlocked increments in COM objects and all the small allocations and long term fragmentation.

Fragmentation, or rather lack of it, was another key benefit we got from the .NET stack in that time frame. Until then it had been the bane of our existence, with 72 hour tests frequently showing performance degrading over time — we found that we got 5x the speed of regular ASP and stable performance over time.

We had other systems, formerly written in Perl, that we were again able to rewrite in C# in a fraction of the time. A key benefit was the strong Unicode support that .NET offered — we just weren’t getting what we needed out of the Perl implementations we had, to say nothing of the threading benefits. Debugging it was a dream compared to the old world.

In less than 6 months we had an all new data pump and all new managed content management system both scalable and more reliable than either of the previous systems. Life was good.

So I decided I should give up on MSN and go work on the CLR; I don’t think they ever forgave me. 🙂

By the time I joined the CLR, in the fall of 2002, their work on “Everett” (aka Visual Studio .NET 2003 and .NET Framework 1.1) was nearly done. It didn’t release until the spring, but things were winding down there, so I started working on “Whidbey” almost right away. Still, I don’t want to just gloss over Everett, because some important stuff happened.

C++ is a moving target; I guess all non-dead languages share that property, but perhaps C++ is a lot more robust in this regard than many other modern languages, and as a result there is almost always standards-catch-up work a compiler team should be doing. In this case it was around partial template specialization — this stuff makes my brain hurt — but I guess it should hurt: C++ templates are Turing Complete. Frankly I was amazed by what the C++ team had managed to accomplish in the time they had, given the additional needs of managed C++ and IJW (It Just Works) managed support.

But there was a lot more in “Everett” despite the fact that it was supposed to be a minor update. For instance, Everett introduced support for managed code on mobile devices. I can’t believe I’m saying that like you just push the “go mobile” button on .NET and out pops .NET CF and all the compilers, debuggers, emulators, and deployment tools just like that. I think they made it look too easy.

Everett also introduced the Enterprise Architect version with the first UML support in our systems, maybe foreshadowing VSTS. Designers were definitely here to stay.

But truthfully I can’t say very much about “Everett” because I was desperately trying to learn about the runtime and getting ready for “Whidbey”. Meanwhile “managed code mania” was at its zenith at Microsoft; sometimes it seemed like every product was trying to find a way to incorporate the .NET Framework into their plans.

Part 8

I can’t really talk about what was going on in the IDE without covering what was happening in the runtime because their fates are so intertwined, so even though it’s off topic a little bit, allow me to cover some details from Framework 2.0.

“Longhorn”, which became Windows Vista, was probably the single greatest influence on the Developer Division during the years when .NET Framework 2.0 and Visual Studio 2005 (collectively “Whidbey”) were being developed. I think I could write several books on those years as a study in being too successful for your own good.

As I wrote earlier there was a certain mania in managed code adoption during that time and, though ultimately some of those efforts had to be scrapped, many were a positive influence on the tool chain.

Anecdote: It’s not really important to the story but just for fun I can’t resist telling you about my first day on the job when I joined the CLR performance team. I had been working for the last 7 years in the MSN area, with a great emphasis on server workloads, data architectures, and so forth. So I was a bit surprised when on day one I was told I was going to be working on our client performance problems — that’s a Whisky Tango Foxtrot moment. Pays to be flexible I guess 🙂

Client workloads were a lot harder on the .NET Framework at that time. We did have an ngen story but it needed a lot of work. Putting as much code as possible into readily shared DLLs is fundamentally necessary to getting reasonable memory usage on the client — much less of an issue on dedicated servers for instance. Contrariwise, code sharing is even more essential on a Terminal Server with potentially hundreds of users running client applications.

Sharing is also vitally important to flagship applications, like Visual Studio, that are trying to get their code loaded as quickly as possible using technologies like Superfetch. It’s a pretty simple chain of events: jitted code can’t be shared, unshared code uses more memory, use too much memory and you’re dead. Sharing is good. We had to do more of it. It was doubly important because major new Framework elements were being developed while this was all going on: things like Windows Presentation Foundation (WPF) and Windows Communication Foundation (WCF), to name a few.

The Base Class Library (BCL) was gaining support for Generics (starring my favorite Nullable<T>) and XML use was exploding in all parts of the stack. I used to joke that if angle brackets <> had never been invented I would have no performance issues to work on. If only it were true.

In the universe of technologies requiring tool support we really should add a few more. There were yet more improvements to ASP.NET, there were compelling 64-bit architectures (x64 and ia64), and, probably most difficult of all the considerations, there was SQL Server 2005 “Yukon”, which introduced SQL CLR, allowing users to write stored procedures in managed code.

Again, I could write whole books on what was going on in the runtime components, but I’m trying to stay focused on Visual Studio for this series, so I think that’s probably enough landscape-setting. As it is, with all those changes it’s easy to see why Whidbey took nearly three years.

For starters, there were whole new project and deployment types needed to support the new ASP.NET with its own new development server, and likewise for the SQL scenarios. And while I’m on the topic of project and deployment, this is also where “ClickOnce” makes its debut.

ClickOnce had to permeate our project systems but its introduction also highlighted an inherent weakness: despite the extensibility offered in the project space there was (and indeed is) no one central place ClickOnce deployment could be added so that all project types would benefit. This is a classic example of why ongoing refactoring/remodeling of architecture is so important.

Personal note: my first direct interaction with the Visual Studio team during this time period was working on the performance of the Create New Project dialog — something which can never quite go fast enough to make us happy.

Ok so the project system needed work. What else? Well, anytime you add pretty much anything you can expect the debugger to be affected, and this was no exception. Generics — check, needs debugging support; SQL CLR — check, needs debugging support; it was a long list. But there was even more than just that going on: I have written earlier about the challenges of what we call “interop debugging” (where you debug both managed and unmanaged code at the same time) — in this release both the debugger and runtime teams put considerable effort into making interop debugging more reliable. That meant taking a very hard look at all the communications logic, the locking model, the safe-stopping points vs. the “skid” points, as I called them. It was a huge endeavor, but as a result interop debugging got noticeably better in VS2005.

All this hard work actually had an unexpected extra bonus. We also introduced a managed code profiler in this timeframe and as it turns out, trying to stop managed code as it runs so that you can walk the stack the way profilers like to do tends to have all the same kinds of problems as trying to stop the code in a debugging context. As a result, much of the hard work that went into getting the debugger working resulted in a more effective profiler solution.

I should come back to SQL 2005 support — it isn’t enough to just add project and debugging features; ADO.NET 2.0 was also making its appearance, and that requires suitable designers. Visual Studio perhaps earns its name more so than its antecedent Visual Basic because it’s Visual in many different ways — just visual forms no longer suffice. A new SQL engine means data designers are a must.

As if all of that wasn’t enough, there were two brand new pieces of technology that appeared during this time as well. These were part of our Office programming story: VSTA and VSTO. In a few words, VSTA is tooling for ISVs to provide scripting features to their users, allowing those users to write managed-language programs that automate their product. The primary user is clearly Office but in principle anyone can participate — you get a complete authoring experience with it. VSTO, in contrast, allows you to write Office extensions, intended to be written as native code against a COM API, using managed languages instead. That poor sentence hardly does justice to the difficulty of achieving this result, and I’m glossing over the fact that there were other VSTO solutions as far back as 2003, but this is where I remember the spotlight first shining on that technology with any kind of luminosity.

But, even with all this good work, there was one thing that stands out in my memory as being more important than all the others.

Edit and Continue was back — and there was great rejoicing.

Part 9

In the last posting I talked about the “Whidbey” release, VS2005, but I feel like I left out two really important aspects so I’d like to start this part by rewinding a bit for those two topics.

I had mentioned in Part 7 that some of the UML support in the “Everett” time-frame was starting to foreshadow the release of VSTS. Well, in Whidbey there is no foreshadowing disclaimer necessary — VSTS arrives and I think at that point it’s clear that Microsoft is going to try to have a real offering in the ALM space. I think it’s fair to say that the venerable VSS was not going to be the product to carry us into the 21st century.

I don’t want to understate the difficulty of bringing just TFS to market, and that’s only one part of the equation, but even giving those hard-working folks a well-deserved nod, I felt like the first VSTS was just a taste of ALM at this time — whetting the appetite as it were. But even so, with so many shops wanting to formalize their practices in some way — even lightweight practices like Scrum — this kind of support was welcome relief. I think those years included a much greater level of attention to the process of creating software than at any other time in my experience.

Just 4 years after its initial 2005 release, VSTS is a huge part of what we deliver — and in 2002 it didn’t even exist.

I started my discussion about the Whidbey release with an observation about how big an influence Windows Vista was on our tools. I think I should have ended with my single biggest regret about that release, because it’s highly related. Despite Vista’s huge influence, our support for it was lousy in the initial release — like, really lousy. It wasn’t until we delivered SP1, and especially the Vista support pack for SP1, that we got to a reasonable level of support. I could make a bunch of excuses but they all sound pathetic to me so I’ll just stick with “It’s my #1 regret.”

OK so much for the downer part of the article; let’s move on to Visual Studio 2008 “Orcas.”

The thing that I most remember about “Orcas” is “MQ” — the Quality Milestone. This is where we spent the equivalent of an entire coding milestone just taking care of nagging issues that otherwise never seem to get handled. I’ll name-drop again since I just linked her video: I’d have to say it was Carol more than anyone else who was keeping us honest about the “debt” we had accumulated in our product and popularizing that meme in our division. Debt is a great way to think about trade-offs: every time you make a choice that isn’t right for the long term, every short-cut, you accumulate some debt. Any bug you choose to defer, that’s debt. Some debt you should write off (that fix just isn’t happening), some you should address, but you should always be aware that you’ll have to deal with it sooner or later, and it may as well be sooner.

That mind-set, which was the focus of MQ, was pervasive for the whole release, and I think it shows. From a stability and performance perspective VS2008 was universally better than 2005 — a very hard thing to achieve in any major release with significant additions.

I think the “star” of VS2008 was Linq — my favorite flavor was “Linq to SQL,” which I’ve written about at some length before. Perhaps the most amazing thing about Linq is that its introduction into both C# and VB allowed mainstream programmers to start using functional concepts in a natural way, often without even realizing it! But close behind are two other amazing things:

1) Linq variants (Linq to XXX) started appearing like crazy — the notation was incredibly useful

2) You could actually get great performance out of this kind of layer without inflicting craziness on users (see my article again for the Linq to SQL discussion)

In the same way that the needs of high-quality designers drove advances like partial classes (critical for code separation), the needs of the Linq variants drove type inference, extension methods, and expression trees, along with the concise lambda notation necessary for Linq’s mainstream success, into the languages. This brings great collateral benefits because those exact same notions are helpful in so many other scenarios; I use them pervasively in my WPF code for event handling, for instance.
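To make the functional flavor concrete, here is a rough analog of a Linq query, written in Python rather than C# (the data and names are invented; comprehensions stand in for the query operators):

```python
# Hypothetical data; a Linq query such as
#   from p in people where p.Age >= 30 orderby p.Name select p.Name
# composes filtering, ordering, and projection declaratively.
people = [
    {"name": "Cid", "age": 41},
    {"name": "Ada", "age": 36},
    {"name": "Bob", "age": 25},
]

# The same pipeline as a generator expression fed into sorted():
names = sorted(p["name"] for p in people if p["age"] >= 30)
print(names)  # ['Ada', 'Cid']
```

The point is the shape of the code: a small lambda-like expression drives each stage and no intermediate types need to be spelled out, which is roughly what type inference and lambdas buy in C# and VB.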

As usual this all comes at a cost, and it’s not just language notation: it’s also the debugging support, profiling, garbage collection, FXCOP, and so on. After all, just because a method is dynamically generated, has no name, and has no .pdb file information doesn’t mean you don’t want to analyze it like the others. And of course the ability to generate expression trees in addition to IL represents another significant undertaking for the compilers. All of this is fundamentally necessary for Linq to succeed but it’s kind of invisible — it’s the stuff you expect to just work.

“Orcas” also included the “Cider” designer for WPF. Remember that during this period there had been several runtime releases, including 3.0, and Orcas shipped alongside version 3.5 of the framework. By then WPF was a first-class application target and it needed designer support, and the richness of WPF resulted in perhaps one of the most complicated designers ever created. It’s doubtful that a successful designer could have been created if not for the lessons from “Sparkle” (aka Expression Blend) and language advances like partial classes to ease things along.

The abundance of framework subsets led to the need for multi-targeting — that is the ability of the tool to target any one of several framework versions or to have solutions with projects targeting arbitrary combinations. This was tricky enough in Orcas but at least the frameworks all shared a common core — simplifying things significantly. That grace would be absent in the follow-on release VS2010.

Other application models were popping up as well — Silverlight had made its debut, taking WPF concepts to the web, and Windows Workflow (WF) also delivered a release and a designer. If it wasn’t already de facto reality, the general theme that “every application model needs a suitable design experience” was certainly cemented during this period.

Debugging technology took itself to amazing new places, like XSLT debugging (XSLT makes my head hurt even more than APL), and gained the ability to debug into the BCL, dynamically downloading the source as needed to do the job. Very slick, but by now you know I’m partial to debugging technology :)

And on the native code front, MFC 9 makes its appearance — adding support for new Vista concepts, and representing the first significant update to MFC in a very long time.

But for me the biggest memories are just working on performance across the board, from MQ to the finish. In the framework, in the tools, we made big dents in many scenarios and overall, even with lots of new features, Orcas felt snappier than Whidbey which was a great accomplishment. I loved using it!

I just hit 1200 words so I think it’s time to stop, having skipped vast regions and barely mentioned VSTS, I’ll pick up again in a bit.

Thank you for all the feedback and comments, they’re much appreciated.

Part 10

Visual Studio 2008 Winds Down, Visual Studio 2010 Begins

Just as things were starting to wind down on VS2008, I got a new assignment. I was going to be Chief Architect of Visual Studio. Wow. So here this series takes a bit of a turn because “my” history is quite a bit different at this point — I actually stopped working on the current product at all, and much as I would have liked to write about what I was doing back then, you can imagine there wasn’t a whole lot of enthusiasm about starting blog discussions on what we should do in “Dev10” (the term “Hawaii” never stuck and was soon abandoned) while we were trying to launch VS2008.


From a technical perspective, the idea of VS2010 was to begin a remodeling process that would span several versions, making key investments that would help long-standing problems while offering some immediately compelling features as well. I gave a lot of thought to what enabling steps we would have to take to make our experience something great in, say, three versions. It was hard to see exactly what that future might be, so I tried not to think about specific features at all. I had seen all sorts of “flashware” concepts for things we might do and I tried to think about what key problems we would have to solve to implement things of that nature. Using all that, and what I already knew, I came up with a list of fairly low-level considerations for a “VS of the Future” that I thought we should work on, and why.

I think the actual document I created is pretty dry, it’s full of observations about the size of processor caches, the number of cores, the memory wall, the locality of our data structures, blah, blah, blah. It goes on like this for pages, but I think I can summarize it into a couple of major themes.

Concurrency

The first major theme is concurrency, and the need to increase our ability to use it successfully. You might think that with all of those threads Visual Studio spawns there is a great deal of parallelism but, for the most part, it’s only superficial (there are exceptions); what happens more often than not is that those threads run something like a relay race, with only one thread holding the baton at any given moment. One key reason for this is that, due to the single-threaded history of our code-base, and indeed of many Windows applications, a lot of the objects are tied to the STA that is the main thread.

To create opportunities for concurrency you have to start to disentangle the important data structures from the STA and give them better access models that are friendly to concurrency. A great example of this is the new editor in VS2010. It allows you to access editor buffers via immutable micro-snapshots. Previously, if you wanted to do something like background analysis on a live text buffer, you had to do crazy things like copying the entire editor buffer on every keystroke. Concepts familiar to database programmers, like isolation, need to be present in the key structures of any concurrent system.
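The snapshot idea can be sketched in a few lines. This is a toy illustration in Python, not the actual VS2010 editor API; all names here are invented:

```python
class TextSnapshot:
    """Immutable view of the buffer contents at one point in time."""
    def __init__(self, text, version):
        self._text = text
        self.version = version

    @property
    def text(self):
        return self._text


class TextBuffer:
    """Mutable buffer; every edit publishes a fresh immutable snapshot."""
    def __init__(self, text=""):
        self._snapshot = TextSnapshot(text, 0)

    @property
    def current_snapshot(self):
        return self._snapshot

    def replace(self, start, end, new_text):
        old = self._snapshot
        merged = old.text[:start] + new_text + old.text[end:]
        self._snapshot = TextSnapshot(merged, old.version + 1)
        return self._snapshot


buf = TextBuffer("hello world")
snap = buf.current_snapshot        # a background analyzer holds this
buf.replace(0, 5, "goodbye")       # the UI thread keeps editing
print(snap.text)                   # hello world  (the worker's view is stable)
print(buf.current_snapshot.text)   # goodbye world
```

A real implementation shares the unchanged text between versions rather than copying it, but the isolation property is the same: a reader never sees the buffer change out from under it.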

The STA concerns are a bit like a plague; they can cause a massive “entanglement” of data processing and presentation. Transitions into or between STAs can result in unexpected message pumping, leading to bizarre re-entrancies which in turn result in astonishing bugs that are very hard to find and fix.

Additional entanglements occur between processes trying to create parent/child relationships between their windows where the single pump effectively fuses together certain STAs — a necessary step given typical application assumptions but a total bug nightmare and a parallelism disabler.

Ultimately the most important thing is to make it as easy as possible to move work off of the UI thread. I think it’s vitally important that there is a single thread responsible for the UI — it’s the sanest way to preserve the order of operations the user initiated — the causality from their perspective if you will — but as their requests become non-trivial you need the flexibility to do them asynchronously.

There are two major kinds of background activity that come up in an IDE; it is easiest to illustrate these two needs by example:

1. Search for “foo”

2. Rename “foo” to “bar”

These represent two cases where we can use a “coarse” parallelism model (e.g. one file at a time, independently). Of course we would also like to do fine-grained parallelism in places, and in fact searching could be amenable to this as well (e.g. search odd lines with one thread, even lines with another). Fine-grained parallelism has the opportunity to keep data locality good, because you can access the data in one stream rather than access disparate data streams all at once, at the expense of more complex scheduling. Locality is good for performance, and more threads are good, so this all sounds great, but you need some fundamentals in place to make any of it happen.

Putting the text editor on a plan that can satisfy these needs is itself a big step forward but on a long term basis it would be nice to do many things on this plan.
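The coarse-grained model, one independent task per file, can be sketched like this. It is an illustrative Python sketch with invented file contents, not Visual Studio code:

```python
from concurrent.futures import ThreadPoolExecutor

def search_file(name, lines, needle):
    # Coarse-grained unit of work: scan one file independently.
    return [(name, i) for i, line in enumerate(lines) if needle in line]

def search_all(files, needle):
    # One task per file; map() yields results in submission order,
    # so the user sees a stable, deterministic ordering.
    with ThreadPoolExecutor() as pool:
        per_file = pool.map(
            lambda item: search_file(item[0], item[1], needle),
            files.items(),
        )
        return [hit for hits in per_file for hit in hits]

files = {
    "a.txt": ["foo bar", "baz"],
    "b.txt": ["qux", "foo again"],
}
print(search_all(files, "foo"))  # [('a.txt', 0), ('b.txt', 1)]
```

Note that this only works safely because each task reads its own isolated data; if the files were live editor buffers, the snapshot model above is what would make the background scan safe.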

New UI Model

Again, I went on at great length about this in my original memo but I’ll treat this topic even more briefly than concurrency. The long and the short of it is that your average developer’s machine has the equivalent of a few 1990s supercomputers for a GPU (we could haggle over how many) and it’s important to be able to get good mileage out of that unit. It’s hard to imagine achieving those kinds of successes at reasonable cost using the good old WM_PAINT + GDI model — a high quality retained mode graphics system is basically a must.

In addition to making it easier to use the GPU, there is also a tremendous desire to create a clean separation between the UI and the data elements of your design. Not just because this is architecturally sensible and clean but, as I discussed in the previous section, because to do otherwise significantly hinders your ability to achieve a good degree of parallelism. The retained-mode graphics model actively encourages this badly needed separation, and you all know I love the Pit of Success.

Ultimately that leads to the choice of WPF for presentation in Visual Studio, because the long-term choices are to either use that or roll our own. However WPF and Silverlight evolve in future frameworks (and you’d have to be crazy to think they won’t), we’re better off using WPF notions than relying on the past, or worse, on something of our own making.

What came of all this?

I’ve deliberately avoided going into feature discussions so far, just as I did in my original memo, but ultimately we had to choose particular product areas where we would take these notions and invest in moving them into a good direction. That’s the point of the “remodel” after all.

These are some of the major initiatives that got an investment; they had impact throughout the entire product.

Multi-Targeting

A new framework, and in particular a new CLR, essentially mandated that we do this work. We had been able to do this more cheaply in previous releases because all the target frameworks shared a CLR (the CLR had been on v2.0 even though the framework moved on to 3.0 and 3.5). This base level of compatibility allowed a simpler “onion-skin” kind of model to deliver an effective solution in Orcas — that wasn’t going to work in Dev10. In Dev10 we had to be able to target CLR 2.0 and CLR 4.0 and we had to do it in such a way that any given solution might use the related frameworks in any combination. We certainly couldn’t assume that the mscorlib that the IDE had loaded was the one that the user wanted to target.

Then there’s the matter of tools. There are different compilers for different frameworks, and we can’t assume that, for instance, the C# compiler for version 4.0 is the right tool for the job when building a project designed to run on version 3.5. And even if we were super careful with the compilers, what about all the other tools that create outputs, like, say, resources? Those assets might be serialized in a format that expects a particular framework to load it. So no one tool could do the job.

And let’s not forget the native tools; they have all these problems too. Though I’ve been using managed examples, you can search and replace the runtime names in my discussion with native tools and libraries and you get basically all the same phenomena.

So in VS2010, based on the project target, we have to select the right libraries and tools to create all your outputs and we have to be able to vary these on a project by project basis. That sounds good.
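The per-project selection can be pictured as a lookup from target framework to toolchain. This Python sketch uses entirely hypothetical tool and property names (the real mechanism lives in the MSBuild target files) just to show the idea:

```python
# Hypothetical toolchain table keyed by target framework version.
TOOLCHAINS = {
    "v3.5": {"compiler": "csc35.exe", "refs": "v3.5 reference assemblies"},
    "v4.0": {"compiler": "csc40.exe", "refs": "v4.0 reference assemblies"},
}

def resolve_toolchain(project):
    # Each project carries its own target; the build picks tools per project,
    # never assuming the IDE's own runtime is the one being targeted.
    target = project["TargetFrameworkVersion"]
    if target not in TOOLCHAINS:
        raise ValueError(f"unknown target framework {target}")
    return TOOLCHAINS[target]

solution = [
    {"name": "LegacyLib", "TargetFrameworkVersion": "v3.5"},
    {"name": "NewApp", "TargetFrameworkVersion": "v4.0"},
]
for project in solution:
    tools = resolve_toolchain(project)
    print(project["name"], "->", tools["compiler"])
```

The interesting part is that the lookup happens per project, so one solution can freely mix targets.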

That’s just the start.

The build chain is in some ways the least of our problems; what about all those designers? What about Intellisense? As you change focus from file to file, depending on the framework you’re targeting in that file, we need to show you Intellisense for just the methods you can actually use in that context. Likewise designers should include only choices that are reasonable for the current target; for instance, the items in the Winforms designer toolbox should not be the controls available in the framework version the IDE itself happens to be running on, but rather the ones available on the desired target, and of course those are not likely to be the same.

Add to this mix the desire to target platform subsets like the “client profile” of framework 4.0 and you’ve got quite a challenge on your hands.

I remember a very compelling breakdown that the teams used to explain the levels of success to management (and me):

Good: Designers and Intellisense prevent you from making mistakes; you see only things that you can really use for your target.

Not Good: Designers didn’t warn you, but at least you got a compile time error that was helpful and can be corrected.

Bad: There was no compile time error but when you ran the code you could readily see that it had some target specific aspect that shouldn’t be there and you could correct the code to avoid the problem.

Ugly: As above, but there is no reasonable diagnostic information and/or no reasonable way to fix it.

Obviously we want all aspects to be “Good,” and where that can’t be achieved, as few cases as possible in each successively lower category. I think you’ll find that as of Beta 2 we’ve been very successful.

New Editor

Without going into a full discussion of the problems with the “old” editor it is worth mentioning these basic facts:

Many customer performance complaints boil down to a lack of scalability in the old editor’s structure. These issues in turn boil down to non-linear memory usage and non-linear algorithms, where sub-linear choices are possible, and necessary, to create the proper experience. We get real scalability complaints from customers on an ongoing basis and these problems can’t be solved by “tuning.”

To address these problems we had already built a new editor platform, including visualization and text management services. The design had proven scalability and extensibility and had already shipped (in Blend). It was a very real technology. However, there was a great deal of code in Visual Studio that depended on old editor behaviors.

To simplify adoption of the new editor we created “shim” services that supported the most popular of the old interfaces allowing existing clients to continue to use those interfaces. Since most editor consumers actually use only a subset of the whole API we were able to substantially mitigate the cost of conversion across Visual Studio — shims weren’t a universal solution, but they helped.

Now I say that like it was some easy thing but wow it’s a ton of work even at the “reduced” cost. And the number of shims seemed to keep growing, and growing, and… it took a while to get this under control.
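A shim is essentially an adapter: the old interface is kept alive by forwarding to the new implementation. A minimal Python sketch, with invented method names standing in for the real editor interfaces:

```python
class NewEditor:
    """The new implementation: span-based edits over a text value."""
    def __init__(self, text=""):
        self.text = text

    def replace_span(self, start, end, new_text):
        self.text = self.text[:start] + new_text + self.text[end:]


class OldEditorShim:
    """Exposes a legacy-style interface while delegating to NewEditor,
    so existing clients keep working without being rewritten."""
    def __init__(self, editor):
        self._editor = editor

    def get_text(self):
        return self._editor.text

    def insert_at(self, pos, text):
        # The old API call expressed in terms of the new one.
        self._editor.replace_span(pos, pos, text)


editor = NewEditor("world")
shim = OldEditorShim(editor)
shim.insert_at(0, "hello ")    # a legacy client's call, unchanged
print(shim.get_text())         # hello world
```

The catch, as described above, is volume: each popular legacy interface needs its own forwarding layer, and subtle behavioral differences leak through.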

There were other problems. The new editor wasn’t identical to the old, despite being more functional overall it was missing some things — like column select — that would need to be added (it’s in Beta 2). Then there are the complicated editor interactions in cases like those presented by the Web Forms designer — the presence of markup intermixed with a language (like VB or C#) makes Intellisense very tricky. The language service needs to see the file without the markup to provide coloring and Intellisense but of course the user edits the composite file. Getting this to work on the new editor was quite a trick.

Having gone through all this I think I can say that the new editor has a very bright future; it has unprecedented extensibility — everything from text replacement, to object insertion, to HUD-like overlays. In this version we’ve barely begun to tap its capabilities.

New Shell

It would be a mistake to think of the new shell as just a redo of the old in WPF; it’s quite a lot more than that. There is a whole new toolbar system and a whole new menu system, both based on WPF. The docking controls, easily taken for granted, are all new. There are the bits that allow document windows to float completely free for better multi-monitor support. There are all the decorative bits, the window tabs, the output window tabs. It’s starting to add up, but these things are only one aspect of the Shell: its UI.

Behind the scenes, there to enable pieces like the new editor that I discussed above, and the project system that will be next, are the Shell Services. You might see screenshots of the new Extension Manager and the Visual Studio Gallery, they are in some ways the face of much of the underlying work: a new deployment model for extensions centered around the VSIX packaging format and the MEF extension model.

But those dialogs really are just the face; to make the systems work there are many invisible things, like the MEF catalog and the services that manage extension state. In fact, if you were to visit the Shell Services link I provided above, you could go down that entire list and find very few major systems that were not affected in this rewrite. From the Start Page to Intellisense, most services have had significant work. MEF itself lays out a roadmap for future levels of extensibility in the product — simultaneously more comprehensive, easier to use, and in more areas.

That last notion is perhaps the most important one. I’ve said this many times in various speeches but it bears repeating: it’s hard to look at Visual Studio, a product that is substantially ALL extensions, and say “it needs more extensibility” — but it does. The original Visual Studio shell provided excellent extensibility of the core services — VS itself is the proof of that — but what was lacking is general extensibility of the extensions themselves. These extra levels are what we sought to add in the Editor and the Common Project System, and hopefully they will pervade the product over time.

“Common” Project System

I’ve alluded to this system several times so far in the document; it’s also an important step on a long-term plan.

Anyone who has created a project type for Visual Studio knows that it can be a serious pain to try to keep up with the Joneses. The flagship languages add new features (ClickOnce is the canonical example) and since there is no central place to put project features, they end up having to be re-implemented in the code for many different project types. The ability to create project “flavors” (described a little bit here) helps alleviate this somewhat but certainly does not provide complete relief.

So, at the start of the cycle we find ourselves in a situation that is something like this:

So the solution seems obvious: we’ll start building “CPS” and we’ll do C++ first; that way we have a test case, we don’t have to change the world all at once, and the C++ team will no longer have to be in the business of managing all its own project stuff. The reason I quoted “Common” is that, at least for now, it’s only the C++ project system, though it has ambitions to be a lot more.

How hard can that be? It’s only an all-new project system with virtually all of its attributes encoded in cascaded XML files, general-purpose editing UI, pervasive MEF extension points, and enough scale to handle the largest C++ solutions we have. Why, it’s triviality itself :)
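The “cascaded” attribute idea can be illustrated with layered dictionaries. The property names here are invented and the real system reads its layers from XML files, but the override-in-order behavior is the essence:

```python
from functools import reduce

# Layers from least to most specific, as if read from cascaded files:
layers = [
    {"Optimization": "Disabled", "WarningLevel": "3"},  # toolset defaults
    {"Optimization": "MaxSpeed"},                       # configuration layer
    {"WarningLevel": "4"},                              # project override
]

def effective_properties(layers):
    # Merge in order; later (more specific) layers win.
    return reduce(lambda acc, layer: {**acc, **layer}, layers, {})

print(effective_properties(layers))
# {'Optimization': 'MaxSpeed', 'WarningLevel': '4'}
```

The appeal of the scheme is that adding a new property or a new layer requires no new code, only data, which is also what makes general-purpose editing UI possible.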

Other Highlights

When I wrote about “Dolphin” I gave a litany of changes, it was an impressive list, but I think I can say with candor that I gave a fairly complete list of the major new pieces of work in that release. I was able to do that because the team was a lot smaller and the whole release could fit inside one person’s head. I’m not even going to pretend that I can write about everything that’s in VS2010 — So far I’ve stayed very close to my own team and the things that were most dear to me — this is only “my” history after all, the complete history would require input from thousands. So even though I can’t possibly be complete, there are a few of my favorite new things that I want to talk about.

The New C++ Code Model

The VS2010 C++ Code Model is essentially a complete redo of the old technology. Now I’m a funny guy to be writing about this because, probably more than any other one person, I’m responsible for the creation of the old technology. All those .bsc files, that was my baby; we came up with that system when we were working on Minimal Rebuild (previously discussed), and several of my friends had the joy of maintaining that code in the years after I left the C++ team (thanks, you know who you are, and sorry!). The .ncb code was the “no compile browser,” and the original flavor of that was created by another fellow I supervised. No name dropping, but this was another case of “give the new guy this really hard project, I’m sure he’ll do fine.” Gee, I find myself apologizing a lot in this history.

Anyway, suffice it to say they were fine in, say, 1993, but I think it’s possible to do a lot better in 2009, and nobody is happier to see my old code leave the building than me. The new system offers improvements almost across the board, especially in reliability and performance.

Historical Debugging

You know I couldn’t do another entry without talking about debugger technology. Historical Debugging is a feature that’s been incubating in the minds of debugger people probably for as long as they’ve been debugging.

I was the debugger lead in the early 90s and I used to explain the utility of debuggers and debugging tools in this way: Imagine a program with a bug, it has been running along, everything is fine, everything is going wonderful, the flow of execution arrives at a point we’ll call Albuquerque, where it turns right. Now as every Bugs Bunny fan knows, the correct thing to do at Albuquerque is to turn left. The program’s decision to go right has led it down an incorrect path and sometime later we will observe a problem.

Now if we’re very lucky “sometime later” will be very soon, like for instance it might be that we just de-referenced a null pointer and we’re going to take an exception about 2 nanoseconds after the mistake. That’s an easy bug to fix. On the other hand it could be that “turning right” was more subtle — maybe we corrupted a data structure in a minor way and it might be days before we can see an observable effect — that kind of bug is a nightmare.

Finding “Albuquerque” is what I call The Fundamental Problem of Debugging. The debugger provides you with tools (e.g. breakpoints) that allow you to stop execution while things were still good and slowly approach the point where things first went wrong. The debugger likewise provides you with tools to examine the state afterwards, hoping to find evidence of what went wrong in the recent past that will help you to see the origin. The callstack window is a great example of looking at the past to try to understand what might have already gone wrong.

To find the problem, you might start after the failure and try to look back, finding a previously unobserved symptom and moving closer to the original problem or you might start before the failure and try to move forward slowly, hopefully not overshooting the problem by too much. Or you might do a combination of these things. You might add assertions or diagnostic output to help you to discover sooner that things went wrong, and give you a view of the past. It’s all about finding Albuquerque.

Historical debugging directly addresses the Fundamental Problem by giving you the ability to look at a lot more of the past. And it’s far superior to the limited ideas we had back in the 90s for accomplishing the result. I know a couple of people who have had this idea in their head for over fifteen years; it’s great to see it out there for programmers to enjoy. I think you’ll love it, it’s like a drug! :)

A New Help System

The Help system was another case where we saw very early on that it should get a remodel. I think the energy around that effort began with a very well written memo from a friend of mine. Frankly the sad truth is that it was pretty hard to find people who had anything nice to say about the Help system in late 2008. I think many users had given up and were just using their favorite online search to find what they needed, which, as a doubly tragic statistical truth, was probably not our search.

The Help team didn’t have an architect so I loaned myself out to them, and together we made what I think are some pretty cool plans that have borne fruit. The new system is much lighter weight and is standards-based. Taking a cue from MS Office, the content files are basically XHTML in a .ZIP file with the custom extension MSHC. This is an archivist’s dream, but it also opens up the Help authoring space tremendously: with no documentation at all you can now just look at one of the archives and pretty much figure out how to get your content anywhere in the system.
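Because the format is just XHTML in a ZIP, the standard library of nearly any language can produce or inspect an archive. An in-memory Python sketch (the topic content here is invented):

```python
import io
import zipfile

topic = (
    '<html xmlns="http://www.w3.org/1999/xhtml">'
    "<head><title>My Topic</title></head>"
    "<body><p>Hello, Help.</p></body></html>"
)

# Write a one-topic archive; on disk this would carry the .mshc extension.
archive_bytes = io.BytesIO()
with zipfile.ZipFile(archive_bytes, "w") as mshc:
    mshc.writestr("MyTopic.htm", topic)

# Read it back with the same ordinary ZIP tooling.
with zipfile.ZipFile(archive_bytes) as mshc:
    entries = mshc.namelist()
    content = mshc.read("MyTopic.htm").decode()

print(entries)                 # ['MyTopic.htm']
print("My Topic" in content)   # True
```

That openness is the point: no proprietary reader or writer is needed to participate in the Help system.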

For offline viewing, the custom help viewer is gone, replaced with a mini localhost web-server that delivers the help to the browser of your choice from the archives. We tried various indexing strategies for full-text and other searches but ultimately settled on a custom indexer that I wrote in a few days (well, to be fair, they did a lot of work on it after I finished my parts, but I wrote the core). The indexer is actually very similar in operation to my old bscmake program, the C++ program database generator; those techniques fit Help content well, and it produces MSHI files.
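As a toy illustration of what a full-text indexer does (this bears no resemblance to the actual MSHI format), here is a word-to-topics inverted index:

```python
from collections import defaultdict

# Invented topic bodies, as if extracted from XHTML files in an archive.
topics = {
    "intro.htm": "getting started with help",
    "search.htm": "full text search and help indexes",
}

def build_index(topics):
    # Map each word to the set of topics containing it, so a search
    # is a single lookup instead of a scan of every topic.
    index = defaultdict(set)
    for name, text in topics.items():
        for word in text.split():
            index[word].add(name)
    return index

index = build_index(topics)
print(sorted(index["help"]))    # ['intro.htm', 'search.htm']
print(sorted(index["search"]))  # ['search.htm']
```

Pre-building such an index is a pure time/space trade: the archive alone is sufficient, the index just makes lookups fast.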

Importantly, you don’t have to create our index format to publish Help for your topics; the MSHC file has all the information in the XHTML. We ship pre-built MSHI files to save installation time because of the size of our corpus, but you don’t have to. Additionally, the MSHI files can be merged as much or as little as desired for even more performance. But don’t merge MSHIs from different locales, because there is no usable universal sort order that looks decent to customers.

A 3rd-party help viewer need not use MSHI at all; you could easily build your own index.

The content itself is delivered in a much lighter weight format, the big tree control is gone. In fact most of the styling for the pages is itself stored in the content as a topic and it can all be updated like any other content. You can have a lot of help on your system, or just a minimal set of your favorite books, your choice.

Overall I’m very happy with how this effort went. We replaced aging indexing technology with something new and much more precise. We removed the viewer application entirely. We drastically simplified the interface between the IDE and help (basically it’s a URL at this point). And it’s faster!

The Benefits of Dogfooding

It’s fairly well known that we try to dogfood all our own products. In this particular release I think there were some nice significant benefits from this practice. I’ll highlight just a few of the ones that I think are the most important here.

.NET Framework 4.0

It’s hard to imagine not dogfooding this in the release given its importance, but there were several unexpected benefits for customers. First, the general character of the release was profoundly affected by Visual Studio’s early adoption, and our need to create designer assets for many different framework versions. The effects were felt in everything from binding policy to metadata format. As an architect I was very pleased by these results because in many cases we were able to find and prevent problems on the whiteboard, where corrections are a lot cheaper.

A secondary benefit came about because Visual Studio 2010 had to implement so many existing COM interfaces in managed code, resulting in a large number of COM Callable Wrapper (CCW) objects. Likewise, much new managed code needed to call existing interfaces that were not being converted, hence significant use of Runtime Callable Wrapper (RCW) objects. That’s all well and good; these technologies had existed for years. However, fairly innocuous-seeming choices, like “when should RCWs be cleaned up,” were having profound consequences: the additional CLR cleanup code introduced unexpected reentrancy that was not present in the strictly native implementation. Importantly, Visual Studio gave us fairly easy and important test cases to work with, and this ultimately resulted in significant improvements in COM interoperability — these kinds of experiences make the Framework better for everyone.

Windows Presentation Foundation

The WPF team had many headaches to deal with thanks to having VS for a customer, but I’d like to think WPF also got more than a few benefits. Some of the corrections we asked for were fairly minor — like the toolbar panel’s layout code, which was fairly weak, and we have a lot of toolbars. But in other cases there were significant weaknesses.

I’d say that if there was a theme to our WPF problems it was this: the complexity of Visual Studio’s hybrid use of traditional Win32 and WPF elements was a new challenge. But ultimately this resulted in a lot of goodness. It’s possible to do good focus transitions now between WPF elements and Win32 elements, something that used to make a person’s head hurt. And, perhaps even more importantly, different combinations of hosting WPF in Win32 UI and Win32 UI in WPF were causing extra drawing delays. These paths were exercised by the many window combinations present in Visual Studio and the abundance of big display transitions (like switching to debug mode and back). Generally speaking, the hosting services got a lot of exercise and were significantly improved as a consequence.

I’ll talk a little bit more about WPF below specifically as it relates to Beta 2.

Team Foundation Server

Of all the technologies that we beat on, I think TFS got the worst beating of any system. For sheer volume of users, size and number of shelvesets, number of branches, merges, automated builds, and reports, to say nothing of bugs, issue tracking, and basically every other TFS feature, our entire division put a level of load on our TFS systems that makes me think anyone with even a remotely normal load is going to have no trouble at all. Sometimes it was painful for us, as dogfooding is (and is supposed to be), but it's going to be that much better for you.

From Beta 1 to Beta 2

I've saved the best news for the last section of the last installment. Beta 1 of VS2010 represented some really great progress towards our goals, but there were a lot of issues. I'm happy to report that we've been hard at work on the most serious ones, and I think we've got great results for you. I've listed just a few of the improvements that I think are most important below.

There are so many improvements I can't possibly list them all. I lost count of the number of performance issues we addressed; there were probably over 100 performance analysis reports counting only my efforts on the most critical scenarios, and I was hardly alone in this!

Generally, looking at the full battery of our performance tests we find that VS2010 outperforms VS2008 in the majority of cases, and, if you weigh the scenarios by importance, VS2010 looks even better than that. Our remaining performance battles are around reducing overall memory consumption, and I’m sure that will continue to be a focus area between now and the final release as we fight for every byte.

Internally, folks are very surprised and pleased by how far we have come in this Beta. I hope you are equally pleased.

To everyone that got this far, thanks so much for reading My History of Visual Studio, and I look forward to your feedback on either the Beta or the series.

Epilog [4/14/2010]

Visual Studio 2010 launched on Monday. Wow! It's HUGE. A major round of congratulations is in order for everyone involved, not just the Visual Studio team but also the Frameworks team, the supporting teams, and of course the customers whose feedback was so vital to the success of the product.

I've already written a lot about VS2010 previously in the series and I don't want to go over all that stuff again. In my last history posting, back when Beta 2 came out, I covered all the major things that we wanted to get into this version of Visual Studio from an architectural perspective. You could look down that list and get a feel for whether or not the release was a success from that perspective. I think key architectural features like a full WPF surface, MEF extensibility, extension management generally, and of course multi-targeting were delivered. General-purpose multi-targeting is a huge thing because it means we don't have to do a VS SKU release for every new target platform that comes out — that was a critical thing for our own sanity. And WPF has certainly (finally?) proved itself as a viable platform for a flagship application. We set out on a multi-release remodeling plan and I think we delivered an excellent first installment.

You might be wondering why I was so quiet during the home stretch. I’m afraid my perspective on the VS RC and RTM milestones was an outsider’s. The Beta 2 push was my last major hands-on involvement with the product; probably the simplest diagnosis is that I burned myself out at that point. In any case I took a vacation after that Beta with the goal of returning to find a new challenge. I’d been in the Developer Division about 7 years, for the second time, and I like to move around about that often, so it was time. I’d love to take some credit for fixing the daunting memory leaks that were left in Beta 2 but I’m afraid all credit for that goes to my capable colleagues to whom I am forever grateful for taking this particular release across the finish line.

Some things were lost along the way in the name of getting the necessary stability and locking things down. It's kind of a sad thing when that happens, but it's a necessary thing; they'll just have to wait for the service release. Nearly-ready features that just needed a little more bake time are a great thing to have in your pocket at that point. But there are a few especially delightful things that I wanted to mention, sort of unexpectedly delightful, for me anyway.

We started with an ambitious goal to make a major dent in Visual Studio's setup experience. A lot of what we wanted to do ended up being pared back, and I feared we would end up making little progress, but in the final analysis I think the setup experience is substantially better. Seeing tweets about how people had successfully installed in less than 30 minutes was very nice — more than I hoped for in the darkest hours. My own experiences during the beta were the best of any VS release I've ever installed, and that's a lot of releases.

I feared the historical debugging feature, IntelliTrace, was going to be too bulky to use all the time, but that's not proving to be the case at all. I'm just getting used to having it available all the time, and it's proving to be much more of a game-changer than I ever imagined. I wrote more about the history of that feature previously, so I won't go on other than to give some kudos to those who worked hard to make that feature a reality.

I didn’t think TFS was going to be so universally important but the ease with which TFS Basic can be brought online was just staggering. In just a few minutes I configured my own home TFS installation — it’s great to have that kind of quality ALM system so easily available to everyone.

The rate at which new extensions are appearing in the gallery is incredible. I'm so glad to see this hub creating energy for all our customers, but there was one extension in particular that made my day. Dmitry Kolomiets produced, I think, the first Solution Load Manager extension. Why do I care so much about that particular extension? Well, there's a story there: from very early on in the history of VS2010 I was espousing a philosophy that we could not universally load every project in a solution and achieve our memory goals — many solutions are just too large, and it's entirely normal to not need every project. But there was no reasonable way to manage which projects got loaded. Unfortunately there was just not enough time to make this feature a reality, but there was enough time to add the necessary APIs so that a third party could make us load less and manage what was loaded. With that support they could build their own UI to drive the experience. It was hoped that the performance benefits of such a system would motivate the creation of a variety of managers with different policies, potentially tied into the great refactoring engines already out there (e.g. CodeRush and ReSharper). But it was just a hope. Now there is one!

I could go on for ages. The diagramming support and of course our uses of it. Integrated Silverlight support, ASP.NET 4 with MVC2 (read about it on Scott’s Blog). Language improvements in C#, VB, and C++. Did I mention Windows Azure support as well? Sharepoint support? F#? It’s huge.

This was only “My” History of Visual Studio. For me this is the “epilog,” but for everyone else, with VS2010 out the door, it's time to begin your own History.

Congratulations and Enjoy!

Written by

I’m a software engineer at Facebook; I specialize in software performance engineering and programming tools generally. I survived Microsoft from 1988 to 2017.
