Why Small Programs Are Better
Sometimes people ask why it’s so important for programs to be small. With programs made for your phone, of course, a couple of important aspects are how long the code will take to download and how much space it will take up on disk. Those are both important, but binary size is also a good yardstick for many other kinds of efficiencies, or inefficiencies, that can linger in your code.
Let’s start by talking about loading your program, or “cold start” as it’s usually called. And as always I’m going to give some approximately correct numbers to help motivate the discussion without getting too buried in all the ways your mileage may vary.
In cold start, typically, the dominant cost is not running the initialization code but rather getting that code loaded so that it can run at all. To understand this I’ll give you some rough numbers. If data you need is sitting in main memory rather than the processor’s cache, by the time it is available to use you could have run something like 150 instructions. If the data is instead on some kind of decent flash drive, that number goes up to something more in the neighborhood of 1,000,000 instructions. If the drive is poor, old, or full, maybe double or triple that. If we’re talking about something spinning, maybe 20 times that figure. That’s a lot of cheddar.
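Just to make that concrete, here’s a tiny back-of-the-envelope calculation. The binary size, page size, and per-fault cost below are made-up round numbers in the same spirit as the figures above, not measurements:

```c
#include <stdio.h>

int main(void) {
    /* Illustrative round numbers only, not measurements. */
    long long binary_bytes   = 20LL * 1024 * 1024; /* a 20 MB binary           */
    long long page_bytes     = 4 * 1024;           /* typical 4 KB pages       */
    long long insns_per_load = 1000000;            /* ~1M instructions per     */
                                                   /* flash read, as above     */

    long long pages  = binary_bytes / page_bytes;  /* ~5,000 page-ins          */
    long long stalls = pages * insns_per_load;     /* instruction-equivalents  */
                                                   /* spent waiting if every   */
                                                   /* page is faulted in cold  */

    printf("~%lld pages, ~%lld instructions' worth of waiting\n", pages, stalls);
    return 0;
}
```

That works out to something like five billion instructions’ worth of waiting to drag a 20 MB binary in page by page, which is why touching less of it, or simply having less of it, matters so much.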
So what’s the best way to avoid those costs? Well, it’s possible to very carefully lay out your code so that only a tiny sliver of it is needed at startup time, and that surely helps. But you need to ensure not only that a small percentage is needed, but that the percentage is packed together rather than spread all over the place. This is doable, but it’s hard work. On the other hand, if your binary is very small you’re much more likely to be in good shape, and the bigger things get, the harder it is to keep them under control.
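As a sketch of what that careful layout can look like: GCC and Clang both let you tag functions as hot or cold so the toolchain can keep the startup path together and push rarely-run code off to the side. The function names here are invented for illustration; the attributes are real:

```c
/* A sketch of hinting code layout so the startup path stays together.
   Function names are hypothetical; __attribute__((hot)) and
   __attribute__((cold)) are real GCC/Clang attributes. */

/* Rarely runs; fine if its pages never get faulted in at startup. */
__attribute__((cold)) void dump_diagnostics(void)
{
    /* ... lots of formatting and logging code ... */
}

/* Part of the startup path; keep it near the rest of the hot code. */
__attribute__((hot)) void load_startup_config(void)
{
    /* ... */
}

int main(void)
{
    load_startup_config();   /* the sliver we actually need at startup */
    return 0;
}
```

Profile-guided optimization and linker order files take the same idea further, but even these simple hints show the shape of the work: decide what’s hot, keep it together, keep it small.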
Now actually, everything I just said about cold start applies in pretty much the same way to every transition in your program’s flow. So if you’re going from doing one thing to doing some new thing, all that new stuff has to be loaded. The less of it there is, the better that is likely to go. And of course if you want all your transitions to be fast (and that’s what people tend to care about, because that’s what tends to go wrong) then you mostly need everything to be small.
And there are more sorts of “stealth” transitions. Users move between applications and operating systems re-purpose memory for their new needs. That means code that was once loaded gets unloaded. And, you guessed it, when you return to your program it’s another one of those transitions. The smaller your program is, the less likely you are to be victimized by this, and of course the easier it is to recover.
Even if there are no transitions at the disk level, data is constantly moving from main memory into the processor cache. The smaller your program is, the more likely it is to stay in cache and the quicker it is to get it back. Most name-brand programs are far too big to stay in cache all the time, but keeping everything small helps everything stay fast. Every bit of new memory you use erodes the performance of everything else. So even when there is no visible transition, there are still invisible transitions.
Most everything I said about code actually applies to data too. That’s certainly true if the data is initialized and loaded from disk, but even if it isn’t loaded from disk it might need to be swapped out in favor of something else, and then it’s coming back from — somewhere — and you’re back to transitions again. So economizing on memory is good for everything.
But wait, you might be saying, “I thought I heard using more memory is how you make things faster, you know, space/speed trade-offs and stuff. I want some of that hotness!” Alas, my friend, the great space/speed trade-off thing is pretty much a lie. In the normal case smaller IS faster, because of the economies you get from efficient execution. Sexy-looking data structures often fail horribly because of their size and poor cache behavior. It can be quite hard to make up a factor of 150. But because nothing can be easy, there are cases where using more memory will help speed things up overall: those are the exceptions. I’d like to think we study those because they’re like spotting a Norwegian Blue, “Oh look! A space/speed trade-off! Beautiful plumage, innit?” You mostly don’t see those things outside of 2nd year CS classes, or the zoo.
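Here’s a deliberately simple sketch of why the fancy structure often loses. The names and sizes are mine, and a freshly built list like this one may still sit fairly close together in memory; in a long-running program the nodes end up scattered across the heap and it gets much worse. Both loops do identical work; the list version just carries an extra pointer per element and hops around memory instead of streaming through it:

```c
#include <stdio.h>
#include <stdlib.h>

#define N 1000000  /* an arbitrary illustrative element count */

/* Pointer-chasing version: every element is a separate allocation,
   so traversal bounces around the heap and misses cache. It also
   carries a next pointer per element, roughly doubling the footprint. */
struct node { int value; struct node *next; };

long long sum_list(const struct node *head)
{
    long long total = 0;
    for (const struct node *p = head; p != NULL; p = p->next)
        total += p->value;
    return total;
}

/* Contiguous version: the hardware prefetcher can stream right through it. */
long long sum_array(const int *a, size_t n)
{
    long long total = 0;
    for (size_t i = 0; i < n; i++)
        total += a[i];
    return total;
}

int main(void)
{
    /* Allocation-failure handling omitted to keep the sketch short. */
    int *a = malloc(N * sizeof *a);
    struct node *head = NULL;
    for (int i = N - 1; i >= 0; i--) {
        struct node *n = malloc(sizeof *n);
        a[i] = i;
        n->value = i;
        n->next = head;
        head = n;
    }
    /* Same numbers, same loop shape; the only difference is memory layout. */
    printf("array: %lld  list: %lld\n", sum_array(a, N), sum_list(head));
    return 0;
}
```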
OK, so small is good. So how do we keep it that way? What ARE the leading sources of binary growth? Glad you asked, easy to answer: dependencies. Those useful libraries are kind of like the Dark Side of the Force: seductive, but they have a price. But hey, Force Lightning could come in handy, right? You bet it can. You can’t eschew libraries entirely, but if you set good binary size budgets for your application you’ll be thinking things like “Do I need ALL of that library?”, “Is there a more economical library that does the same stuff?”, “Where did all this new size come from, did we just accidentally link in a bunch of new stuff?”, “How much Force Lightning do we need anyway?” All of those are healthy thoughts.
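If you want to make the budget part concrete, one low-tech sketch is a check that fails the build when the binary grows past an agreed limit. The 5 MB figure and the file name are placeholders, and in practice you’d pair this with a tool that tells you where the growth came from:

```c
/* size_budget.c: a minimal sketch of a binary-size budget gate.
   Exits nonzero if the given file exceeds the budget, so a build or
   CI step can refuse to ship it. The budget value is a placeholder. */
#include <stdio.h>
#include <sys/stat.h>

#define BUDGET_BYTES (5L * 1024 * 1024)  /* pick a number and defend it */

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <binary>\n", argv[0]);
        return 2;
    }

    struct stat st;
    if (stat(argv[1], &st) != 0) {
        perror("stat");
        return 2;
    }

    printf("%s: %lld bytes (budget %ld)\n",
           argv[1], (long long)st.st_size, (long)BUDGET_BYTES);

    return st.st_size > BUDGET_BYTES ? 1 : 0;  /* fail the build on growth */
}
```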
And that’s it in a nutshell. Mostly correct. Thanks for reading.