Avoiding Coding Pitfalls with Performance Signatures

Rico Mariani
8 min read · Jul 8, 2018

[Originally posted 12/11/2006… this came up again and I thought it would be good to reprint this pearl on Medium. Some notes added.]

On Friday of last week [2006] I gave this presentation at the Computer Measurement Group CMG2006 conference (http://www.cmg.org). I had previously alluded to it in this posting, and I have been waiting to write about it until after the conference.

So for the benefit of those of you who couldn’t be there, here are the slides [in plain text] plus some high-level points from the talk. I hope you find it at least somewhat interesting.

Why Am I Talking About This?

Performance Problems: Top Two:

  1. Algorithms with a lot of waste
  2. Dependencies we cannot afford

But how do we know we can’t afford them?

Speaking very broadly, most performance disasters owe their origin to one of the above problems: an algorithm fundamentally unsuitable for the problem, or a dependency that is fundamentally not affordable. It is the second one that this set of slides is all about. I realized just the day before that I had made the cosmic mistake of creating a slide deck whose sole purpose was to talk about “number two.” Har har.

What’s an Affordable Dependency?

Many people ask about the cost of certain methods

  • “Is this slow?”
  • “What’s the performance of System.Foo?”

A lot of times they have little in the way of details

Crystal balls are back-ordered until 2015 [I didn’t get mine 3 years ago, must have been lost in shipping]

I’m often approached by developers looking for guidance with regard to certain approaches. They want to know if something is “fast” or “slow” or reasonable or what. Of course it’s very hard to answer these questions in any kind of general way — what can you say about a method’s suitability for use in someone else’s system? Yet you really want to help these people — they’re trying to do the right thing.

Context is Everything

A better approach:

Who Am I?

  • What context am I in?
  • What are reasonable costs in this context?
  • What do I need to say about my own performance?

Who are they?

  • That code I’m using: What context was it written for?
  • Can I afford that cost in my context?

When I’m asked about particular methods I can often help without having taken a lot of measurements. I do this by asking the customer something about their context: what are they trying to use, and where are they trying to use it? Then you can make some basic conclusions about fitness for purpose. Once you know where the code they are intending to write is going to sit in context, you can often give some general guidance. It’s not perfect, but at least you can give some sense of reasonableness.

The Need For Qualitative Advice

Quantitative advice is best

  • But there is a deep need for less formal advice

When costs are wrong they are [almost always] very wrong

  • So we can prevent problems with a rough approach

While it’s true that the very best advice you could give is precise and quantitative in nature, it’s often the case that qualitative advice is still useful. The main reason for this is that in many situations, if a developer gets within a few percent of their performance goal, they will be having a large party celebrating their success. Failure more often looks like some kind of calamity where the goals have been missed by a factor of 20 or something like that. So, in order to prevent calamities, we can at least start by giving very basic advice.

Durable and Consumable Advice

We don’t want to have to change the advice every time the libraries change

We want the advice to be easily consumable

  • no complicated units, friendly to all programmers

Quantitative advice has the problem that it changes almost day to day. Consider just the CPU usage of an API. Do you specify it in time, or cycles? Which processor? What if the microarchitecture changes? Do you republish? Does it really matter if the delta is only a few percent? Qualitative descriptions, on the other hand, tend to last — an API that is “medium” cost probably is medium forever (relative to other methods anyway), and if it ever did change to “high” that would be a very big deal, worthy of republishing. Minor maintenance on an API isn’t likely to change it from a “low” to a “medium”, etc.

Introducing Performance Signatures

To reach these goals we associate a signature with every method

  • A signature characterizes some kind of cost
  • You can do this as a mental exercise
  • You can do it with tools support (coming, I hope <g>) [some of this shipped in Visual Studio 2008 and 2010 and in FxCop]

Signatures can be simple one-worders

  • As simple as “Heavy”, “Medium”, “Light”

If you publish a simple description of the cost of a method, even one as simple as “Heavy”, “Medium”, or “Light”, you immediately give your customers some grasp of what is expensive and what isn’t so much. Today we have nothing in the way of published costs. Compare this to, say, a structural engineer, who has all manner of information about his/her raw materials. We have virtually nothing in the way of costs — even very rough costs would be something. As I said in the talk, “Just throw me a bone or something… anything!” The costs can potentially be along any dimension that is interesting — even things that are only measured approximately or estimated by static analysis.
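What might publishing look like in practice? Here is a minimal sketch in C#, assuming an attribute-based mechanism; the Cost enum, the attribute, and the Widget type are all invented for illustration, since the talk doesn’t prescribe any particular encoding:

using System;

public enum Cost { Light, Medium, Heavy }

[AttributeUsage(AttributeTargets.Method)]
public sealed class PerformanceSignatureAttribute : Attribute
{
    public Cost Cost { get; private set; }
    public PerformanceSignatureAttribute(Cost cost) { Cost = cost; }
}

public class Widget
{
    private readonly int id;
    public Widget(int id) { this.id = id; }

    // "Light": no allocations; safe to call from an inner loop.
    [PerformanceSignature(Cost.Light)]
    public override int GetHashCode() { return id; }

    // "Heavy": touches the disk; never call this from a "Light" method.
    [PerformanceSignature(Cost.Heavy)]
    public void Save(string path)
    {
        System.IO.File.WriteAllText(path, id.ToString());
    }
}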

How Do We Use Them?

You can do profound things even with dumb signatures

  • First, if writing a “Medium” don’t call any “Heavies”
  • Typical mistake

Define best practices for “Medium” and check them in code reviews

  • e.g. a “Medium” cannot do more than 1k of allocations and cannot create long-lived objects
  • Your costs

The most important thing to do is to think about the costs and apply “Rico’s Rule”: if you’re trying to write a method that is supposed to have a certain cost (low, medium, 12s, 42 iops, whatever units are meaningful to you), you can’t succeed if you use any methods whose cost is typically greater than what you’re supposed to be doing. This isn’t an especially profound observation, but the consequence really is profound. Just think about this particular case: if you decide that all hashing functions (every implementation of GetHashCode) should have “low” cost, and that “low” cost means no memory allocations, then you can give immediate guidance as to what parts of the framework are reasonable to call within the context of a GetHashCode implementation.
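To make the rule concrete, here is an illustrative pair of hash implementations (the Point type is invented for this example):

public class Point
{
    private readonly int x;
    private readonly int y;

    public Point(int x, int y) { this.x = x; this.y = y; }

    // Violates a "low means no allocations" budget: string.Format
    // allocates a string (and boxes both ints) just to hash two integers.
    public int WastefulHash()
    {
        return string.Format("{0},{1}", x, y).GetHashCode();
    }

    // Respects the budget: pure arithmetic, zero allocations.
    public override int GetHashCode()
    {
        return (x * 31) ^ y;
    }
}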

But more importantly, you can extend this idea at any level. Suppose you’re trying to use some web service. Which methods are reasonable to call? Can you characterize the rough costs? Is it suitable to use these methods at your level of abstraction? Many times very unfortunate cycles or backwards dependencies are introduced into systems because very high level methods are called from a very low level context. A very rough scoring could prevent these in a lot of cases. It isn’t perfect by any means but it’s something.

Remember, we’re not trying to find little grains of gold in the sand on a beach here. Performance calamities look more like a WW2 tank in the surf. With very little effort, we can at least make sure we have none of those before we start.

What about those Tools?

Insane coolness is possible

  • I’m writing an “Inner Loop”
  • My Intellisense color codes methods that are not reasonable to use in that context by cross-checking their signature!
  • I’m looking at a profile
  • The profiler notices that your intended costs are not matching the measured costs and points out this problem

Let’s think about that GetHashCode example again. Because you’re using the .NET Framework (or any framework), certain conventions apply and can be readily detected. One of these might be that all methods named “GetHashCode” should be of “low” cost. Now the costs I’m considering in this example are just three states, “low”, “medium”, and “high”, so we need two lousy bits per method to hold them all. But look at what coolness I can do: while I’m in a method named GetHashCode, I could put a little green squiggly underline underneath any method I tried to call that was not of “low” cost, because it’s probably a bad idea. Sure, it’s syntactically valid, even semantically correct, but is it wise? This is incredible performance guidance as you type. And these are real problems we’re avoiding here, even with signatures that are very simple-minded.
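A minimal sketch of the cross-check such an editor feature would run, assuming the three-state scale above (the convention table and predicate are my own illustration, not an actual Visual Studio feature):

public enum Cost { Low = 0, Medium = 1, High = 2 }

public static class SignatureChecker
{
    // The convention: hashing and comparison methods get a "Low" budget.
    public static Cost BudgetFor(string methodName)
    {
        switch (methodName)
        {
            case "GetHashCode":
            case "Equals":
            case "CompareTo":
                return Cost.Low;    // inner-loop quality required
            default:
                return Cost.High;   // no convention known; anything goes
        }
    }

    // True when the call deserves the green squiggly underline.
    public static bool IsQuestionableCall(Cost callerBudget, Cost calleeCost)
    {
        return calleeCost > callerBudget;
    }
}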

Likewise, when looking at actual measured results, you can cross-check the actuals against the intended and highlight cases where things didn’t go as planned. The fact that there is a plan, or that one can at least be approximately inferred, is invaluable to the report. “This function was intended to be ‘medium’ but the measured cost is ‘high’ — click here for details.” Again, any kind of hints in this space are invaluable, even if they are imperfect.
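The profiler-side check can be just as simple. Here is a sketch with invented thresholds (the 1k figure echoes the sample “Medium” rule earlier):

using System;

public enum Cost { Low = 0, Medium = 1, High = 2 }

public static class ProfileCrossCheck
{
    // Bucket a measured allocation rate into the same qualitative scale.
    public static Cost BucketFor(long bytesAllocatedPerCall)
    {
        if (bytesAllocatedPerCall == 0) return Cost.Low;
        if (bytesAllocatedPerCall <= 1024) return Cost.Medium;
        return Cost.High;
    }

    // Flag any method whose measured cost exceeds its declared cost.
    public static void Report(string method, Cost declared, long measuredBytes)
    {
        Cost actual = BucketFor(measuredBytes);
        if (actual > declared)
        {
            Console.WriteLine(
                "{0}: intended '{1}' but measured '{2}' -- click here for details",
                method, declared, actual);
        }
    }
}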

Static Analysis Based Tools

We can auto-generate signatures based on code complexity and other heuristics

  • Now we have qualitative guidance (that we can amend) about any managed method
  • We can do this for old frameworks too!

Tools like FxCop can give warnings when methods take a bad dependency

Can do the same for unmanaged code

Not only can we generate the signatures of methods for current and past frameworks based on heuristics like total allocations, we can also publish them — which I hope to do soon. With managed code it’s fairly easy to decompile and get these costs; with other code it’s harder but still doable.
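For managed code, a rough version of that scan is easy to write with an IL reader. Here is a sketch using Mono.Cecil (my choice of library for the illustration, not necessarily what the paper used; a real analysis would also follow call chains rather than looking only at direct allocations):

using Mono.Cecil;
using Mono.Cecil.Cil;

static class SignatureScanner
{
    // Report conventionally "low cost" methods that allocate directly.
    static void ScanAssembly(string path)
    {
        AssemblyDefinition assembly = AssemblyDefinition.ReadAssembly(path);
        foreach (TypeDefinition type in assembly.MainModule.Types)
        {
            foreach (MethodDefinition method in type.Methods)
            {
                if (!method.HasBody) continue;
                if (method.Name != "GetHashCode" &&
                    method.Name != "Equals" &&
                    method.Name != "CompareTo") continue;

                foreach (Instruction instr in method.Body.Instructions)
                {
                    // newobj/newarr/box are the direct allocation opcodes.
                    if (instr.OpCode == OpCodes.Newobj ||
                        instr.OpCode == OpCodes.Newarr ||
                        instr.OpCode == OpCodes.Box)
                    {
                        System.Console.WriteLine(
                            "violation: {0} allocates", method.FullName);
                        break;
                    }
                }
            }
        }
    }
}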

Does this Really Help?

Absolutely!

  • Real problems like this happen all the time
  • Hashing and Comparison functions need to be “Inner Loop” quality
  • “Throughput” oriented functions cannot create long-lived objects
  • We see these mistakes in code all the time
  • Sometimes they’re baked into the design!
  • In principle, Intellisense can save you from performance mistakes as you type — a first!

I have some real results from my initial analysis of mscorlib to share as well.

Initial Results

Applying the algorithm as described in the paper found 53 “violations” in mscorlib

  • The test is to make sure that functions named “Equals”, “CompareTo”, and “GetHashCode” result in no memory allocations

Twenty of these were false positives, introduced because of methods that did one-time allocations

The most basic analysis was still pretty good, but we can make it better with just a little help.

False Positive Pattern

Typical false positives looked like this:

Something GetSomething()
{
    if (savedSomething == null)
    {
        savedSomething = CreateSomething();
    }
    return savedSomething;
}

Any function calling GetSomething() would appear to allocate (statically), which is problematic because in this case the costs are associated with allocations.

Remedy

Initially, I used a broad filter that gives zero cost to all functions of the form

… F()
{
    if (…)
    {

    }
    return …;
}

A more discriminating filter is clearly possible, but even this dumb filter gets the job done.
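Expressed as a predicate over a simplified model of a method body (the model types here are invented for the sketch; a real implementation would match the IL shape directly), the filter is just:

using System.Collections.Generic;

enum StatementKind { If, Return, Other }

class Statement
{
    public StatementKind Kind;
}

static class LazyInitFilter
{
    // Zero cost if the body is exactly "if (...) { ... } return ...;"
    // -- the classic lazy-initialization shape shown above.
    static bool IsZeroCostShape(IList<Statement> body)
    {
        return body.Count == 2
            && body[0].Kind == StatementKind.If
            && body[1].Kind == StatementKind.Return;
    }
}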

Results to Date

With the filter in place, runs over mscorlib resulted in 33 reports, all of which were confirmed to be actual problems by manual examination

However, some are not that important because the methods would not be considered performance critical

  • e.g. hashing of security policy

Some are known performance problems with comments to that effect

  • e.g. System.ValueType.Equals()

On the other hand, there were some I would consider to be real problems.

A Typical Error

Problems like this one are easily addressed once they are found:

// System.Net.Cookie
public override int GetHashCode()
{
    return string.Concat(
        new object[] {
            this.Name, "=", this.Value, ";",
            this.Path, "; ", this.Domain, "; ", this.Version
        }).GetHashCode();
}

Do we really need to build an array with four constant separator elements, concatenate the whole thing into a string, and then hash that string in order to hash a cookie? I think not. Now, is this a calamity? Not so much, but then, if I found any real calamities in code as heavily inspected as mscorlib, that would be pretty amazing. What’s important is that the technique holds water, and that customers writing code can get some immediate guidance, either in real time or in an FxCop-like fashion, for potential issues.
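For completeness, an allocation-free rewrite is easy; here is a sketch of one (not the fix that actually shipped) that combines the component hash codes arithmetically:

// A possible replacement on System.Net.Cookie: same fields, no string built.
public override int GetHashCode()
{
    int hash = Name == null ? 0 : Name.GetHashCode();
    hash = hash * 31 + (Value == null ? 0 : Value.GetHashCode());
    hash = hash * 31 + (Path == null ? 0 : Path.GetHashCode());
    hash = hash * 31 + (Domain == null ? 0 : Domain.GetHashCode());
    hash = hash * 31 + Version;   // Version is an int on Cookie
    return hash;
}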

Conclusion

Addressing these kinds of issues, to give basic guidance, is, in my opinion, vastly superior to making a lot of ad hoc rules about API usage which try to prevent the same kinds of problems one at a time.

I hope to keep making progress in this area. I’d be happy to hear your thoughts.
