Code optimization can sometimes be experienced as a lengthy process, with disruptive effects on code readability and maintainability. For effective optimization, it is crucial to focus efforts on areas where minimal work and minimal changes will have the most impact, i.e. go for the jugular.
I will illustrate this using SamplingProfiler on a small example, taken from a library that deals with short vectors of varying length (but usually less than 10 dimensions), which I simplified, isolated and anonymized for the purpose of this article.
uses
   SysUtils, TypInfo;  // SysUtils added: Exception is declared there
type
   TDoWhat = (dwInc, dwDec);

procedure DoSomething1(var data : array of Integer; what : TDoWhat);
var
   i : Integer;
begin
   for i := Low(data) to High(data) do begin
      case what of
         dwInc : Inc(data[i]);
         dwDec : Dec(data[i]);
      else
         raise Exception.Create('Unsupported: '
            + GetEnumName(TypeInfo(TDoWhat), Integer(what)));
      end;
   end;
end;
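Before touching such a routine, it helps to have a repeatable baseline to compare optimizations against. The harness below is a minimal sketch of my own (the program name, iteration count, and vector size are illustrative assumptions, not from the original library): it hammers DoSomething1 on a typical 10-element vector and reports wall-clock time via GetTickCount, which is only millisecond-accurate, hence the large iteration count.

```pascal
{$APPTYPE CONSOLE}
program BenchDoSomething1;  // hypothetical name, for illustration only

uses
   Windows, SysUtils, TypInfo;

type
   TDoWhat = (dwInc, dwDec);

// copy of the sample routine under test, so this sketch compiles standalone
procedure DoSomething1(var data : array of Integer; what : TDoWhat);
var
   i : Integer;
begin
   for i := Low(data) to High(data) do begin
      case what of
         dwInc : Inc(data[i]);
         dwDec : Dec(data[i]);
      else
         raise Exception.Create('Unsupported: '
            + GetEnumName(TypeInfo(TDoWhat), Integer(what)));
      end;
   end;
end;

var
   data : array [0..9] of Integer;  // typical short vector (<10 dimensions)
   i : Integer;
   t0 : Cardinal;
begin
   FillChar(data, SizeOf(data), 0);
   t0 := GetTickCount;
   for i := 1 to 10*1000*1000 do    // enough iterations for a stable reading
      DoSomething1(data, dwInc);
   WriteLn('10M calls: ', GetTickCount - t0, ' ms');
end.
```

Such a harness also gives SamplingProfiler a long-enough run to collect a meaningful number of samples.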
Get Meat into Belly
Before starting any kind of optimization, one has to define goals and limits, i.e. figure out what “good enough” will be, rather than consider “good enough” to be the state of the code one has grown tired of optimizing!
The sample code above is quite straightforward and simple. It would of course be possible to blow this code up to huge proportions for optimization’s sake. If you are after every last drop of CPU-cycle juice, and allow yourself every trick in the book, a fully optimized version could run to several thousands of lines of code (I’m not exaggerating). If it’s your core business, that might be okay, but if it’s just a utility library, the increased maintenance burden could never be justified.
But since this article is intended more as an illustration than a discussion of methodology, I’ll get straight to the buffalo (beef). For further reading on the subject, you can start from the Big O Notation, Benchmarking and Software metrics articles on Wikipedia; there are also whole books on the subject.