Friday, August 12, 2005

The ultimate Delphi IDE start-up hack

He's done it again! Petr Vones has hacked together and published an at-your-own-risk patching tool that will trim the start-up time of your Delphi IDE. I've tested it with my Delphi 7, 8 and 2005 IDEs, and in each case the start-up time of the IDE is noticeably faster after applying Petr's clever hack.

Before you use the patcher, read Petr’s readme file where he notes:

Warning: removing the dup unit check code may cause unexpected errors, USE AT YOUR OWN RISK !!!

Reading the rest of this article doesn’t hurt either. ;-)

Background information
At start-up, the Delphi IDE (like any other application that uses the magic of Delphi's run-time packages) will load statically and dynamically linked packages (plain Windows .DLL files using a .BPL extension and containing Borland's magic plumbing to make it all work). Each package can contain any number of units and require any number of external packages. In this context, the main thing is that Bad Things™ (such as random AVs, .DFM streaming errors etc.) can happen if the same (or same-named) unit is contained in two or more loaded packages.

To prevent this from happening, the Delphi RTL contains a SysUtils.LoadPackage routine that is responsible for dynamically loading a new package. This is used by the IDE to load all packages (containing the standard VCL components, third party components, design time editors, wizards, experts etc.). One of the tasks LoadPackage performs is to check that the newly loaded package does not contain a unit name that exists in one of the already loaded packages. If it detects a unit-name collision, an EPackageError exception with the message

Cannot load package 'PackageA'. It contains unit 'UnitName' which is also contained in package 'PackageB'

is raised and the newly loaded package is unloaded again.

Analysing the algorithm
If you have the Delphi RTL source code available (as all serious Delphi programmers should), you can have a look at this logic by searching for "CheckForDuplicateUnits". Stop reading now and take a look at that code.

Ok, back already? Pretty complicated-looking code, don't you think? I've taken a quick shot at analysing the complexity of checking the uniqueness of all unit names contained in all loaded packages (i.e. how it theoretically performs when the number of loaded packages and units increases). There are five nested loops:

  1. The outer loop (in the IDE) loading all the installed packages (read from the Registry)
  2. One recursive loop (InternalUnitCheck calling itself) to handle all the requires-links between the current package and all the sub-packages
  3. One iterative loop (in InternalUnitCheck) of all the contained units in the current package
  4. One iterative loop (in IsUnitPresent) of all currently loaded modules
  5. One iterative loop (nested in IsUnitPresent) of all the contained units in each loaded module

So the algorithm looks like it has a complexity of O(P * R * M * U * U), where
  • P is the number of installed packages that will be loaded by the IDE
  • R is the average number of required packages
  • M is the average number of loaded modules (starts low and increases as more and more modules have been loaded)
  • U is the average number of units contained in a package
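
The loop structure described above can be sketched roughly as follows. This is a Python sketch, not the actual Delphi RTL code: the package representation is hypothetical, and the recursion over required packages (loop 2) is flattened to a single level for brevity.

```python
# Rough sketch of the nested-loop duplicate-unit check (hypothetical
# package representation; not the real CheckForDuplicateUnits code).

def check_for_duplicate_units(new_pkg, loaded_modules):
    # Loops 2 and 3: walk the new package and its required packages,
    # visiting every contained unit
    for pkg in [new_pkg] + new_pkg["requires"]:
        for unit in pkg["units"]:
            # Loops 4 and 5: scan every unit of every loaded module;
            # the membership test is a linear scan over a list
            for module in loaded_modules:
                if unit in module["units"]:
                    raise ValueError("unit %r is also in package %r"
                                     % (unit, module["name"]))

def ide_startup(installed_packages):
    # Loop 1: the IDE loads every installed package in turn
    loaded = []
    for pkg in installed_packages:
        check_for_duplicate_units(pkg, loaded)
        loaded.append(pkg)
```

Each newly loaded package is checked against every unit of every already loaded module, which is where the quadratic behaviour comes from.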

M varies from 0 to ~P, and on average it will be about P / 2. Let's be nice and set the average number of required packages to just 2. The expression then simplifies to O( P * 2 * (P/2) * U^2) or O(P^2 * U^2).

U^2 will be some constant (but potentially large) number, so with N being the number of packages, the major complexity is O(N^2).

What all this jumble-bumble means in practical terms is that the more packages you have to load, the slower it gets - no surprise there. But it also means that the running time rises quadratically as the number of packages and units increases. So theoretically, if loading 100 packages takes 10 seconds, then loading 200 packages should take in the neighbourhood of 40 seconds (instead of the 20 seconds that would be expected with a linear O(N) algorithm).
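
As a quick sanity check on that arithmetic, the scaling can be computed directly (a trivial helper, not part of any real tool):

```python
# With an O(N^2) algorithm, doubling the number of packages roughly
# quadruples the running time: t2 = t1 * (N2 / N1)^2.

def predicted_time(base_time_s, base_n, new_n, exponent=2):
    return base_time_s * (new_n / base_n) ** exponent

print(predicted_time(10, 100, 200))     # quadratic: 10 * 2^2 = 40.0
print(predicted_time(10, 100, 200, 1))  # linear:    10 * 2   = 20.0
```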

Is there a problem?
For most people, the start-up time of the IDE should not be a major issue. The safety that the unit-name checks give you is probably worth the extra time they take. If you often change the installed packages (testing freeware and commercial packages, installing your own development packages and so on), the convenience of having the IDE explicitly inform you of unit-name conflicts outweighs any start-up time improvements.

However, for people who have a large, fixed set of packages, the start-up time can get noticeably long. If you are willing to live a little dangerously and risk shooting yourself in the foot (getting random crashes or weird issues caused by duplicate units), you may consider using Petr's hack.

What is the solution?
If we conclude that, given the trend of an ever-increasing number of packages typically installed into the Delphi IDE, the start-up time is getting uncomfortably long, what can we do about it?

Well, for the foolhardy (brave?) among us, there is Petr's patching hack, of course. This is a brute-force solution that turns off all unit-name checks by simply patching the first instruction of the CheckForDuplicateUnits routine to be a RET instruction, effectively turning the routine into a no-op.

But if Borland decided to do something about this, what could they do? Considering the high complexity and O(N^2) behaviour of the existing CheckForDuplicateUnits implementation, the most obvious thing would be to change the algorithm into something a little more efficient, like O(N log N) or even O(N). How could this be done?

The goal is to detect collisions within a large set of strings. A hash table keyed on the unit name would be perfectly suited for this. One complication is that the current algorithm uses the linked structure of all currently loaded packages to iterate over them all.

If *all* loaded packages are loaded dynamically through calling LoadPackage, this should not be a problem. However, if one or more packages are loaded statically (an application or package requires a set of other packages at load time), the OS will load these packages, and the LoadPackage routine (and its unit-name checking logic) will never "see" them, and thus will fail to detect collisions with the unit names they contain.

In addition, a hash-list solution would have to take the unloading of packages into consideration, removing their unit names from the hash list. The current algorithm doesn't have to handle this, as it uses the automatically linked-up/torn-down module chain.

Instead of maintaining a global hash list that is kept between invocations of CheckForDuplicateUnits, the routine could create an empty one from scratch on each call and just use it as a relatively quick way of finding unit-name collisions. It would do this by first iterating over the already loaded modules, adding their unit names to the hash list. Then it can try to add the unit names of the currently loading package to the hash list; if a collision is detected here, an error is raised and the package is unloaded.
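
A minimal sketch of that local-hash idea, again in Python with a hypothetical package representation (a dictionary plays the role of the hash list; unit names are lower-cased because Delphi identifiers are case-insensitive):

```python
# Sketch of the "rebuild the hash list on each call" variant described
# above. Hypothetical data layout; not the real RTL implementation.

def check_with_hash(new_pkg, loaded_modules):
    seen = {}
    # First pass: record every unit name in the already loaded modules
    for module in loaded_modules:
        for unit in module["units"]:
            seen[unit.lower()] = module["name"]
    # Second pass: the new package's units must not collide
    for unit in new_pkg["units"]:
        owner = seen.get(unit.lower())
        if owner is not None:
            raise ValueError(
                "Cannot load package %r. It contains unit %r which is "
                "also contained in package %r"
                % (new_pkg["name"], unit, owner))
```

Each unit name is now looked up in constant expected time instead of being compared against every unit of every loaded module.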

I’ve not analysed the possible implementations in detail, but (assuming it would be possible to implement correctly) I’d guess that a global hash-list implementation would give O(N) performance, while a local hash-list implementation (one that is re-populated on each call) would give O(N log N).

A completely different solution (and one that could be implemented even if the basic algorithm is improved) is to have the IDE detect when a graph of packages/units that has not been checked before is being loaded, and turn off the checks if there are no new or changed packages since the last check.
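
One way to sketch that idea (a hypothetical scheme, not a feature of any shipping IDE): fingerprint the installed package files, persist the fingerprint after a successful check, and skip the full check while it stays unchanged.

```python
import hashlib
import os

# Hypothetical helper: hash the names, sizes and timestamps of the
# installed package files; any install, uninstall or rebuild changes it.
def package_fingerprint(package_paths):
    h = hashlib.sha256()
    for path in sorted(package_paths):
        st = os.stat(path)
        h.update(("%s|%d|%d" % (path, st.st_size, st.st_mtime_ns)).encode())
    return h.hexdigest()

def need_duplicate_check(package_paths, last_fingerprint):
    # Run the expensive unit-name check only when the set of installed
    # packages has changed since the last successful check.
    return package_fingerprint(package_paths) != last_fingerprint
```

The common case (an unchanged package set) would then start up with no duplicate-unit checking at all, while still re-checking automatically whenever a package is added, removed or rebuilt.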

Conclusions
Hardware is getting faster, but the growing size and complexity of software may be eating up all the benefits. Specifically for the Delphi IDE with a large number of packages installed, start-up times can become sluggish.

I think that Petr's patching tool will end up being used by die-hard hackers like myself until a cleaner and safer solution is available. That is not necessarily a bad thing - it gives us the nice feeling of living on the edge and getting a faster IDE start-up time than everybody else ;).

7 comments:

Jay Huber said...

Hallvard, you said...

The major complexity is O(N^2)...

If loading 100 packages takes 10
seconds, then loading 200 packages
should take in the neighbourhood
of 100 seconds...

I don't agree with your math. It was probably just a typo on your part. In your example, you have doubled N, so the running time will go up by a factor of 4, not 10. Why 4? Well, let's take your formula: O=N^2. For N=100, we get O=10,000. For N=200, we get O=40,000. 40,000/10,000 is 4. Thus, the time for N=200 should be 40 seconds.

In order to get a tenfold slowdown, you'd need to go from 100 elements to 317, not 200.

If the running time really went from 10 to 100 when doubling, you'd be dealing with O(N^3.32), not O(N^2). Why 3.32? It's Log2(10).

-Jay

Brad White said...

A cleaner way to look at it:
O(N^2) =10 where N=100

M = 200 = 2N

O((2N)^2) is obviously
O(4(N^2)) = 4*O(N^2) = 40

Brad.

Hallvard Vassbotn said...

You are both right - I've updated the text now, thanks!

Jarle Stabell said...

Hallvard, you said:
"it also means that the running time rises exponentially"

Since it is O(N^2), you probably meant quadratically. :-)

Anonymous said...

Maybe his math is incorrect in some way, but the idea is the main thing here. It's not the time to lecture somebody about his math; it's time to do something about the very long IDE start-up...

Anonymous said...

thanks for the info




Copyright © 2004-2007 by Hallvard Vassbotn