Comment 31 for bug 598462

Ulrich Weigand (uweigand) wrote:

I've now reproduced and analyzed the problem on i386 with GCC 4.5, i.e. building python3.2-3.2~rc1 using the 4.5.1-1ubuntu3 compiler.

It turns out that there is no particular bug here (the CFG looks identical in the -fprofile-generate and -fprofile-use cases); instead, we're running into a known limitation of the profiling code: it is not thread-safe.

This means that when using profiling in a multi-threaded application, the profile counters may not be fully accurate: some counter updates can be lost when several threads race to update the same counter concurrently.

This is consistent with the symptoms we're seeing: the problem disappears on my (single-core) ARM board, but is present on i386/amd64/ppc, all of which are presumably built on multi-core machines, which increases the chance of actually hitting one of those race conditions. The functions where the problem shows up likewise tend to be places where high thread contention is expected (e.g. synchronization primitives).
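
To make the mechanism concrete, here is a minimal stand-alone illustration (not GCC's actual instrumentation code, just an analogy): several threads bump a shared counter with plain, unsynchronized increments, the same kind of update the -fprofile-generate counters use, and on a multi-core machine the final total typically comes out short of the expected value.

    /* race.c - illustration only, not GCC code: unsynchronized counter
     * updates lose increments under concurrency, which is why profile
     * counts from a multi-threaded instrumented run can be inconsistent. */
    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS   4
    #define PER_THREAD 1000000

    static long counter;                /* shared, updated without atomics */

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < PER_THREAD; i++)
            counter++;                  /* racy read-modify-write */
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[NTHREADS];

        for (int i = 0; i < NTHREADS; i++)
            pthread_create(&threads[i], NULL, worker, NULL);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(threads[i], NULL);

        /* Expected 4000000; on a multi-core machine the result is usually
         * smaller because concurrent increments overwrite each other. */
        printf("expected %d, got %ld\n", NTHREADS * PER_THREAD, counter);
        return 0;
    }

Building it with something like "gcc -O0 -pthread race.c" and running it a few times on a multi-core box should show the shortfall.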

The recommended way to deal with this problem is to use the -fprofile-correction flag, which will employ heuristics to attempt to adjust incorrect counters, instead of simply aborting compilation. See the manual page:

       -fprofile-correction
           Profiles collected using an instrumented binary for multi-threaded
           programs may be inconsistent due to missed counter updates. When
           this option is specified, GCC will use heuristics to correct or
           smooth out such inconsistencies. By default, GCC will emit an error
           message when an inconsistent profile is detected.

I've added the -fprofile-correction flag to the build_all_use_profile rule in Makefile.pre.in, and with this change I was able to build the package successfully (tested on i386 only).
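
For reference, the change amounts to adding the flag next to -fprofile-use in the CFLAGS that rule passes down. A rough sketch only; the exact recipe in python3.2's Makefile.pre.in may differ in its details, the point is just where -fprofile-correction goes:

    # Makefile.pre.in, sketch only (the real recipe line is tab-indented)
    build_all_use_profile:
            $(MAKE) all CFLAGS="$(CFLAGS) -fprofile-use -fprofile-correction"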

The issues with GCC 4.6 must be due to some other problem (note that the profiling code was significantly rewritten between 4.5 and 4.6, so we may well be running into new problems there). I'll look into this next.