User forums > Using Code::Blocks

C::B very slow/freezing compiling large project


3. Nope, this is a tool. You don't have to insert anything. (I'm a Linux dev and I don't actually know how to use it; I just know that it exists.)

It seems that most of the time during which C::B appears "frozen" is actually spent deleting dozens or hundreds of "old" object files all at once, before the compilation even starts.

Old object files are not headers, so leaving them around shouldn't interfere with recompiling the updated files, I guess?

So maybe a solution to speed this up would be to delete each old object file just before recompiling it, instead of doing that in a preliminary pass? That would distribute the CPU vs. FS load better, I guess?

Deleting many files should be fast. At least when you don't have productivity killers like Windows Defender, the indexing service and similar MS improvements. :)
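To put a rough number on that claim, here is a quick sketch that times bulk deletion of fake object files. The directory, file count, and file size are made up for illustration, not taken from the project in the thread:

```shell
#!/bin/sh
# Create 500 fake 4 KiB "object files" in a temp directory, then time
# deleting them all at once, roughly what a Clean pass does.
dir=$(mktemp -d)
i=1
while [ "$i" -le 500 ]; do
    head -c 4096 /dev/zero > "$dir/obj$i.o"
    i=$((i + 1))
done
start=$(date +%s)
rm -f "$dir"/*.o
end=$(date +%s)
echo "deleted 500 files in $((end - start))s"
rmdir "$dir"
```

On an unencumbered Linux filesystem this typically finishes in well under a second, which supports the point that the deletion itself is unlikely to be the bottleneck unless something (AV scanner, indexer) hooks every file operation.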

BTW: We don't delete .obj/.o files for normal builds, I think. The compiler just overwrites them. I think we only delete .obj files for Rebuild/Clean operations.

Two remarks:

When compilations of our larger C++ project (>2k files) initially take several seconds before all CPUs start being used, it is when I use PRECOMPILED headers. This is why I do NOT use them during a normal development day, when you only change a few files between two compile/link runs. We keep our include dependencies minimal, so our compile/link cycles are faster without precompiled headers: overall just a few seconds. This is one reason why I love to use CB: super-fast feedback cycles. No fancy stuff, neither ramfs nor distcc nor anything else. Just a desktop with a CPU that has enough cores, an SSD, and telling CB to use the number of virtual cores (SMT, HT) or fewer for compilation. At least on Linux the OS helps by using free RAM.

We have separate targets for gcc and clang with pch; these we use rather for batch builds. Even then the pch header is mentioned only as a compiler argument, so it is totally non-invasive. Sure, you must not be lazy with #includes and must keep translation units self-sufficient (see John Lakos, Large Scale C++, old or new book).

Granted, we are in a (at least compile-time-wise) fortunate situation of not using partially terrible compile-time hogs like boost. If you want to see a counter-example, try to compile the current trunk of a program called PrusaSlicer (for 3D printers). It also uses wxWidgets, and one particular GUI translation unit recently always exhausts all 32 GB on my machine when I issue make -j 32. Using make -j 16 succeeds, though.

Having said that: for two or three weeks or so, I have observed in CB that sometimes (non-pch) builds SEEM TO STALL after several seconds, or the log window does not show any output at all. Stopping and restarting the compilation does not seem to help, so I restart CB. Somewhat annoying. I have not dug deeper yet. As usual, way too many things to do.

CB: trunk svn 12312, wx trunk (3.15), arch linux amd64

Sorry for the late update.

I've been struggling with this issue for another year or so, and have also run some more tests.
I found that if I reduce the number of targets in my project (from ~10 down to only 2), the freeze time drops from ~15 seconds to ~2 seconds.
(All targets contain most of the source files, a few hundred each.)

Does C::B engage in any cleaning/checking of other targets when a build is started?

