Two remarks:
It is only when I use PRECOMPILED headers that compilations of our larger C++ project (>2k files) initially take several seconds before all CPUs start being used. That is why I do NOT use them during a normal development day, when you only change a few files between two compile/link runs. We keep our include dependencies minimal, so our compile/link cycles are faster without precompiled headers: overall just a few seconds. This is one reason why I love to use CB. Superfast feedback cycles. No fancy stuff, neither ramfs nor distcc nor anything else. Just a desktop with a CPU that has enough cores, an SSD, and telling CB to use the number of virtual cores (SMT/HT) or fewer for compilation. At least on Linux, the OS helps by using free RAM.

We do have separate targets for GCC and Clang with PCH; those we use rather for batch builds. Even then, the PCH header is mentioned only as a compiler argument, so it is totally non-invasive. Of course, you must not be lazy with #includes and must keep translation units self-sufficient (see John Lakos, Large Scale C++, old or new book).

Granted, we are in the (at least compile-time wise) fortunate situation of not using partially terrible compile-time hogs like Boost. If you want to see a counter-example, try to compile the current trunk of PrusaSlicer (github.com/prusa3d), a program for 3D printers. It also uses wxWidgets, and one particular GUI translation unit recently always exhausts all 32 GB on my machine when I issue make -j 32. Using make -j 16 succeeds, though.
Having said that: for two or three weeks or so, I have observed in CB that (non-PCH) builds sometimes SEEM TO STALL after several seconds, or the log window does not show any output at all. Stopping and restarting the compilation does not seem to help, so I restart CB. Somewhat annoying. I have not dug deeper yet. As usual, way too many things to do.
CB: trunk svn 12312, wx trunk (3.1.5), Arch Linux amd64