Thinking about it again, what I said above is actually bull...
Dependencies are an issue with cluster compilation, but here we have all files on the disk anyway. So the only thing that may be a problem is the linker, which can of course only start after all compile jobs are finished.
Could be an interesting feature for the future. The compiler in c::b runs asynchronously anyway, and the next job is taken from the queue when the event sent by PipedProcess::OnTerminate is processed. What if there were a semaphore whose count the user could configure (1, 2, 3, whatever), and as long as there are jobs remaining and DoRunQueue() can acquire the semaphore, another compile process is started? It's only an idea, but it might just work.
Two problems only:
1. The linker has to know when to start, i.e. there must be some kind of counter for the total number of source files.
2. Compiler output. How do you receive messages from 2 or 3 processes which may alternately send you stuff via stdout or stderr? One process may fail while the other runs fine. One could pass a job number via wxCommandEvent::SetExtraLong, maybe. Then the complete output of each respective job could be buffered and appended to the log in one block.