Hi again,
It is your decision, however, whether you prefer to live in 2006 or in 1978. Personally, I prefer to click the blue gear, and whenever I have to manually edit a config file anywhere, there's this voice in my head saying "duh... couldn't they do it the easy way".
Obviously I don't use Vi, or I wouldn't be posting here.

But if you develop an open source project, your contributors (if you are lucky enough to get them) will come with all sorts of baggage (maybe that's my problem). I was just commenting that it seems easier to me to get them to install another command line tool than to require them to use a particular, total GUI environment. It might be a barrier to entry if your web site says "To compile and develop with this project you need to download and install C::B, and use that", and then they come here, see it's a big GUI thing, and that puts them off because they are an Emacs or Vi user (and there are a lot of them around). Whereas an Emacs or Vi user isn't likely to be put off by using another command line tool, because it doesn't interfere with their own development mindset. That's what I was trying to get at.
Also, I am surprised what makes you think that installing Jam is easier than installing Code::Blocks. Well, it is a matter of taste, maybe. Personally, I perceive Jam (despite being a good build system otherwise) as an extreme bitch to build and set up.
Agreed, it can be a bitch to set up your build files correctly, but that is mostly due to a lack of good documentation, I find. However, to install it for an end user who just wants to build your project from source, it is a snap: it is a small exe that they can put anywhere, and it will just run. It doesn't require any GUI toolkit to be installed, or library matching, or anything.
On the other hand, your fear about future incompatibilities of project files is quite understandable. Considering that my Fedora Core 4 DVD contains four different versions of automake which are not compatible with each other, this fear seems justified... 
Yes, in my environment I need a 100% replicable build environment. What that comes down to is that, with a project, I archive the code of the compiler, all libraries and all tools I'm using to generate the code, so that in 3 years or more I can come back to it if I need to and get the same binary from the source. It also means I can give my development environment to a government regulator, tell them how to install it and type ./build in the root of the project, and the output will be exactly the same binary/binaries as I've submitted for approval.
As far as autogenerating data is concerned, I have to wonder what makes you believe that Code::Blocks is unfit for that purpose. I use it for that purpose every day.
[snip]
But now for the one important question:
If you think that the Code::Blocks build system is unfit for some purpose or lacks a vital functionality, can you name a different IDE which has this functionality (so we can have a look at it) or do you have a proposal of what is missing and how it could be implemented?
It is not like we aren't willing to go for it if you can name what exactly is missing 
I'd say C::B is about as good as "custom, built into the IDE" build systems get. They all have their strengths and weaknesses, and this post wasn't meant to be a comparative review of the merits or otherwise of C::B versus its competitors. It's just that they all seem to be re-inventing the wheel without a clear purpose for improving the nature of software engineering. I've never been an advocate of the "some app I've come across does this cool thing, so we should too" kind of feature creep. A build system is a very complex thing to get right, and my post was mostly about the philosophy: what problems are being solved, what is the theory behind solving them, and how is it envisioned that this system will be better for software engineering than what has come before.
1. Building mixed language applications.
From what I understand, this depends on the compiler. I've read in the forums that someone is using Fortran and C mixed, in Code::Blocks, without problems.
I was thinking more of two different compilers that generate compatible object code, which can be linked by an appropriate linker. Not that I've tried, but it seems not very straightforward to do, because the compiler setup is project wide. This is also, I would agree, an uncommon thing to do, but it is done. I would expect the person using Fortran and C is using GCC, which is really one compiler with two (or more) language front ends.
2. Handling custom dependency scanning.
What is a "custom dependency scanning"? Inter-project dependencies?
OK, consider this fictitious example as a description of what I'm getting at:
I have a utility called "MyFileManipulator". It takes a file which lists a bunch of operations to do with other files, and generates an output file.
You would call it thus:
MyFileManipulator output input_script
An example input_script might look like this:
Add ../Data/MyMenuImage.png
DefinesFrom MyGlobalDefines.h
Add ../Data/MyMenuFont.png
Shrink ../Resources/ABigLogo.jpg 80,48
EncodeIntoLsBit ../Resources/ATestPattern.png "This is a test pattern"
Concatenate FurtherManipulatedFiles.mlist
The build system for this then goes:
I build ManipulatedData.dat from FilesToManipulate.mlist by calling MyFileManipulator.
I can see that ManipulatedData.dat is up to date with FilesToManipulate.mlist, BUT
this isn't enough, because ManipulatedData.dat is also dependent on the files listed in FilesToManipulate.mlist, however
I don't know how to process FilesToManipulate.mlist to get the dependencies out of it (like I automatically do for C files, by scanning for #include). The user has, however, provided me with a script (or regex expression, or something) I can use to extract those dependencies (for this file type). So, using the above example, the script gives the build system the following list of dependent files:
MyGlobalDefines.h
../Data/MyMenuFont.png
../Resources/ABigLogo.jpg
../Resources/ATestPattern.png
FurtherManipulatedFiles.mlist
The build system can then check whether any of these have changed, or whether files they depend on have changed. MyGlobalDefines.h (being a .h file) would be processed by the standard .h file dependency checker; FurtherManipulatedFiles.mlist would be checked by the custom .mlist dependency scanner; and so on, until there were no more dependencies to check. If any of them have changed, then the ManipulatedData.dat file would be regenerated.
Without this, the developer needs to keep in his head: "ahh, I've changed a file that my auto-generated data relies upon, but the build system can't recognise that, so I either need to do a clean build all or I need to delete ManipulatedData.dat". The problem is that this can be forgotten: you build, it's a clean build, but it's not an up-to-date build.
Also, you don't want to have to go and tick a box in a dialog that says "rebuild this file if this other, now selected, file changes", because that relies on a process of "OK, I've edited the .mlist file, now I've got to go and make sure all of the dependencies in the build system are spelled out". That process is error prone. A build system should handle the details once you have specified the rules. Sometimes it's not possible to auto-scan a dependency, so you absolutely must force it, but those instances should be minimised, and everything that possibly can be should be (able to be) automated.
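Just to make that concrete, here is a rough sketch in Python of how a build system could let you register a custom dependency scanner for .mlist files and then walk the dependencies recursively. None of these names (register_scanner, needs_rebuild, etc.) are real C::B or Jam APIs; it is purely illustrative code to show the shape of the idea:
[code]
import os, re

# Hypothetical registry: maps a file extension to a function that
# extracts that file's dependencies.
SCANNERS = {}

def register_scanner(ext, func):
    SCANNERS[ext] = func

def scan_mlist(path):
    """User-supplied scanner: the file argument of each .mlist command is a dependency."""
    deps = []
    for line in open(path):
        parts = line.split()
        if len(parts) >= 2:
            deps.append(parts[1])
    return deps

def scan_header_includes(path):
    """Standard scanner: pull #include "..." dependencies out of C sources and headers."""
    return re.findall(r'#include\s+"([^"]+)"', open(path).read())

register_scanner(".mlist", scan_mlist)
register_scanner(".h", scan_header_includes)
register_scanner(".c", scan_header_includes)

def all_deps(path, seen=None):
    """Recursively collect dependencies using whichever scanner matches the file type."""
    seen = seen if seen is not None else set()
    if path in seen or not os.path.exists(path):
        return seen
    seen.add(path)
    scanner = SCANNERS.get(os.path.splitext(path)[1])
    if scanner:
        for dep in scanner(path):
            all_deps(dep, seen)
    return seen

def needs_rebuild(output, top_level_input):
    """Rebuild if the output is missing or older than any transitive dependency."""
    if not os.path.exists(output):
        return True
    out_time = os.path.getmtime(output)
    return any(os.path.exists(d) and os.path.getmtime(d) > out_time
               for d in all_deps(top_level_input))

if needs_rebuild("ManipulatedData.dat", "FilesToManipulate.mlist"):
    os.system("MyFileManipulator ManipulatedData.dat FilesToManipulate.mlist")
[/code]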
3. Handling a build such that a file will always be recompiled if its code changes (dates are not enough: what if the file hasn't changed, but the compiler flags for it have? What if you are building from a network share, and it has date synchronisation problems?)
You mean, using signatures instead of dates. I don't know which one C::B uses.
I don't see any evidence of C::B using signatures, because usually they are cached somewhere. Yes, I think signatures are a better approach, but it is a matter of philosophy; there may be an even better approach (I don't know). But if you use signatures, then what do they contain? How do you cache them so you are not unnecessarily generating signatures for unmodified code? How do you handle signatures for the scripts that decide what to build and how?
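To illustrate what I mean by a signature (again, just a Python sketch with invented names, not how C::B or any particular tool actually does it): the signature covers the file contents and the exact command line, and gets cached so unchanged files cost almost nothing to check:
[code]
import hashlib, json, os

CACHE_FILE = ".build_signatures.json"   # hypothetical cache location

def signature(source, command_line):
    """Hash the file contents AND the exact build command, so changing
    compiler flags also triggers a rebuild, not just editing the file."""
    h = hashlib.sha1()
    h.update(open(source, "rb").read())
    h.update(command_line.encode())
    return h.hexdigest()

def load_cache():
    return json.load(open(CACHE_FILE)) if os.path.exists(CACHE_FILE) else {}

def save_cache(cache):
    json.dump(cache, open(CACHE_FILE, "w"))

def must_rebuild(source, obj, command_line, cache):
    sig = signature(source, command_line)
    if not os.path.exists(obj) or cache.get(source) != sig:
        cache[source] = sig
        return True
    return False

cache = load_cache()
cmd = "gcc -O2 -Wall -c foo.c -o foo.o"
if must_rebuild("foo.c", "foo.o", cmd, cache):
    os.system(cmd)
save_cache(cache)
[/code]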
4. Autogenerating C code from custom data files (in a custom way), and then compiling the result.
This can be done as thomas said, with a target, and having it as first target.
Or you can use AngelScript in this case (although I prefer the first solution).
But the problem, as I understand it, with both of these approaches currently is that they would require the custom data to be re-generated on every build. Some of my custom data takes minutes to generate (on a very fast computer) and I don't want to do it if I don't need to. I suppose I could code a script to work out explicitly whether the file needs to change, but then the build engine isn't doing anything for me, because it isn't aware of this extra work.
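For what it's worth, the explicit workaround script I'm describing would look something like this (a Python sketch; the tool name MyCodeGenerator and the file names are made up). The point is that this logic lives outside the build engine, so the IDE can't see it, schedule it, or clean it:
[code]
import os, subprocess, sys

DATA_IN = "GameLevels.xml"        # hypothetical expensive-to-process input
CODE_OUT = "generated_levels.c"   # generated C source fed back into the build

def out_of_date(output, *inputs):
    """Plain timestamp check; the build engine never sees this dependency."""
    if not os.path.exists(output):
        return True
    out_time = os.path.getmtime(output)
    return any(os.path.getmtime(i) > out_time for i in inputs)

if out_of_date(CODE_OUT, DATA_IN):
    print("regenerating", CODE_OUT)
    subprocess.check_call(["MyCodeGenerator", DATA_IN, CODE_OUT])
else:
    print(CODE_OUT, "is up to date, skipping the slow generation step")
sys.exit(0)
[/code]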
6. Post processing a built .elf (or whatever) output file from the link (or similar) stage.
You can do that using pre/post build steps.
True, but again, they would be executed every time. Also, they may be derived from a number of .elfs. Again, I see these as targets that get built after the .elf file(s); in my case they are more important than the .elf files (even though they may be built from them). And again, they shouldn't rebuild if their dependencies haven't changed.
7. Handling Autoconf-like support for finding #include files, libraries, functions and typedefs depending on the target being built on.
Ok, this is the most important one, and your entire post can be summed up in this.
I expect that somewhere in the future, C::B could export automake/autoconf projects.
But this is really a picky subject. Because of two facts:
1) Autotools are found on every POSIX system. They almost always work: a very easy and standard way from a USER point of view.
2) Autotools are a pain in the ass to maintain from a DEVELOPER point of view. And they suck bad. :lol:
Agreed and agreed. The single biggest argument people put up for using the Autoconf tools over any other method is "there are a lot of M4 scripts already written for Autoconf". To my mind, that's not as important as having a build system that can deal with the sorts of problems Autoconf is designed to address. If it can, then a developer is free to add their own tests, and a new standard test library can be built to make a developer's life easy. I'm also not convinced most of the auto tests are worthwhile, but that is a separate subject altogether.
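By "free to add their own tests" I mean something along these lines: a Python sketch (all names invented) of a check that tries to compile a tiny program to see whether a header or typedef is available, which is what most Autoconf macros boil down to anyway:
[code]
import os, subprocess, tempfile

def check_compiles(snippet, cc="gcc"):
    """Return True if the C snippet compiles; this is the essence of an
    Autoconf-style feature test, written directly by the developer."""
    with tempfile.NamedTemporaryFile("w", suffix=".c", delete=False) as f:
        f.write(snippet)
        src = f.name
    try:
        return subprocess.call([cc, "-c", src, "-o", os.devnull],
                               stderr=subprocess.DEVNULL) == 0
    finally:
        os.unlink(src)

HAVE_ZLIB_H = check_compiles('#include <zlib.h>\nint main(void){ return zlibVersion() != 0; }\n')
HAVE_STDINT_H = check_compiles('#include <stdint.h>\nint main(void){ uint32_t x = 0; return (int)x; }\n')

# Write the results out so the project's C code can #include them.
with open("config_checks.h", "w") as cfg:
    cfg.write("#define HAVE_ZLIB_H %d\n" % HAVE_ZLIB_H)
    cfg.write("#define HAVE_STDINT_H %d\n" % HAVE_STDINT_H)
[/code]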
So it's a debate of doing things "the standard way (on *nix)" vs. "another (better) way".
Not an easy topic, but this needs to be discussed more.
And this is the crux of my post (even if I failed to express it well). If C::B is going to re-invent the build system, it needs to make sure it's a "better way", or there isn't a lot of point to it.
8. Rebuilding if the map file (generated during the link) is missing. (Which means re-linking, even if the output .elf still exists.)
It doesn't do this just now?
This is a "2 targets from 1 set of sources" problem; I don't see how to set that up in C::B.
In a generic form:
XC output.1 output.2 list of inputs
XC generates both output.1 and output.2 from the list of inputs. XC needs to be called to regenerate both output.1 and output.2 if either is out of date with respect to the sources. Most build systems assume one output from a list of inputs, which in the case of a link isn't true: you get a link map (if you pass the right options) as well as the linked executable, and for me both are equally important to the development process.
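A build model that copes with this has to treat the rule, not the file, as the unit of work. Roughly like this sketch (Python, invented names), where one rule owns several outputs and is re-run if any of them is missing or stale:
[code]
import os, subprocess

class Rule:
    """One invocation of a tool that produces SEVERAL outputs at once,
    e.g. a link step that emits both the executable and its map file."""
    def __init__(self, outputs, inputs, command):
        self.outputs, self.inputs, self.command = outputs, inputs, command

    def stale(self):
        # Re-run if ANY output is missing, or older than ANY input.
        if any(not os.path.exists(o) for o in self.outputs):
            return True
        oldest_out = min(os.path.getmtime(o) for o in self.outputs)
        return any(os.path.getmtime(i) > oldest_out for i in self.inputs)

    def run(self):
        if self.stale():
            subprocess.check_call(self.command)

# Hypothetical link rule: one command, two equally important outputs.
link = Rule(
    outputs=["firmware.elf", "firmware.map"],
    inputs=["main.o", "drivers.o"],
    command=["gcc", "main.o", "drivers.o", "-o", "firmware.elf",
             "-Wl,-Map,firmware.map"],
)
link.run()
[/code]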
9. Easily re-using common custom elements across projects.
This is also the other very important topic. I'm working on this, at an inter-platform level. Not an easy topic either.
No, not an easy subject with a GUI interface. And to be clear, I'm not talking about two variants of the same tree; I'm talking about taking things like "custom dependency scanners", "auto code generation scripts", "custom post processing sequences" and "autoconf-like stuff" from one project and sticking them into another (completely unrelated) project, where the only similarity is that the sorts of build processing remain similar.
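Purely for illustration (Python, hypothetical layout), the kind of reuse I mean is keeping those custom pieces in one small shared module that any project's build script can pull in, instead of re-describing them through each project's GUI settings:
[code]
# shared_build_rules.py -- lives outside any one project (hypothetical layout)

def mlist_scanner(path):
    """Custom dependency scanner reusable by every project that uses .mlist files."""
    return [line.split()[1] for line in open(path) if len(line.split()) >= 2]

def codegen_command(data_file, c_file):
    """Standard auto code generation step shared across projects
    (MyCodeGenerator is a made-up tool name)."""
    return ["MyCodeGenerator", data_file, c_file]

def postlink_commands(elf):
    """Common post-processing sequence applied to any project's .elf output
    (MyChecksummer is a made-up tool name)."""
    return [["objcopy", "-O", "binary", elf, elf.replace(".elf", ".bin")],
            ["MyChecksummer", elf.replace(".elf", ".bin")]]

# A project's own build script would then just do something like:
#   from shared_build_rules import mlist_scanner, postlink_commands
#   register_scanner(".mlist", mlist_scanner)        # using whatever hooks the
#   for cmd in postlink_commands("firmware.elf"):    # build system provides
#       run(cmd)
[/code]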