Author Topic: Query on the Philosophy of the Build System  (Read 11267 times)

Stevo

  • Guest
Query on the Philosophy of the Build System
« on: January 24, 2006, 02:15:30 pm »
Hi,

Before I start, let me say this:
1. I think you guys have done a great job so far with C::B
2. I really like C::B so far (bugs aside) :)
3. This post isn't a flame or a troll. I'm genuinely confused about the intent of the build system; this isn't meant as an attack on anyone or an insult to their effort on this project, so please don't take it that way.
4. I'm happily bypassing the integrated build system, so this post isn't a gripe about that.

C::B has (from my perspective) a really awkward build system. It seems it can do simple things really simply, but can't do complex things easily at all, nor (as far as I can tell) will it ever be able to manage complex projects easily. So I wonder to myself: what's the point? Obviously a lot of effort is being put into it, and I'm sure the developers have their reasons; it's just that I can't see them, and it seems to me to make certain parts of the C::B project overly complicated.

As anyone who has read my (admittedly few) previous posts knows, I use build systems like Jam and SCons, and on occasion autotools/make.

Each of these has its strengths and weaknesses, and I don't propose to debate their relative merits, or to try to convince anyone to use or not use any particular tool (C::B included).

The projects I develop have multiple sources of data: they automatically create code from things like FPGA bit streams, they encapsulate graphics files as C files and then link them with code, they incorporate assembler code, they link to non-standard libraries, they run various utilities to package the end results for my targets (which aren't PCs), etc. Once I get an ELF from my build, I'm only halfway to having a usable program. Now, setting this up under C::B's build system might be possible, but it would be tedious.

Further, because there is no "build" script as such, I have nothing to save with the project so I can re-build it in 6 months, 12 months or 2 years' time. I would need to save a copy of the C::B I used to build it, to guarantee I could rebuild it later from the saved C::B project files (in case C::B has an incompatible change, which I can't guarantee it won't). Further, if I were to redistribute my source (say to a customer, or a government regulator), I would have to give them a copy of C::B to build the application with, which would be a more difficult prospect than having them install SCons, or Jam, or Make and just run the build script through it.

Also, I don't see how the current build system handles things like (but not limited to):
1. Building mixed language applications.
2. Handling custom dependency scanning.
3. Handling a build such that a file will always be recompiled if its code changes (dates are not enough: what if the file hasn't changed, but the compiler flags for it have? What if you are building from a network share, and it has date synchronisation problems?)
4. Autogenerating C code from custom data files (in a custom way), and then compiling the result.
5. Only doing 4 if the data files change (relates to 2).
6. Post-processing a built .elf (or whatever) output file from the link (or similar) stage.
7. Handling Autoconf-like support for finding #include files, libraries, functions and typedefs depending on the target being built on.
8. Rebuilding if the map file (generated during the link) is missing. (Which means re-linking, even if the output .elf still exists.)
9. Easily re-using common custom elements across projects.

What all of this boils down to for me is:
1. I don't see how the integrated build system can be used for Open Source software, because it would force any prospective contributors to use C::B to build it, and they may want to use Vi or Emacs, or something else, so it would limit user contribution. The alternative is to maintain a makefile and to independently maintain the C::B project. I know C::B can autogenerate makefiles (at least in theory; I've never done it), but that's still sub-optimal, because if a contributor adds a file, the maintainer has to back-port their changes to the makefile into the C::B project and re-generate. There is also room for error, because the makefile may not produce the exact same build as the C::B build system does.
2. I don't see how it can be (at least easily) used for programs that need a repeatable build system, or that have complex build requirements.
3. The only application (that I see) is for simple (maybe university-style) projects that have limited life and applicability. It may be very easy for those types of projects, but it has little applicability (as far as I can see) to many real-world programming problems.

I can't see the point, but surely there must be one, because a lot of unpaid (but not unappreciated) time and effort is devoted to it. So I guess the questions are:

What's the philosophy behind it?
What am I missing?
What problem does the integrated build system seek to solve?

Thanks for reading this far.
Stevo

Offline thomas

  • Administrator
  • Lives here!
  • *****
  • Posts: 3979
Re: Query on the Philosophy of the Build System
« Reply #1 on: January 24, 2006, 03:24:23 pm »
Is today April the first? Sorry, but I really can't believe you are serious. :lol:

In particular the argument about being unfit for open source development because someone might want to use vi strikes me...
The only way to solve this issue is to not use an IDE at all and write all your makefiles by hand. No matter what IDE you use, and no matter how sophisticated it is, you will always find someone unwilling to use it, and you will always have the problem of having to back-port something that someone has written. The only way to get around this is if everybody uses the least common denominator (i.e. everybody uses vi and make).
It is your decision, however, whether you prefer to live in 2006 or in 1978. Personally, I prefer to click the blue gear, and whenever I have to manually edit a config file anywhere, there's this voice in my head saying "duh... couldn't they do it the easy way".

Also, I am surprised that you think installing Jam is easier than installing Code::Blocks. Well, it is a matter of taste, maybe. Personally, I perceive Jam (despite being a good build system otherwise) as an extreme bitch to build and set up.

On the other hand, your fear about future incompatibilities of project files is quite understandable. Considering that my Fedora Core 4 DVD contains four different versions of automake which are not compatible with each other, this fear seems justified... :)

As far as autogenerating data is concerned, I have to wonder what makes you believe that Code::Blocks is unfit for that purpose. I use it for that purpose every day.
As it happens, Code::Blocks even uses an auto-generated header in its own build process (running the autorelease tool is a rather modest example, but it is nevertheless a proof of concept).
I am working on a proprietary project about twice the code size of Code::Blocks, using Code::Blocks and using a lot of auto-generated or auto-processed data. Sure enough, you have to do some scripting here and there to make everything work seamlessly, but it is really not a problem.
I would be quite amazed if your projects are so big that you find Code::Blocks unsuitable to handle them... are your projects in the dimension of OpenOffice?

But now for the one important question:
If you think that the Code::Blocks build system is unfit for some purpose or lacks a vital functionality, can you name a different IDE which has this functionality (so we can have a look at it) or do you have a proposal of what is missing and how it could be implemented?
It is not like we aren't willing to go for it if you can name what exactly is missing ;)
"We should forget about small efficiencies, say about 97% of the time: Premature quotation is the root of public humiliation."

Offline 280Z28

  • Regular
  • ***
  • Posts: 397
  • *insert unicode here*
Re: Query on the Philosophy of the Build System
« Reply #2 on: January 24, 2006, 03:48:50 pm »
@stevo

I think the best reply is you use yours, and I'll continue using mine and making it better for what I'm using it for. :umm: :umm:
78 280Z, "a few bolt-ons" - 12.71@109.04
99 Trans Am, "Daily Driver" - 525rwhp/475rwtq
 Check out The Sam Zone :cool:

Offline Game_Ender

  • Lives here!
  • ****
  • Posts: 551
Re: Query on the Philosophy of the Build System
« Reply #3 on: January 24, 2006, 04:03:38 pm »
I think you have really missed a point here. You can still get almost all of the IDE functionality of Code::Blocks and still use another build system. You can just "enable custom makefiles" and then tell it what make commands to run for a target. You can then have your elaborate collection of make/jam files (which you must have) do the work. As long as you tell Code::Blocks where the final binary went, you can still debug it too.

Now, if you were to use Code::Blocks' build system, you have several options:
  • You can run arbitrary pre- and post-build shell commands, which could do... anything.
  • You can set a target to be dependent on external files.
  • Soon you will be able to use build hook scripts written in AngelScript to modify compile settings based on platform.

So with that said, we are down to your fear of your build system going obsolete?
« Last Edit: January 24, 2006, 04:05:57 pm by Game_Ender »

takeshimiya

  • Guest
Re: Query on the Philosophy of the Build System
« Reply #4 on: January 24, 2006, 05:48:01 pm »
Quote
1. Building mixed language applications.
From what I understand, this depends on the compiler. I've read in the forums that someone is using Fortran and C mixed, in Code::Blocks, without problems.

Quote
2. Handling custom dependency scanning.
What is "custom dependency scanning"? Inter-project dependencies?

Quote
3. Handling a build such that a file will always be recompiled if its code changes (dates are not enough: what if the file hasn't changed, but the compiler flags for it have? What if you are building from a network share, and it has date synchronisation problems?)
You mean, using signatures instead of dates. I don't know which one C::B uses.

Quote
4. Autogenerating c code from custom data files (in a custom way), and then compiling the result.
This can be done as thomas said, with a target, and having it as the first target.
Or you can use AngelScript in this case (although I prefer the first solution).

Quote
5. Only doing 4, if the data files change (relates to 2).
Same as 4. You can do just the same.

Quote
6. Post processing a build elf (Or whatever) output file from the link (or similar) stage.
You can do that using pre/post build steps.

Quote
7. Handling Autoconf-like support for finding #include files, libraries, functions and typedefs depending on the target being built on.
OK, this is the most important one, and your entire post can be reduced to this.
I expect that at some point in the future, C::B could export automake/autoconf projects.

But this is really a tricky subject, because of two facts:
1) Autotools are found on every POSIX system. They almost always work: a very easy and standard way from a USER point of view.
2) Autotools are a pain in the ass to maintain from a DEVELOPER point of view. And they suck badly. :lol:

So it's a debate of doing things "the standard way (on *nix)" vs. "another (better) way".
Not an easy topic, but this needs to be discussed more.

Quote
8. Rebuilding if the map file (generated during the link) is missing. (Which means re-linking, even if the output .elf still exists.)
Doesn't it do this already?

Quote
9. Easily re-using common custom elements across projects.
This is also the other very important topic. I'm working on this, at an inter-platform level. Not an easy topic either.

Offline Game_Ender

  • Lives here!
  • ****
  • Posts: 551
Re: Query on the Philosophy of the Build System
« Reply #5 on: January 24, 2006, 05:51:46 pm »
Quote
9. Easily re-using common custom elements across projects.
This is also the other very important topic. I'm working on this, at an inter-platform level. Not an easy topic either.

That is what the new build script hooks are for.  You can just write one script and use it in multiple projects.

takeshimiya

  • Guest
Re: Query on the Philosophy of the Build System
« Reply #6 on: January 24, 2006, 06:14:52 pm »
Quote
9. Easily re-using common custom elements across projects.
This is also the other very important topic. I'm working on this, at an inter-platform level. Not an easy topic either.

That is what the new build script hooks are for.  You can just write one script and use it in multiple projects.

Yes, but if you do it that way (AngelScript hooks for the build system) it will be more difficult to export them to autotools/makefiles/whatever.
The real problem is autoconf.

Stevo

  • Guest
Re: Query on the Philosophy of the Build System
« Reply #7 on: January 25, 2006, 01:35:28 am »
Hi again,

It is your decision, however, whether you prefer to live in 2006 or in 1978. Personally, I prefer to click the blue gear, and whenever I have to manually edit a config file anywhere, there's this voice in my head saying "duh... couldn't they do it the easy way".
Obviously I don't use Vi, or I wouldn't be posting here :) But if you develop an open source project, your contributors, if you are lucky enough to get them, will come with all sorts of baggage (maybe that's my problem). I was just commenting that it seems easier to me to get them to install another command line tool than to require them to use a particular all-encompassing GUI environment. It might be a barrier to entry if on your web site you say "To compile and develop with this project you need to download C::B, install that, and use that": they come here, see it's a big GUI thing, and that puts them off because they are an Emacs or Vi user (and there are a lot of them around). Whereas an Emacs or Vi user isn't likely to be put off by another command line tool, because it doesn't interfere with their own development mindset. That's what I was trying to get at.

Also, I am surprised that you think installing Jam is easier than installing Code::Blocks. Well, it is a matter of taste, maybe. Personally, I perceive Jam (despite being a good build system otherwise) as an extreme bitch to build and set up.
Agreed, it can be a bitch to set up your build files correctly, but I find that is mostly due to a lack of good documentation. However, installing it for an end user who just wants to build your project from source is a snap: it is a small exe that they can put anywhere, and it will just run. It doesn't require any GUI toolkit to be installed, or library matching, or anything.

On the other hand, your fear about future incompatibilities of project files is quite understandable. Considering that my Fedora Core 4 DVD contains four different versions of automake which are not compatible with each other, this fear seems justified... :)
Yes, in my environment I need a 100% repeatable build environment. What that comes down to is that, with a project, I archive the code of the compiler and all the libraries and tools I'm using to generate the code, so that in 3 years or more I can come back to it if I need to and get the same binary from the source. It also means I can give my development environment to a government regulator, tell them how to install it and type ./build in the root of the project, and the output will be exactly the same binary/binaries as I've submitted for approval.

As far as autogenerating data is concerned, I have to wonder what makes you believe that Code::Blocks is unfit for that purpose. I use it for that purpose every day.
[snip]
But now for the one important question:
If you think that the Code::Blocks build system is unfit for some purpose or lacks a vital functionality, can you name a different IDE which has this functionality (so we can have a look at it) or do you have a proposal of what is missing and how it could be implemented?
It is not like we aren't willing to go for it if you can name what exactly is missing ;)

I'd say C::B is about as good as "custom, built into the IDE" build systems get; they all have their strengths and weaknesses, and this post wasn't meant to be a comparative review of the merits or otherwise of C::B versus its competitors. It's just that they all seem to be re-inventing the wheel, without a clear purpose for improving the nature of software engineering. I've never been an advocate of the ""any app I've come across" does this cool thing, so we should too" kind of feature creep. A build system is a very complex thing to get right, and my post was mostly about the philosophy: what problems are being solved, what's the theory behind solving those problems, and how is it envisioned that this system will be better for software engineering than what has come before.

Quote
1. Building mixed language applications.
From what I understand, this depends on the compiler. I've read in the forums that someone is using Fortran and C mixed, in Code::Blocks, without problems.

I was thinking more of two different compilers that generate compatible object code, which can be linked by an appropriate linker. Not that I've tried, but it seems not very straightforward to do, because the compiler setup is project-wide. This is, I would agree, an uncommon thing to do, but it is done. I would expect the person using Fortran and C is using GCC, which is really one compiler with two (or more) language front ends.

Quote
2. Handling custom dependency scanning.
What is "custom dependency scanning"? Inter-project dependencies?

OK, consider this fictitious example as a description of what I'm getting at:

I have a utility called "MyFileManipulator". It takes a file which lists a bunch of operations to do with other files, and generates an output file.
You would call it thus:
MyFileManipulator output input_script

an example input_script might look like this:

Code
Add ../Data/MyMenuImage.png
DefinesFrom MyGlobalDefines.h
Add ../Data/MyMenuFont.png
Shrink ../Resources/ABigLogo.jpg 80,48
EncodeIntoLsBit ../Resources/ATestPattern.png "This is a test pattern"
Concatenate FurtherManipulatedFiles.mlist

The build system for this then goes:
I build ManipulatedData.dat from FilesToManipulate.mlist by calling MyFileManipulator.
I can see that ManipulatedData.dat is up to date with FilesToManipulate.mlist, BUT
this isn't enough, because ManipulatedData.dat is also dependent on the files listed in FilesToManipulate.mlist. However,
I don't know how to process FilesToManipulate.mlist to get the dependencies out of it (like I automatically do for C files, by scanning for #include). The user has, however, provided me with a script (or regex expression, or something) I can use to extract those dependencies (for this file). So using the above example, the script gives the build system the following list of dependent files:

MyGlobalDefines.h
../Data/MyMenuFont.png
../Resources/ABigLogo.jpg
../Resources/ATestPattern.png
FurtherManipulatedFiles.mlist

The build system can then check if any of these have changed, or if files they depend on have changed. MyGlobalDefines.h (being a .h file) would be processed by the standard .h file dependency checker; FurtherManipulatedFiles.mlist would be checked by the custom mlist dependency scanner. And so on, until there were no more dependencies to check; if any of them change, then the ManipulatedData.dat file is regenerated.

Without this, the developer needs to keep in his head: "ah, I've changed a file that my auto-generated data relies upon, but the build system can't recognise that, so I either need to do a clean build-all or I need to delete ManipulatedData.dat". The problem is that this can be forgotten: you build, it's a clean build, but it's not an up-to-date build.

Also, you don't want to have to go and tick a box in a dialog that says "rebuild this file if this newly selected file changes", because that relies on a process of "OK, I've edited the .mlist file, now I've got to go and make sure all of the dependencies are spelled out in the build system". That process is error-prone. A build system should handle the details once you have specified the rules. Sometimes it's not possible to autoscan a dependency, so you absolutely must force it, but those instances should be minimised, and everything that can be automated should be (able to be).
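To make the idea concrete, here is a minimal sketch (not from the thread) of what such a pluggable dependency scanner might look like, assuming the hypothetical .mlist format above. The command names and the convention that the first argument is a file path are taken from the fictitious example; everything else is invented for illustration:

```python
import re

# Commands in the hypothetical .mlist format whose first argument is a
# file path (names taken from the example script above; this is a sketch,
# not a real tool).
FILE_ARG = re.compile(
    r'^(?:Add|DefinesFrom|Shrink|EncodeIntoLsBit|Concatenate)\s+(\S+)')

def scan_mlist(path, seen=None):
    """Return every file an .mlist script depends on, recursing into
    nested .mlist files the same way a built-in scanner recurses into
    #include'd headers."""
    seen = set() if seen is None else seen
    with open(path) as f:
        for line in f:
            m = FILE_ARG.match(line.strip())
            if not m:
                continue
            dep = m.group(1)
            if dep in seen:
                continue
            seen.add(dep)
            # Nested manipulation lists contribute their own dependencies
            # (paths are assumed relative to the working directory here).
            if dep.endswith('.mlist'):
                scan_mlist(dep, seen)
    return sorted(seen)
```

A build engine that lets users register such a scanner per file extension could then treat .mlist files exactly like C sources: rebuild the output whenever anything in the transitive dependency set changes.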

Quote
3. Handling a build such that a file will always be recompiled if its code changes (dates are not enough: what if the file hasn't changed, but the compiler flags for it have? What if you are building from a network share, and it has date synchronisation problems?)
You mean, using signatures instead of dates. I don't know which one C::B uses.
I don't see any evidence of C::B using signatures, because usually they are cached somewhere. Yes, I think signatures are a better approach, but it is a matter of philosophy; there may be an even better approach (I don't know). But if you use signatures, then what do they contain? How do you cache them so you are not unnecessarily generating signatures for unmodified code? How do you handle signatures for the scripts that decide what to build and how?
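As a rough illustration of the signature idea being discussed (a sketch, not how C::B actually works), a signature can cover both the file's content and the flags it is compiled with, so that a flag-only change or a touched-but-unchanged file is handled correctly where an mtime comparison fails:

```python
import hashlib
import json

def signature(source_path, compiler_flags):
    """Hash the file's *content* plus its compiler flags. Unlike an
    mtime, this changes when the flags change and does NOT change when
    the file is merely touched or its date is skewed by a network share."""
    h = hashlib.sha256()
    with open(source_path, 'rb') as f:
        h.update(f.read())
    h.update(json.dumps(sorted(compiler_flags)).encode())
    return h.hexdigest()

def needs_rebuild(source_path, compiler_flags, cache):
    """`cache` maps source path -> signature recorded by the previous
    build; it answers the caching question by computing each signature
    once per run and persisting the result."""
    sig = signature(source_path, compiler_flags)
    stale = cache.get(source_path) != sig
    cache[source_path] = sig  # record for the next run
    return stale
```

The same cache dictionary would be serialised between builds (which is why one would expect to find a cache file somewhere if C::B used signatures).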

Quote
4. Autogenerating c code from custom data files (in a custom way), and then compiling the result.
This can be done as thomas said, with a target, and having it as the first target.
Or you can use AngelScript in this case (although I prefer the first solution).
But the problem, as I understand it, with both of these approaches currently is that they would require the custom data to be re-generated on every build. Some of my custom data takes minutes to generate (on a very fast computer) and I don't want to do it if I don't need to. I suppose I could code a script to work out explicitly whether the file needs regenerating, but then the build engine isn't doing anything for me, because it isn't aware of this extra work.
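The "script that works it out explicitly" could be a small stamp-file wrapper like the following sketch (invented for illustration; `MyFileManipulator` is the fictitious tool from earlier in the thread). It records a hash of all inputs next to the output and skips the expensive generator when nothing changed:

```python
import hashlib
import os
import subprocess

def regen_if_stale(inputs, output, command):
    """Run an expensive generator (e.g. ['MyFileManipulator', ...]) only
    when an input actually changed. A stamp file beside the output stores
    a hash of all input contents; an identical hash next time means the
    minutes-long generation step can be skipped. Returns True if the
    generator ran."""
    h = hashlib.sha256()
    for path in sorted(inputs):
        with open(path, 'rb') as f:
            h.update(f.read())
    digest = h.hexdigest()
    stamp = output + '.stamp'
    if os.path.exists(output) and os.path.exists(stamp):
        with open(stamp) as f:
            if f.read() == digest:
                return False  # up to date: skip the generator
    subprocess.run(command, check=True)
    with open(stamp, 'w') as f:
        f.write(digest)
    return True
```

The poster's objection still stands, though: this logic lives outside the build engine, so the engine cannot schedule or parallelise around it.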

Quote
6. Post processing a build elf (Or whatever) output file from the link (or similar) stage.
You can do that using pre/post build steps.
True, but again, they would be executed every time. Also, the post-processed outputs may be derived from a number of .elfs. Again, I see these as targets that get built after the .elf file(s); in my case they are more important than the .elf files (even though they may be built from them). Again, they shouldn't be re-built if their dependencies haven't changed.

Quote
7. Handling Autoconf-like support for finding #include files, libraries, functions and typedefs depending on the target being built on.
OK, this is the most important one, and your entire post can be reduced to this.
I expect that at some point in the future, C::B could export automake/autoconf projects.

But this is really a tricky subject, because of two facts:
1) Autotools are found on every POSIX system. They almost always work: a very easy and standard way from a USER point of view.
2) Autotools are a pain in the ass to maintain from a DEVELOPER point of view. And they suck badly. :lol:
Agreed and agreed. The single biggest argument people put up for using the Autoconf tools over any other method is "there are a lot of M4 scripts already written for Autoconf". To my mind, that's not as important as having a build system that can deal with the sorts of problems Autoconf is designed to address. If it can, then a developer is free to add their own tests, and a new standard test library can be built to make developers' lives easy. I'm also not convinced most of the auto tests are worthwhile, but that is a separate subject altogether.

So it's a debate of doing things "the standard way (on *nix)" vs. "another (better) way".
Not an easy topic, but this needs to be discussed more.
And this is the crux of my post (even if I failed to express it well). If C::B is going to re-invent the build system, it needs to make sure it's a "better way", or there isn't a lot of point to it.

Quote
8. Rebuilding if the map file (generated during the link) is missing. (Which means re-linking, even if the output .elf still exists.)
Doesn't it do this already?
This is a two-targets-from-one-set-of-sources problem; I don't see how to set that up in C::B.

In a generic form

XC output.1 output.2 list of inputs

XC generates both output.1 and output.2 from the list of inputs. XC needs to be called to regenerate both output.1 and output.2 if either is out of date with the source. Most build systems assume one output from a list of inputs, which in the case of a link isn't true: you get a link map (if you pass the right options) and the linked executable, and for me both are equally important to the development process.
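The staleness rule for such a multi-output step can be stated in a few lines. This sketch (invented for illustration, using plain mtime comparison for brevity) shows why deleting just the map file must force a re-link even though the .elf still exists:

```python
import os

def multi_output_stale(outputs, inputs):
    """A rule with several outputs (e.g. the linker's .elf AND its map
    file) must re-run if ANY output is missing, or if ANY input is newer
    than the OLDEST output. Treating each output independently is the
    mistake that lets a deleted map file go unnoticed."""
    if not all(os.path.exists(o) for o in outputs):
        return True  # a missing output (even "just" the map) forces a re-run
    oldest_output = min(os.path.getmtime(o) for o in outputs)
    return any(os.path.getmtime(i) > oldest_output for i in inputs)
```

A build engine supporting this would let a rule declare the full output list, then invalidate the whole group together.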

Quote
9. Easily re-using common custom elements across projects.
This is also the other very important topic. I'm working on this, at an inter-platform level. Not an easy topic either.
No, not an easy subject with a GUI interface. And to be clear, I'm not talking about two variants of the same tree; I'm talking about taking things like custom dependency scanners, auto code generation scripts, custom post-processing sequences and autoconf-like stuff from one project and sticking them in another (completely unrelated) project, where the only similarity is that the sorts of build processing remain similar.

Offline Game_Ender

  • Lives here!
  • ****
  • Posts: 551
Re: Query on the Philosophy of the Build System
« Reply #8 on: January 25, 2006, 01:55:19 am »
You have spent a lot of time writing, but a little more time looking would have shown you that anything in the build system options can be set on an IDE, project, or target basis. This includes the compiler. So you could have your project compile under 5 different compilers and then a 6th custom one to link it all together. Just go to build options, click your target and change "Selected Compiler".

About the tool chain issue: I think it would be a good idea to look into a way to make a stand-alone Code::Blocks build system. I think there would be some difficulty in making it completely standalone, because for better or worse the entire project, including the debugger plugin, uses wxStrings, wxChars, and the wxWidgets event system. With that said, you could more easily make a build system that had no GUI but still needed wxWidgets around.
« Last Edit: September 19, 2006, 12:43:38 am by Game_Ender »

Offline Urxae

  • Regular
  • ***
  • Posts: 376
Re: Query on the Philosophy of the Build System
« Reply #9 on: January 25, 2006, 02:26:15 am »
I was just commenting that it seems easier to me to get them to install another command line tool than to require them to use a particular all-encompassing GUI environment.

C::B can be used from the command line. I do so every day. (Actually, I use a shortcut, but that's just for convenience.)

Quote
  It might be a barrier to entry if on your web site you say "To compile and develop with this project you need to download C::B, install that, and use that": they come here, see it's a big GUI thing, and that puts them off because they are an Emacs or Vi user (and there are a lot of them around). Whereas an Emacs or Vi user isn't likely to be put off by another command line tool, because it doesn't interfere with their own development mindset. That's what I was trying to get at.

If you don't want to tell people where to get C::B, feel free to bundle it.
And there's no need to use the GUI if you don't want to. You'll need to use it at least once, to tell it where the compiler is (or at least to confirm the auto-detect) and optionally to disable plugins not needed for compiling, but after that you never need to open the GUI to build a project.

Quote
Also, I am surprised that you think installing Jam is easier than installing Code::Blocks. Well, it is a matter of taste, maybe. Personally, I perceive Jam (despite being a good build system otherwise) as an extreme bitch to build and set up.
Agreed, it can be a bitch to set up your build files correctly, but I find that is mostly due to a lack of good documentation. However, installing it for an end user who just wants to build your project from source is a snap: it is a small exe that they can put anywhere, and it will just run. It doesn't require any GUI toolkit to be installed, or library matching, or anything.

Okay, so C::B isn't contained in one file, and still needs wxWidgets to be present even if the GUI isn't used; I'll give you that one. However, no installer is necessary (it'll request vital information, like where the compiler is, on first run) and it can be put anywhere.
Running it is easy, and can be made even easier by a one-line script that provides the right parameters.

Quote
Yes, in my environment I need a 100% repeatable build environment. What that comes down to is that, with a project, I archive the code of the compiler and all the libraries and tools I'm using to generate the code, so that in 3 years or more I can come back to it if I need to and get the same binary from the source. It also means I can give my development environment to a government regulator, tell them how to install it and type ./build in the root of the project, and the output will be exactly the same binary/binaries as I've submitted for approval.

If you need to provide the complete build environment, I presume that includes the compiler (especially if you want a bitwise-equal result). My compiler is about 27 MB compressed (MinGW, zipped). C::B + wxWidgets is about 6 MB zipped. That doesn't seem like huge overhead in comparison (though admittedly probably more than make or jam will take up).

Quote
Quote
1. Building mixed language applications.
From what I understand, this depends on the compiler. I've read in the forums that someone is using Fortran and C mixed, in Code::Blocks, without problems.

I was thinking more of two different compilers that generate compatible object code, which can be linked by an appropriate linker. Not that I've tried, but it seems not very straightforward to do, because the compiler setup is project-wide. This is, I would agree, an uncommon thing to do, but it is done. I would expect the person using Fortran and C is using GCC, which is really one compiler with two (or more) language front ends.

(Note: Compiler can be set on a per-target basis)
What's the fundamental difference between two compilers with compatible output or two frontends for the same compiler?
I'd bet that if I put together a project that created a library and an executable, with different but compatible-output compilers it'd link together just fine. After all, that's the definition of "compatible output", is it not?
Unfortunately though, I only have one compiler installed so I'm not in a position to try.
Oh, and this does not allow directly linking object files generated by different compilers together. So what? That's what static libraries are for (amongst other things).

Stevo

  • Guest
Re: Query on the Philosophy of the Build System
« Reply #10 on: January 25, 2006, 04:01:18 am »
You have spent at lot of time writing but a little bit more time look would shown you that anything build system options can be set on and IDE, Project, or target basis.  This include compiler.  So you could have your project compile under 5 different compilers and then a 6th custom one to link it all together.  Just go to build options, click you target and change "Selected Compiler".
I dont see it. 

If i right click on the file, the only possible option that could be relevent is properties, if i select that, the only option that seems to do what you say is build, which lets me specify custom builds commands, granted, but i would have to do that individually for each file, which hardly seems efficient.

Or are you saying to have different targets, one for each compiler or sub-result?  I have now found where I would set up the link map/.elf auto-dependency, so if that process occurred in other places, would each of those have to be its own target?

And for the "Project" to bind the result together?

But I still don't see how I would specify different compilers; the only place I can see to set the compiler executable names is under:
Settings/Environment/Global compiler settings

And yes, I can copy GCC and specify different GCCs for embedded targets, etc.  I don't see any way to select those at a target level, because the "Selected Compiler" option in "Project Compiler Options" is greyed out?

I can't find any way to override this on a file basis, or a target basis.

I'm using C::B CVS from about two days ago.

About the toolchain issue: I think it would be a good idea to look into a way to make a standalone Code::Blocks build system.  There would be some difficulty in making it completely standalone because, for better or worse, the entire project, including the debugger plugin, uses wxStrings, wxChars, and the wxWidgets event system.  With that said, it would be somewhat easier to make a build system that had no GUI but still needed wxWidgets to be around.

Granted, this would address a number of philosophical points I raise in my post, but if this was done, what would be gained by having it "integrated" into C::B at all?  It could still be part of the C::B project, but if it could be built to run standalone, wouldn't it make sense for C::B to just execute it as a utility and capture its output (just like it does for a custom makefile)?  Basically the only difference would be that the build script files would be graphically maintainable by C::B.

C::B can be used from the command line. I do so every day. (Actually, I use a shortcut, but that's just for convenience.)

This is the first I've heard of that.  Can the defaults be supplied so it never asks?  I just tried it, and it works as expected, but it stops at the end of the build, and I had to press ^C to terminate?

Okay, so C::B isn't contained in one file, and still needs wxWidgets to be present even if the GUI isn't used. I'll give you that one. However, no installer is necessary (it'll request vital information, like where the compiler is, on first run) and it can be put anywhere.
Running it is easy, and can be made even easier by a one-line script that provides the right parameters.
This assumes the correct version of wxWidgets is installed, no?

If you need to provide the complete build environment, I presume that includes the compiler (esp. if you want a bitwise-equal result). My compiler is about 27 MB compressed (MinGW, zipped). C::B + wxWidgets is about 6 MB zipped. Doesn't seem like a huge overhead in comparison (though admittedly probably more than make or jam will take up).

Part of my automated build for a regulator is an automated installation of all of the tools, and yes, I can also auto-install Code::Blocks.  A primary issue with this process is removed by being able to build using C::B from the command line, but I still don't think end users should always be subjected to a GUI just to build the code (again, it's a philosophy thing).  And yes, I agree with the previous post about "you use yours and I'll use mine"; that's not the point of this.  It's just to see where it's going, what it aims to achieve, and how it plans to be better than the current tools available.

What's the fundamental difference between two compilers with compatible output and two front ends for the same compiler?
I'd bet that if I put together a project that created a library and an executable, with different but compatible-output compilers it'd link together just fine. After all, that's the definition of "compatible output", is it not?
Unfortunately though, I only have one compiler installed so I'm not in a position to try.
Oh, and this does not allow directly linking object files generated by different compilers together. So what? That's what static libraries are for (amongst other things).

The difference between two front ends and two compilers is that with two front ends, you select the language with a compiler option, whereas two compilers use two different executables.  As I've said, I can find no way (that works for me) to alternate the compiler executable on a file or target basis.

See above; I can't see how it can, because the options are greyed out...

Also,

The other thing a modern build system should be capable of (which I haven't touched on before) is parallel builds (compiling multiple independent files simultaneously), so that multi-processor machines can accelerate long build processes.

Stevo


Offline thomas

  • Administrator
  • Lives here!
  • *****
  • Posts: 3979
Re: Query on the Philosophy of the Build System
« Reply #11 on: January 25, 2006, 09:30:37 am »
C::B can be used from the command line. I do so every day.
This is the first I've heard of that.  Can the defaults be supplied so it never asks?  I just tried it, and it works as expected, but it stops at the end of the build, and I had to press ^C to terminate?
It may be that you are experiencing one of the dreaded Linux hangs there; that's a yet-unresolved bug, though, not a feature. Generally, it terminates after building. I only use Code::Blocks on Windows, and luckily it does not hang there.

Okay, so C::B isn't contained in one file...
This assumes the correct version of wxWidgets is installed, no?
Just unpack the archive.

Quote
Some of my custom data takes minutes to generate (on a very fast computer) and I don't want to do it if I don't need to.
Well yes, that's usually the case if you process data. Luckily, that's what bash has test -nt for; it works like a charm, even on Windows :)
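For instance (file names invented), a guard like this skips a slow generation step whenever the output is already newer than its input:

```shell
# Skip a slow generation step when the output is already up to date.
# (File names invented; -nt means "newer than" in test/[ conditionals.)
touch -t 202001010000 input.dat   # pretend source data, older timestamp
touch -t 202401010000 output.c    # pretend generated file, newer timestamp

if [ input.dat -nt output.c ]; then
    result="regenerate"           # input changed since last generation
else
    result="skip"                 # output is current, do nothing
fi
echo "$result"
```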
OK, maybe it would be better if the IDE already had built-in functionality for this, somehow. But then, where do you start and where do you stop? An IDE is no divination apparatus, after all, and it cannot foresee every possible thing that someone might want to do. That's what we have scripting for :)
"We should forget about small efficiencies, say about 97% of the time: Premature quotation is the root of public humiliation."

sean345

  • Guest
Re: Query on the Philosophy of the Build System
« Reply #12 on: January 31, 2006, 06:00:13 am »
The lcc-win32 IDE has a nice build system.  I'm not sure if this is possible with C::B (I haven't seriously tried), but if I add a file to the project that the IDE does not know what to do with (i.e. not a *.c or *.h file, in the case of lcc-win32), the system asks me how to build it.  I can specify to use the compiler, or I can choose another program to "compile" or interact with the file.  I am also given the option to automatically add a generated file to the makefile as a dependency of the original (i.e. test.md is read by a program called lburg, test.c is generated as output, and then compiled in with the project).

I would like to be able to do this in C::B.  If this is already possible, could someone point me in the right direction?
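A rough sketch of that generated-file step, with printf standing in for lburg (all file names invented; the real tool would of course do something more interesting):

```shell
# Stand-in for the lburg-style step described above: a custom tool
# turns test.md into test.c, which the build would then compile normally.
# (printf plays the part of the real generator here.)
cat > test.md <<'EOF'
hello from md
EOF
printf 'const char *msg = "%s";\n' "$(cat test.md)" > test.c
cat test.c
```

The build-system question is then just whether the IDE can record test.c as depending on test.md, so the step reruns only when the source changes.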

Thanks,
 - Sean