This is not a C::B, but a gdb bug, and it's just a warning and can be ignored.
The bug you mentioned was that terminals whose command line does not contain an uppercase "T" (as xterm's does) were not found by the debugger, so the debugging output could not be parsed (e.g. gnome-terminal).
This bug is fixed in trunk.
Awesome! So I can start learning to use it!
The core dump has nothing to do with the IDE.
Don't forget C::B is "just" an IDE, not the compiler or the system.
"ulimit -c unlimited" does not mean you get unlimited resources: it raises the soft limit only as far as the current hard limit, which only root can increase.
By the way, the executable needs read permission, and you should run "cat /proc/sys/kernel/core_pattern" to see whether the default template for naming a core dump has been changed.
I'll update here for others who hit this issue (even though it's not a Code::Blocks issue): I'll edit progress into this post until it's solved, then put the solution up top. I'm going to check core_pattern now.
This may be a compound issue, based on the following program and its output:
#include <sys/resource.h>                   // getrlimit / setrlimit, RLIMIT_*
#include <sys/time.h>
#include <iostream>

int main()
{
    struct rlimit limits;

    getrlimit(RLIMIT_CORE, &limits);        // Get core file limits
    std::cout << limits.rlim_cur << "\n";   // Output current (soft) core file limit
    limits.rlim_cur = limits.rlim_max;      // Raise soft core file limit to the hard limit
    setrlimit(RLIMIT_CORE, &limits);        // Apply the new soft limit
    getrlimit(RLIMIT_CORE, &limits);        // Re-read to be sure the set worked
    std::cout << limits.rlim_cur << "\n";   // Output current core file limit

    getrlimit(RLIMIT_FSIZE, &limits);       // Get file size limits
    std::cout << limits.rlim_cur << "\n";   // Output current (soft) file size limit
    limits.rlim_cur = limits.rlim_max;      // Raise soft file size limit to the hard limit
    setrlimit(RLIMIT_FSIZE, &limits);       // Apply the new soft limit
    getrlimit(RLIMIT_FSIZE, &limits);       // Re-read to be sure the set worked
    std::cout << limits.rlim_cur << "\n";   // Output current file size limit

    int *a = NULL; *a = 5;                  // Cause a segfault

    return 1;
}
/*
0
4294967295
4294967295
4294967295
Segmentation fault
*/
So, the soft limit on the core file starts at 0, but the hard limit doesn't. Raising the soft limit does not resolve the problem, so I suspect something in the kernel itself is causing this.
Going by this:
http://manpages.ubuntu.com/manpages/lucid/man5/core.5.html
* The directory is writable (double-checked the working directory)
* The program has read and write permissions
* I own the program and directory (so I shouldn't have needed root before the problem appeared, but I did! A hint?)
* The core file doesn't already exist
* The file system is not full
* It's not a ulimit issue
* This post ruled out RLIMIT_CORE and RLIMIT_FSIZE
So, I've just ruled out all of Canonical's troubleshooting possibilities. I'd like to report this as a bug, but I really don't know what to include to reproduce this problem because I don't know what precipitated it! Just a sudden change, out of the blue.