Author Topic: Memory leaks  (Read 21514 times)

johny5

  • Guest
Memory leaks
« on: July 05, 2005, 07:47:06 pm »
I think there are a couple of leaks in C::B. Currently it is using 500 MB of memory and has about 30 running threads. It also holds about 100 thread handles. I don't think C::B should use that many resources :shock:. But since it is still running smoothly, I don't really mind 8).

Offline mandrav

  • Project Leader
  • Administrator
  • Lives here!
  • *****
  • Posts: 4315
    • Code::Blocks IDE
Memory leaks
« Reply #1 on: July 05, 2005, 07:59:31 pm »
How long has it been running to reach 500 MB?
Also, I've noticed that if I minimize C::B and then restore it, the memory consumption drops right back down. Can you confirm this?

Yiannis.
Be patient!
This bug will be fixed soon...

johny5

  • Guest
Memory leaks
« Reply #2 on: July 05, 2005, 08:25:16 pm »
You are right, I minimized it and now it's 5.9 MB. I don't know how long it has been running because I hibernate Windows, and I use multiple desktops, so I normally don't close any programs. I think a couple of days, but I don't really know.

Also, I have to correct myself... Process Explorer (from Sysinternals) showed C::B used 500 MB of "private bytes", but the Windows Task Manager showed "only" 150 MB. If I'm reading the statistics correctly, C::B used a maximum of 200 MB while running.

How come the memory usage drops this much when I minimize?

Anonymous

  • Guest
Memory leaks
« Reply #3 on: July 05, 2005, 08:36:34 pm »
Sorry for posting twice, but my login doesn't seem to work anymore :(. I don't know exactly what C::B does when it minimizes, but I can hardly imagine it suddenly decides to deallocate all its leaked memory, so maybe it's a problem with Windows?

I'm using WinXP SP1 and have 1.5 GB of memory, so maybe Windows thinks it can spare some of it for C::B. But 200 MB for caching still seems like an awful lot of memory.

Offline rickg22

  • Lives here!
  • ****
  • Posts: 2283
Memory leaks
« Reply #4 on: July 05, 2005, 08:41:55 pm »
(Johny5: Try signing up again. We've had trouble with the server recently.)

Anonymous

  • Guest
Memory leaks
« Reply #5 on: July 05, 2005, 08:44:19 pm »
(Incorrect login. Please try again...
The password is stored in my browser, so I don't think it can be wrong.)

Offline mandrav

  • Project Leader
  • Administrator
  • Lives here!
  • *****
  • Posts: 4315
    • Code::Blocks IDE
Memory leaks
« Reply #6 on: July 05, 2005, 09:05:26 pm »
Quote
I don't know exactly what C::B does when it minimizes, but I can hardly imagine it suddenly decides to deallocate all its leaked memory, so maybe it's a problem with Windows?

C::B does absolutely nothing when minimized. That's why I asked you.
I believe it's a wxWidgets leak somewhere, but I can't put my finger on it.
Maybe someone else knows better?

Yiannis.
Be patient!
This bug will be fixed soon...

Offline rickg22

  • Lives here!
  • ****
  • Posts: 2283
Memory leaks
« Reply #7 on: July 05, 2005, 10:24:46 pm »
There's an article on the wxWiki about avoiding memory leaks.
http://wiki.wxwidgets.org/wiki.pl?Avoiding_Memory_Leaks

Still, running a memory debugger over C::B to look for leaks wouldn't be a bad idea.
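
To make the ownership rule concrete, here's a minimal sketch of the pattern that catches most people. This is my own example, not taken from that article; DemoFrame is just an illustrative name, the wx classes are standard:

Code
#include <wx/frame.h>
#include <wx/button.h>
#include <wx/arrstr.h>

class DemoFrame : public wxFrame
{
public:
    DemoFrame() : wxFrame(NULL, wxID_ANY, wxT("Demo"))
    {
        // Fine: child windows are owned by their parent, which
        // deletes them automatically when the frame is destroyed.
        new wxButton(this, wxID_OK, wxT("OK"));

        // Leak: a plain heap object has no parent, so nothing
        // ever deletes it unless we do.
        m_history = new wxArrayString;
    }

    ~DemoFrame() { delete m_history; } // the fix

private:
    wxArrayString* m_history;
};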

Offline Urxae

  • Regular
  • ***
  • Posts: 376
Memory leaks
« Reply #8 on: July 06, 2005, 12:10:38 am »
Actually, the Task Manager is a rather poor indicator of actual memory use, especially the "Mem Usage" column ("VM Size" is better, but not perfect either).

Was less than 1.5 GB of RAM in use at the time? If so, here's something you might want to know about Windows memory allocation: Windows doesn't automatically take back memory a process is no longer using, as long as no other process currently needs it. This means that the more memory you have, the more memory applications may appear to use.
The reason minimizing seems to help is that when you minimize a program, Windows trims its working set: it takes back the memory the process is no longer actually using (and, IIRC, swaps out some or most of the rest).
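
If anyone wants to reproduce the trim outside of Task Manager, here's a minimal sketch using only documented Win32 calls (Windows toolchain assumed; link with psapi.lib). As far as I know this is the same working-set trim the window manager performs on minimize:

Code
#include <windows.h>
#include <psapi.h>  // GetProcessMemoryInfo; link with psapi.lib
#include <cstdio>

int main()
{
    PROCESS_MEMORY_COUNTERS pmc;

    GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));
    std::printf("working set before trim: %lu KB\n",
                (unsigned long)(pmc.WorkingSetSize / 1024));

    // Passing (SIZE_T)-1 for both limits tells Windows to trim the
    // working set, the same thing that happens on minimize. Pages
    // go to the standby list, not necessarily to disk.
    SetProcessWorkingSetSize(GetCurrentProcess(), (SIZE_T)-1, (SIZE_T)-1);

    GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));
    std::printf("working set after trim:  %lu KB\n",
                (unsigned long)(pmc.WorkingSetSize / 1024));
    return 0;
}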

A link about this. It's focused on .NET, but includes information about Windows in general. It was also the only link about this I could find during a quick Google :(.

Conclusion: this might not be a problem in CodeBlocks.

Anonymous

  • Guest
Memory leaks
« Reply #9 on: July 06, 2005, 12:51:37 am »
This seems like a very logical explanation. Almost 800 MB of memory was free, so I can imagine Windows isn't in a rush to take it back. And if it were a real memory leak, it would be kind of strange for it to magically disappear when minimizing. Like I said before, everything was running smoothly, so I didn't have any real problems.

Offline thomas

  • Administrator
  • Lives here!
  • *****
  • Posts: 3979
Memory leaks
« Reply #10 on: July 06, 2005, 11:19:43 am »
The thread count seems rather strange; it never shows more than 3 on my machine, and virtual memory is always well under 20 MB. Maybe that is an issue with hibernation; possibly some statistics are not correctly updated by Windows?

fvbommel is quite right about the paging when minimizing the application. You can see this very easily by enabling the "page faults" column in Task Manager and then minimizing/maximizing any non-trivial program. The number of page faults will go up by a few hundred.
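
Here's a rough sketch of the same experiment done from code instead of Task Manager: trim the working set (which is what minimizing triggers) and count the faults caused by touching the memory again. The 32 MB buffer and the 4 KB page stride are arbitrary assumptions on my part:

Code
#include <windows.h>
#include <psapi.h>  // link with psapi.lib
#include <cstdio>
#include <cstdlib>

static DWORD PageFaults()
{
    PROCESS_MEMORY_COUNTERS pmc;
    GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));
    return pmc.PageFaultCount;
}

int main()
{
    const size_t size = 32 * 1024 * 1024;
    char* buf = (char*)std::malloc(size);

    // Touch every page once so it all ends up in the working set.
    for (size_t i = 0; i < size; i += 4096) buf[i] = 1;

    DWORD before = PageFaults();
    SetProcessWorkingSetSize(GetCurrentProcess(), (SIZE_T)-1, (SIZE_T)-1);

    // Touch the pages again: each access now faults (mostly soft
    // faults, since the pages sit on the standby list).
    for (size_t i = 0; i < size; i += 4096) buf[i] = 2;

    std::printf("faults after trim + re-touch: %lu\n",
                (unsigned long)(PageFaults() - before));
    std::free(buf);
    return 0;
}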

The principle of keeping pages allocated is actually pretty clever, clearly one of the many stolen ideas in Windows (and, as so often, they did not even steal it correctly). Purging pages when minimizing an application is, in my opinion, clearly a bug, but alas, there is nothing one can do about it...

If you look at a Linux machine, you will see that there is never any significant amount of free memory (much unlike on Windows). Naively, one may think "Why! Linux must be broken", but the opposite is the case.
It is a misconception that free memory is anything good. There is nothing worse than free memory, because "free" memory is unused.
There are even so-called "memory optimizer" tools available for Windows. These would be quite funny, were it not so tragic. Not only is "no effect" the best thing they can do; in the normal case, they significantly degrade performance.
Things that are in RAM have nanosecond access time. Things that are on hard disk have millisecond access time. Dropping a page when you really need the RAM is zero cost compared to that. Therefore, purging things from RAM unless absolutely necessary is not a good idea - it beats me why Windows does that.
"We should forget about small efficiencies, say about 97% of the time: Premature quotation is the root of public humiliation."

Offline kagerato

  • Multiple posting newcomer
  • *
  • Posts: 56
    • kagerato.net
Memory leaks
« Reply #11 on: July 15, 2005, 01:21:23 am »
The deallocation of memory from minimized applications could only be a bug if it were unintentional; it is not. Many Windows users consider it expected behavior (ever observe someone minimize several running programs and then launch a real memory hog, like a 3D game?). Yes, it is ignorant to believe that minimized applications are in some way destroyed or closed, but the average user doesn't care about that ignorance or incompetence.

As for the paging scheme implemented by the kernel, there is no "best".  There are only competing ideologies.  The oft-quoted quip "free RAM is wasted RAM" is a farce.  Unused memory is no more wasted than money in a savings account is wasted.  To call unallocated RAM "wasted" is quite ironic; the definition of waste involves consumption -- precisely the opposite of the actual situation.  In short, one is committing a logical fallacy (that of definition) to equate waste and potential in any way.

Concerning the NT kernel, automatic deallocation of memory is not particularly disadvantageous. It is designed to keep as much physical RAM available for new processes (or threads within existing processes, or documents within existing threads, and so on) as is reasonable. The kernel does not page often-used data to disk in the vast majority of cases. It does, however, move various blocks of memory into a sort of standby buffer in virtual memory, ready to be written to the pagefile if they go unaccessed for some time. If the kernel were anywhere near as inefficient as has been suggested, all Windows applications would suffer large performance hits during even the most basic usage.

Hopefully I have effectively demonstrated that there is no single superior approach to memory allocation, and that the NT kernel, "RAM optimizers", and other Windows programs which make broad changes to memory are not inherently wrong or inferior based upon their method or intent.

Offline thomas

  • Administrator
  • Lives here!
  • *****
  • Posts: 3979
Memory leaks
« Reply #12 on: July 15, 2005, 01:47:33 pm »
kagerato, I cannot agree with this, sorry.
Deallocating the memory of minimized applications *is* a bug. It may be intentional, but it is still a bug. It shows that the M$ developers have not understood that hard disks are slower than main memory.
And no, this is not necessarily what users expect:
https://bugzilla.mozilla.org/show_bug.cgi?id=76831

Keeping pages allocated does no harm. If your memory-hungry game needs memory, those pages can simply be dropped, and the RAM is available. This does not really cost anything (well, not significantly more than using an already-free page, anyway).
The point is, you paid for the memory, and it is in your PC whether you use it or not. If you have 1024 MB in your PC and only ever use 350, you might as well have paid for 384 MB and never seen a difference.

Conversely, deliberately freeing pages without real need *does* do harm. If they are never needed again, fine... but if they are accessed again, they must be read back from disk. Yes, there may be a disk cache, but let's be serious, it is still an awful lot slower ;)

Unfortunately, it is hard to prove how badly this strategy actually performs, as other things (process creation, pipe throughput) are quite poor on Windows, too.
But just try and run an identical program from a bash script on Windows and on Linux. You will be surprised.
When I first started working on my subversion plugin for c::b, I ran "time svn status" on my Linux box. Seeing "real 0m0.014s", I said "Great, I can easily call that half a dozen times when building the menu and still have an interactive response!"
Surprisingly, a mere three invocations from the plugin (under Windows) were already disturbingly slow. This did not seem to make sense, so I timed a bash script that invoked the very same command 500 times, both on Linux and on Windows (identical program versions, same repo).
The Linux box (Sempron 2200+, cheapest standard components) outperformed the Windows PC (Athlon64 3500+/nForce 4 SLI dual channel) by a factor of 4.6! How can this be explained, if not by truly poor paging behaviour? :)
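
For the record, here's a sketch of how one could redo that measurement without bash in the loop, in case the shell gets blamed. std::system still spawns a shell, so treat the numbers as relative, not absolute; it needs a compiler with C++11 <chrono>, and the command string is just whatever svn invocation you want to time:

Code
#include <chrono>
#include <cstdio>
#include <cstdlib>

int main()
{
    const int runs = 500;

    // Use "svn status > NUL 2>&1" on Windows instead of /dev/null.
    const char* cmd = "svn status > /dev/null 2>&1";

    std::chrono::steady_clock::time_point start =
        std::chrono::steady_clock::now();
    for (int i = 0; i < runs; ++i)
        std::system(cmd);
    std::chrono::duration<double> elapsed =
        std::chrono::steady_clock::now() - start;

    std::printf("%d runs: %.2f s total, %.1f ms each\n",
                runs, elapsed.count(), 1000.0 * elapsed.count() / runs);
    return 0;
}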

EDIT: the above URL was wrong, sorry
"We should forget about small efficiencies, say about 97% of the time: Premature quotation is the root of public humiliation."

Offline kagerato

  • Multiple posting newcomer
  • *
  • Posts: 56
    • kagerato.net
Memory leaks
« Reply #13 on: July 15, 2005, 02:58:36 pm »
Regardless of your opinion, you cannot accurately describe the minimization behavior as a bug.

Quote from: dictionary.com
An unwanted and unintended property of a program or piece of hardware, especially one that causes it to malfunction.

It is rather disturbing to ignore the preferences of many Windows users on the basis of a single user's "bug" report against a cross-platform, open-source application. Naturally, if someone has used a program on multiple operating systems, he/she will expect it to behave very similarly. This cannot always be the case, or the program will become completely detached from each platform and therefore cease to 'feel' like a native application.

Your counter-argument is not substantiated effectively.  It can just as easily be reversed to say "keeping pages unallocated does no harm", because the statement is no more true or false based on the evidence presented.  The allocation or deallocation of pages is virtually instantaneous; the only operation involved here that takes human-noticeable time is reading from the disk pagefile.

The example you've presented is another fallacy. It will never occur, and it does not occur. If a gigabyte or more of memory is available in a PC and mapped, it will at some point become used (with high probability at the point when RAM is in greatest demand). What one such as yourself fails to perceive here is that the design of the NT kernel is not based around high-performance, top-end machines with googols of memory. When you drop that one gigabyte down to 64, 128, or even 256 MB, suddenly there is no longer an excess of RAM for every application to use as cache. There is, in fact, very little memory available much of the time on PCs with that hardware and aggressive users (people who do a great deal of multitasking, for example). The optimal policy for the kernel under heavy memory demand is to match the aggressiveness of the user and try to create free memory for new tasks.

I've used three machines in the past with both Windows and a Linux distribution at one time or another. The two more recent ones both had what I would consider an excess of RAM for normal computing needs: 512 MB and 2048 MB, respectively. There is no perceptible performance difference between applications running on the two operating systems. The policy of the kernel, in effect, makes absolutely no difference. The reason why is simple: neither of these machines has a significant need for a pagefile. I could, in all likelihood, simply disable paging on both boxes.

The third machine was much older, from quite a while back: a Pentium MMX at 166 MHz with 40 MB of RAM. At different times, it ran Windows 95 and Mandrake 6 (GNOME, IIRC). The kernels involved were also quite different, I'm sure: the 2.4 and 2.6 Linux kernels are dissimilar in numerous ways, and the 9x and NT kernels are quite unlike one another in others. At the time, the Mandrake system seemed slower. However, I've since come to realize from more objective comparison that this was nonsense; it wasn't the handling of memory, cache, or processing time that made the perceptible difference. The dissimilarity occurred at a much higher level: in the actual userspace applications and their user interfaces.

Each of us can cite anecdotal evidence showing a disparity (or the lack of one) between performance on the two operating systems. The actual truth of the matter lies more in perception than in fact, friend.

How can your experience be explained? Quite simply: Windows does not have native bash scripting. You are inherently introducing extra layers of complexity and potential slow-down points by running anything through a bash shell on Windows. For valid comparisons of actual program runtimes, you need to run native Windows applications through native Windows interfaces and time them using Windows performance tools. For Linux, replace all instances of 'Windows' with 'Linux/GNU'.

Here's a decent (though far from infallible) method of testing your hypothesis (that the Windows memory-management subsystem is responsible): disable the pagefile. On Windows, this is done easily enough through the System control panel. On Linux, you can edit /etc/fstab, or simply run "swapoff -a" to disable swap until the next boot.

Honestly, I'd like to see your results.

Offline thomas

  • Administrator
  • Lives here!
  • *****
  • Posts: 3979
Memory leaks
« Reply #14 on: July 16, 2005, 11:31:21 am »
Quote from: kagerato
The allocation or deallocation of pages is virtually instantaneous; the only operation involved here that takes human-noticeable time is reading from the disk pagefile.


Here you prove me right, because this is the *exact* reason why deliberately dropping pages is evil. "Human-noticeable" means a factor of about one million, by the way (nanoseconds versus milliseconds).

Blaming bash or an intermediate API layer for running an identical program 4.6 times slower (we are talking about 4.6 times, not 4.6 percent) is hilarious. Even more so as the hardware on the Windows system is superior in every respect. It could very well burn a few extra CPU cycles; no one would notice.

But I see there is little point in discussing this issue further, you will not agree anyway.
"We should forget about small efficiencies, say about 97% of the time: Premature quotation is the root of public humiliation."