
Memory leaks


thomas:
The thread count seems rather strange; it never shows more than 3 on my machine, and virtual memory stays well under 20 MB. Maybe that is an issue with hibernation; perhaps a few statistics are not updated correctly by Windows?

fvbommel is quite right about the paging when minimizing an application. You can see this very easily by enabling the "page faults" column in Task Manager and then minimizing/maximizing any non-trivial program: the number of page faults will go up by a few hundred.
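If you prefer to reproduce the trim from code rather than via Task Manager, here is a minimal sketch. It assumes a Win32 compiler and linking against psapi; SetProcessWorkingSetSize(-1, -1) empties the working set in much the same way minimizing a window does, and GetProcessMemoryInfo lets you watch the page fault counter climb when the pages are touched again.

--- Code: ---
// Minimal Win32 sketch: touch some memory, trim the working set the way a
// minimize does, then touch the memory again and watch the fault counter.
// Build with a Win32 toolchain and link against psapi (e.g. -lpsapi).
#include <windows.h>
#include <psapi.h>
#include <cstdio>
#include <cstring>

static void report(const char* label)
{
    PROCESS_MEMORY_COUNTERS pmc;
    pmc.cb = sizeof(pmc);
    if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc)))
        std::printf("%-18s working set %6lu KB, page faults %lu\n", label,
                    (unsigned long)(pmc.WorkingSetSize / 1024),
                    (unsigned long)pmc.PageFaultCount);
}

int main()
{
    const size_t size = 32 * 1024 * 1024;   // 32 MB, enough to be visible
    char* buf = new char[size];
    std::memset(buf, 0xAB, size);           // make the pages resident
    report("after touching:");

    // Roughly what Windows does to a process whose window gets minimized:
    // remove as many pages as possible from the working set.
    SetProcessWorkingSetSize(GetCurrentProcess(), (SIZE_T)-1, (SIZE_T)-1);
    report("after trim:");

    std::memset(buf, 0xCD, size);           // force the pages back in
    report("after re-touching:");           // the fault count jumps here

    delete[] buf;
    return 0;
}
--- End code ---

The re-touch generates one fault per 4 KB page; whether those are cheap soft faults or disk-backed hard faults depends on whether the trimmed pages were reused in the meantime.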

The principle of keeping allocated pages is actually pretty clever, clearly one of the many stolen ideas in Windows (and, as so often, they did not even steal it correctly). Purging pages when minimizing an application is in my opinion clearly a bug, but alas, there is nothing one can do about it...

If you look at a Linux machine, you will see that there is never any significant amount of free memory (much unlike on Windows). Naively, one may think "Why, Linux must be broken!", but the opposite is the case.
It is a misconception that free memory is a good thing. There is nothing worse than free memory, because "free" memory is unused memory.
There are even so-called "memory optimizer" tools available for Windows. These would be quite funny were they not so tragic. Not only is "no effect" the best thing they can do; in the normal case, they significantly degrade performance.
Things that are in RAM have nanosecond access times. Things that are on a hard disk have millisecond access times. Dropping a page when you really need the RAM costs nothing by comparison. Therefore, purging things from RAM unless absolutely necessary is not a good idea; it beats me why Windows does that.

kagerato:
The deallocation of memory by minimized applications could only be a bug if it were unintentional; it is not.  Many Windows users find it to be expected behavior (ever observe someone minimize several running programs and then launch a real memory-sucker, like a 3D game?).  Yes, it is ignorant to believe that minimized applications are in some way destroyed or closed.  The average user doesn't care about their ignorance or their incompetence.

As for the paging scheme implemented by the kernel, there is no "best".  There are only competing ideologies.  The oft-quoted quip "free RAM is wasted RAM" is a farce.  Unused memory is no more wasted than money in a savings account is wasted.  To call unallocated RAM "wasted" is quite ironic; the definition of waste involves consumption -- precisely the opposite of the actual situation.  In short, one is committing a logical fallacy (that of definition) to equate waste and potential in any way.

Concerning the NT kernel, automatic deallocation of memory is not particularly disadvantageous.  It is designed to keep as much physical RAM as is reasonable available for new processes (or threads within existing processes, or documents within existing threads... and so on).  The kernel does not page oft-used data to the disk in the vast majority of cases.  It may commit various blocks of memory to a sort of buffer in virtual memory, ready to be moved to the pagefile if they go unaccessed for some time.  If the kernel were anywhere near as inefficient as has been suggested, all Windows applications would suffer large performance hits during even the most basic usage.
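For anyone who would rather look at the counters than argue about them, here is a minimal sketch, assuming a Win32 compiler. It only reads what the kernel reports through GlobalMemoryStatusEx (available physical RAM and the commit figures); it says nothing about how that intermediate buffer is implemented internally.

--- Code: ---
// Minimal Win32 sketch: print the memory counters the kernel exposes.
// Needs only the standard Win32 import libraries.
#include <windows.h>
#include <cstdio>

int main()
{
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof(ms);           // must be set before the call
    if (!GlobalMemoryStatusEx(&ms))
        return 1;

    const double MB = 1024.0 * 1024.0;
    std::printf("memory load          : %lu%%\n", (unsigned long)ms.dwMemoryLoad);
    std::printf("physical RAM         : %.0f of %.0f MB available\n",
                ms.ullAvailPhys / MB, ms.ullTotalPhys / MB);
    std::printf("commit (RAM + paging): %.0f of %.0f MB available\n",
                ms.ullAvailPageFile / MB, ms.ullTotalPageFile / MB);
    return 0;
}
--- End code ---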

Hopefully I have effectively demonstrated that there is no single superior approach to memory allocation, and that the NT kernel, "RAM optimizers", and other Windows programs which make broad changes to memory are not inherently wrong or inferior based upon their method or intent.

thomas:
kagerato, I cannot agree with this, sorry.
Deallocating the memory of minimized applications *is* a bug. It may be intentional, but it is still a bug. It shows that the M$ developers have not understood that hard disks are slower than main memory.
And no, this is not necessarily what users expect:
https://bugzilla.mozilla.org/show_bug.cgi?id=76831

Keeping pages allocated does no harm. If your memory-hungry game needs memory, the pages can simply be dropped, and the RAM is available. This does not really cost anything (well, not significantly more than using a free page, anyway).
The point is, you paid for the memory, and it is in your PC anyway, whether you use it or not. If you have 1024 megs in your PC and only ever use 350, you might as well have paid for 384 and would never see a difference.

Conversely, deliberately freeing pages without real need *does* harm. If they are never needed again, well, fine... But if they are accessed again, they must be read back from disk. Yes, there may be a disk cache, but let's be serious, it is still an awful lot slower ;)

Unfortunately, it is hard to prove how badly this strategy actually performs, as other things (process creation, pipe throughput) are quite poor on Windows, too.
But just try and run an identical program from a bash script on Windows and on Linux. You will be surprised.
When I first started working on my subversion plugin for c::b, I ran "time svn status" on my Linux box. Seeing "real 0m0.014s", I said "Great, I can easily call that half a dozen times when building the menu and still have an interactive response!"
Surprisingly, a mere three invocations from the plugin (under Windows) were disturbingly slow. This did not seem to make sense, so I timed a bash script that invoked the very same command 500 times, both on Linux and on Windows (identical program versions, same repository).
The Linux box (Sempron 2200+, cheapest standard components) outperformed the Windows PC (Athlon64 3500+/nForce 4 SLI, dual channel) by a factor of 4.6! How can this be explained, if not by truly poor paging behaviour? :)
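The measurement itself is easy to reproduce. The original test was nothing more than "time" on a bash script looping "svn status"; the sketch below does the equivalent from C++, assuming svn is on the PATH and the current directory is a working copy, so the same command can be timed on both systems without involving bash at all.

--- Code: ---
// Rough equivalent of the bash test: run the same external command N times
// and measure total wall-clock time. Assumes 'svn' is on the PATH and the
// current directory is a Subversion working copy.
#include <chrono>
#include <cstdio>
#include <cstdlib>

int main()
{
    const int runs = 500;
#ifdef _WIN32
    const char* cmd = "svn status > NUL 2>&1";
#else
    const char* cmd = "svn status > /dev/null 2>&1";
#endif

    const auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < runs; ++i)
        std::system(cmd);
    const auto stop = std::chrono::steady_clock::now();

    const double total = std::chrono::duration<double>(stop - start).count();
    std::printf("%d runs: %.2f s total, %.1f ms per invocation\n",
                runs, total, 1000.0 * total / runs);
    return 0;
}
--- End code ---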

EDIT: the above URL was wrong, sorry

kagerato:
Regardless of your opinion, you cannot accurately describe the minimization behavior as a bug.


--- Quote from: dictionary.com ---An unwanted and unintended property of a program or piece of hardware, especially one that causes it to malfunction.
--- End quote ---


It is rather disturbing to ignore the preferences of many Windows users on the basis of one "bug" report from a single user of a cross-platform, open-source application.  Naturally, if someone has used a program on multiple operating systems, he/she will expect it to behave very similarly.  This cannot always be the case, or the program will become completely detached from each platform and therefore cease to 'feel' like a native application.

Your counter-argument is not substantiated effectively.  It can just as easily be reversed to say "keeping pages unallocated does no harm", because the statement is no more true or false based on the evidence presented.  The allocation or deallocation of pages is virtually instantaneous; the only operation involved here that takes human-noticeable time is reading from the disk pagefile.

The example you've presented is another fallacy.  It will never occur, and it does not occur.  If a gigabyte or more of memory is available in a PC and mapped, it will at some point (with high probability, at the point when demand for RAM is greatest) become used.  What one such as yourself fails to perceive here is that the design of the NT kernel is not based around high-performance, top-end machines with googols of memory.  When you drop that one gigabyte down to 64, 128, or even 256 mbyte, suddenly there is no longer an excess of RAM for every application to use as cache.  There is, in fact, very little memory available much of the time on PCs with this hardware that have aggressive users (people who do a great deal of multi-tasking, for example).  The optimal policy for the kernel under heavy memory demand is to match the aggressiveness of the user and try to keep memory free for new tasks.

I've used three machines in the past with both Windows and a Linux distribution at one time or another.  The two more recent ones have both had what I would consider an excess of RAM for normal computing needs: 512 mbyte and 2048 mbyte, respectively.  There is no perceptible performance difference between applications running on the different operating systems.  The policy of the kernel, in effect, makes absolutely no difference.  The reason why is simple: neither of these machines has a significant need for a pagefile.  I could, in all likelihood, simply disable paging on both boxes.

The third machine was much older, from quite a while back: a Pentium I MMX 166 MHz with 40 mbyte of RAM.  At different times, it ran Windows 95 and Mandrake 6 (GNOME, iirc).  Both the kernels involved were also different, I'm sure: 2.4 and 2.6 are fairly dissimilar in numerous ways, and the 9x kernel and the NT kernel are quite unlike one another in others.  At the time, the Mandrake system seemed slower.  However, in time I've come to realize from more objective comparison that this was nonsense; it wasn't the handling of memory, cache, or processing time that made the perceptible difference.  The dissimilarity occurred at a much higher level: with the actual userspace applications and their user interfaces.

Each of us can cite anecdotal evidence showing a disparity, or the lack of one, between performance on the two operating systems.  The actual truth of the matter lies more in perception than in fact, friend.

How can your experience be explained?  Quite simply.  Windows does not have native bash scripting.  You are inherently introducing extra layers of complexity and potential slow-down points by running anything through a bash shell on Windows.  For valid comparisons of actual program runtimes, you need to run native Windows applications through native Windows interfaces and time them using Windows performance tools.  For Linux, replace all instances of 'Windows' with 'Linux/GNU'.

Here's a decent (though far from infallible) method of testing your hypothesis (that the Windows memory-management subsystem is responsible).  Disable the pagefile.  On Windows, this is done fairly easily through the System control panel.  On Linux, you can edit /etc/fstab, or probably just temporarily pass a parameter to the kernel via your bootloader.
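For illustration only (the device name is made up), the swap entry in /etc/fstab looks something like the commented-out line below; disabling it and rebooting, or simply running 'swapoff -a' as root, takes the swap space out of play temporarily.

--- Code: ---
# /etc/fstab -- the swap entry (device name is only an example).
# Comment it out to boot without swap, or run 'swapoff -a' as root:
#/dev/hda2    none    swap    sw    0    0
--- End code ---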

Honestly, I'd like to see your results.

thomas:

--- Quote from: kagerato ---The allocation or deallocation of pages is virtually instantaneous; the only operation involved here that takes human-noticeable time is reading from the disk pagefile.
--- End quote ---


Here you prove me right, because this is the *exact* reason why deliberately dropping pages is evil. "Human-noticeable" means a factor of about one million, by the way (milliseconds versus nanoseconds).

Blaming bash or an intermediate API layer for running an identical program 4.6 times slower (we are talking about 4.6 times, not 4.6 percent) is hilarious. Even more so as the hardware of the Windows system is superior in every respect. It could very well burn a few extra CPU cycles; no one would notice.

But I see there is little point in discussing this issue further, you will not agree anyway.
