Author Topic: Memory leaks  (Read 21515 times)

johny5

  • Guest
Memory leaks
« on: July 05, 2005, 07:47:06 pm »
I think there are a couple of leaks in C::B. Currently it is using 500 MB of memory and has about 30 running threads. It also has about 100 handles to threads. I don't think C::B should use that many resources :shock:. But since it is still running smoothly, I don't really mind 8).

Offline mandrav

  • Project Leader
  • Administrator
  • Lives here!
  • *****
  • Posts: 4315
    • Code::Blocks IDE
Memory leaks
« Reply #1 on: July 05, 2005, 07:59:31 pm »
How long had you been running it for it to reach 500 MB?
Also, I've noticed that if I minimize C::B and then restore it, the memory consumption drops right back down. Can you confirm this?

Yiannis.
Be patient!
This bug will be fixed soon...

johny5

  • Guest
Memory leaks
« Reply #2 on: July 05, 2005, 08:25:16 pm »
You are right, I minimized it and now it's at 5.9 MB. I don't know how long it has been running, because I hibernate Windows and use multiple desktops, so I normally don't close any programs. I think a couple of days, but I don't really know.

Also, I have to correct myself... Process Explorer (from Sysinternals) showed C::B using 500 MB of "private bytes", but the Windows Task Manager showed "only" 150 MB. If I read the statistics correctly, C::B used a maximum of 200 MB while running.

How come the memory usage drops this much when I minimize?

Anonymous

  • Guest
Memory leaks
« Reply #3 on: July 05, 2005, 08:36:34 pm »
Sorry for posting twice, but my login doesn't seem to work anymore :(. I don't know exactly what C::B does when it minimizes, but I can hardly imagine that it suddenly decides to deallocate all its leaked memory, so maybe it's a Windows issue?

I'm using WinXP SP1 and have 1.5 GB of RAM, so maybe Windows thinks it can spare some of it for C::B. But 200 MB for caching still seems like an awful lot of memory.

Offline rickg22

  • Lives here!
  • ****
  • Posts: 2283
Memory leaks
« Reply #4 on: July 05, 2005, 08:41:55 pm »
(Johny5: Try signing up again. We've had trouble with the server recently.)

Anonymous

  • Guest
Memory leaks
« Reply #5 on: July 05, 2005, 08:44:19 pm »
(Incorrect login. Please try again...
The password is stored in my browser, so I don't think it can be wrong.)

Offline mandrav

  • Project Leader
  • Administrator
  • Lives here!
  • *****
  • Posts: 4315
    • Code::Blocks IDE
Memory leaks
« Reply #6 on: July 05, 2005, 09:05:26 pm »
Quote
I don't know exactly what C::B does when it minimizes, but I can hardly imagine that it suddenly decides to deallocate all its leaked memory, so maybe it's a Windows issue?

C::B does absolutely nothing when minimized. That's why I asked you.
I believe it's a wxWidgets leak somewhere, but I can't put my finger on it.
Maybe someone else knows better?

Yiannis.
Be patient!
This bug will be fixed soon...

Offline rickg22

  • Lives here!
  • ****
  • Posts: 2283
Memory leaks
« Reply #7 on: July 05, 2005, 10:24:46 pm »
There's an article on the wxWiki about avoiding memory leaks.
http://wiki.wxwidgets.org/wiki.pl?Avoiding_Memory_Leaks

Still, running a memory debugger over C::B to look for leaks wouldn't be a bad idea.
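To illustrate what such a tool catches, here is a minimal, self-contained sketch (my own illustration, not C::B code; the EditorState class is made up): it leaks a block on every call, and in MSVC debug builds the CRT debug heap dumps the unfreed blocks at exit. On MinGW or Linux you would run the same program under Valgrind or a similar checker instead.

Code:

#include <cstdlib>
#ifdef _MSC_VER
#include <crtdbg.h>   // CRT debug heap; effective in debug (_DEBUG) builds only
#endif

struct EditorState            // hypothetical object; stands in for any heap allocation
{
    char buffer[4096];
};

void OpenDocument()
{
    EditorState* state = new EditorState;   // allocated...
    (void)state;                            // ...but never deleted: this is the leak
}

int main()
{
#ifdef _MSC_VER
    // Ask the CRT debug heap to report all unfreed blocks when the program exits.
    _CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF);
#endif
    for (int i = 0; i < 100; ++i)
        OpenDocument();                     // leaks 100 * sizeof(EditorState) bytes
    return 0;                               // a leak checker reports 100 lost blocks here
}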

Offline Urxae

  • Regular
  • ***
  • Posts: 376
Memory leaks
« Reply #8 on: July 06, 2005, 12:10:38 am »
Actually, Task Manager is a rather poor indicator of actual memory use, especially the "Mem Usage" column ("VM Size" is better, but not perfect either).

Was less than 1.5 GB of RAM in use at the time? If so, here's something you might want to know about Windows memory allocation: Windows doesn't automatically take back memory that a process is no longer using, as long as no other process currently needs it. This means that if you have more memory, applications may appear to use more memory than they would if you had less.
The reason minimizing seems to help is that when you minimize a program, Windows seems to take back the memory the process is no longer actually using (and, IIRC, swaps out some or most of the rest).

A link about this. It's focused on .NET, but it includes information about Windows in general. It was also the only link about this I could find with a quick Google search :(.

Conclusion: this might not be a problem in Code::Blocks.
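If you want to see the difference between those two numbers from inside a program, here is a minimal sketch (my own, not related to the C::B sources) that queries the current process's counters through the Win32 psapi API; WorkingSetSize roughly corresponds to Task Manager's "Mem Usage" column and PagefileUsage to "VM Size".

Code:

#include <windows.h>
#include <psapi.h>
#include <cstdio>

// Link against psapi.lib (or -lpsapi with MinGW).
int main()
{
    PROCESS_MEMORY_COUNTERS pmc = { sizeof(pmc) };
    if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc)))
    {
        // Physical RAM currently mapped into the process ("Mem Usage").
        printf("Working set: %lu KB\n", (unsigned long)(pmc.WorkingSetSize / 1024));
        // Committed, pagefile-backed memory ("VM Size").
        printf("Commit size: %lu KB\n", (unsigned long)(pmc.PagefileUsage / 1024));
    }
    return 0;
}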

Anonymous

  • Guest
Memory leaks
« Reply #9 on: July 06, 2005, 12:51:37 am »
This seems like a very logical explanation. Almost 800 MB of memory was free, so I can imagine Windows isn't in a rush to take it back. And if it were a real memory leak, it would be rather strange for it to magically disappear when minimizing. Like I said before, everything was running smoothly, so I didn't have any real problems.

Offline thomas

  • Administrator
  • Lives here!
  • *****
  • Posts: 3979
Memory leaks
« Reply #10 on: July 06, 2005, 11:19:43 am »
The thread count seems rather strange; it never shows more than 3 on my machine, and virtual memory is always well under 20 MB. Maybe that is an issue with hibernation, with a few statistics not being updated correctly by Windows?

fvbommel is quite right about the paging when minimizing the application. You can see this very easily by enabling the "Page Faults" column in Task Manager, then minimizing and restoring any non-trivial program. The number of page faults will go up by a few hundred.
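For the curious, a rough sketch (purely illustrative, my own) reproduces the same effect programmatically: it trims its own working set, which is essentially what Windows does when a window is minimized, then touches its memory again and prints the page-fault counter before and after.

Code:

#include <windows.h>
#include <psapi.h>
#include <cstdio>
#include <vector>

// Link against psapi.lib (or -lpsapi with MinGW).
static DWORD PageFaults()
{
    PROCESS_MEMORY_COUNTERS pmc = { sizeof(pmc) };
    GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));
    return pmc.PageFaultCount;
}

int main()
{
    std::vector<char> buffer(32 * 1024 * 1024, 1);    // 32 MB, now part of the working set

    printf("Page faults before trim: %lu\n", PageFaults());
    EmptyWorkingSet(GetCurrentProcess());             // roughly what minimizing triggers

    for (size_t i = 0; i < buffer.size(); i += 4096)  // touch every page again
        buffer[i] = 2;
    printf("Page faults after trim:  %lu\n", PageFaults());  // jumps by thousands (mostly soft faults)
    return 0;
}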

The principle of keeping allocated pages is actually pretty clever, clearly one of the many stolen ideas in Windows (and, as so often, they did not even steal it correctly). Purging pages when minimizing an application is, in my opinion, clearly a bug, but alas, there is nothing one can do about it...

If you look at a Linux machine, you will see that there is never any significant amount of free memory (much unlike on Windows). Naively, one may think "Why! Linux must be broken", but the opposite is the case.
It is a misconception that free memory is anything good. There is nothing worse than free memory, because "free" memory is unused.
There are even so-called "memory optimizer" tools available for Windows. These would be quite funny, were it not so tragic. Not only is "no effect" the best thing they can achieve; in the normal case, they significantly degrade performance.
Things that are in RAM have nanosecond access times. Things that are on hard disk have millisecond access times. Dropping a page when you really need the RAM costs nothing compared to that. Therefore, purging things from RAM before it is absolutely necessary is not a good idea; it beats me why Windows does that.
"We should forget about small efficiencies, say about 97% of the time: Premature quotation is the root of public humiliation."

Offline kagerato

  • Multiple posting newcomer
  • *
  • Posts: 56
    • kagerato.net
Memory leaks
« Reply #11 on: July 15, 2005, 01:21:23 am »
The deallocation of memory by minimized applications could only be a bug if it were unintentional; it is not. Many Windows users consider it expected behavior (have you ever watched someone minimize several running programs and then launch a real memory hog, like a 3D game?). Yes, it is ignorant to believe that minimized applications are in some way destroyed or closed. The average user doesn't care about their ignorance or their incompetence.

As for the paging scheme implemented by the kernel, there is no "best".  There are only competing ideologies.  The oft-quoted quip "free RAM is wasted RAM" is a farce.  Unused memory is no more wasted than money in a savings account is wasted.  To call unallocated RAM "wasted" is quite ironic; the definition of waste involves consumption -- precisely the opposite of the actual situation.  In short, one is committing a logical fallacy (that of definition) to equate waste and potential in any way.

Concerning the NT kernel, automatic deallocation of memory is not particularly disadvantageous. It is designed to keep as much physical RAM available for new processes (or threads within existing processes, or documents within existing threads, and so on) as is reasonable. The kernel does not page out frequently used data to disk in the vast majority of cases. It may commit various blocks of memory into a sort of buffer in virtual memory, ready to be moved to the pagefile if they go unaccessed for some time. If the kernel were anywhere near as inefficient as has been suggested, all Windows applications would suffer large performance hits during even the most basic usage.

Hopefully I have effectively demonstrated that there is no single superior approach to memory allocation, and that the NT kernel, "RAM optimizers", or other Windows programs which make broad changes to memory are not inherently wrong or inferior based upon their method or intent.

Offline thomas

  • Administrator
  • Lives here!
  • *****
  • Posts: 3979
Memory leaks
« Reply #12 on: July 15, 2005, 01:47:33 pm »
kagerato, I cannot agree with this, sorry.
Deallocating the memory of minimized applications *is* a bug. It may be intentional, but it is still a bug. It shows that the M$ developers have not understood that hard disks are slower than main memory.
And no, this is not necessarily what users expect:
https://bugzilla.mozilla.org/show_bug.cgi?id=76831

Keeping pages allocated does no harm. If your memory-hungry game needs memory, they can simply be dropped, and the RAM is available. This does not really cost anything (well, not significantly more than using a free page, anyway).
The point is, you paid for the memory, and it is in your PC anyway, whether you use it or not. If you have 1024 MB in your PC and only ever use 350, then you might as well have paid for 384 and never noticed a difference.

Conversely, deliberately freeing pages without real need *does* do harm. If they are never needed again, fine... But if they are accessed again, they must be read back from disk. Yes, there may be a disk cache, but let's be serious, it is still an awful lot slower ;)

Unfortunately, it is hard to prove how badly this strategy actually performs, as other things (process creation, pipe throughput) perform quite poorly on Windows, too.
But just try running an identical program from a bash script on Windows and on Linux. You will be surprised.
When I first started working on my Subversion plugin for C::B, I ran "time svn status" on my Linux box. Seeing "real 0m0.014s", I said "Great, I can easily call that half a dozen times when building the menu and still have an interactive response!"
Surprisingly, a mere three invocations from the plugin (under Windows) were disturbingly slow. This did not seem to make sense, so I timed a bash script that invoked the very same command 500 times on both Linux and Windows (identical program versions, same repo).
The Linux box (Sempron 2200+, cheapest standard components) outperformed the Windows PC (Athlon64 3500+/nForce 4 SLI dual channel) by a factor of 4.6! How can this be explained, if not by truly poor paging behaviour? :)
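A minimal C++ sketch of this kind of measurement (placeholder command and iteration count, not the actual script I used) could look like this:

Code:

#include <chrono>
#include <cstdio>
#include <cstdlib>

int main()
{
    const int runs = 500;                          // number of invocations to time
    const char command[] = "svn status > NUL";     // placeholder; use "> /dev/null" on Linux

    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < runs; ++i)
        std::system(command);                      // spawns a shell plus the child process each time
    auto end = std::chrono::steady_clock::now();

    double seconds = std::chrono::duration<double>(end - start).count();
    printf("%d runs took %.2f s (%.1f ms per run)\n",
           runs, seconds, 1000.0 * seconds / runs);
    return 0;
}

The absolute numbers depend on the repository and the disk cache, of course; the point is only to compare the same binary against the same working copy on the two systems.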

EDIT: the above URL was wrong, sorry
"We should forget about small efficiencies, say about 97% of the time: Premature quotation is the root of public humiliation."

Offline kagerato

  • Multiple posting newcomer
  • *
  • Posts: 56
    • kagerato.net
Memory leaks
« Reply #13 on: July 15, 2005, 02:58:36 pm »
Regardless of your opinion, you cannot accurately describe the minimization behavior as a bug.

Quote from: dictionary.com
An unwanted and unintended property of a program
or piece of hardware, especially one that causes it to
malfunction.


It is rather disturbing to ignore the preferences of many Windows users on the basis of a single user's "bug" report against a cross-platform, open-source application. Naturally, if someone has used a program on multiple operating systems, he/she will expect it to behave very similarly. This cannot always be the case, or the program will become completely detached from each platform and will therefore cease to 'feel' like a native application.

Your counter-argument is not substantiated effectively.  It can just as easily be reversed to say "keeping pages unallocated does no harm", because the statement is no more true or false based on the evidence presented.  The allocation or deallocation of pages is virtually instantaneous; the only operation involved here that takes human-noticeable time is reading from the disk pagefile.

The example you've presented is another fallacy. It will never occur, and it does not occur. If a gigabyte or more of memory is available in a PC and mapped, it will at some point (with high probability, at the point when RAM is in greatest demand) become used. What one such as yourself fails to perceive here is that the design of the NT kernel is not based around high-performance, top-end machines with googols of memory. When you drop that one gigabyte down to 64, 128, or even 256 MB, suddenly there is no longer an excess of RAM for every application to use as cache. There is, in fact, very little memory available much of the time on PCs with this hardware that have aggressive users (people who do a great deal of multi-tasking, for example). The optimal policy for the kernel under heavy memory demand is to match the aggressiveness of the user and try to create free memory for new tasks.

I've used three machines in the past with both Windows and a Linux distribution at one time or another. The two more recent ones both had what I would consider an excess of RAM for normal computing needs: 512 MB and 2048 MB, respectively. There is no perceptible performance difference between applications running on the different operating systems. The policy of the kernel, in effect, makes absolutely no difference. The reason why is simple: neither of these machines has a significant need for a pagefile. I could, in all likelihood, simply disable paging on both boxes.

The third machine was much older, from quite a while back: a Pentium I MMX 166 MHz with 40 MB of RAM. At different times, it ran Windows 95 and Mandrake 6 (GNOME, IIRC). The kernels involved were also different, I'm sure: 2.4 and 2.6 are fairly dissimilar in numerous ways, and the 9x kernel and the NT kernel are quite unlike one another in others. At the time, the Mandrake system seemed slower. However, I've since come to realize from more objective comparison that this was nonsense; it wasn't the handling of memory, cache, or processing time that made the perceptible difference. The dissimilarity occurred at a much higher level: with the actual userspace applications and their user interfaces.

Each of us can cite anecdotal evidence showing a disparity, or lack of one, between performance on the two operating systems. The actual truth of the matter lies more in perception than in fact, friend.

How can your experience be explained? Quite simply. Windows does not have native bash scripting. You are inherently introducing extra layers of complexity and potential slow-down points by running anything through a bash shell on Windows. For valid comparisons of actual program runtimes, you need to run native Windows applications through native Windows interfaces, and time them using Windows performance tools. For Linux, replace all instances of 'Windows' with 'Linux/GNU'.

Here's a decent (though far from infallible) method of testing your hypothesis (that the Windows memory management subsystem is responsible): disable the pagefile. On Windows, this is done pretty easily through the System control panel. On Linux, you can edit /etc/fstab or simply turn swap off temporarily (e.g. with swapoff).

Honestly, I'd like to see your results.

Offline thomas

  • Administrator
  • Lives here!
  • *****
  • Posts: 3979
Memory leaks
« Reply #14 on: July 16, 2005, 11:31:21 am »
Quote from: kagerato
The allocation or deallocation of pages is virtually instantaneous; the only operation involved here that takes human-noticeable time is reading from the disk pagefile.


Here you prove me right, because this is the *exact* reason why deliberately dropping pages is evil. "Human noticeable" means one million times, by the way.

Blaming bash or an intermediate API layer for running an identical program 4.6 times slower (we are talking about 4.6 times, not 4.6 percent) is hilarious. Even more so as the hardware on the Windows system is superior in every respect. It could very well burn a few extra CPU cycles; no one would notice.

But I see there is little point in discussing this issue further, you will not agree anyway.
"We should forget about small efficiencies, say about 97% of the time: Premature quotation is the root of public humiliation."

Luca

  • Guest
Memory leaks
« Reply #15 on: July 16, 2005, 02:56:26 pm »
I would like to come back to the original topic of this thread, i.e. memory leaks (though I may give my opinion on memory management systems later).

The fact that, when the main window of an application is minimized, the memory consumption falls to 0 DOES NOT MEAN that there aren't memory leaks. Indeed, quite the opposite is true!

What happens when a user minimizes a window? Windows thinks "Oh well, this application won't be used by the user for some time, especially as far as its GUI is concerned, so maybe it's a good time to tidy up", and consequently swaps the whole application out of memory. In Task Manager you can see two columns: "Mem Usage" (the amount of RAM used by the application) and "VM Size" (this should be the TOTAL memory allocated by the application, which may reside on disk). Now, when you minimize the app, the memory usage falls to almost 0, while the VM size does not change.

After you minimize an application, the application, in order to keep working (in the background), will immediately make some memory accesses: some page faults will occur, and a (hopefully small) part of the application will be brought back into memory.

Now, what happens if there is a memory leak? Leaked memory is, by definition, memory that the programmer forgot to free, and that now sits unused in the virtual memory of your application; allocated, yes, but unused. Consequently, when Windows swaps it out (for example because of minimizing, or because there is not enough room in RAM), it will never be used again; that is, it will never cause any further page faults and will never be brought back into RAM.

Thus, after the minimization, the memory usage will remain small (unless new memory leaks occur), but the VM size will remain large (because all the leaked memory is still in the VM!!!)
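A tiny sketch (purely illustrative, not C::B code) makes this diagnostic concrete: the program below leaks a large block, trims its own working set as a stand-in for being minimized, and then prints both counters. The working set collapses while the VM (commit) size stays high, which is exactly the signature of a leak.

Code:

#include <windows.h>
#include <psapi.h>
#include <cstdio>
#include <cstring>

// Link against psapi.lib (or -lpsapi with MinGW).
static void PrintCounters(const char* label)
{
    PROCESS_MEMORY_COUNTERS pmc = { sizeof(pmc) };
    GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));
    printf("%s working set %7lu KB, VM size %7lu KB\n", label,
           (unsigned long)(pmc.WorkingSetSize / 1024),
           (unsigned long)(pmc.PagefileUsage / 1024));
}

int main()
{
    char* leak = new char[100 * 1024 * 1024];     // simulated leak: never deleted...
    memset(leak, 0xAB, 100 * 1024 * 1024);        // ...but touched once, so it is committed

    PrintCounters("before trim:");
    EmptyWorkingSet(GetCurrentProcess());         // stand-in for what minimizing does
    PrintCounters("after trim: ");                // working set drops, VM size stays ~100 MB higher
    return 0;
}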

My suggestions are the following:
1. Check the VM size in Task Manager: it should remain large even after the minimization, confirming the memory leak;
2. Try to fix the memory leak!!!

I hope I managed to be clear...
Regards, and thank you for this wonderful software,
Luca

Luca

  • Guest
Memory leaks
« Reply #16 on: July 16, 2005, 03:41:07 pm »
And now, some considerations about the OS issue.

I agree with kagerato: there is no single optimal strategy for memory management, even from a theoretical point of view (as long as you are using a reasonable page replacement algorithm such as LRU or FIFO, you are already close to optimal!)

What an OS can do is to implement some clever "tricks" to improve the performance of the paging algorithm in practice.

What Windows does is, IMO, clever and effective: it tries to be "stingy", i.e. it tries to give an application the minimal amount of RAM that will allow it to run smoothly, i.e. without incurring too many page faults. This is implemented using (at least) two mechanisms:

1. From time to time, during the execution of an application, Windows reduces the RAM available to the application by a small amount. If the application was using more RAM than it really needed, we are happy, because Windows has recovered some memory that can be useful elsewhere. If, on the contrary, the application needed all the RAM it was using, it now doesn't have enough memory to perform its computation efficiently and must continuously swap to disk: a huge number of page faults occur. Windows detects this situation, gives some memory back to the application, and we are happy again. Technically, the subset of an application's virtual memory that is currently being used actively is called its "WORKING SET".

2. When you minimize an application, Windows pages it out completely in order to detect its new working set. This is done for two reasons: (i) when you minimize a program, you won't be using its GUI, so some of the memory used by the application's GUI will not be part of its new working set as long as the app remains minimized; (ii) when you minimize an application, you probably won't use it for a while (unless it is doing some computation in the background), so it won't make many memory accesses at all. Thus, again, it is VERY reasonable to swap out the application; the worst thing that can happen is that it will be swapped in again immediately.

And here comes another important fact. Paging out some memory does not imply making a disk access immediately. In fact, simplifying a bit, the swapped-out memory page will now be considered "free memory" by Windows, and Windows will use it as disk cache. But, as a page of disk cache, it immediately contains useful information, namely the swapped-out page itself. Thus, if it turns out that the swapped-out page is still needed by the application, it can be swapped in again WITHOUT ANY DISK ACCESS!

Pictorially, here is the path of a memory page:
APPLICATION MEMORY (in RAM) <--> DISK CACHE (in RAM) <--> VIRTUAL MEMORY (on DISK)

And this is why, when you minimize an app, very few DISK I/Os occur! Try minimizing Firefox (which is a monster in terms of memory consumption) and see what happens to your disk and to the memory shown in Task Manager...

My conclusions: I think that memory management is well done in NT. I have used Linux kernel 2.4.x, and the system easily started thrashing as soon as the total virtual memory exceeded my system RAM; this happened far less often in Windows. (I don't know what happens with Linux 2.6.x, because my new system has quite a lot of RAM.) My impression was confirmed by a scientific paper I read once (I don't remember where I found it).

This DOES NOT MEAN that "Windows is better than Linux": there are other basic functions where Linux outperforms Windows in my own experience (reading large files from disk, for example).

Greets,
Luca

Offline kagerato

  • Multiple posting newcomer
  • *
  • Posts: 56
    • kagerato.net
Memory leaks
« Reply #17 on: July 16, 2005, 07:03:15 pm »
Quote from: thomas
Here you prove me right, because this is the *exact* reason why deliberately dropping pages is evil.


You're still stuck on a non sequitur, friend. That is ultimately irrelevant, though; no matter the reasoning or evidence, you would not be able to objectively demonstrate that one philosophy or method is better than the other. In essence, what you are trying to prove is that one computer program (or at least one part of it) is written better than another.

This is the nature of arguing opinions; you present what you know and organize theories around your experience. When a separate body of evidence contradicts your standing, it becomes necessary to introduce newer (often more accurate and less broad) conclusions. The only conclusions which carry significant meaning, however, are those that can be supported by an overwhelming body of evidence.

Strengthening any line of reasoning reduces to three simple steps:

1.) Depersonalize the message.  Remove as many references to the first person as possible.  Statements which are heavily supported should appear to originate from a body of authors, not just one.

2.) Objectification. Strip any statements which are primarily (or entirely) opinion. Rewrite as many of the basic elements as possible in relative terms (avoid references to absolute concepts).

3.) Reduction and reinforcement.  Condense the reasoning to its most basic premises and primary conclusion.  Use the strongest pieces of evidence available, and drop those that are weak or easily refuted.  Continue to add new information as it is discovered.

Quote from: thomas
"Human noticeable" means one million times, by the way.


This is quite the arbitrary definition.  It is proper to at least display what train of thought generated it; otherwise there is little reason to respond to such a statement.

Quote from: thomas
Blaming bash or an intermediate API layer for running an identical program 4.6 times slower (we are talking about 4.6 times, not 4.6 percent) is hilarious. Even more so as the hardware on the Windows system is superior in every respect. It could very well burn a few extra CPU cycles; no one would notice.


bash itself is only indirectly the problem in my hypothesis.  It is the layers necessary to run bash on Windows which introduce the actual dilemma.  If one is to operate scientifically, it is necessary to remove as many alternate causal explanations as possible before drawing a definite deduction.

Reiteration of several facts is certainly warranted by this point:

1.) This individual was posed a question: the task of determining an alternative cause for a phenomenon. The particular phenomenon has not been reproduced in any controlled environment or by any objective observer.

2.) An alternate explanation was provided, but immediately rejected as absurd -- without any evidence.

3.) The burden of proof lies on the one who presented the original assertion. The true problem here is in the nature of the discussion. One person is attempting to draw an absolute conclusion from woefully insufficient data, and is using anecdotal evidence as his only actual support. The other person is trying, and clearly failing miserably, to present the reasons why there appears to be no objective truth in this situation (nor indeed any other -- "objective truth" is an oxymoron).

In short, the initial assertion has not been provided with nearly enough substantiation to make it a more reasonable opinion than "there is no best".

Quote from: thomas
But I see there is little point in discussing this issue further, you will not agree anyway.


Perhaps my intent was not to prove one position as correct, but rather to show that it is narrow-minded to hold one side as irrefutably correct on an issue which has a wide breadth of experience and knowledge?

Quote from: Luca
And now, some considerations about the OS issue (...)


Very well described. Your technical knowledge of the situation is greater than my own; therefore your terminology and understanding are more complete.

Offline thomas

  • Administrator
  • Lives here!
  • *****
  • Posts: 3979
Memory leaks
« Reply #18 on: July 16, 2005, 08:33:30 pm »
Well, thank you for the course in scientific work. :)
I could respond to a couple of your remarks, but I will abstain; it would lead nowhere.

Also, the discussion has gone so far off topic that it no longer really contributes to the original question (what was it, anyway?). So we're wasting other people's time, which is no good.
"We should forget about small efficiencies, say about 97% of the time: Premature quotation is the root of public humiliation."

Offline kagerato

  • Multiple posting newcomer
  • *
  • Posts: 56
    • kagerato.net
Memory leaks
« Reply #19 on: July 18, 2005, 01:02:08 am »
The original purpose of the thread was to determine whether Code::Blocks contained some kind of memory leak.  However, the anonymous poster was only making an inquiry -- he/she did not actually pinpoint such a problem.  The guest's system did not suffer any real performance drawback, so this issue became essentially self-nullified.

Threads that spin off on tangents like this one often do so because they are abandoned without definite closure. At least it was a tangential topic, though. This poster has seen many a phpBB thread jump from issue to issue with absolutely no correlation whatsoever.

Luca

  • Guest
Memory leaks
« Reply #20 on: July 18, 2005, 02:56:05 am »
Quote from: kagerato
The guest's system did not suffer any real performance drawback, so this issue became essentially self-nullified.


Alright, but even if he did not experience any performance drawback, the bug (i.e. the memory leak) is still present... I suggest investigating the problem!

Regards,
Luca

Offline rickg22

  • Lives here!
  • ****
  • Posts: 2283
Memory leaks
« Reply #21 on: July 18, 2005, 05:27:49 am »
I thought there weren't any memory leaks and that it was just the behavior of MS Windows...

Offline thomas

  • Administrator
  • Lives here!
  • *****
  • Posts: 3979
Memory leaks
« Reply #22 on: July 18, 2005, 01:37:23 pm »
Quote from: rickg22
I thought there weren't any memory leaks and that it was just the behavior of MS Windows...

Yes, sir! That's exactly what I've been saying.
"We should forget about small efficiencies, say about 97% of the time: Premature quotation is the root of public humiliation."

Luca

  • Guest
Memory leaks
« Reply #23 on: July 18, 2005, 05:02:42 pm »
Quote from: rickg22
I thought there weren't any memory leaks and that it was just the behavior of MS Windows...


I haven't run C::B long enough to observe its memory usage increasing. But if this happens, then I strongly believe there ARE memory leaks, as I explained in a previous post...

Luca

Offline rickg22

  • Lives here!
  • ****
  • Posts: 2283
Memory leaks
« Reply #24 on: July 18, 2005, 05:35:56 pm »
Keep in mind that there WAS a memory leak in 1.0-betafinal, due to some popup menus being created and not destroyed...
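For reference, the classic pattern behind that kind of leak in wxWidgets code looks roughly like the sketch below (a simplified, hypothetical handler, not the actual C::B source): wxWindow::PopupMenu() does not take ownership of the menu, so the caller has to delete it.

Code:

#include <wx/menu.h>
#include <wx/window.h>

// Hypothetical context-menu handler, simplified for illustration.
void ShowContextMenu(wxWindow* owner)
{
    wxMenu* menu = new wxMenu;
    menu->Append(wxID_OPEN,  wxT("Open"));
    menu->Append(wxID_CLOSE, wxT("Close"));

    owner->PopupMenu(menu);   // PopupMenu() does NOT take ownership of the menu

    delete menu;              // without this line, every right-click leaks one wxMenu
}

Allocating the wxMenu on the stack instead of the heap works just as well and makes the leak impossible in the first place.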

Offline kagerato

  • Multiple posting newcomer
  • *
  • Posts: 56
    • kagerato.net
Memory leaks
« Reply #25 on: July 18, 2005, 05:44:44 pm »
Quote from: rickg22
Keep in mind that there WAS a memory leak in 1.0-betafinal, due to some popup menus being created and not destroyed...


Over an extended usage period, that certainly could make a difference.  I think the problem here has been solved and everyone is running in circles...