As far as wxWidgets goes, I have a hard time believing that a library in such wide use has this issue. It might be, however, that the API lends itself to being used that way.
Hmm... sadly, wxWidgets has quite a few issues. I am struggling with some of them at the present time. And yes, a few things are quite inefficient.
But despite the many reasons to hate wxWidgets, it has one advantage which cannot be denied... It may never work perfectly, but it kind of works somehow, and it does so on several platforms without requiring you to change a line of code (well, almost).
And you have to account for the fact that Code::Blocks, even as a beta version, is still an awful lot
- faster (about 2-3 times in some things?)
- more stable (c::b crashes total: 0 / dev-cpp crashes per day: 4-6)
- smaller
- more usable
than, for example, Dev-C++ version 4 or 5.
So, despite all evil that comes with wxWidgets, it still enables people to produce quite good end results.
Is this fan on/off issue not something that can be configured under power management? Surely, it can be configured to something like "When on AC, always run CPU at full speed"?
Another way to keep the fan running is this, of course:
#include <windows.h>

int main()
{
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_LOWEST);
    for (;;)
        ;
}
That will prevent the fan from ever going off and the only thing that you lose is the validity of CPU idle time in the task manager ;)
This maybe?
EVT_IDLE( CompilerGCC::OnIdle )
EVT_IDLE( DebuggerGDB::OnIdle )
These plugins read stdout from the processes they launch on a regular basis, triggered by a wxTimer (100 ms). OnTimer() does not do anything except call wxWakeUpIdle(), and OnIdle() does the actual polling.
While moving the mouse, the system goes idle quite a few times, so it may be that OnIdle() is indeed called many, many times per second. At the very least, this means a lot of calls to wxProcess::IsInputAvailable(), which calls wxStream::CanRead(). If there are actually processes running, two additional wxTextInputStreams are opened, and one line is read from each.
I am not saying this *is* the actual cause, but it might just be. One should run gprof to see how often OnIdle() is really called. If it is 50 or 100 times per second, we have a culprit. In that case, have OnTimer() poll for input, which is better anyway, and remove OnIdle().
Even if it is not that, gprof will likely find some function that gets called amazingly often, providing a starting point for the search.
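For reference, a typical gprof session looks like this. The compile line is purely hypothetical (the real build uses its own project files and flags); the point is the -pg instrumentation flag and reading the per-function call counts:

```shell
# Rebuild with profiling instrumentation (hypothetical compile line;
# substitute the project's real sources and build flags)
g++ -pg -g -O0 -o codeblocks main.cpp

# Run the instrumented binary, let it sit idle for a while, then quit
# normally so gmon.out gets written to the working directory
./codeblocks

# Flat profile: self time and call counts per function --
# look for OnIdle in the "calls" column
gprof codeblocks gmon.out | head -n 40
```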
Is there a way to check the elapsed time in milliseconds since the last idle event handled?
Also, something makes me wonder. Is there a point in doing these checks if NO compilation or debug session has been started yet? I think we need a couple of ifs in there.
Hmmm....
void CompilerGCC::OnIdle(wxIdleEvent& event)
{
    if (m_Process && ((PipedProcess*)m_Process)->HasInput())
        event.RequestMore();
    else
        event.Skip();
}

void CompilerGCC::OnTimer(wxTimerEvent& event)
{
    wxWakeUpIdle();
}
I think we should add an "if (m_Process)" check before calling wxWakeUpIdle(); this would save us from creating a chain of idle events. The same applies to DebuggerGDB.
Could not wait until afternoon, too interesting... :)
Unfortunately, the overall result is a bit disappointing; the changes in CPU time are not really significant:
- 13-24% (average ~20%) with all plugins (including debugger and compiler), original version
- 9-22% (average ~16%) with all plugins (including debugger and compiler), after removing OnIdle
It appears to be slightly better, but the two intervals overlap quite a bit, so it may well be that the observed difference is due to inaccurate measurement.
The code was changed in the following manner:
//void CompilerGCC::OnIdle(wxIdleEvent& event)
//{
//    if (m_Process && ((PipedProcess*)m_Process)->HasInput())
//        event.RequestMore();
//    else
//        event.Skip();
//}

void CompilerGCC::OnTimer(wxTimerEvent& event)
{
    if (m_Process)
        while (((PipedProcess*)m_Process)->HasInput())
            ;
}
(the respective declarations and event table entries were deleted)
These changes were made in CompilerGCC, DebuggerGDB, and PipedProcess, which all use this strategy internally. Another possible optimization would be to move the while loop into PipedProcess::HasInput, so the two wxTextInputStreams are not opened and closed for every single line of output read. But that would only affect performance while a compile is running; it should make no difference when idle (probably not an issue, so best leave it as is).
Also, I deleted the OnIdle function from ProjectManager because all it did was call Skip() - this is quite useless, we get the same result if we don't insert that function into the event table in the first place.
wxScintilla does some very peculiar idle processing, too. Apparently, the line wrapping is done by calling RequestMore() for each line (?). Using RequestMore() in a loop is described as "not usually a good idea" on comp.soft-sys.wxwindows, hmm... anyway. I removed this idle code to check what difference it makes, but since my test setting has no document open, the event table entry is not generated anyway, so of course there was no difference at all (it might make a difference while editing, maybe... but nobody has complained so far?).
NotifyPlugins is not guilty of taking up CPU time. I removed each and every call to NotifyPlugins and did not see any difference in CPU load at all.
To summarise:
It is not OnIdle (it could be accused of maybe 5% CPU).
It is not NotifyPlugins (not noticeable at all).
Neither compiler nor debugger spawn threads secretly.
Whatever eats those 15% of CPU time must consequently be related to UI events.
Or... any other ideas?
For all I know wxWindows takes care of that, so we need not fear that.
But it is nevertheless inefficient to send that many messages.
To give an example, wxScintilla has code like this:
void wxScintilla::OnIdle (wxIdleEvent& evt)
{
    m_swx->DoOnIdle (evt);
}
...
void ScintillaWX::DoOnIdle(wxIdleEvent& evt)
{
    if ( Idle() )
        evt.RequestMore();
    else
        SetIdle(false);
}
...
bool Editor::Idle()
{
    bool wrappingDone = (wrapState == eWrapNone) || (!backgroundWrapEnabled);
    ...
    return !idleDone;
}
In other words, this code does "post idle messages as long as there is text left to wrap" and "do the actual work when the messages come back in".
It is not surprising that the profiler shows the event loop dispatcher as the most CPU intensive section. It's being used :)