Developer forums (C::B DEVELOPMENT STRICTLY!) > Development

lexer file loading ...


squizzz:
Thanks for fixing Settings->Editor delay. :)

Regarding startup time - I have a low-end, ancient-spec machine here (533 MHz, 256 MB :lol:), so it takes ~20 seconds to start Code::Blocks - 10 seconds for the lexers and 10 for the plugins. What's interesting - matlab_lexer takes a whole 4 seconds to load, while others of comparable size take no more than 400 ms... Anyway, it's nice to know what I should turn off first. :)

thomas:

--- Quote ---Yes, that's another point, further improvements can be done with caching
--- End quote ---

--- Quote ---I thought that the XRCs were loaded from the zips, which were then read from memory instead of disk (zips from disk, XRCs from memory, uncompressed).
--- End quote ---
I am really unable to explain this to you if you don't read, Takeshi. Run FileMon yourself and you will see. We cannot cache those reads; they are made implicitly by the XRC loader... it is not the lexers that take 50 seconds to load.


--- Quote ---True, but if network latency were the only issue, why do the SciTE lexers take 1 second on a LAN, while SciTE has more lexers? What is SciTE doing <somehow> that reduces network latency? Perhaps what Michael suggested?
--- End quote ---
Network latency is the only issue for that phenomenon. And once again, it is not the lexers that take 50 seconds.
SciTE is not doing anything to reduce network latency (as it happens, the speed of light is the same for SciTE as for Code::Blocks, and routers work no differently for SciTE, either).
SciTE simply does not use XRC and thus does a lot fewer I/O operations (about 800 altogether, 1/16 as many as Code::Blocks). That, too, can be seen from running FileMon.


--- Quote ---Notice that those rough 200 ms per XML lexer are on a local disk; guess what it takes to parse more than 50 C::B XML lexers.
--- End quote ---
Unless your machine is indeed 10 times slower than any machine Yiannis and I have tried, this is not correct. It is certainly true that parsing is not free. However, there are just 3 lexers which take that long (the ones with mega-long keyword lists: matlab, masm, and nsis); all others are on the order of 20-40 ms. But never mind, that's a different issue. Once we have found a way to implement on-demand loading (which is unfortunately not trivial to implement), that should no longer be a problem.

takeshimiya:

--- Quote from: thomas on April 07, 2006, 02:18:48 am ---it is not the lexers that take 50 seconds to load.

--- End quote ---
Never implied that. What I implied is that C::B takes 50 seconds to load and the lexers are partly responsible for that, though not the biggest factor.
As you correctly pointed out, the major (constant-time) bottleneck is the resources.

The startup-time bottlenecks are:

* Loading of Lexers (takes constant time between runs)
    Can be solved in various ways: on-demand loading, another format, etc.
* Loading of Resources (takes constant time between runs, more than the lexers)
    Not much can be done, except reducing XRC usage a bit (i.e. the toolbars),
    or compressing the resources with compression level 0 (which will mean more space on disk,
    but probably a smaller download size, avoiding the "compressing what is already compressed" effect).
* Loading of DLLs (takes a lot of time on the first run, more than the resources; on subsequent runs Windows keeps the DLLs in memory)
    Can be solved with prelinking, and with the GCC4 visibility flags. Certainly the two will have a big impact on faster loading.
    We can wait until MinGW GCC 4.1 comes out and compile with the visibility flags.
Certainly the lexers aren't the major startup-time bottleneck, but when you're not on a fast platform (a not-so-fast PC, or loading from a network or a USB key), everything matters.
Anyway, I'm more concerned about other aspects of the lexers, not only the performance. :P


--- Quote from: thomas on April 07, 2006, 02:18:48 am ---
--- Quote ---Notice that those rough 200ms per xml lexer is on local disk, guess what takes to parse more than 50 C::B xml lexers.
--- End quote ---
Unless your machine is indeed 10 times slower than any machine Yiannis and me have tried, this is not correct.

--- End quote ---
You're right, my "rough" measure is what C::B printed in the log, which I can't really trust, because it's common for two items to show exactly the same time there, which I really doubt happens.
I'm searching for a profiler for MinGW other than gprof (because of its known limitations). I'm using AMD CodeAnalyst, which is really great, but it currently doesn't support MinGW.
Any suggestions? :)

thomas:

--- Quote ---Not much can be done, except reducing XRC usage a bit (i.e. the toolbars),
or compressing the resources with compression level 0 (which will mean more space on disk,
but probably a smaller download size, avoiding the "compressing what is already compressed" effect).
--- End quote ---
No, that makes it worse.

takeshimiya:

--- Quote from: thomas on April 07, 2006, 10:07:29 pm ---
--- Quote ---Not much can be done, except reducing XRC usage a bit (i.e. the toolbars),
or compressing the resources with compression level 0 (which will mean more space on disk,
but probably a smaller download size, avoiding the "compressing what is already compressed" effect).
--- End quote ---
No, that makes it worse.

--- End quote ---
That's true in most cases, but it really depends on whether the bottleneck is the CPU or file access.
When loading from a network it will be file access, and when loading from a local hard disk with a slow CPU (or while doing another CPU-intensive task), it will be the CPU.
So it's a good compromise as it is now.

I'm looking for a MinGW profiler other than gprof - does anyone know of one?
