Code completion doesn't follow #include in struct
oBFusCATed:
--- Quote from: ollydbg on March 23, 2011, 08:32:25 am ---2, I found you use std::vector in the code; wouldn't std::list be much better? When doing a macro replacement, a vector will always resize itself.
--- End quote ---
Or probably a std::deque :) -> http://www.gotw.ca/publications/mill10.htm
MortenMacFly:
--- Quote from: JGM on March 23, 2011, 07:57:00 am ---some weeks ago I started working on a simple-to-use cpp parser, mainly a preprocessor, just for fun.
--- End quote ---
[...]
--- Quote from: JGM on March 23, 2011, 07:57:00 am ---http://www.mediafire.com/?yqvsstq23jot650
--- End quote ---
I cannot test it atm, but do you mean "a preprocessor", or "for preprocessors"?
My recent idea concerning the preprocessor was to use tools like:
http://dotat.at/prog/unifdef/
(...and there are others going in the same direction, like "sunifdef") to "clean up" source files in a pre-processing step and then parse what's left. If we do this in memory it should also be pretty fast.
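For illustration, the core of such an in-memory cleanup could look roughly like the sketch below. This is only my own simplification (the function name and all details are made up, it is not unifdef's code): it handles #ifdef/#ifndef/#else/#endif with nesting, but not #if expressions or #elif.

--- Code: ---
// Rough sketch: strip inactive #ifdef/#ifndef blocks from a buffer in memory,
// similar in spirit to unifdef. Handles nesting, but not #if expressions,
// #elif, or "# ifdef" written with spaces.
#include <set>
#include <sstream>
#include <string>
#include <vector>

std::string stripConditionals(const std::string& source,
                              const std::set<std::string>& defined)
{
    std::istringstream in(source);
    std::ostringstream out;
    std::vector<bool> active; // one entry per nested conditional
    std::string line;

    while (std::getline(in, line))
    {
        std::istringstream ls(line);
        std::string directive, symbol;
        ls >> directive >> symbol;

        if (directive == "#ifdef")
            active.push_back(defined.count(symbol) != 0);
        else if (directive == "#ifndef")
            active.push_back(defined.count(symbol) == 0);
        else if (directive == "#else" && !active.empty())
            active.back() = !active.back();
        else if (directive == "#endif" && !active.empty())
            active.pop_back();
        else
        {
            // Keep the line only if every enclosing conditional is active.
            bool keep = true;
            for (size_t i = 0; i < active.size(); ++i)
                keep = keep && active[i];
            if (keep)
                out << line << '\n';
        }
    }
    return out.str();
}
--- End code ---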
JGM:
--- Quote from: ollydbg on March 23, 2011, 08:32:25 am ---...
1, I think using a generated lexer will make things much easier. A generated lexer can handle things like line counting and column counting, and it uses a state machine which will catch a "keyword" much faster than clang or gcc (both clang and gcc do not distinguish between a keyword and an identifier; they just do a hashtable search when an identifier is returned). From this point of view, I'd suggest my work on the Quex-based lexer. (I did benchmarks showing it ran at 200% of the speed of a flex-generated lexer under Windows.) I put the test code here (it also includes a clang test project to test the code-completion feature of clang):
http://code.google.com/p/quexparser/
would you like to have a look?
...
--- End quote ---
I started writing a custom tokenizer for the preprocessor since I thought the output would be much simpler to analyze later on, and I also wanted it to be smart and produce a tree that is easier to analyze. I also wanted it to produce the cleaned code after preprocessing with correct column and line numbers for the code parser, where possible. I still need to correctly manage multi-line preprocessor directives (#define blah blah(123) \).
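For the multi-line case, the splicing could look roughly like this; it is only a sketch and the names (LogicalLine, spliceDirective) are made up for illustration, not the actual code:

--- Code: ---
// Rough sketch: splice backslash-continued directive lines into one logical
// line while remembering how many physical lines were consumed, so the
// original line numbers can still be reported to the parser.
#include <string>
#include <vector>

struct LogicalLine
{
    std::string text;      // directive with its continuations joined
    unsigned    startLine; // 1-based physical line where it began
    unsigned    lineCount; // number of physical lines it spans
};

LogicalLine spliceDirective(const std::vector<std::string>& lines, unsigned start)
{
    LogicalLine result;
    result.startLine = start + 1;
    result.lineCount = 0;

    for (unsigned i = start; i < lines.size(); ++i)
    {
        std::string line = lines[i];
        ++result.lineCount;

        // A trailing backslash means the directive continues on the next line.
        if (!line.empty() && line[line.size() - 1] == '\\')
        {
            line.erase(line.size() - 1); // drop the backslash, keep splicing
            result.text += line;
        }
        else
        {
            result.text += line;
            break;
        }
    }
    return result;
}
--- End code ---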
Whoa! That quexparser looks a bit too complex for my brain to digest; I will try to analyze it in depth.
--- Quote from: ollydbg on March 23, 2011, 08:32:25 am ---...
2, I found you use std::vector in the code; wouldn't std::list be much better? When doing a macro replacement, a vector will always resize itself.
...
--- End quote ---
2. I have read several C++ books and read about the performance of the available containers as inner structures, but I always forget the differences between them :( (lack of practice). It may be easy to substitute, though, since the containers almost always share the same interface (I think).
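For example, if the token storage were hidden behind a typedef, swapping the container would be a one-line change ("Token" and "TokenList" are just illustrative names, not the ones in my code):

--- Code: ---
// Sketch: hide the container choice behind a typedef so it can be swapped
// without touching the rest of the code.
#include <deque>
#include <string>

struct Token
{
    std::string value;
    unsigned    line;
    unsigned    column;
};

// Switching this to std::vector<Token> or std::list<Token> is a one-line
// change as long as only push_back() and iteration are used.
typedef std::deque<Token> TokenList;

void append(TokenList& tokens, const Token& t)
{
    tokens.push_back(t); // valid for vector, deque and list alike
}
--- End code ---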
--- Quote from: ollydbg on March 23, 2011, 08:32:25 am ---Currently, I feel a little confused about my quexparser; I do not have a clear direction. I found that even doing a macro replacement needs many tricks.
you can look at
http://gcc.gnu.org/onlinedocs/cppinternals/
--- End quote ---
Yep, this whole C++ parsing thing is hard since the language itself has so many features to look after, but it is fun. I just wanted to create a simple-to-use preprocessor after several months without C++ in my blood.
JGM:
--- Quote from: MortenMacFly on March 23, 2011, 02:54:52 pm ---
--- Quote from: JGM on March 23, 2011, 07:57:00 am ---some weeks ago I started working on a simple-to-use cpp parser, mainly a preprocessor, just for fun.
--- End quote ---
[...]
--- Quote from: JGM on March 23, 2011, 07:57:00 am ---http://www.mediafire.com/?yqvsstq23jot650
--- End quote ---
I cannot test it atm, but do you mean "a preprocessor", or "for preprocessors"?
My recent idea concerning the preprocessor was to use tools like:
http://dotat.at/prog/unifdef/
(...and there are others going in the same direction, like "sunifdef") to "clean up" source files in a pre-processing step and then parse what's left. If we do this in memory it should also be pretty fast.
--- End quote ---
Yep, a preprocessor, with the future goal of a complete parser. (My English vocabulary and word play suck :P)
The current logic is to identify the directive type when tokenizing (a function-like macro, e.g. #define test(x) (x*2), or just a plain declaration, e.g. #define test_declared), add it to a vector, and then make the correct replacements in the code so it can be parsed correctly. Nested macros actually worked correctly in my tests. I was fixing some issues with multi-line directives (currently handled incorrectly, so correct line and column positions are not produced) and will then write an expression parser to evaluate macro expressions. Include files are also parsed only once, as usual; it handles global and local includes, and you can tell the class which paths to search. Also, the use of the string class should be replaced by wstring, but since I was just playing around at first, there are things to be improved.
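For instance, the classification of a #define could be sketched roughly like this (names are illustrative, not the actual code); the language rule is that a macro is function-like only when '(' immediately follows the macro name with no whitespace in between:

--- Code: ---
// Rough sketch: classify a #define as function-like or object-like.
#include <cctype>
#include <string>

struct MacroDef
{
    std::string name;
    std::string body;           // parameters (if any) plus replacement text
    bool        isFunctionLike;
};

MacroDef parseDefine(const std::string& afterDefine) // text after "#define "
{
    MacroDef macro;
    size_t i = 0;

    // Read the macro name (identifier characters only).
    while (i < afterDefine.size() &&
           (std::isalnum(static_cast<unsigned char>(afterDefine[i])) ||
            afterDefine[i] == '_'))
        macro.name += afterDefine[i++];

    // '(' directly after the name marks a function-like macro,
    // e.g. "#define test(x) (x*2)" vs. "#define test_declared".
    macro.isFunctionLike = (i < afterDefine.size() && afterDefine[i] == '(');

    macro.body = afterDefine.substr(i);
    return macro;
}
--- End code ---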
I wanted to have more complete and documented code before posting it, but well, after reading some threads here I decided to let it go as it is and see if it is understandable to other developers, hoping for the best. The main.cpp file should serve as an example of how I intended it to be used. If people think the code is not that hard to understand, then it may merit completion.
ollydbg:
--- Quote from: MortenMacFly on March 23, 2011, 02:54:52 pm ---
--- Quote from: JGM on March 23, 2011, 07:57:00 am ---some weeks ago I started working on a simple-to-use cpp parser, mainly a preprocessor, just for fun.
--- End quote ---
[...]
--- Quote from: JGM on March 23, 2011, 07:57:00 am ---http://www.mediafire.com/?yqvsstq23jot650
--- End quote ---
I cannot test it atm, but do you mean "a preprocessor", or "for preprocessors"?
My recent idea concerning the preprocessor was to use tools like:
http://dotat.at/prog/unifdef/
(...and there are others going in the same direction, like "sunifdef") to "clean up" source files in a pre-processing step and then parse what's left. If we do this in memory it should also be pretty fast.
--- End quote ---
I briefly read the site (unifdef - selectively remove C preprocessor conditionals). It says:
--- Quote ---It is useful for avoiding distractions when studying code that uses #ifdef heavily for portability (the original motivation was xterm's pty handling code), or as a lightweight preprocessor to strip out internal routines from a public header (the Linux kernel uses unifdef to strip out #ifdef __KERNEL__ sections from the headers it exports to userland)
--- End quote ---
Great, I think we need a lightweight preprocessor; from my point of view, GCC's preprocessor code base is too big and too complex.
The two main jobs are:
1, handle conditional preprocessor directives, like #if, and do expression evaluation.
2, do macro expansion (a rough sketch is below).
In fact these two things are already done in the current implementation of CC, but I think they need to be refactored. Morten, can you give a direction?
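For the second job, a very rough sketch of object-like macro expansion over a token stream could look like this (function-like macros, stringizing and token pasting are left out; the names are only illustrative, this is not CC's or quexparser's code):

--- Code: ---
// Rough sketch of job 2: expand object-like macros in a token stream.
#include <map>
#include <set>
#include <string>
#include <vector>

typedef std::vector<std::string> Tokens;

Tokens expand(const Tokens& input,
              const std::map<std::string, Tokens>& macros,
              std::set<std::string> inProgress = std::set<std::string>())
{
    Tokens output;
    for (size_t i = 0; i < input.size(); ++i)
    {
        std::map<std::string, Tokens>::const_iterator it = macros.find(input[i]);

        // Expand only if this name is a macro and is not already being
        // expanded (prevents infinite recursion on self-referential macros).
        if (it != macros.end() && inProgress.count(input[i]) == 0)
        {
            std::set<std::string> nested(inProgress);
            nested.insert(input[i]);
            Tokens replaced = expand(it->second, macros, nested);
            output.insert(output.end(), replaced.begin(), replaced.end());
        }
        else
            output.push_back(input[i]);
    }
    return output;
}
--- End code ---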
@JGM
Quex's lexer generator is quite easy to learn, and its grammar is very simple. Once you use it, it returns one token at a time. The token contains several pieces of information, with at least four fields:
1, token id (identifier, keyword, open-bracket......)
2, string value if it is an identifier, otherwise, it is empty
3, column count value
4, line count value
Then you don't need to care about anything else; you just use the token and do whatever you like. So Quex's lexer stands at a low level, and you can implement the high-level preprocessor on top of it. A rough sketch of such a token record is below.
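Something like this, with names that are only illustrative (not Quex's actual ones):

--- Code: ---
// Minimal token record with the four fields described above.
#include <string>

enum TokenId
{
    TK_IDENTIFIER,
    TK_KEYWORD,
    TK_OPEN_BRACKET,
    TK_CLOSE_BRACKET
    // ... and so on
};

struct LexToken
{
    TokenId     id;     // 1, token id (identifier, keyword, open-bracket, ...)
    std::string value;  // 2, text if it is an identifier, otherwise empty
    unsigned    column; // 3, column count
    unsigned    line;   // 4, line count
};
--- End code ---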
I have implemented a constant value expression solver using the "shunting yard algorithm" in that code. There is quite a similar one in CC's source code. We can discuss it further and collaborate.
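As a cut-down illustration of the shunting yard approach (not the actual code in quexparser or CC), something like this converts an infix constant expression to RPN and evaluates it; a real #if evaluator would also need comparison and logical operators, unary minus and defined():

--- Code: ---
// Cut-down sketch of a constant expression solver using the shunting yard
// algorithm (infix -> RPN -> evaluate). Only integers, + - * / and
// parentheses are handled here.
#include <cctype>
#include <cstdlib>
#include <stack>
#include <string>
#include <vector>

static int precedence(char op) { return (op == '*' || op == '/') ? 2 : 1; }

long evalConstExpr(const std::string& expr)
{
    std::vector<std::string> rpn; // output queue in reverse Polish notation
    std::stack<char> ops;         // operator stack

    for (size_t i = 0; i < expr.size(); )
    {
        char c = expr[i];
        if (std::isspace(static_cast<unsigned char>(c))) { ++i; continue; }

        if (std::isdigit(static_cast<unsigned char>(c)))
        {
            std::string num;
            while (i < expr.size() && std::isdigit(static_cast<unsigned char>(expr[i])))
                num += expr[i++];
            rpn.push_back(num);
        }
        else if (c == '(') { ops.push(c); ++i; }
        else if (c == ')')
        {
            while (!ops.empty() && ops.top() != '(')
            { rpn.push_back(std::string(1, ops.top())); ops.pop(); }
            if (!ops.empty()) ops.pop(); // discard the '('
            ++i;
        }
        else // assume one of + - * /
        {
            // Pop operators of higher or equal precedence (left-associative).
            while (!ops.empty() && ops.top() != '(' &&
                   precedence(ops.top()) >= precedence(c))
            { rpn.push_back(std::string(1, ops.top())); ops.pop(); }
            ops.push(c);
            ++i;
        }
    }
    while (!ops.empty()) { rpn.push_back(std::string(1, ops.top())); ops.pop(); }

    // Evaluate the RPN form with a value stack.
    std::stack<long> values;
    for (size_t i = 0; i < rpn.size(); ++i)
    {
        const std::string& t = rpn[i];
        if (std::isdigit(static_cast<unsigned char>(t[0])))
            values.push(std::atol(t.c_str()));
        else
        {
            long b = values.top(); values.pop();
            long a = values.top(); values.pop();
            switch (t[0])
            {
                case '+': values.push(a + b); break;
                case '-': values.push(a - b); break;
                case '*': values.push(a * b); break;
                default:  values.push(b != 0 ? a / b : 0); break;
            }
        }
    }
    return values.empty() ? 0 : values.top();
}
--- End code ---

For example, evalConstExpr("(1 + 2) * 3") returns 9.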