
CodeCompletion parser bug in handling comments


MortenMacFly:
Something else that just came to my mind:
Why don't we "pre-process" the buffer before CC analyses it, in the sense of removing comments completely? I mean: commented stuff is just useless for CC (unless we want to consider using Doxygen comments or the like), and operating on the whole buffer could probably work with a "simple" RegEx?! In the end we would obsolete a lot of comment-checking code.
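
For illustration, a minimal sketch of what such a pre-pass could look like (hypothetical code, not from CC; StripComments is an invented name, and the caveats in the comments are exactly where a "simple" RegEx gets into trouble):

--- Code: ---#include <regex>
#include <string>

// Hypothetical pre-pass: blank out C- and C++-style comments before CC
// analyses the buffer. Each comment becomes a single space so that
// neighbouring tokens do not fuse together.
// Caveats: a plain regex is fooled by comment-like sequences inside string
// literals ("http://...") and it does not preserve line numbers, so a real
// pass would still need some state tracking.
std::string StripComments(const std::string& buffer)
{
    static const std::regex cComment("/\\*[\\s\\S]*?\\*/"); // /* ... */, non-greedy
    static const std::regex cppComment("//[^\n]*");         // "//" up to end of line
    return std::regex_replace(
        std::regex_replace(buffer, cComment, " "), cppComment, " ");
}
--- End code ---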

ollydbg:
I built a new CC with Jens' patch, and it solved my problem.

@MortenMacFly
I do think that some comments should be preserved, especially in function declarations. If we do a pre-process, then we will parse a source file twice, which will take more time :D.

I'm not sure why the default argument value was stripped; see the screenshot below. I suggest that the function tip show "bool skipWhiteAtEnd = true".


[attachment deleted by admin]

Jenna:

--- Quote from: MortenMacFly on March 10, 2009, 07:03:36 am ---
--- Quote from: jens on March 09, 2009, 05:13:07 pm ---In other words, what about just "eating" all chars until EOL or EOF?

--- End quote ---
Nope - won't work. Consider this:

--- Code: ---#include <string>

void MyFun(bool myParam /* = true */, int MyOtherParam /* = 0 */)
{
  int a /* could be b */ = 1; /* probably 0 */
  int b; /* Descr:
           * Nice!
           */ return;
  std::string s = "hello \
            world";
}

--- End code ---
...unless I am missing something...
(Will try the patch though...)

--- End quote ---

It should still work.
We only call SkipToEOL with its second parameter, skippingComment, set to true if we are inside a C++-style comment ("//"), never inside a C-style comment ("/*").
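
A minimal stand-alone sketch of that dispatch (SkipToEOL and its skippingComment parameter come from the patch under discussion; the class itself and the other helper names are invented for illustration):

--- Code: ---#include <cstddef>
#include <string>

// Minimal stand-in for a tokenizer, only to show the "//" vs. "/* */"
// dispatch. SkipToEOL/skippingComment come from the patch being discussed;
// the class layout and the helper names are made up.
struct MiniTokenizer
{
    std::string buf;
    std::size_t pos = 0;

    char Cur() const  { return pos < buf.size() ? buf[pos] : '\0'; }
    char Next() const { return pos + 1 < buf.size() ? buf[pos + 1] : '\0'; }

    // Eat up to (not including) the newline. The skippingComment flag is
    // only ever passed as true for "//" comments, never for "/* */".
    void SkipToEOL(bool /*skippingComment*/)
    {
        while (Cur() != '\n' && Cur() != '\0')
            ++pos;
    }

    // C-style comments can end mid-line (see Morten's example above), so we
    // must scan for the closing "*/" instead of eating to EOL.
    void SkipBlockComment()
    {
        pos += 2;                                 // consume "/*"
        while (Cur() != '\0' && !(Cur() == '*' && Next() == '/'))
            ++pos;
        if (Cur() != '\0')
            pos += 2;                             // consume "*/"
    }

    void SkipComment()
    {
        if (Cur() != '/')
            return;
        if (Next() == '/')
            SkipToEOL(/*skippingComment*/ true);  // C++-style comment
        else if (Next() == '*')
            SkipBlockComment();                   // C-style comment
    }
};
--- End code ---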

MortenMacFly:

--- Quote from: jens on March 10, 2009, 10:41:49 am ---We only call SkipToEOL with its second parameter, skippingComment, set to true if we are inside a C++-style comment ("//"), never inside a C-style comment ("/*").

--- End quote ---

--- Quote from: MortenMacFly on March 10, 2009, 07:03:36 am ---...unless I am missing something...

--- End quote ---
:lol: :lol: :lol:

Ceniza:

--- Quote from: ollydbg on March 10, 2009, 09:40:49 am ---... If we do a pre-process, then we will parse a source file twice, which will take more time :D.

--- End quote ---

Not true. It just divides the parsing into two stages. The preprocessing stage would usually return tokens, and you do not need to do much to turn them into final tokens to feed the parser. The parser just reads each token's tag, indicating whether it is a string, integer, identifier, keyword, etc. In other words, think of the preprocessor as a smart lexer. The current implementation, on the other hand, tries to do everything at the same time. Your objection would only be true if the preprocessor generated a text file; then you would indeed have to "tokenize" the whole thing once more.
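
A rough sketch of that two-stage idea (hypothetical, not CC code): the preprocessing stage below scans the buffer once, drops comments, and emits tagged tokens; a parser consuming this stream only switches on each token's tag and never touches the raw text again.

--- Code: ---#include <cctype>
#include <cstddef>
#include <string>
#include <vector>

// Stage one: the "smart lexer"/preprocessor. All names here are invented.
enum class Tag { Identifier, Number, Punct };

struct Token
{
    Tag         tag;
    std::string text;
};

std::vector<Token> Preprocess(const std::string& src)
{
    std::vector<Token> out;
    for (std::size_t i = 0; i < src.size(); )
    {
        if (std::isspace((unsigned char)src[i]))
        {
            ++i;
        }
        else if (src[i] == '/' && i + 1 < src.size() && src[i + 1] == '/')
        {
            while (i < src.size() && src[i] != '\n') ++i;    // drop "//" comment
        }
        else if (src[i] == '/' && i + 1 < src.size() && src[i + 1] == '*')
        {
            i += 2;                                          // drop "/* */" comment
            while (i + 1 < src.size() && !(src[i] == '*' && src[i + 1] == '/')) ++i;
            i += 2;
        }
        else if (std::isalpha((unsigned char)src[i]) || src[i] == '_')
        {
            std::size_t j = i;
            while (j < src.size() && (std::isalnum((unsigned char)src[j]) || src[j] == '_')) ++j;
            out.push_back({Tag::Identifier, src.substr(i, j - i)});
            i = j;
        }
        else if (std::isdigit((unsigned char)src[i]))
        {
            std::size_t j = i;
            while (j < src.size() && std::isdigit((unsigned char)src[j])) ++j;
            out.push_back({Tag::Number, src.substr(i, j - i)});
            i = j;
        }
        else
        {
            out.push_back({Tag::Punct, std::string(1, src[i])});
            ++i;
        }
    }
    return out;
}
// Stage two, the parser, just switches on token.tag. The second "parse"
// ollydbg worries about never re-reads the raw buffer.
--- End code ---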
