Topic: Why do C++ compilers interpret terminating >> as an error?
Author: Allan_W@my-dejanews.com (Allan W)
Date: Mon, 26 Aug 2002 23:45:46 GMT
Paul McKenzie <paulm@dynasoft.com> wrote
> Why do compilers interpret the ">>" as an error? Of course the solution
> is to place whitespace between the ">" characters. However, I came
> across the following in the standard.
>
> Lexical Conventions 2.4.2:
>
> "If the input stream has been parsed into preprocessing tokens up to a
> given character, the next preprocessing token is the longest sequence of
> characters that could constitute a preprocessing token, even if that
> would cause further lexical analysis to fail."
This has to be. Consider:
a = --b;
Without the rule in 2.4.2, the compiler could interpret this the same
way as:
a = - - b;
which would negate b, then negate it back, then assign the value to a.
> But before this rule is the following from 2.1.3:
>
> "...The process of dividing a source file's characters into
> preprocessing tokens is context dependent.
> [Example: see the handling of < within a #include preprocessing
> directive. ]"
Frankly, I'm not 100% certain what this is supposed to mean, other than
the specific example given.
However, consider the fact that 2.1/3 is rather vague, while 2.4.2 is
very specific.
> Someone in another group has stated that the C++ compilers are wrong in
> that vector<vector<int>> should not be considered an error, due to 2.1.3
> and the "context" of what is being accomplished (terminate a nested
> template definition using ">>" instead of "> >").
Someone is wrong. It happens sometimes!
> So does the
> "context-dependency" rule of 2.1.3 override what is stated in 2.4.2, or
> are the compiler writers correct in using 2.4.2 to justify the parsing
> error?
The compiler writers are correct.
---
[ comp.std.c++ is moderated. To submit articles, try just posting with ]
[ your news-reader. If that fails, use mailto:std-c++@ncar.ucar.edu ]
[ --- Please see the FAQ before posting. --- ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html ]
Author: David Schwartz <davids@webmaster.com>
Date: Tue, 27 Aug 2002 03:17:18 GMT
Allan W wrote:
> > "...The process of dividing a source file's characters into
> > preprocessing tokens is context dependent.
> > [Example: see the handling of < within a #include preprocessing
> > directive. ]"
The word "context" here means the preprocessor context. The preprocessor
is not smart enough to know any other context.
> > Someone in another group has stated that the C++ compilers are wrong in
> > that vector<vector<int>> should not be considered an error, due to 2.1.3
> > and the "context" of what is being accomplished (terminate a nested
> > template definition using ">>" instead of "> >").
This person is in error. He is expecting the preprocessor to understand
what a nested template definition is. Template definitions are not
preprocessor concepts, they are compiler concepts, and so they don't
affect the preprocessor context the way '#include' or '/*' does.
> > So does the
> > "context-dependency" rule of 2.1.3 override what is stated in 2.4.2, or
> > are the compiler writers correct in using 2.4.2 to justify the parsing
> > error?
> The compiler writers are correct.
Yep. And it's because the preprocessor is context-dependent upon things
that are meaningful to the preprocessor. There is no feedback of the
*compiler* context to the preprocessor.
DS
Author: Paul McKenzie <paulm@dynasoft.com>
Date: Fri, 23 Aug 2002 20:08:39 GMT
Given the following:
---------------------------
#include <vector>
using namespace std;
vector<vector<int>> V; // Error
----------------------------
Why do compilers interpret the ">>" as an error? Of course the solution
is to place whitespace between the ">" characters. However, I came
across the following in the standard.
Lexical Conventions 2.4.2:
"If the input stream has been parsed into preprocessing tokens up to a
given character, the next preprocessing token is the longest sequence of
characters that could constitute a preprocessing token, even if that
would cause further lexical analysis to fail."
But before this rule is the following from 2.1.3:
"...The process of dividing a source file's characters into
preprocessing tokens is context dependent.
[Example: see the handling of < within a #include preprocessing
directive. ]"
Someone in another group has stated that the C++ compilers are wrong in
that vector<vector<int>> should not be considered an error, due to 2.1.3
and the "context" of what is being accomplished (terminate a nested
template definition using ">>" instead of "> >"). So does the
"context-dependency" rule of 2.1.3 override what is stated in 2.4.2, or
are the compiler writers correct in using 2.4.2 to justify the parsing
error?
Paul McKenzie
Author: "Adam H. Peterson" <ahp6@email.byu.edu>
Date: Fri, 23 Aug 2002 22:27:40 GMT
template<int N>
class X {
};
template<typename T>
class Y {
};
int main() {
Y< X< 8>>1 > > y;
}
As I understand it, the above translation unit is well-formed because the
">>" is legal within a template argument and the preprocessor is required to
make it a token. Because there are legal uses of ">>" within a template
argument, I think it would be ill-advised for a compiler to try to
second-guess whether ">>" is an operator or a template delimiter.
Author: kanze@gabi-soft.de (James Kanze)
Date: Mon, 26 Aug 2002 15:29:44 GMT
"Adam H. Peterson" <ahp6@email.byu.edu> wrote in message
news:<ak67gf$8n8$1@acs2.byu.edu>...
> template<int N>
> class X {
> };
> template<typename T>
> class Y {
> };
> int main() {
> Y< X< 8>>1 > > y;
> }
> As I understand it, the above translation unit is well-formed because
> the ">>" is legal within a template argument and the preprocessor is
> required to make it a token. Because there are legal uses of ">>"
> within a template argument, I think it would be ill-advised for a
> compiler to try to second-guess whether ">>" is an operator or a
> template delimiter.
That's certainly one very strong reason. Even without it, however...
The only places where the tokenization is context dependent today are
preprocessor context dependencies. I would hate to see this expanded.
What is possible is to "overload" >>, so that it would terminate two
nested templates as well as mean right shift. The actual meaning of the
token (but not the tokenization itself) would depend on the context, but
there is nothing new here; this is exactly what happens with a comma as
well.
This change would probably break your code above, too. While I doubt
that the idiom is very frequent, I don't think that the problem is
serious enough to justify potentially breaking existing code. The
work-around is trivial, and the compiler gives an error if you
accidentally forget the space. (If I were writing a compiler, I'd
probably add code to catch this special case and give an intelligent
error message.)
--
James Kanze mailto:jkanze@caicheuvreux.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
Author: eschmidt@safeaccess.com (EPerson)
Date: Mon, 26 Aug 2002 18:04:20 GMT
Paul McKenzie <paulm@dynasoft.com> wrote in message news:<3D668D10.7050001@dynasoft.com>...
> Given the following:
> ---------------------------
> #include <vector>
> using namespace std;
> vector<vector<int>> V; // Error
> ----------------------------
>
> Why do compilers interpret the ">>" as an error? Of course the solution
> is to place whitespace between the ">" characters. However, I came
> across the following in the standard.
>
> Lexical Conventions 2.4.2:
>
> "If the input stream has been parsed into preprocessing tokens up to a
> given character, the next preprocessing token is the longest sequence of
> characters that could constitute a preprocessing token, even if that
> would cause further lexical analysis to fail."
>
> But before this rule is the following from 2.1.3:
>
> "...The process of dividing a source file's characters into
> preprocessing tokens is context dependent.
> [Example: see the handling of < within a #include preprocessing
> directive. ]"
>
> Someone in another group has stated that the C++ compilers are wrong in
> that vector<vector<int>> should not be considered an error, due to 2.1.3
> and the "context" of what is being accomplished (terminate a nested
> template definition using ">>" instead of "> >"). So does the
> "context-dependency" rule of 2.1.3 override what is stated in 2.4.2, or
> are the compiler writers correct in using 2.4.2 to justify the parsing
> error?
>
> Paul McKenzie
>
2.1.3 does not specify *how* the division is dependent on the context.
That is specified in later passages.
Eric Schmidt