Topic: Standard for inclusion protection
Author: jln2@cec2.wustl.edu (Sammy D.)
Date: Thu, 7 Oct 1993 15:26:11 GMT
In article <1993Oct6.073530.784@eisner.decus.org> saunders@eisner.decus.org writes:
>
>Well, I'm not a compiler writer, so I don't know for sure. I just thought that
>an explicit directive would be easier and less error-prone, both for the
>compiler and for the programmer, than scanning the include file and guessing.
>
>Perhaps I'm mistaken, and g++ doesn't "guess". Could someone look at the code
>and tell us what the algorithm is? I'm concerned that the compiler might
>mistake the programmer's intentions and fail to include something it should.
>This would not be a problem with "#include_once", which explicitly states the
>programmer's intentions.
>
>John Saunders
>saunders@eisner.decus.org
You know, the version of g++ that I have (version 2.3.1) seems to have
a lot of "#pragma once" scattered about the include files. This seems
to be the best of both worlds, since ANSI states that the compiler should
ignore any #pragmas that it doesn't recognize.
Of course, back when pragmas were first introduced by Ada, people pointed
out that nothing would keep two different companies from using the same
name for two very different purposes.
Author: jamshid@emx.cc.utexas.edu (Jamshid Afshar)
Date: 7 Oct 1993 20:04:44 -0500
In article <1993Oct7.152611.24336@wuecl.wustl.edu>,
Sammy D. <jln2@cec2.wustl.edu> wrote:
>In article <1993Oct6.073530.784@eisner.decus.org> saunders@eisner.decus.org writes:
>>Well, I'm not a compiler writer, so I don't know for sure. I just thought that
>>an explicit directive would be easier and less error-prone, both for the
>>compiler and for the programmer, than scanning the include file and guessing.
A new keyword is NOT easier for "end" programmers, for library
writers, or for compiler writers. Maintaining or worrying about
backward compatibility would be a major hassle. Providing two
different ways to include files does not make things less complicated.
See the appended g++ docs about #import and `#pragma once' being
obsolete.
Performing the optimization is not error-prone nor does it involve any
guesswork. If the compiler sees that a #include'd file has a #ifndef
wrapper, the compiler can perform the optimization. There's no way
this optimization can alter the meaning of C/C++ code. If the header
does not have a #ifndef wrapper, the compiler doesn't try to perform
the optimization. Simple.
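For instance (the file and macro names here are invented for
illustration), a compiler doing this optimization would treat the
following two headers differently:

    /* widget.h -- wholly wrapped, so re-inclusion can be skipped */
    #ifndef WIDGET_H_SEEN
    #define WIDGET_H_SEEN
    struct Widget { int id; };
    #endif /* WIDGET_H_SEEN */

    /* trace.h -- deliberately NOT wrapped (like <assert.h>); its effect
       can change between inclusions, so it must be re-read every time */
    #undef TRACE
    #ifdef TRACING
    #define TRACE(msg) fprintf(stderr, "%s\n", msg)
    #else
    #define TRACE(msg) ((void)0)
    #endif

Only the first matches the wrapper pattern, so only it is skipped on a
second #include (once WIDGET_H_SEEN is seen to be defined). The second
never matches and is simply read again each time -- no guessing either way.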
>>Perhaps I'm mistaken, and g++ doesn't "guess". Could someone look at the code
>>and tell us what the algorithm is?
No need to delve into the source code; it's discussed in the manual.
>>I'm concerned that the compiler might
>>mistake the programmer's intentions and fail to include something it should.
Your concerns are unfounded.
>You know, the version of g++ that I have (version 2.3.1) seems to have
>a lot of "#pragma once" scattered about the include files.
Although `#pragma once' was obsoleted a while back (I think 2.0), GNU
might not have gotten around to updating all their headers.
>This seems
>to be the best of both worlds, since ANSI states that the compiler should
>ignore any #pragmas that it doesn't recognize.
>Of course, back when pragmas were first introduced by Ada, people pointed
>out that nothing would keep two different companies from using the same
>name for two very different purposes.
It's for exactly this reason that #pragmas are to be avoided. E.g.,
what if some compiler made `#pragma once' mean "On Custom Evaluations"
which altered the evaluation of expressions? Yeah, that's silly, but
so is proposing that compilers implement an extension that would be no
easier to implement than an equivalent, silent optimization,
especially when the extension would cause compiler users more grief.
I'm appending an excerpt from the `GNU C Preprocessor' manual. Please
read it carefully and completely before proposing or encouraging any
more unnecessary preprocessor extensions, inefficient or ineffectual
idioms, or doubts about the legality of the `gcc' optimization.
Sorry to be so testy, but this same debate happens every time this
topic comes up in a C or C++ newsgroup. I don't think people in this
thread are reading the articles to which they are replying because the
optimization has been explained multiple times and shouldn't be this
difficult to understand.
It's also frustrating to see a (rather tedious) idiom presented with
claims that it optimizes compile times when no effort has been made to
verify that the idiom does in fact speed up real compiles. In fact,
the idiom makes compiles slower under `gcc'. It *might* speed up
other compilers, but with factors like disk caching and the cost of
opening a separate file, it might not have any noticeable effect or it
might even slow things down.
Jamshid Afshar
jamshid@emx.cc.utexas.edu
---excerpt from "GNU C Preprocessor" info page---
Copyright 1987, 1989, 1991, 1992, 1993 Free Software Foundation, Inc.
[see GPL]
File: cpp.info, Node: Once-Only, Next: Inheritance, Prev: Include Operation, Up: Header Files
Once-Only Include Files
-----------------------
Very often, one header file includes another. It can easily result
that a certain header file is included more than once. This may lead
to errors, if the header file defines structure types or typedefs, and
is certainly wasteful. Therefore, we often wish to prevent multiple
inclusion of a header file.
The standard way to do this is to enclose the entire real contents
of the file in a conditional, like this:
#ifndef __FILE_FOO_SEEN__
#define __FILE_FOO_SEEN__
THE ENTIRE FILE
#endif /* __FILE_FOO_SEEN__ */
The macro `__FILE_FOO_SEEN__' indicates that the file has been
included once already; its name should begin with `__' to avoid
conflicts with user programs, and it should contain the name of the file
and some additional text, to avoid conflicts with other header files.
The GNU C preprocessor is programmed to notice when a header file
uses this particular construct and handle it efficiently. If a header
file is contained entirely in a `#ifndef' conditional, then it records
that fact. If a subsequent `#include' specifies the same file, and the
macro in the `#ifndef' is already defined, then the file is entirely
skipped, without even reading it.
There is also an explicit command to tell the preprocessor that it
need not include a file more than once. This is called `#pragma once',
and was used *in addition to* the `#ifndef' conditional around the
contents of the header file. `#pragma once' is now obsolete and should
not be used at all.
In the Objective C language, there is a variant of `#include' called
`#import' which includes a file, but does so at most once. If you use
`#import' *instead of* `#include', then you don't need the conditionals
inside the header file to prevent multiple execution of the contents.
`#import' is obsolete because it is not a well-designed feature. It
requires the users of a header file--the applications programmers--to
know that a certain header file should only be included once. It is
much better for the header file's implementor to write the file so that
users don't need to know this. Using `#ifndef' accomplishes this goal.
Author: saunders@eisner.decus.org
Date: 7 Oct 93 22:44:33 -0400
In article <9328023.20700@mulga.cs.mu.OZ.AU>, fjh@munta.cs.mu.OZ.AU (Fergus James HENDERSON) writes:
> saunders@eisner.decus.org writes:
>
>>fjh@munta.cs.mu.OZ.AU (Fergus James HENDERSON) writes:
>>
>>> From a compiler writer's point of view, I think that gcc's optimization
>>> would be just as easy to implement as #include_once.
>>> The only advantage of #include_once would be that it might make
>>> compiler vendors more likely to implement the optimization, but on the
>>> other hand it wouldn't be applied to existing code, and #include_once
>>> couldn't be used in portable code (at least not for 5 or ten years).
>>> I think you are better off lobbying your compiler vendor to do
>>> #include optimization rather than lobbying them to implement #include_once.
>>>
>>
>>Well, I'm not a compiler writer, so I don't know for sure. I just thought that
>>an explicit directive would be easier and less error-prone, both for the
>>compiler and for the programmer, than scanning the include file and guessing.
>>
>>Perhaps I'm mistaken, and g++ doesn't "guess". Could someone look at the code
>>and tell us what the algorithm is? I'm concerned that the compiler might
>>mistake the programmer's intentions and fail to include something it should.
>>This would not be a problem with "#include_once", which explicitly states the
>>programmer's intentions.
>
> The algorithm is that as it reads the files in, it checks whether
> they are of the form
> /* whitespace */
> #ifndef MACRO
> ...
> #endif
> /* whitespace */
> where the sections marked as /* whitespace */ can include comments.
> [If you want more detail about exactly how it does this, read the code
> yourself :-).] When it is about to include a file which matched this
> form in again, it checks whether MACRO is defined. If so, then clearly
> the conditional will fail and so the whole file will be whitespace
> after preprocessing.
I hope you meant to say it checks:
/* whitespace */
#ifndef MACRO
#define MACRO
...
#endif
/* whitespace */
Furthermore, I'd be concerned if it didn't check within the body for lines
which undefine MACRO. Also, I'd hope it would accept #if !defined(MACRO), and
that this may all be turned off with a compiler switch.
> --
> Fergus Henderson fjh@munta.cs.mu.OZ.AU
John Saunders
saunders@eisner.decus.org
Author: Robert Andrew Ryan <rr2b+@andrew.cmu.edu>
Date: Fri, 8 Oct 1993 10:56:04 -0400
Excerpts from netnews.comp.lang.c++: 7-Oct-93 Re: Standard for inclusion
protection, by saunders@eisner.decus.org (2344)
> I hope you meant to say it checks:
> /* whitespace */
> #ifndef MACRO
> #define MACRO
> ...
> #endif
> /* whitespace */
> Furthermore, I'd be concerned if it didn't check within the body for lines
> which undefine MACRO.
The #define is immaterial... The pre-processor remembers that if file
foo is #include'd again while macro BAR is defined, the inclusion is a
no-op. How or when the macro was set or unset doesn't matter.
-Rob
Author: jln2@cec2.wustl.edu (Sammy D.)
Date: Fri, 8 Oct 1993 15:35:36 GMT
In article <1993Oct7.224433.814@eisner.decus.org> saunders@eisner.decus.org writes:
>In article <9328023.20700@mulga.cs.mu.OZ.AU>, fjh@munta.cs.mu.OZ.AU (Fergus James HENDERSON) writes:
>> The algorithm is that as it reads the files in, it checks whether
>> they are of the form
>> /* whitespace */
>> #ifndef MACRO
>> ...
>> #endif
>> /* whitespace */
>> where the sections marked as /* whitespace */ can include comments.
>> [If you want more detail about exactly how it does this, read the code
>> yourself :-).] When it is about to include a file which matched this
>> form in again, it checks whether MACRO is defined. If so, then clearly
>> the conditional will fail and so the whole file will be whitespace
>> after preprocessing.
>
>I hope you meant to say it checks:
> /* whitespace */
> #ifndef MACRO
> #define MACRO
> ...
> #endif
> /* whitespace */
>
>Furthermore, I'd be concerned if it didn't check within the body for lines
>which undefine MACRO. Also, I'd hope it would accept #if !defined(MACRO), and
>that this may all be turned off with a compiler switch.
I posted the second version originally, but I think that the first
version is what is actually used. In this way, it doesn't matter where
MACRO gets defined or how many times it may be #undef'ed and re-#define'd.
As was pointed out, as long as the file hasn't changed between
#includes, the compiler can assume that if the file preprocesses to
whitespace, there is no reason to include it again.
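A small (invented) example of why the first form is enough: even when the
guard macro is defined somewhere other than in the header itself, checking
the macro at the point of re-inclusion still gives the right answer.

    /* extras.h -- matches the wrapper form but never defines its own guard */
    #ifndef SKIP_EXTRAS
    void extra_helpers(void);
    #endif

    /* user.c */
    #include "extras.h"     /* SKIP_EXTRAS not defined: contents are seen */
    #define SKIP_EXTRAS
    #include "extras.h"     /* SKIP_EXTRAS defined now: the file would
                               preprocess to whitespace, so skipping the
                               read changes nothing */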
Author: fjh@munta.cs.mu.OZ.AU (Fergus James HENDERSON)
Date: Sat, 9 Oct 1993 05:23:37 GMT
saunders@eisner.decus.org writes:
>fjh@munta.cs.mu.OZ.AU (Fergus James HENDERSON) writes:
>>
>> The algorithm is that as it reads the files in, it checks whether
>> they are of the form
>> /* whitespace */
>> #ifndef MACRO
>> ...
>> #endif
>> /* whitespace */
>> where the sections marked as /* whitespace */ can include comments.
>> [If you want more detail about exactly how it does this, read the code
>> yourself :-).] When it is about to include a file which matched this
>> form in again, it checks whether MACRO is defined. If so, then clearly
>> the conditional will fail and so the whole file will be whitespace
>> after preprocessing.
>
>I hope you meant to say it checks:
> /* whitespace */
> #ifndef MACRO
> #define MACRO
> ...
> #endif
> /* whitespace */
No, I didn't mean to say that - what I said above was quite correct,
and the optimization is perfectly safe.
>Furthermore, I'd be concerned if it didn't check within the body for lines
>which undefine MACRO.
Please read what I said above:
| When it is about to include a file which matched this
| form in again, it checks whether MACRO is defined.
>Also, I'd hope it would accept #if !defined(MACRO), and
Yes, that would be a good idea I guess, but it's not so important.
>that this may all be turned off with a compiler switch.
Why? If you want your compiler to run slower, just get a slower computer!
There's no reason at all to not apply this optimization.
--
Fergus Henderson fjh@munta.cs.mu.OZ.AU
Author: saunders@eisner.decus.org
Date: 6 Oct 93 07:35:30 -0400
In article <9327513.2660@mulga.cs.mu.OZ.AU>, fjh@munta.cs.mu.OZ.AU (Fergus James HENDERSON) writes:
> saunders@eisner.decus.org writes:
>
>>fjh@munta.cs.mu.OZ.AU (Fergus Henderson) writes:
>>> saunders@eisner.decus.org writes:
>>>
>>>>fjh@munta.cs.mu.OZ.AU (Fergus Henderson) writes:
>>>>> saunders@eisner.decus.org writes:
>>>>>
>>>>>>Personally, the reason _I_ care about the guards on library headers is
>>>>>>that the compiler has to
>>>>>>
>>>>>>4) Output source-line correlation records to the object file so that the
>>>>>> debugger can find the source line corresponding to the code address
>>>>>>
>>>>>>And do this for each of the times the file is included.
>>>
>>> After the first time, step 4 will also be a no-op, since there
>>> won't be any code generated in a header file.
>>
>>"int i = 1;" is "code" in the compilers I use, at least in debug mode.
>
> Yes, it's a definition. It shouldn't occur in header files.
> If it were to occur twice in one compilation unit, it would be an error.
>
>>Sometimes a member function prototype generates an interlude, etc.
>
> If there is a guard, then the file will be #ifdef'd out after the
> first time. If there's no guard, then declarations that occur twice
> should not cause anything extra to be written out, and definitions
> that occur twice like
> int i = 1;
> int i = 1;
> are illegal.
>
You're right, of course. The only time a header should be generating code is if
it contains the definition of a static.
>>> But even if you do want to do it, I don't see why you need the library
>>> developers help. You should be able to write a simple program to
>>> create the guard header files automatically.
>>
>>No. That doesn't change the includes within the library vendors' headers, which
>>is where a large part of the problem lies. For example, the DEC C++ compiler on
>>OpenVMS/VAX includes cxxl.hxx in just about every library header. Although I
>>have written a simple macro to create "my_string.hxx", etc., if I use, e.g.,
>>string.hxx and iostreams.hxx, I now have cxxl.hxx read twice. If cxxl.hxx had
>>been one of my header header files, four lines would have been read twice
>>rather than several hundred.
>
> Don't give your guards names like my_string.hxx, just call them string.hxx.
> Rename the vendor's header files to vendor_string.hxx (you can just
> create a directory of symbolic links to the original unmodified
> vendor header directory), and have your guard string.hxx look like
> #ifndef STRING_HXX_INCLUDED
> #define STRING_HXX_INCLUDED
> #include <vendor_string.hxx>
> #endif
> Then just ensure that your guards get included in preference to the
> vendor's header files in the original unmodified vendor header directory
> by setting the appropriate compiler option.
>
Something like this could work on VMS, though under no circumstances would I
rename the header files (what happens when the compiler or OS is upgraded?),
and VMS doesn't quite have symbolic links. Still, I could manipulate the header
search order so that:
#include <string.hxx>
in my code refers to my string.hxx, which could:
#include <cxx$library_include:string.hxx>
>>Finally, I want to make it clear: I'd prefer a "#include_once" directive added
>>to the language. The programmer could use this directive to indicate explicitly
>>what compilers like gcc apparently have to guess at: that it is the intention
>>of the programmer that this file be included only once. Hackery like inclusion
>>guards should not be necessary.
>
> From a compiler writer's point of view, I think that gcc's optimization
> would be just as easy to implement as #include_once.
> The only advantage of #include_once would be that it might make
> compiler vendors more likely to implement the optimization, but on the
> other hand it wouldn't be applied to existing code, and #include_once
> couldn't be used in portable code (at least not for 5 or ten years).
> I think you are better off lobbying your compiler vendor to do
> #include optimization rather than lobbying them to implement #include_once.
>
Well, I'm not a compiler writer, so I don't know for sure. I just thought that
an explicit directive would be easier and less error-prone, both for the
compiler and for the programmer, than scanning the include file and guessing.
Perhaps I'm mistaken, and g++ doesn't "guess". Could someone look at the code
and tell us what the algorithm is? I'm concerned that the compiler might
mistake the programmer's intentions and fail to include something it should.
This would not be a problem with "#include_once", which explicitly states the
programmer's intentions.
> --
> Fergus Henderson fjh@munta.cs.mu.OZ.AU
John Saunders
saunders@eisner.decus.org
Author: fjh@munta.cs.mu.OZ.AU (Fergus James HENDERSON)
Date: Thu, 7 Oct 1993 13:59:41 GMT
saunders@eisner.decus.org writes:
>fjh@munta.cs.mu.OZ.AU (Fergus James HENDERSON) writes:
>
>> From a compiler writer's point of view, I think that gcc's optimization
>> would be just as easy to implement as #include_once.
>> The only advantage of #include_once would be that it might make
>> compiler vendors more likely to implement the optimization, but on the
>> other hand it wouldn't be applied to existing code, and #include_once
>> couldn't be used in portable code (at least not for 5 or ten years).
>> I think you are better off lobbying your compiler vendor to do
>> #include optimization rather than lobbying them to implement #include_once.
>>
>
>Well, I'm not a compiler writer, so I don't know for sure. I just thought that
>an explicit directive would be easier and less error-prone, both for the
>compiler and for the programmer, than scanning the include file and guessing.
>
>Perhaps I'm mistaken, and g++ doesn't "guess". Could someone look at the code
>and tell us what the algorithm is? I'm concerned that the compiler might
>mistake the programmer's intentions and fail to include something it should.
>This would not be a problem with "#include_once", which explicitly states the
>programmer's intentions.
The algorithm is that as it reads the files in, it checks whether
they are of the form
/* whitespace */
#ifndef MACRO
...
#endif
/* whitespace */
where the sections marked as /* whitespace */ can include comments.
[If you want more detail about exactly how it does this, read the code
yourself :-).] When it is about to include a file which matched this
form in again, it checks whether MACRO is defined. If so, then clearly
the conditional will fail and so the whole file will be whitespace
after preprocessing.
The nature of the "guess" is such that it won't ever fail to include
something it should. It could conceivably fail to make the
optimization when the file doesn't actually need to be read twice, but
the worst possible result of this is slightly higher compile times, not
incorrect code. [Currently there is a limitation (bug) with the way it
checks whitespace at the start and end - C++ comments don't count -
which means that it fails to make the optimization in some cases where
it ought to.]
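The bookkeeping this takes is tiny: a table recording, for each header
that matched the pattern, which macro its #ifndef tested. The following
is only a sketch of the idea (illustrative C, not the actual cpp source;
the macro table here is a stand-in):

    #include <string.h>

    /* For each file found to be wholly wrapped in "#ifndef MACRO ... #endif",
       remember the file name and the macro it tested. */
    static const char *guard_file[1024];
    static const char *guard_macro[1024];
    static int nguards;

    /* Stand-in for the preprocessor's macro table. */
    static const char *defined_macros[4096];
    static int nmacros;

    static int macro_is_defined(const char *name)
    {
        int i;
        for (i = 0; i < nmacros; i++)
            if (strcmp(defined_macros[i], name) == 0)
                return 1;
        return 0;
    }

    /* Called when a file is about to be #include'd a second time. */
    static int must_read_again(const char *file)
    {
        int i;
        for (i = 0; i < nguards; i++)
            if (strcmp(guard_file[i], file) == 0)
                /* Skip only if the guard macro is defined right now;
                   the file would preprocess to whitespace anyway. */
                return !macro_is_defined(guard_macro[i]);
        return 1;   /* no wrapper on record: always re-read */
    }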
Note that gcc used to support #import which was exactly the same as
your #include_once. The use of #import is now discouraged (you get a
big long warning :-) for reasons discussed above.
--
Fergus Henderson fjh@munta.cs.mu.OZ.AU
Author: fjh@munta.cs.mu.OZ.AU (Fergus James HENDERSON)
Date: Wed, 29 Sep 1993 09:42:12 GMT
saunders@eisner.decus.org writes:
>fjh@munta.cs.mu.OZ.AU (Fergus Henderson) writes:
>> saunders@eisner.decus.org writes:
>>
>>>Personally, the reason _I_ care about the guards on library headers is
>>>that the compiler has to
>>>
>>>1) Open the file
>>>2) Read all records in the file, searching for the matching #endif
>>>3) Output all lines to the listing file, if requested
>>>4) Output source-line correlation records to the object file so that the
>>> debugger can find the source line corresponding to the code address
>>>5) Close the file
>>>
>>>And do this for each of the times the file is included.
>>
>> The above is factually incorrect - as has been explained by other posters,
>> there are compilers that don't do any of the above steps.
>
>I believe I have read the other posts. They describe what some compilers do. It
>was my belief that my statements still hold for most compilers, and I haven't
>heard anything to suggest otherwise.
Well, I took you a bit literally - you did say that "the compiler _has_
to...", and I assumed that you meant that this was true for all
compilers. My apologies if this assumption was not correct.
Your statements do hold for most compilers, but not for all of them, and
not for two of the most popular compilers (GNU & Borland).
Step 3 will usually be a no-op.
After the first time, step 4 will also be a no-op, since there
won't be any code generated in a header file.
>> But why do you care what the _compiler_ has to do anyway?
>> Seems to me that it's more important to minimize what the _programmer_
>> has to do.
>
>I care what the compiler has to do because the compiler takes time and disk
>space to do it. I'd rather not wait for the compiler to waste my time, nor do I
>like to waste disk space.
Well, speaking of wasting disk space, there's nothing like a bunch of
tiny files for using up all those free blocks on your hard drive.
I think your proposal for a lot of tiny header files would be wasteful
of disk space.
>In any case, if my suggestion were followed, the only programmers who would
>have to do anything are the developers of the libraries, who, if they followed
>my suggestion, would have to execute a macro of some sort to create files
>containing the guts of their headers, and to replace the original headers with
>guards and an additional include.
With my implementation (gcc), this would actually slow things down
rather than speed them up. So I don't think you can make it a standard.
But even if you do want to do it, I don't see why you need the library
developers help. You should be able to write a simple program to
create the guard header files automatically.
--
Fergus Henderson fjh@munta.cs.mu.OZ.AU
Author: saunders@eisner.decus.org
Date: 30 Sep 93 00:08:52 -0400
In article <9327219.27078@mulga.cs.mu.OZ.AU>, fjh@munta.cs.mu.OZ.AU (Fergus James HENDERSON) writes:
> saunders@eisner.decus.org writes:
>
>>fjh@munta.cs.mu.OZ.AU (Fergus Henderson) writes:
>>> saunders@eisner.decus.org writes:
>>>
>>>>Personally, the reason _I_ care about the guards on library headers is
>>>>that the compiler has to
>>>>
>>>>1) Open the file
>>>>2) Read all records in the file, searching for the matching #endif
>>>>3) Output all lines to the listing file, if requested
>>>>4) Output source-line correlation records to the object file so that the
>>>> debugger can find the source line corresponding to the code address
>>>>5) Close the file
>>>>
>>>>And do this for each of the times the file is included.
>>>
> Step 3 will usually be a no-op.
> After the first time, step 4 will also be a no-op, since there
> won't be any code generated in a header file.
>
"int i = 1;" is "code" in the compilers I use, at least in debug mode.
Sometimes a member function prototype generates an interlude, etc.
>>In any case, if my suggestion were followed, the only programmers who would
>>have to do anything are the developers of the libraries, who, if they followed
>>my suggestion, would have to execute a macro of some sort to create files
>>containing the guts of their headers, and to replace the original headers with
>>guards and an additional include.
>
> With my implementation (gcc), this would actually slow things down
> rather than speed them up. So I don't think you can make it a standard.
>
I don't think I originated the term "standard" as applied to this thread. This
is something I'd suggest the library vendors implement assuming there is no
"#include_once" directive added to the language.
> But even if you do want to do it, I don't see why you need the library
> developers help. You should be able to write a simple program to
> create the guard header files automatically.
>
No. That doesn't change the includes within the library vendors' headers, which
is where a large part of the problem lies. For example, the DEC C++ compiler on
OpenVMS/VAX includes cxxl.hxx in just about every library header. Although I
have written a simple macro to create "my_string.hxx", etc., if I use, e.g.,
string.hxx and iostreams.hxx, I now have cxxl.hxx read twice. If cxxl.hxx had
been one of my header header files, four lines would have been read twice
rather than several hundred.
Finally, I want to make it clear: I'd prefer a "#include_once" directive added
to the language. The programmer could use this directive to indicate explicitly
what compilers like gcc apparently have to guess at: that it is the intention
of the programmer that this file be included only once. Hackery like inclusion
guards should not be necessary.
> --
> Fergus Henderson fjh@munta.cs.mu.OZ.AU
John Saunders
saunders@eisner.decus.org
Author: fjh@munta.cs.mu.OZ.AU (Fergus James HENDERSON)
Date: Sat, 2 Oct 1993 03:56:48 GMT
saunders@eisner.decus.org writes:
>fjh@munta.cs.mu.OZ.AU (Fergus Henderson) writes:
>> saunders@eisner.decus.org writes:
>>
>>>fjh@munta.cs.mu.OZ.AU (Fergus Henderson) writes:
>>>> saunders@eisner.decus.org writes:
>>>>
>>>>>Personally, the reason _I_ care about the guards on library headers is
>>>>>that the compiler has to
>>>>>
>>>>>4) Output source-line correlation records to the object file so that the
>>>>> debugger can find the source line corresponding to the code address
>>>>>
>>>>>And do this for each of the times the file is included.
>>
>> After the first time, step 4 will also be a no-op, since there
>> won't be any code generated in a header file.
>
>"int i = 1;" is "code" in the compilers I use, at least in debug mode.
Yes, it's a definition. It shouldn't occur in header files.
If it were to occur twice in one compilation unit, it would be an error.
>Sometimes a member function prototype generates an interlude, etc.
If there is a guard, then the file will be #ifdef'd out after the
first time. If there's no guard, then declarations that occur twice
should not cause anything extra to be written out, and definitions
that occur twice like
int i = 1;
int i = 1;
are illegal.
>> But even if you do want to do it, I don't see why you need the library
>> developers help. You should be able to write a simple program to
>> create the guard header files automatically.
>
>No. That doesn't change the includes within the library vendors' headers, which
>is where a large part of the problem lies. For example, the DEC C++ compiler on
>OpenVMS/VAX includes cxxl.hxx in just about every library header. Although I
>have written a simple macro to create "my_string.hxx", etc., if I use, e.g.,
>string.hxx and iostreams.hxx, I now have cxxl.hxx read twice. If cxxl.hxx had
>been one of my header header files, four lines would have been read twice
>rather than several hundred.
Don't give your guards names like my_string.hxx, just call them string.hxx.
Rename the vendor's header files to vendor_string.hxx (you can just
create a directory of symbolic links to the original unmodified
vendor header directory), and have your guard string.hxx look like
#ifndef STRING_HXX_INCLUDED
#define STRING_HXX_INCLUDED
#include <vendor_string.hxx>
#endif
Then just ensure that your guards get included in preference to the
vendor's header files in the original unmodified vendor header directory
by setting the appropriate compiler option.
>Finally, I want to make it clear: I'd prefer a "#include_once" directive added
>to the language. The programmer could use this directive to indicate explicitly
>what compilers like gcc apparently have to guess at: that it is the intention
>of the programmer that this file be included only once. Hackery like inclusion
>guards should not be necessary.
Author: plindsay@qds.com (Phillip A. Lindsay)
Date: Thu, 23 Sep 93 01:39:00 GMT
Being a typical victim of several different C++ compiler vendors, I
am wondering when different vendors will agree on a standard
convention for include file defines. I know some kind soul will flame
me for referencing the defines, but my problem stems from third party
libraries referencing the defines. Presently (based on the different
platforms and compilers I am using) I was forced to add ugliness like this
to parts of USL's standard components library:
#if defined(IOSTREAMH) || defined(__IOSTREAM_H) || defined(__iostream_h)
The first define is assumed in AT&T code, the second in Borland's,
and the last in IBM's C++. I would have assumed that since IBM
licensed the USL stream classes, they would stick with USL's
convention--that obviously wasn't the case.
I also want to reference the include defines to help my compile times
by eliminating redundant opens:
#if !defined(__FOOBAR_H)
#include <foobar.h>
#endif
Can we get some vendors to agree on a standard?
Can the ANSI committee do something?
Should defines be compiler generated?
=======================================================================
Phillip A. Lindsay                    All opinions expressed are
Quantitative Data Systems, Inc.       my own and do not reflect
9500 Toledo Way                       the opinion of my employer.
Irvine, CA 92718-1806
Ph. 714-588-5144 / Fax 714-588-5181   Internet: plindsay@qds.com
Author: jamshid@emx.cc.utexas.edu (Jamshid Afshar)
Date: 23 Sep 1993 21:58:51 -0500
In article <1993Sep23.013900.4537@qds.com>,
Phillip A. Lindsay <plindsay@qds.com> wrote:
>Being a typical victim of several different C++ compiler vendors, I
>am wondering when different vendors will agree on a standard
>convention for include file defines.
ANSI C doesn't even require that header files exist. For example, the
compiler could just handle the standard headers with internal logic like:
    extern SymbolTable stdlib_symbols;
    ...
    if (include_file == "<stdlib.h>" && !stdlib_loaded) {
        current_symbols += stdlib_symbols;
        stdlib_loaded = 1;
    }
>I know some kind soul will flame
>me for referencing the defines, but my problem stems from third party
>libraries referencing the defines.
What's the problem? Just let the 3rd party libraries #include the
standard headers, even if that means a file will be read and ignored.
Besides, once <thirdparty.h> gets included, it won't get included
again (because it has a #ifndef wrapper) so why worry about what it
#includes? The only way I can see any gains is if <thirdparty1.h>
includes standard headers and it also includes <thirdparty2.h> which
also includes standard headers. In any case, don't worry about it
(see below).
>I also want to reference the include defines to help my compile times
>by eliminating redundant opens:
>#if !defined(__FOOBAR_H)
>#include <foobar.h>
>#endif
First off, don't use any identifiers with a leading underscore or a
double underscore. Such identifiers are reserved for the
implementation. Second, have you ever timed a compile to see how much
of a difference #include wrappers make? Is it worth tripling the
size of the #include section in each .cc and .h file? Third,
compilers can perform this optimization by themselves by maintaining a
simple table containing header files and the identifier the header
used in their #ifndef wrapper (if there is a wrapper). gcc 2.x
performs this optimization (which makes their `#pragma once'
obsolete). Finally, we'd all be better off getting vendors to
implement this optimization. Better still, encourage them to implement
precompiled headers. I think all the latest PC compilers do, as does
ObjectCenter on UNIX. Because *so* much time is spent parsing C++
headers (they're as complex as .cc files but reparsed many times
during a `make'), precompiled headers can save many times more time
(?) than piddly #include wrappers.
>Can we get some vendors to agree on a standard?
I doubt it.
>Can the ANSI committee do something?
No; the whole concept is outside their jurisdiction (scope?).
>Should defines be compiler generated?
Do you mean should the compiler automatically not #include files twice?
<assert.h> is allowed to be #include'd multiple times with different
effects.
Jamshid Afshar
jamshid@emx.cc.utexas.edu
Author: tob@world.std.com (Tom O Breton)
Date: Fri, 24 Sep 1993 19:32:23 GMT
jamshid@emx.cc.utexas.edu (Jamshid Afshar) writes:
> Second, have you ever timed a compile to see how much of a difference
> #include wrappers make? Is it worth tripling the size of the #include
> section in each .cc and .h file?
Only informally, by sitting through hundreds of compiles. The wrapped ones:
#ifndef MY_CAKE_HPP
#include "MY_CAKE.HPP"
#endif
take negligible time if they do not fire. I do agree that the idiom is
clumsy, for something so frequent. Worth changing? Only if it's *real*
easy to change.
> Do you mean should the compiler automatically not #include files twice?
> <assert.h> is allowed to be #include'd multiple times with different
> effects.
Absolutely! I do NOT want compilers to do this automatically. That would
break some good code, for instance MINE.
For enums that address arrays, I use an idiom where I include a header
file *twice*, with pretty little #define's making it come out as an enum
the first time, and as data the second time. It's very useful, since it
keeps 'em perfectly in sync.
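Presumably something along these lines (the file and names below are
invented, not Tom's actual code): the header is written entirely in terms
of one macro, and each inclusion supplies a different definition of it.

    /* colors.def -- meant to be included twice, so it has no guard */
    COLOR(Red)
    COLOR(Green)
    COLOR(Blue)

    /* colors.c */
    #define COLOR(name) color_##name,
    enum Color {                          /* first inclusion: an enum */
    #include "colors.def"
        color_count
    };
    #undef COLOR

    #define COLOR(name) #name,
    static const char *color_names[] = {  /* second inclusion: parallel data */
    #include "colors.def"
    };
    #undef COLOR

A compiler that silently refused to re-read colors.def would break this,
which is exactly why the gcc optimization applies only to headers that
carry an #ifndef wrapper.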
Tom
--
The Tom spreads its huge, scaly wings and soars into the sky...
(tob@world.std.com, TomBreton@delphi.com)
Author: saunders@eisner.decus.org
Date: 25 Sep 93 05:28:17 -0400
Personally, the reason _I_ care about the guards on library headers is that the
compiler has to
1) Open the file
2) Read all records in the file, searching for the matching #endif
3) Output all lines to the listing file, if requested
4) Output source-line correlation records to the object file so that the
debugger can find the source line corresponding to the code address
5) Close the file
And do this for each of the times the file is included.
I'd like to see the library vendors, and the standard libraries do the
following:
In, say, string.hpp:
#ifndef _string_hpp_
#define _string_hpp_
#include "string.hpx"
#endif
And have string.hpx (pick your favorite extension) contain the current contents
of string.hpp, less the guards.
This way, I'm only getting four lines times the number of times a library
header includes string.hpp.
I got so tired of this in my project that I wrote a macro to go through all my
code and extract the guts of my .hxx files to .hhx files and create new .hxx
files with just the four lines, and to create equivalents to the system
headers.
This still doesn't solve the problem that the system headers themselves include
other system headers.
I have several modules with small classes where my code doesn't start until
line number 10000 in the listing, and only extends as far as line 10020!
John Saunders
saunders@eisner.decus.org
Author: jln2@cec2.wustl.edu (Jerry L Novak)
Date: Sat, 25 Sep 1993 19:51:38 GMT
In article <CDvIA0.196@world.std.com> tob@world.std.com writes:
>jamshid@emx.cc.utexas.edu (Jamshid Afshar) writes:
>> Do you mean should the compiler automaticly not #include files twice?
>> <assert.h> is allowed to be #include'd multiple times with different
>> effects.
>
>Absolutely! I do NOT want compilers to do this automatically. That would
>break some good code, for instance MINE.
Gnu C++ (and gcc for that matter) look at include files to see if
they match the format:
[whitespace and comments only]
#ifndef XYZ
...
#define XYZ
...
#endif
[whitespace and comments only]
If so, they note the file's name and ignore future attempts to include
it. Supposedly, this speeds up some compiles appreciably. I'm not
sure if they tie this to the defined symbol so that an #undef will
cause following #includes to work again. In any event, this seems like
an excellent idea for other compilers to adopt.
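According to the cpp documentation quoted elsewhere in this thread, it is
tied to the symbol: the file is skipped only if the recorded macro is still
defined at the point of re-inclusion. So (hypothetical file name) a
sequence like this behaves as you'd hope:

    #include "table.h"   /* first time: read normally, TABLE_H gets defined */
    #include "table.h"   /* TABLE_H still defined: skipped without reopening */
    #undef TABLE_H
    #include "table.h"   /* TABLE_H no longer defined: read again in full */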
Author: fjh@munta.cs.mu.OZ.AU (Fergus James HENDERSON)
Date: Sun, 26 Sep 1993 12:20:34 GMT
saunders@eisner.decus.org writes:
>Personally, the reason _I_ care about the guards on library headers is that the
>compiler has to
>
>1) Open the file
>2) Read all records in the file, searching for the matching #endif
>3) Output all lines to the listing file, if requested
>4) Output source-line correlation records to the object file so that the
> debugger can find the source line corresponding to the code address
>5) Close the file
>
>And do this for each of the times the file is included.
The above is factually incorrect - as has been explained by other posters,
there are compilers that don't do any of the above steps.
But why do you care what the _compiler_ has to do anyway?
Seems to me that it's more important to minimize what the _programmer_
has to do.
This issue seems to come up often. Perhaps it would be appropriate
to include something in the FAQ list about it.
--
Fergus Henderson fjh@munta.cs.mu.OZ.AU
Author: saunders@eisner.decus.org
Date: 26 Sep 93 17:36:45 -0400
In article <9326922.3499@mulga.cs.mu.OZ.AU>, fjh@munta.cs.mu.OZ.AU (Fergus James HENDERSON) writes:
> saunders@eisner.decus.org writes:
>
>>Personally, the reason _I_ care about the guards on library headers is that the
>>compiler has to
>>
>>1) Open the file
>>2) Read all records in the file, searching for the matching #endif
>>3) Output all lines to the listing file, if requested
>>4) Output source-line correlation records to the object file so that the
>> debugger can find the source line corresponding to the code address
>>5) Close the file
>>
>>And do this for each of the times the file is included.
>
> The above is factually incorrect - as has been explained by other posters,
> there are compilers that don't do any of the above steps.
>
I believe I have read the other posts. They describe what some compilers do. It
was my belief that my statements still hold for most compilers, and I haven't
heard anything to suggest otherwise.
> But why do you care what the _compiler_ has to do anyway?
> Seems to me that it's more important to minimize what the _programmer_
> has to do.
>
I care what the compiler has to do because the compiler takes time and disk
space to do it. I'd rather not wait for the compiler to waste my time, nor do I
like to waste disk space.
In any case, if my suggestion were followed, the only programmers who would
have to do anything are the developers of the libraries, who, if they followed
my suggestion, would have to execute a macro of some sort to create files
containing the guts of their headers, and to replace the original headers with
guards and an additional include.
> This issue seems to come up often. Perhaps it would be appropriate
> to include something in the FAQ list about it.
>
> --
> Fergus Henderson fjh@munta.cs.mu.OZ.AU
John Saunders
saunders@eisner.decus.org