Topic: Why are c++ compilers soooo slow ????


Author: "Richard Chandler" <rchandler@ndsuk.com>
Date: 1998/12/04

Andre Kaufmann wrote in message <73uosk$o9s$1@news01.btx.dtag.de>...
>
>First, thanks for your statements, Jack...
>
>>
>>Personally I think this is a truly obnoxious idea.  Speed of compiling a
>>program, just like the speed and size of the final executable, is a
>>quality of implementation issue.
>>
>
>sure, but every time a frequently used header file is touched, I can go
>coffee drinking ....
>because nearly the whole project has to be recompiled -


Use pre-compiled headers: the expensive work is done only once, for the first
module compiled (usually stdafx.cpp, with stdafx.h as the header, in the
anonymous system we're referring to), and then the rest just scream along
(well, shout maybe). Most of the time you get a fast build and only
occasionally need to rebuild the whole lot. Consider re-jigging your header
files too; if they're wasting so much time, it's worth doing.

They are easy to use and well worth the investment of an afternoon getting
it all working. Generally, on the cl command line, put /Yc"stdafx.h" on the
module you create it with (usually stdafx.cpp) and /Yu"stdafx.h" on all
the others, remembering to #include "stdafx.h" in every module that uses it.
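
A rough sketch of the layout described above (the switch spellings are the
usual cl ones, but check your compiler's documentation; the headers pulled
into stdafx.h are just examples):

    // stdafx.h -- the single precompiled header; put the big,
    // rarely-changing includes here.
    #ifndef STDAFX_H
    #define STDAFX_H
    #include <windows.h>
    #include <vector>
    #include <string>
    #endif

    // stdafx.cpp -- exists only so the .pch file gets built from it.
    #include "stdafx.h"

    // every other .cpp in the project starts with the same include.
    #include "stdafx.h"

    cl /c /Yc"stdafx.h" stdafx.cpp    -- builds the .pch once
    cl /c /Yu"stdafx.h" other.cpp     -- every other module reuses it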

Richard.









Author: AllanW@my-dejanews.com
Date: 1998/12/03
In article <000001be1e31$0163efb0$bbe4b78f@soliton.sc.intel.com>,
  "Geoff Fortytwo" <geoff.d.fortytwo@intel.com> wrote:
>
> I've read all the articles in this discussion and am surprised that no one
> has mentioned IBM's VisualAge C++ 4.0 compiler.

It has been mentioned in another post.

But I'd like to pull attention back to the original premise of this
thread. The original poster asks why C++ compilers are slow, and then
speculates on some possible reasons.

But are C++ compilers always or usually slower than compilers for
other languages? That hasn't been my experience. It seems to me that
programs of similar complexity usually have similar compile times,
when all other things are equal (same speed of computer, same
compiler vendor, etc.).

We've all assumed that the original poster was well-versed on the
topic of compile times. I have no evidence to suggest otherwise.
But just SUPPOSE that the questioner was a beginner. In this case,
it's possible that he didn't understand the difference between a
compiler and an interpreter, and/or ignored other issues directly
attributable to the program source. I've met beginners to whom
it's obvious that BASIC is much faster than C++, based only on
the observation that pressing the "RUN" button in BASIC begins
execution, while pressing the "RUN" button in C++ causes some
messages to scroll by first. In fact, the messages that scrolled
by were evidence that the program had been re-compiled.

--
AllanW@my-dejanews.com is a "Spam Magnet" -- never read.
Please reply in USENET only, sorry.









Author: cstromberger@psi.bellhowell.com.com.com.com (Chris Stromberger)
Date: 1998/12/03
On 2 Dec 1998 21:08:00 GMT, "Geoff Fortytwo"
<geoff.d.fortytwo@intel.com> wrote:

>I've read all the articles in this discussion and am surprised that no one
>has mentioned IBM's VisualAge C++ 4.0 compiler. I read an article about it
>in Dr. Dobbs about a year ago (I don't recall which issue it was offhand)

You're right on the mark -- it was in the Dec. 97 issue.  It's a very
interesting article.








Author: Francis Glassborow <francis@robinton.demon.co.uk>
Date: 1998/12/03
In article <000001be1e31$0163efb0$bbe4b78f@soliton.sc.intel.com>, Geoff
Fortytwo <geoff.d.fortytwo@intel.com> writes
>Unfortunately, IBM is apparently only making VisualAge C++ 4.0 for AIX (At
>least, that's true as far as I know. I'm not associated with IBM in any
>way.). Perhaps the people developing egcs could think about doing something
>along these lines.

I believe a Windows NT/95/98 version will be released in the first
quarter of 1999.


Francis Glassborow      Chair of Association of C & C++ Users
64 Southfield Rd
Oxford OX4 1PA          +44(0)1865 246490
All opinions are mine and do not represent those of any organisation








Author: deepblack@geocities.com (Luis Coelho)
Date: 1998/12/03
>    // ... Comments (optional) ...
>    #ifndef some_symbol   // or #if !defined(some_symbol)
>    // ... Comments and/or Content (optional) ...
>    #define some_symbol   // Note: NOT in a more-deeply-nested #if...#endif
>    // ... Comments and/or Content (optional) ...
>    #endif
>    // ... Comments (optional) ...
>Once this pattern is recognized, the compiler could treat all occurances of
>    #include "somefile"
>as if they read
>    #ifndef some_symbol
>    #include "somefile"
>    #endif
>
>I don't know of any compilers that currently do this automatically.

cpp (the GNU C preprocessor, used by GCC and G++) does this,
provided the whole file is wrapped in the #ifndef block.

Regards,
Luis Coelho.
C++ Programming Language, 3rd Ed. by B. Stroustrup. My exercise answers at:
http://www.geocities.com/SiliconValley/Way/3972/index.html








Author: Pete Becker <petebecker@acm.org>
Date: 1998/12/02
Siemel Naran wrote:
>
> Am I right in thinking that external #defines are rare (like NDEBUG
> in <cassert>)?

No, they're quite common. You see it all over most implementations of
the C standard library. The rule in C is that no standard header can
#include any other standard header, but there are things that are
supposed to be defined in several headers. One common solution is to put
identical definitions in each header, with guards:

#ifndef _NULL
#define _NULL
#define NULL ((void*)0)
#endif

If this occurs in two headers that are both #included in the same
translation unit, one of those headers will actually define NULL and the
other one won't.

Although there's less need for this sort of thing in C++, because C++
allows standard headers to include other standard headers, the same
thing still occurs - with header guards, for example.
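
To illustrate the mechanism (a.h and b.h here are made-up names, standing in
for two library headers that both need NULL):

    /* a.h */
    #ifndef _NULL
    #define _NULL
    #define NULL ((void*)0)
    #endif
    /* ... declarations that use NULL ... */

    /* b.h -- carries the identical guarded block */
    #ifndef _NULL
    #define _NULL
    #define NULL ((void*)0)
    #endif
    /* ... other declarations that use NULL ... */

    /* a translation unit that includes both: whichever header comes
       first actually defines NULL; the guard turns the second copy
       into a no-op, so there is no redefinition. */
    #include "a.h"
    #include "b.h"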

--
Pete Becker
Dinkumware, Ltd.
http://www.dinkumware.com








Author: "Geoff Fortytwo" <geoff.d.fortytwo@intel.com>
Date: 1998/12/02
I've read all the articles in this discussion and am surprised that no one
has mentioned IBM's VisualAge C++ 4.0 compiler. I read an article about it
in Dr. Dobbs about a year ago (I don't recall which issue it was offhand)
and I remember getting really excited about it. Here's the gist of what I
remember about it:

They decided that the current way of linking was outdated and caused a lot
of problems. Rather than compile each file individually and then link all
the resulting object files together, this new model compiles on an entity by
entity basis. Order of declaration no longer matters and header files are no
longer necessary. There is only 1 translation unit.

Before namespaces this would be a problem because of name clashes, but with
namespaces there need not be a problem. If, in the old model, we wanted to
use FOO we'd include FOO.H in various other files and compile and link
FOO.CPP into the executable. With this new model we'd just add FOO.CPP to
the project and refer to its functions and objects using whatever namespace
the functions were declared in.

This solves several issues. Number one, no header files means no need to
recompile every source file that includes a header that had a small change
made to it. Number two, no need to worry about where templates are declared
any more. No header files means no need to worry about putting all the
code for the templates in a header, resulting in major recompilation times
every time the implementation of the template is changed (the source files
should really only be recompiled when the interface changes, not the
implementation).

Altogether, besides reducing compilation and recompilation times
dramatically, it allows the programmer to stop worrying about issues that
revolve around files and lets the compiler do the work for the programmer.

Of course, this new model does not follow the ISO C++ standard since the ISO
C++ standard includes all sorts of information about translation units, so a
compiler that does this can never claim to be fully ISO C++ conforming
unless it supports both models (IBM's VisualAge C++ 4.0 apparently supports
both models). But this shouldn't be a problem since it only gets rid of a
lot of the nuisances that waste programmers' time.

As I understand it then, there'd be no need for header files (#include
statements), prototypes, predeclarations (or whatever those things are
called that allow you to declare the existence of a class before actually
defining it), the extern keyword, the export keyword, and unnamed namespaces
(or global static variables).
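
A hypothetical two-file sketch of the usage being described (names invented;
under standard ISO C++ the second file would of course need a declaration of
foo::answer, which is exactly what the entity-based model is said to make
unnecessary):

    // foo.cpp -- simply added to the project; there is no foo.h
    namespace foo {
        int answer() { return 42; }
    }

    // main.cpp -- refers to the function through its namespace; the
    // environment is described as resolving it from its entity database
    int main() {
        return foo::answer();
    }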

Unfortunately, IBM is apparently only making VisualAge C++ 4.0 for AIX (At
least, that's true as far as I know. I'm not associated with IBM in any
way.). Perhaps the people developing egcs could think about doing something
along these lines.









Author: Christopher Eltschka <celtschk@physik.tu-muenchen.de>
Date: 1998/12/03
Geoff Fortytwo wrote:
>
> I've read all the articles in this discussion and am surprised that no one
> has mentioned IBM's VisualAge C++ 4.0 compiler. I read an article about it
> in Dr. Dobbs about a year ago (I don't recall which issue it was offhand)
> and I remember getting really excited about it. Here's the gist of what I
> remember about it:
>
> They decided that the current way of linking was outdated and caused a lot
> of problems. Rather than compile each file individually and then link all
> the resulting object files together, this new model compiles on an entity by
> entity basis. Order of declaration no longer matters and header files are no
> longer necessary. There is only 1 translation unit.
>
[...]

> As I understand it then, there'd be no need for header files (#include
> statements), prototypes, predeclarations (or whatever those things are
> called that allow you to declare the existence of a class before actually
> defining it), the extern keyword, the export keyword, and unnamed namespaces
> (or global static variables).

Indeed, it *increases* the need for static functions or unnamed
namespaces:
now a declaration is accessible *even if not explicitly included*.
So the *only* way to make declarations private is an unnamed namespace
or file-static functions. However, if there's really only one
translation unit, I don't see how those can work.
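(For reference, the two forms being discussed -- a minimal sketch:)

    // visible only within this translation unit under the usual
    // separate-compilation model:
    namespace {
        void helper() { /* ... */ }             // unnamed namespace
    }
    static void other_helper() { /* ... */ }    // file-static function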
I like the idea of a "repository" of single entities. However, the
compiler should only enable those entities which are explicitly asked
for with #include, or some alternate mechanism.
Of course, the better solution would be a real module system.
If VAC++ is non-conformant (or has a non-conformant mode) anyway,
it would have been better to do it right. If it really works
as you described, I don't think I'd like to use it...

[...]








Author: sbnaran@localhost.localdomain.COM (Siemel Naran)
Date: 1998/12/01
On 1 Dec 1998 11:09:20 GMT, Pete Becker <petebecker@acm.org> wrote:
>Siemel Naran wrote:

>> Why not make a one .pcf file for each header instead of one .pcf
>> file for all the headers?

>Because the contents of a header file can depend on the context in which
>it is compiled. That prevents precompiling individual headers, but
>doesn't prevent creating a precompiled header file for each translation
>unit.

Do you think we could treat external #defines as template parameters?
Am I right in thinking that external #defines are rare (like NDEBUG
in <cassert>)?

What's really slow, I've found, is template instantiation.  E.g.,
doing
     cout << Factorial<5>::value << '\n';
with an appropriate definition of struct Factorial<unsigned> takes
quite a long time to compile, compared to
     cout << (5*4*3*2*1) << '\n';

Even in realistic functions, template instantiation is just as slow.
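
For readers following along, a minimal sketch of the kind of definition
meant above (one of several possible formulations; an enum member could be
used instead of the static const on older compilers):

     // Evaluating Factorial<5>::value forces instantiation of
     // Factorial<4>, Factorial<3>, ... down to the specialization for 0;
     // each instantiation is work the compiler must do at compile time.
     template<unsigned N>
     struct Factorial {
         static const unsigned value = N * Factorial<N - 1>::value;
     };

     template<>                      // base case stops the recursion
     struct Factorial<0> {
         static const unsigned value = 1;
     };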

--
----------------------------------
Siemel B. Naran (sbnaran@uiuc.edu)
----------------------------------








Author: James Kuyper <kuyper@wizard.net>
Date: 1998/12/01
Thomas A. Horsley wrote:
>
> >> I asked why the compiler has to parse the whole header file every time it's
> >> included -> every time i include for example
> >....
> >> in a #ifndef ... #define .... <code>  #endif block
> >
> >It has to parse the entire header file, in order to identify where the
> >matching #endif (or #else or #elif) preprocessor directive is...
>
> Actually, in theory a compilation environment could keep track of enough
> information to avoid doing all the multiple processing of the same include
> file if someone wanted to make a complex enough environment. Such a monster
> could make all but the first compilation go a lot faster unless you really
> did have different sets of #defines that affect the expansion of the header
> file in different compilation units (but probably at the expense of disk
> space to store everything it needs to keep track of - like every symbol
> appearing in the file so it can tell if another #include might result
> in a different macro expansion of the contents, and, of course, a cache
> for all the contents it read the first time, so it can reuse them).

I doubt that what you suggest could be done in such a way as to take
significantly less time than re-parsing the file.

> Within a single compilation, a sufficiently smart compiler (and some of them
> are this smart) can easily avoid reading the same header file more
> than once by remembering that the first time it saw the file the token
> stream looked like <space or comments>, #ifndef some_symbol, <no #else>,
> matching #endif, <space or comments>.

That sounds feasible, particularly if the implementation ensures that
all of its own headers match that pattern.

> If it sees another #include of the same file, all it has to do is check
> and see if "some_symbol" is already defined - if so, it can skip the
> entire #include.

A user-oriented solution that is fairly portable (header file names
outside the standard headers can't be portable, unfortunately) is to
write

public.h:

#ifndef PUBLIC_H
#define PUBLIC_H
#include "private.h"
#endif

This would minimize the amount of time spent unnecessarily re-parsing
public.h, at the cost of having to open two files the first time it is
parsed.
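
To complete the picture (same made-up file names): private.h carries its
own guard and the real content, and client code only ever includes the thin
public.h wrapper:

    /* private.h -- the real (large) content, with its own guard */
    #ifndef PRIVATE_H
    #define PRIVATE_H
    /* ... the actual declarations ... */
    #endif

    /* client.cpp -- may pull in public.h many times, directly or through
       other headers; private.h is opened at most once, and after that only
       the tiny public.h wrapper ever gets re-read */
    #include "public.h"
    #include "public.h"   /* second include re-opens only the wrapper; its
                             guarded body, and thus private.h, is skipped */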








Author: Tom.Horsley@worldnet.att.net (Thomas A. Horsley)
Date: 1998/12/01
>I doubt that what you suggest could be done in such a way as to take
>significantly less time than re-parsing the file.

Well, that would be the challenge for the folks writing it :-).

Actually, though, the original post mentioned something like an 800% speedup
by using pre-compiled headers. All I'm proposing is a compiler that
automagically "pre-compiles" the headers, and always gets it right by
checking to make sure that no new macro definitions could possibly
change the meaning of the previously compiled stuff.

For sure you'd have to pay close attention to performance, but I suspect
it could be made to work quite well. If you take the concept even farther
than "mere" pre-compiled headers, and arrange to use object oriented
database technology for your compiler symbol table, then seeing a #include
in the compiler translates into mapping in a chunk of database, and "presto",
you've magically processed the whole include file in one swell foop :-).

Take it even farther, and your debugger just maps in the same info, so
reading debug info is also near instantaneous.

Tack on "smart recompilation" technology so you can figure out you
don't need to recompile things just because you added a new enum constant
or something trivial like that, and you're really blazing fast now...

(Gosh, this stuff is fun to design when you don't have to actually
implement it - 20 or 25 man-years ought to do it, I think - about
the time computers are so fast no one cares anymore :-).)
--
>>==>> The *Best* political site <URL:http://www.vote-smart.org/> >>==+
      email: Tom.Horsley@worldnet.att.net icbm: Delray Beach, FL      |
<URL:http://home.att.net/~Tom.Horsley> Free Software and Politics <<==+








Author: Pete Becker <petebecker@acm.org>
Date: 1998/12/01
Siemel Naran wrote:

> On 29 Nov 1998 21:32:00 GMT, Andre Kaufmann <Andre.Kaufmann@t-online.de> wrote:

> >All you have to do is to include all frequently used header files in another
> >header file and
> >declare that header file as a precompiled header file.
> >But the disadvantage is, that you have to include this precompiled header
> >file in every cpp unit.
> >And that will slow down all other compilers :-(

> Why not make a one .pcf file for each header instead of one .pcf
> file for all the headers?

Because the contents of a header file can depend on the context in which
it is compiled. That prevents precompiling individual headers, but
doesn't prevent creating a precompiled header file for each translation
unit.

--
Pete Becker
Dinkumware, Ltd.
http://www.dinkumware.com








Author: AllanW@my-dejanews.com
Date: 1998/12/01
In article <73r886$3o1$1@news02.btx.dtag.de>,
  Andre.Kaufmann@t-online.de (Andre Kaufmann) wrote:
>
> Why are c++ compilers soooo slow ????
...
> and every time a header file is included the compiler has to parse the
> header
> file again and again and again, even if you enclosed the header file
> in a #ifndef .... #endif block.

Well, this isn't necessarily so.

For user-written header files, the compiler is free to "cache" the
header's tokens in memory. It has to be re-parsed, of course, because
new #defines or context might change the meaning of these tokens.
    // Header file X.HH
    function(int a, XXX* b) { if (a) delete b; return 0; }
    // Translation unit
    #define XXX char *
    namespace one { int
    #include "X.HH" // Defines int one::function(int,char**);
    };
    #undef XXX
    class XXX {};
    char *
    #include "X.HH" // Defines ::function(int,XXX*)
Most of us aren't worried about optimizing code like this, because
it is terrible. (Personally I wouldn't mind if the standards committee
caused code like this to break, but it isn't likely. Presumably a
code generator might produce code that does some of this, though
probably not anything this terrible.)

You do have a point, of course. If the compiler doesn't cache the
header file contents or tokens, it must re-read the same source
file again. #include guards don't prevent this because we can't
see them until the file is open. I would like to hear about a
compiler that looked for special patterns, such as
    // ... Comments (optional) ...
    #ifndef some_symbol   // or #if !defined(some_symbol)
    // ... Comments and/or Content (optional) ...
    #define some_symbol   // Note: NOT in a more-deeply-nested #if...#endif
    // ... Comments and/or Content (optional) ...
    #endif
    // ... Comments (optional) ...
Once this pattern is recognized, the compiler could treat all occurrences of
    #include "somefile"
as if they read
    #ifndef some_symbol
    #include "somefile"
    #endif

I don't know of any compilers that currently do this automatically.
(Then again, if *MY* compiler did this automatically, how could I
tell aside from performance?) However, at least one compiler I use
recognizes the implementation-dependent
    #pragma once
in a header file to mean "if you already processed this header file,
you may ignore additional #include directives for the same file"
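
A header can use both, so that compilers which understand the pragma can
skip re-opening the file entirely while everything else still relies on the
portable guard (the macro name below is made up):

    #pragma once            // implementation-dependent; compilers that
                            // don't recognize it will typically ignore it
    #ifndef MYHEADER_H
    #define MYHEADER_H
    /* ... contents ... */
    #endif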

For standard header files, the compiler can go much further. I asked
about this a few (6?) months ago. Your program isn't allowed to
declare any functions in namespace std, so technically there's very
little(*) reason not to have the compiler start with all of these
symbols already loaded. Then, when the compiler sees a line like
    #include <iostream>
in your program, it can simply ignore it -- the symbols that the
standard specifies in this header are ALREADY defined. This is VERY
quick.

(*)There's only one reason why this isn't permitted in standard C++
programs. If you have the program
    // (Note: no #include directives)
    int main(int, char**) {
        std::cout << "Hello C++" << std::endl;
    }
the compiler is required to issue a diagnostic because you didn't
#include <iostream> (and therefore cout and endl are not defined).
But it's not a difficult matter to extend the concept; have the
#include directive set a flag that states it's okay to use the
symbols defined by that include file. And there's no reason why
the compiler can't have a mode where it skips this warning.
Standards-compliant compilers don't have to be standards-compliant
all the time; it's permissible for the compiler to require a
special "ANSI mode" switch before it issues every required
diagnostic.

--
AllanW@my-dejanews.com is a "Spam Magnet" -- never read.
Please reply in USENET only, sorry.









Author: Andre.Kaufmann@t-online.de (Andre Kaufmann)
Date: 1998/11/30
First, thanks for your statements, Jack...

>
>Personally I think this is a truly obnoxious idea.  Speed of compiling a
>program, just like the speed and size of the final executable, is a quality
>of implementation issue.
>

sure, but every time a frequently used header file is touched, I can go
coffee drinking ....
because nearly the whole project has to be recompiled -
- I'm working in a team of several developers - therefore the project cannot
be easily rearranged ...
(I know all those tips - use a flat hierarchy .... and so on)
and .. as you mentioned the speed of the final executable -> the compiler is
an executable too ;-)


>Buy more memory and a faster processor.  If the computer you are using is more
>than 6 months old, upgrading and adding more memory will double or triple its
>speed.

I think that's not so easy. I have a P400 with 196 MB - isn't that enough?
Well, the processors are faster, and therefore the compiler is faster, but
my cpp projects are growing too (as fast as the processor speed grows ;-)),
so compile time is nearly the same...

>
>Besides you are not placing the blame where it belongs.  It does not belong
>with the C++ language at all.  I am basing the following comments on the fact
>that your post was made from a Win32 operating system, and guessing that you
>are programming for this environment as well.....


Well, I know all that...
The project I'm working on is platform independent
(really hard work when you're using multithreading and networking....,
 but it's great - switch the compiler and the platform - compile the
project - and it runs perfectly,
  just like a Java project :-) ; besides, "compiled" Java byte code files are
not executables )

and it compiles under both operating systems, Win32 and !!! Linux !!!
so this is the ideal project to compare compile times under both systems !!
right ?

What I can say in general: without modifications, gcc compiles faster (3-4
times) than every compiler for the Win32 platform, and that depends not only
on the compiler - there you're right - it depends on the operating system and
its (large) header files, too.

But that's not the point.
I asked why the compiler has to parse the whole header file every time it's
included -> every time I include, for example, windows.h
(as you said, there's no comparably huge header file under UNIX), the
compiler has to parse the whole (windows.h) header file, even if I enclosed
it (as is usual in C++)
in a #ifndef ... #define .... <code>  #endif block

and with some modifications (declaring a header file which includes all
large header files like windows.h as
a precompiled header) the header files are parsed only once, and now the
Win32 compiler compiles much faster (factor 2 - 3) than gcc on Linux.

-> I didn't change the platform. I used non-"standard" compilation/parsing
of header files.
-> AFAIK all the steps for how the compiler has to compile a cpp project and
its header files
-> are defined by the C++ standardization committee, right ?

But now, with those modifications, the compile time on the Linux platform is
much longer than it was before I made the modifications.... :-(   , and
that's the problem.

Perhaps there's an easy solution for gcc, but that question belongs in
another newsgroup...
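
(For what it's worth, a sketch of the equivalent trick for gcc, assuming a
much later GCC than was available here -- precompiled-header support arrived
in GCC 3.4 -- and a made-up master header all.h that pulls in the big,
rarely-changing headers:)

    # compile the master header once; GCC writes all.h.gch next to it
    g++ -O2 -x c++-header all.h

    # ordinary compiles that #include "all.h" with the same options will
    # pick up all.h.gch automatically instead of re-parsing the header text
    g++ -O2 -c foo.cpp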

Don't misunderstand me,
I love C++, but with some restrictions, the compile time could be "nearly"
as fast as Pascal or Java.
( and the compile time for a Linux kernel would be reduced too   ;-)  )

-> well, I think there's a solution for me, I'll compile my Linux project on
another computer  ;-)

AMK

PS: besides Pascal having a simpler syntax than C++, in Pascal projects
windows.pas is "included", too,
       and the Pascal compiler isn't measurably slowed down - because the
unit is parsed only once









Author: James Kuyper <kuyper@wizard.net>
Date: 1998/11/30
Andre Kaufmann wrote:
....
> I asked why the compiler has to parse the whole header file every time it's
> included -> every time i include for example
....
> in a #ifndef ... #define .... <code>  #endif block

It has to parse the entire header file, in order to identify where the
matching #endif (or #else or #elif) preprocessor directive is. For code
which has been conditionally excluded, it doesn't need to do any other
processing. It only needs to identify preprocessor tokens,
distinguish comments, and make sure that #if blocks nest properly.
Perhaps the problem is that the compiler you're using does more work
than it has to, when processing conditionally excluded blocks of code?










Author: Tom.Horsley@worldnet.att.net (Thomas A. Horsley)
Date: 1998/11/30
>> I asked why the compiler has to parse the whole header file every time it's
>> included -> every time i include for example
>....
>> in a #ifndef ... #define .... <code>  #endif block
>
>It has to parse the entire header file, in order to identify where the
>matching #endif (or #else or #elif) preprocessor directive is...

Actually, in theory a compilation environment could keep track of enough
information to avoid doing all the multiple processing of the same include
file if someone wanted to make a complex enough environment. Such a monster
could make all but the first compilation go a lot faster unless you really
did have different sets of #defines that affect the expansion of the header
file in different compilation units (but probably at the expense of disk
space to store everything it needs to keep track of - like every symbol
appearing in the file so it can tell if another #include might result
in a different macro expansion of the contents, and, of course, a cache
for all the contents it read the first time, so it can reuse them).

Within a single compilation, a sufficiently smart compiler (and some of them
are this smart) can easily avoid reading the same header file more
than once by remembering that the first time it saw the file the token
stream looked like <space or comments>, #ifndef some_symbol, <no #else>,
matching #endif, <space or comments>.

If it sees another #include of the same file, all it has to do is check
and see if "some_symbol" is already defined - if so, it can skip the
entire #include.
--
>>==>> The *Best* political site <URL:http://www.vote-smart.org/> >>==+
      email: Tom.Horsley@worldnet.att.net icbm: Delray Beach, FL      |
<URL:http://home.att.net/~Tom.Horsley> Free Software and Politics <<==+








Author: Andre.Kaufmann@t-online.de (Andre Kaufmann)
Date: 1998/11/29
Why are c++ compilers soooo slow ????

Well, I know the answer, but I don't know why this has to be so.
Besides the fact that C++ syntax is more complex than, let's say, Pascal's,
parsing all the included header files wastes so much time....
and every time a header file is included, the compiler has to parse the
header file again and again and again, even if you enclosed the header file
in a #ifndef .... #endif block.

I experimented a little bit with precompiled header files with compiler XXX
-- I say no names ;-)   --
and I managed to reduce the compile time for a large project from 9 minutes
down to (for C++ compilers) an incredible 1 minute.
-> that's a speed burst of 800 - 900 %

All you have to do is to include all frequently used header files in another
header file and
declare that header file as a precompiled header file.
But the disadvantage is, that you have to include this precompiled header
file in every cpp unit.
And that will slow down all other compilers :-(

Every time when some new features  have been added to the c++ standard (like
templates ....)
-> the compile time for c++ projects is significantly increased - only for
information - i don't want to miss the new features

So my question (for discussion):
wouldn't it be better to change the C++ standard to decrease compile time
significantly, even if that standard is
somewhat incompatible - so that "older sources", which depend on the old
standard, have to be included
in a special way, like C projects are included in C++ projects ??
What do you think ?

AMK









Author: jackklein@att.net (Jack Klein)
Date: 1998/11/30
On 29 Nov 1998 21:32:00 GMT, Andre.Kaufmann@t-online.de (Andre Kaufmann)
wrote:

> Why are c++ compilers soooo slow ????

> Well i know the answer but i don't know why this has to be so.
> Beside that c++ syntax is more complex than let's say pascal,
> parsing all the included header files wastes so much time....
> and every time a header file is included the compiler has to parse the
> header
> file again and again and again, even if you enclosed the header file
> in a #ifndef .... #endif block.

> I experienced a little bit with precompiled header files with compiler XXX
> -- i say no names ;-)   --
> and i managed to reduce the compile time for a large project from 9 minutes
> down to (for c++ compilers) incredible 1 minute.
> -> that's a speed burst of 800 - 900 %

> All you have to do is to include all frequently used header files in another
> header file and
> declare that header file as a precompiled header file.
> But the disadvantage is, that you have to include this precompiled header
> file in every cpp unit.
> And that will slow down all other compilers :-(

> Every time when some new features  have been added to the c++ standard (like
> templates ....)
> -> the compile time for c++ projects is significantly increased - only for
> information - i don't want to miss the new features

> So my question (for discussion)
> wouldn't it be better to change the c++ standard to decrease compile time
> significantly, even if that standard is
> somewhat incompatible - and that "older sources" , which depend on the old
> standard, have to be included
> in a special way, like c - projects are included in c++ projects ??
> What do you think ?

<Jack>

Personally I think this is a truly obnoxious idea.  Speed of compiling a
program, just like the speed and size of the final executable, is a quality of
implementation issue.

Buy more memory and a faster processor.  If the computer you are using is more
than 6 months old, upgrading and adding more memory will double or triple its
speed.

Besides, you are not placing the blame where it belongs.  It does not belong
with the C++ language at all.  I am basing the following comments on the fact
that your post was made from a Win32 operating system, and guessing that you
are programming for this environment as well.

The problem is not caused by the C++ language which, like C before it, can be
quite small and modular.  Among the major culprits are the creators of
operating systems with APIs and header file setups that require the inclusion
of hundreds of kilobytes of header files to make even one OS API call.

Here is a very specific example, from a certain software vendor whose name I
will not mention, detailing the header files which came with version 5.0 of
their compiler:

To use any OS function at all you must include:

windows.h with a file size of 4,903 bytes.

So far, so good.  But windows.h unconditionally includes the following, file
names followed by their size in bytes:

excpt.h:  3960
stdarg.h:  4730
windef.h:  6760
winbase.h:  154540
wingdi.h:  147629
winuser.h:  192575
winnls.h:  39651
wincon.h:  14592
winver.h:  9504
winreg.h:  14324
winnetwk.h:  22739

Without even counting many of the conditional includes which might actually
occur, or looking to see if any of the above files have other includes of
their own, that is over 600,000 bytes which you are required to include to use
a single operating system function!

Neither operating systems like Unix nor the C or C++ standard libraries are
built like that!  An individual header covers a number of related functions.

Next is the supplied class library, a huge monolithic thing.  The file afx.h
is 56,392 bytes.  It in turn pulls in its own list of nested headers:

afxver_.h:  10689
string.h:  9170
stdio.h:  13087
stdlib.h:  19150
time.h:   7494
limits.h:  4133
stddef.h:  2458
stdarg.h:  4730
crtdbg.h:  13860
setjmp.h:  8092
afxcoll.h:  34424
afxstat_.h:  9547
afx.inl:  18687

Here we have more than 200,000 bytes.

So now you have over 800,000 bytes of headers to process in (probably) every
single source code file, before a single statement of your own code gets
compiled.  And if you want to include special features like sockets or
Direct-X or COM or whatever, add more hundreds of kilobytes.

If there were something in the C++ language standard which required the
include chain for windows.h to be so large (and remember, it is just as large
when included in a C program) there might be a reason to change the standard.

If the C++ language standard prevented building modular class libraries where
you had a small number of base classes and derived classes in their own
headers, that might be a reason to change the language.

But you need to place the blame squarely where it belongs.  That is on the
heads of an operating system vendor who can't or won't make the effort
to modularize their API headers, and who has designed a gargantuan,
monolithic class library on top of it.

The C++ language standard neither requires nor recommends either of these
things.  But as long as market forces push most of the commercial
programmers in the field today to use this OS and a class library licensed
from a certain vendor, even when using a compiler of another brand, they will
have to put up with this.

</Jack>









Author: sbnaran@localhost.localdomain.COM (Siemel Naran)
Date: 1998/11/30
On 29 Nov 1998 21:32:00 GMT, Andre Kaufmann <Andre.Kaufmann@t-online.de> wrote:

>All you have to do is to include all frequently used header files in another
>header file and
>declare that header file as a precompiled header file.
>But the disadvantage is, that you have to include this precompiled header
>file in every cpp unit.
>And that will slow down all other compilers :-(

Why not make one .pcf file for each header instead of one .pcf
file for all the headers?

Stroustrup says that an implementation may store standard header
files in the compiler.  Then #include <header> just unlocks the
standard header rather than including and parsing it.  I don't
think any implementations do this.


>Every time when some new features  have been added to the c++ standard (like
>templates ....)
>-> the compile time for c++ projects is significantly increased - only for
>information - i don't want to miss the new features

Yes, I like templates very much.  But template instantiation is
soooooooooooooo slowwwwwwwwww.


>So my question (for discussion)
>wouldn't it be better to change the c++ standard to decrease compile time
>significantly, even if that standard is
>somewhat incompatible - and that "older sources" , which depend on the old
>standard, have to be included
>in a special way, like c - projects are included in c++ projects ??
>What do you think ?

This might be complicated for maintenance.  Anyway, we can hope for
faster compilers, new software technologies, and faster CPUs.

--
----------------------------------
Siemel B. Naran (sbnaran@uiuc.edu)
----------------------------------


[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://reality.sgi.com/austern_mti/std-c++/faq.html              ]