Topic: Randomly ordered fields !?!? (Was:


Author: stt@inmet.inmet.com
Date: 22 Aug 90 13:08:00 GMT
Re: Allowing compilers to reorder fields "at will".

Having used a language without strict field ordering
rules, I can testify that it is a nightmare.  If you
want the compiler to reorder your fields, then you
should have to indicate it explicitly, perhaps via
a "pragma" or equivalent.  Otherwise, you lose
interoperability between successive versions of
the same compiler, let alone interoperability between
different compilers.

S. Tucker Taft
Intermetrics, Inc.
Cambridge, MA  02138




Author: jimad@microsoft.UUCP (Jim ADCOCK)
Date: 27 Aug 90 17:16:19 GMT
In article <259400004@inmet> stt@inmet.inmet.com writes:

|Re: Allowing compilers to reorder fields "at will".
|
|Having used a language without strict field ordering
|rules, I can testify that it is a nightmare.  If you
|want the compiler to reorder your fields, then you
|should have to indicate it explicitly, perhaps via
|a "pragma" or equivalent.  Otherwise, you lose
|interoperability between successive versions of
|the same compiler, let alone interoperability between
|different compilers.

Maybe or maybe not, but C++ compilers already have the right to reorder
your fields.  Or are you proposing that the rules should be
changed to prevent compilers from reordering?  The only restriction
right now is that within a labeled section fields must be at increasing
addresses.  That neither requires nor prevents compilers from changing
field orderings from your expectations, but rather leaves those decisions
to the sensibilities of the compiler vendor.  If some future C++ compiler
supported persistence and schema evolution, would you then demand a
user-specified field ordering?
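
To make the current rule concrete, consider a minimal sketch (the class
and its members here are invented for illustration):

    // Within each labeled section, members must sit at increasing
    // addresses: 'a' before 'b', and 'c' before 'd'.  But an
    // implementation may place the private section before, after,
    // or between the public members, so two conforming compilers
    // can lay this class out differently.
    class Widget {
    public:
        int  a;     // must precede b in memory
        char b;
    private:
        int  c;     // must precede d in memory
        char d;
    };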




Author: peterson@csc.ti.com (Bob Peterson)
Date: 27 Aug 90 21:26:49 GMT
In article <56940@microsoft.UUCP> jimad@microsoft.UUCP (Jim ADCOCK) writes:
>       ...
>                       Or are you proposing that the rules should be
>changed to prevent compilers from reordering?  The only restriction
>right now is that within a labeled section fields must be at increasing
>addresses.  That neither requires nor prevents compilers from changing
>field orderings from your expectations, but rather leaves those decisions
>to the sensibilities of the compiler vendor.  If some future C++ compiler
>supported persistence and schema evolution, would you then demand a
>user-specified field ordering?

  A frequently stated goal of object-oriented database (OODB) systems
developers is to support storage of objects over long periods of time
and concurrent access to the OODB by concurrently executing programs.
This implies that an object stored several years ago may be fetched today.
This, in turn, implies that the OODB must support evolution of a stored
object, should the definition of that object change.  Sharing implies
that different applications may request access to the same object at
roughly the same time.

  If a C++ compiler is able to reorder the fields of an object at will,
the object evolution problem faced by OODB developers is substantially
more complex than if field reordering is restricted.  Not only can an
object definition change as a result of an explicit user action, but
also simply because the user has upgraded to a new version of the
compiler or added a compiler from a different vendor.  Requiring a
recompilation of all programs may solve the problem for programs that
don't save objects.  However, for environments in which objects are
retained for substantial periods of time, or where objects are shared
among concurrently executing programs compiled with compilers using
different ordering rules, a recompilation requirement doesn't seem a
viable solution, IMHO.

  In this organization there are pieces of some programs too big to
compile with cfront 2.1 and Sun's C compiler.  We use g++ for these
programs.  Other parts of the system don't require use of g++.  I would
like to think that different pieces of this system would be able to
access the OODB without incurring the performance penalty of converting
the stored data based not only on machine architecture but also on
which compiler is in use. At the very least, this is an additional, and
in my opinion unnecessary, constraint.

  If the C++ standard continues to specify that, by default, fields can be
reordered, the standard should also require that the rules governing
such reordering be made explicit and public.  An additional requirement
I'd like to see is that a conforming compiler have available a warning
stating that reordering is happening.  If these two requirements are
included, OODB vendors, as well as applications that write objects as
objects, would have some hope of understanding what a compiler is
doing.  Allowing compiler vendors to hide such details will be costly
in the long term.

    Bob

--
   Hardcopy    and       Electronic Addresses:        Office:
Bob Peterson           Compuserve: 70235,326          NB 2nd Floor CSC Aisle C3
P.O. Box 861686        Usenet: peterson@csc.ti.com
Plano, Tx USA 75086    (214) 995-6080 (work) or (214) 596-3720 (ans. machine)




Author: howell@bert.llnl.gov (Louis Howell)
Date: 27 Aug 90 22:25:40 GMT
In article <1990Aug27.212649.16101@csc.ti.com>, peterson@csc.ti.com
(Bob Peterson) writes:
|> [...]
|>   A frequently stated goal of object-oriented database (OODB)
|> systems developers is to support storage of objects over long
|> periods of time and concurrent access to the OODB by
|> concurrently executing programs.  This implies that an object
|> stored several years ago may be fetched today.  This, in turn,
|> implies that the OODB must support evolution of a stored object,
|> should the definition of that object change.  Sharing implies
|> that different applications may request access to the same
|> object at roughly the same time.
|>
|>   If a C++ compiler is able to reorder the fields of an object
|> at will, the object evolution problem faced by OODB developers
|> is substantially more complex than if field reordering is
|> restricted.  Not only can an [...]

I don't buy this.  The representation of objects within a program
should have nothing to do with the representation of those objects
within files.  If an object needs a read/write capability, the
OODB developer should write explicit input and output functions
for that class, thus taking direct control of the ordering of
class members within a file.  For a large database this is
desirable anyway, since even in C the compiler is permitted to
leave space between members of a struct, and you would not want
to store this padding for every object in a file.
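
For instance, something along these lines (a sketch only; the class is
invented, and the host's byte order is still assumed):

    // Explicit, member-by-member binary I/O: the file layout is fixed
    // by these functions rather than by the compiler's in-memory
    // ordering, and no padding bytes are ever written.
    #include <iostream>

    class Point {
        double x, y;
        long   id;
    public:
        void write(std::ostream& os) const {
            os.write(reinterpret_cast<const char*>(&x),  sizeof x);
            os.write(reinterpret_cast<const char*>(&y),  sizeof y);
            os.write(reinterpret_cast<const char*>(&id), sizeof id);
        }
        void read(std::istream& is) {
            is.read(reinterpret_cast<char*>(&x),  sizeof x);
            is.read(reinterpret_cast<char*>(&y),  sizeof y);
            is.read(reinterpret_cast<char*>(&id), sizeof id);
        }
    };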

I really don't see why reordering of fields in C++ is any worse
than the padding between fields which already existed in C.
Both make the binary layout of a struct implementation-dependent.
I can sympathise with those who occasionally want direct control
over member arrangement, and it might be nice to provide some
kind of user override to give them the control they need, but
such control never really existed in an absolute sense even in C.
It certainly should not be the default, since most users never
need to worry about this type of detail.

Louis Howell

#include <std.disclaimer>




Author: jimad@microsoft.UUCP (Jim ADCOCK)
Date: 28 Aug 90 18:19:40 GMT
In article <1990Aug27.212649.16101@csc.ti.com> peterson@csc.ti.com (Bob Peterson) writes:
>  If the C++ standard continues to specify that, by default, fields can be
>reordered, the standard should also require that the rules governing
>such reordering be made explicit and public.  An additional requirement
>I'd like to see is that a conforming compiler have available a warning
>stating that reordering is happening.  If these two requirements are
>included OODB vendors, as well as applications that write objects as
>objects, would have some hope of understanding what a compiler is
>doing.  Allowing compiler vendors to hide such details will be costly
>in the long term.

Hm, I thought that OODBs were addressing this issue already.  If one has
software that is going to be used for a number of years, object formats
are going to change -- if not by automatic compiler intervention, then
by programmer manual intervention.  OODBs have to be designed to allow
for the evolution of the schemas used in the software, and the corresponding
object layouts.  Compilers have always hidden field layouts from users --
details of packing have never been exposed.  If some other tool, like an
OODB, needs to know these details, traditionally they are made available
via auxiliary files or information the compiler generates, such as debugging
information.  If your OODB strategy insists that the layout of objects
not change from now to eternity, then I don't understand how you can have
a reasonable software maintenance task.




Author: xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan)
Date: 28 Aug 90 21:17:52 GMT
howell@bert.llnl.gov (Louis Howell) writes:
> peterson@csc.ti.com (Bob Peterson) writes:
>|> [...]
>|>   If a C++ compiler is able to reorder the fields of an object
>|> at will, the object evolution problem faced by OODB developers
>|> is substantially more complex than if field reordering is
>|> restricted.  Not only can an [...]
>
>I don't buy this.  The representation of objects within a program
>should have nothing to do with the representation of those objects
>within files. [...]

But the problem is more pervasive than just storage in _files_; you
have the identical situation in shared memory architectures, across
communication links, etc.  If you allow each compiler to make its
own decisions about how to lay out a structure, then you force _any_
programs sharing data across comm links _or_ time _or_ memory space
_or_ file storage to be compiled with compilers having the same
structure layout rules designed in, or to pack and unpack the data
with every sharing between separately compiled pieces of code, surely
an unreasonable requirement compared to the simpler one of setting
the structure layout rules once for all compilers?

The _data_ maintenance headaches probably overwhelm the _code_
maintenance headaches with the freely reordered structures paradigm.

Kent, the man from xanth.
<xanthian@Zorch.SF-Bay.ORG> <xanthian@well.sf.ca.us>




Author: howell@bert.llnl.gov (Louis Howell)
Date: 29 Aug 90 00:35:53 GMT
In article <1990Aug28.211752.24905@zorch.SF-Bay.ORG>,
 xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) writes:
|> howell@bert.llnl.gov (Louis Howell) writes:
|> > peterson@csc.ti.com (Bob Peterson) writes:
|> >|> [...]
|> >|>   If a C++ compiler is able to reorder the fields of an object
|> >|> at will, the object evolution problem faced by OODB developers
|> >|> is substantially more complex than if field reordering is
|> >|> restricted.  Not only can an [...]
|> >
|> >I don't buy this.  The representation of objects within a program
|> >should have nothing to do with the representation of those objects
|> >within files. [...]
|>
|> But the problem is more pervasive than just storage in _files_; you
|> have the identical situation in shared memory architectures, across
|> communication links, etc.  If you allow each compiler to make its
|> own decisions about how to lay out a structure, then you force
|> _any_ programs sharing data across comm links _or_ time _or_ memory
|> space _or_ file storage to be compiled with compilers having the
|> same structure layout rules designed in, or to pack and unpack the
|> data with every sharing between separately compiled pieces of code,
|> surely an unreasonable requirement compared to the simpler one of
|> setting the structure layout rules once for all compilers?
|>
|> The _data_ maintenance headaches probably overwhelm the _code_
|> maintenance headaches with the freely reordered structures
|> paradigm.

In short, you want four types of compatibility: "comm links", "time",
"memory space", and "file storage".  First off, "time" and "file
storage" look like the same thing to me, and as I said before I don't
think objects should be stored whole, but rather written and read
member by member by user-designed routines.  As for "memory space",
I think it reasonable that every processor in a MIMD machine, whether
shared memory or distributed memory, should use the same compiler.
This, and a requirement that a compiler should always produce the
same memory layout from a given class definition, even in different
pieces of code, give enough compatibility for MIMD architectures.

Finally, the issue of communication over comm links strikes me as
very similar to that of file storage.  If compatibility is essential,
design the protocol yourself; don't expect the compiler to do it for
you.  Pack exactly the bits you want to send into a string of bytes,
and send that.  You wouldn't expect to send structures from a Mac
to a Cray and have them mean anything, so why expect to be able to
send structures from an ATT-compiled program to a GNU-compiled
program?  If you want low-level compatibility, write low-level code
to provide it, but don't handicap the compiler writers.
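
Concretely, packing might look like this (a sketch; big-endian is an
arbitrary choice, and the function names are invented):

    // Pack a 32-bit value into bytes in a fixed (here, big-endian)
    // order, independent of the host's layout, padding, or byte sex.
    #include <cstdint>

    void pack_u32(std::uint32_t v, unsigned char out[4]) {
        out[0] = (unsigned char)(v >> 24);
        out[1] = (unsigned char)(v >> 16);
        out[2] = (unsigned char)(v >> 8);
        out[3] = (unsigned char)(v);
    }

    std::uint32_t unpack_u32(const unsigned char in[4]) {
        return (std::uint32_t(in[0]) << 24) | (std::uint32_t(in[1]) << 16)
             | (std::uint32_t(in[2]) << 8)  |  std::uint32_t(in[3]);
    }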

Louis Howell

#include <std.disclaimer>




Author: diamond@tkou02.enet.dec.com (diamond@tkovoa)
Date: 31 Aug 90 01:58:09 GMT
In article <259400004@inmet> stt@inmet.inmet.com writes:

>Re: Allowing compilers to reorder fields "at will".
>... you lose
>interoperability between successive versions of
>the same compiler, let alone interoperability between
>different compilers.

You lose nothing, because you never had such interoperability.
In C, padding can change from one release to the next, or depending on
optimization level.  In Pascal, a compiler can decline to do packing.
If I'm not mistaken, in Ada, bit-numbering does not have to match the
intuitive (big-endian or little-endian) numbering.
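
For instance, this little program can legitimately print different
numbers under two releases of the same compiler, or at different
optimization levels (the struct is invented for illustration):

    // Padding is implementation-dependent: the offset of 'd' and the
    // total size of S may vary between compilers, releases, or options.
    #include <cstdio>
    #include <cstddef>

    struct S { char c; double d; };

    int main() {
        std::printf("offsetof(S, d) = %u\n", (unsigned)offsetof(S, d));
        std::printf("sizeof(S)      = %u\n", (unsigned)sizeof(S));
        return 0;
    }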

>Having used a language without strict field ordering
>rules, I can testify that it is a nightmare.

Surely you have such nightmares in every language you have used.
Except maybe assembly.  In order to avoid such nightmares, you have to
impose additional quality-of-implementation restrictions on the choice
of implementations that you are willing to buy.

--
Norman Diamond, Nihon DEC       diamond@tkou02.enet.dec.com
We steer like a sports car:  I use opinions; the company uses the rack.




Author: xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan)
Date: 1 Sep 90 13:10:41 GMT
howell@bert.llnl.gov (Louis Howell) writes:
> xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) writes:
>|> howell@bert.llnl.gov (Louis Howell) writes:
>|> > peterson@csc.ti.com (Bob Peterson) writes:
>|> >|> [...]
>|> >|>   If a C++ compiler is able to reorder the fields of an object
>|> >|> at will, the object evolution problem faced by OODB developers
>|> >|> is substantially more complex than if field reordering is
>|> >|> restricted.  Not only can an [...]
>|> >
>|> >I don't buy this.  The representation of objects within a program
>|> >should have nothing to do with the representation of those objects
>|> >within files. [...]
>|>
>|> [...] If you allow each compiler to make its own decisions about how
>|> to lay out a structure, then you force _any_ programs sharing data
>|> across comm links _or_ time _or_ memory space _or_ file storage to
>|> be compiled with compilers having the same structure layout rules
>|> designed in, or to pack and unpack the data with every sharing
>|> between separately compiled pieces of code, [...]

>In short, you want four types of compatibility: "comm links", "time",
>"memory space", and "file storage".  First off, "time" and "file
>storage" look like the same thing to me,

Not so.  If a program compiled with compiler A stores data in a file,
and a program compiled with compiler B can't extract it, that is one
type of compatibility problem to solve, and it can be solved with the
compilers at hand.

But if a program compiled with compiler A revision 1.0 stores data in
a file, and a program compiled with compiler A revision 4.0 cannot
extract it, that is a compatibility problem to solve of a different
type.  Mandating no standard for structure layout forces programmers
in both these cases to anticipate problems, unpack the data, and store
it in some unstructured format.  Tough on the programmer who realizes
this only when compiler A revision 4.0 can no longer read the structures
written to the file with compiler A revision 1.0; it may not be around
any more to allow a program to be compiled to read and rewrite that data.

>and as I said before I don't
>think objects should be stored whole, but rather written and read
>member by member by user-designed routines.

That is a portability versus time/space efficiency choice.  By refusing
to accept mandatory structure layout standards, compiler writers would
force that choice to be made in one way only.

>As for "memory space",
>I think it reasonable that every processor in a MIMD machine, whether
>shared memory or distributed memory, should use the same compiler.

That isn't good enough.  I've worked in shops with several million lines
of code (about 7.0) in executing software.  By mandating _no_ standards
for structure layout, you force that _all_ of this code be recompiled with
every new release of the compiler, if the paradigm of data sharing is a
shared memory environment.  Again, by refusing to make one choice, you
force several other choices in ways perhaps unacceptable to the compiler
user.  In this situation, that might well involve several man-years of
effort, and it is sure to invoke every bug in the new release of the
compiler simultaneously, and would very likely bring operations to a
standstill.  With no data structure layout standard, you have removed the
user's choice to recompile and test incrementally, or else forced him to
pack and unpack data even to share it in memory.

>This, and a requirement that a compiler should always produce the
>same memory layout from a given class definition, even in different
>pieces of code, give enough compatibility for MIMD architectures.

Not if you don't mandate that compatibility across time, it doesn't.

>Finally, the issue of communication over comm links strikes me as
>very similar to that of file storage.  If compatibility is essential,
>design the protocol yourself; don't expect the compiler to do it for
>you.  Pack exactly the bits you want to send into a string of bytes,
>and send that.  You wouldn't expect to send structures from a Mac
>to a Cray and have them mean anything, so why expect to be able to
>send structures from an ATT-compiled program to a GNU-compiled
>program?  If you want low-level compatibility, write low-level code
>to provide it, but don't handicap the compiler writers.

Same comments apply.  In a widespread worldwide network of communicating
hardware, lack of a standard removes the option to send structures intact.
One choice (let compiler writers have free rein for their ingenuity in
packing structures for size/speed) removes another choice (let programmers
have free rein for their ingenuity in accomplishing speedy and effective
communications).  Somebody loses in each case, and I see the losses on
the user side to far outweigh in cost and importance the losses on the
compiler vendor side.

Then again, I write application code, not compilers, which could
conceivably taint my ability to make an unbiased call in this case. ;-)

Kent, the man from xanth.
<xanthian@Zorch.SF-Bay.ORG> <xanthian@well.sf.ca.us>




Author: stephen@estragon.stars.flab.Fujitsu.JUNET (Stephen P Spackman)
Date: 3 Sep 90 03:57:43 GMT
In article <1990Sep1.131041.15411@zorch.SF-Bay.ORG> xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) writes:
   howell@bert.llnl.gov (Louis Howell) writes:
   > xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) writes:
   >|> [...] If you allow each compiler to make its own decisions about how
   >|> to lay out a structure, then you force _any_ programs sharing data
   >|> across comm links _or_ time _or_ memory space _or_ file storage to
   >|> be compiled with compilers having the same structure layout rules
   >|> designed in, or to pack and unpack the data with every sharing
   >|> between separately compiled pieces of code, [...]
   >In short, you want four types of compatibility: "comm links", "time",
   >"memory space", and "file storage".  First off, "time" and "file
   >storage" look like the same thing to me,
   [...]
   But if a program compiled with compiler A revision 1.0 stores data in
   a file, and a program compiled with compiler A revision 4.0 cannot
   extract it, that is a compatibility problem to solve of a different
   type.  Mandating no standard for structure layout forces programmers
   in both these cases to anticipate problems, unpack the data, and store
   it in some unstructured format.  Tough on the programmer who realizes
   this only when compiler A revision 4.0 can no longer read the structures
   written to the file with compiler A revision 1.0; it may not be around
   any more to allow a program to be compiled to read and rewrite that data.

I feel a little odd posting here, being radically anti-c++ myself, but
I *did* drop in and something about this thread is bothering me. (-:
(honestly!) Look, if you're going to take an ugly language and then
bolt on a kitchen sink and afterburners :-), why not put in the one
thing that is REALLY missing from C:

                    * STANDARD BINARY TRANSPUT *

Can't we have the compiler provide the trivial little shim function
that will read and write arbitrary data structures to binary streams?
Most operating systems now seem to have standard external data
representations, which are intended for precisely this kind of
purpose, and a "default" version would not be hard to cook up (or
appropriate). The runtime overhead is usually dwarfed by transput
cost, and you get better RPC support as a freebie. That way only the
*external* representation format needs to be specified, and the
runtime image can be as zippy as it pleases.
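
Roughly what I have in mind (purely a hypothetical sketch -- no
compiler emits this today, the names are invented, and a real system
would use its native external representation, XDR say):

    // Hypothetical shim a compiler could emit for a struct it has
    // compiled: each member is written in a fixed external byte
    // order, never as the raw, implementation-dependent image.
    #include <cstdint>
    #include <iostream>

    struct Particle { std::uint32_t id; std::int32_t charge; };

    static void put_u32(std::ostream& os, std::uint32_t v) {
        unsigned char b[4] = {
            (unsigned char)(v >> 24), (unsigned char)(v >> 16),
            (unsigned char)(v >> 8),  (unsigned char)(v)
        };
        os.write(reinterpret_cast<const char*>(b), 4);
    }

    // What the compiler might generate automatically for Particle:
    void transput_write(std::ostream& os, const Particle& p) {
        put_u32(os, p.id);
        put_u32(os, (std::uint32_t)p.charge);
    }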

You can even read and write code if you're willing to interpret it -
or call the compiler, if you've got it kicking around.

   >As for "memory space",
   >I think it reasonable that every processor in a MIMD machine, whether
   >shared memory or distributed memory, should use the same compiler.
   >[...]

Ahem. Offhand I can think of any number of HETEROGENEOUS shared-memory
machines. Mainframes have IOPs. Amigae share memory between bigendian,
word-align 680x0s and littlendian, byte-align 80x86es (-: amusingly
enough you can do things about bytesex by negating all the addresses
on one side and offsetting all the bases, but... :-).  Video
processors with very *serious* processing power and their own
(vendor-supplied) C compilers abound. Now, I'll grant, a *real*
operating system, (-: if someone would only write one for me, :-)
would mandate a common intermediate code so that vendors supplied only
back-ends, but even then I'm NOT going to pay interpretation overhead
on all variable accesses on, say, my video processor, my database
engine, my graph reduction engine and my vector processor, just
because my "main" CPU wants a certain packing strategy!

There *must* be a mechanism for binary transmission of binary data,
this must *not* be the programmer's responsibility, and while
localising all code generation in the operating system's caching and
replication component is the obvious thing to do, (a) that day hasn't
arrived yet and (b) the compiler will STILL have to get involved and
generate the appropriate type descriptors and shim functions.

stephen p spackman  stephen@estragon.uchicago.edu  312.702.3982




Author: howell@bert.llnl.gov (Louis Howell)
Date: 4 Sep 90 23:31:32 GMT
In article <1990Sep1.131041.15411@zorch.SF-Bay.ORG>,
xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) writes:
|> howell@bert.llnl.gov (Louis Howell) writes:
|> >In short, you want four types of compatibility: "comm links", "time",
|> >"memory space", and "file storage".  First off, "time" and "file
|> >storage" look like the same thing to me,
|>
|> Not so.  If a program compiled with compiler A stores data in a file,
|> and a program compiled with compiler B can't extract it, that is one
|> type of compatibility problem to solve, and it can be solved with the
|> compilers at hand.
|>
|> But if a program compiled with compiler A revision 1.0 stores data in
|> a file, and a program compiled with compiler A revision 4.0 cannot
|> extract it, that is a compatibility problem to solve of a different
|> type.  Mandating no standard for structure layout forces programmers
|> in both these cases to anticipate problems, unpack the data, and store
|> it in some unstructured format.  Tough on the programmer who realizes
|> this only when compiler A revision 4.0 can no longer read the structures
|> written to the file with compiler A revision 1.0; it may not be around
|> any more to allow a program to be compiled to read and rewrite that data.

I don't want to reduce this discussion to finger-pointing and
name-calling, but I think this hypothetical programmer deserved
what he got.  I think it's a useful maxim to NEVER write anything
important in a format that you can't read.  This doesn't
necessarily mean ASCII---there's nothing wrong with storing
signed or unsigned integers, IEEE format floats, etc., in binary
form, since you can always read the data back out of the file
in these formats.  If a programmer whines because he depended on
some nebulous "standard structure format" and got burned, then I
say let him whine.  Now if there actually were a standard---IEEE,
ANSI, or whatever---then the compilers should certainly support
it.  Recent comments in this newsgroup show, however, that there
isn't even general agreement on what a standard should look like.
Let's let the state of the art develop to that point before we
start mandating standards.

|> [...]

|> >As for "memory space",
|> >I think it reasonable that every processor in a MIMD machine, whether
|> >shared memory or distributed memory, should use the same compiler.
|>
|> That isn't good enough.  I've worked in shops with several million lines
|> of code (about 7.0) in executing software.  By mandating _no_ standards
|> for structure layout, you force that _all_ of this code be recompiled with
|> every new release of the compiler, if the paradigm of data sharing is a
|> shared memory environment.  Again, by refusing to make one choice, you
|> force several other choices in ways perhaps unacceptable to the compiler
|> user.  In this situation, that might well involve several man-years of
|> effort, and it is sure to invoke every bug in the new release of the
|> compiler simultaneously, and would very likely bring operations to a
|> standstill.  With no data structure layout standard, you have removed the
|> user's choice to recompile and test incrementally, or else forced him to
|> pack and unpack data even to share it in memory.

This is the only one of your arguments that I can really sympathize
with.  I've never worked directly on a project of anywhere near that
size.  As a test, however, I just timed the compilation of my own
current project.  4500 lines of C++ compiled from source to
executable in 219 seconds on a Sun 4.  Scaling linearly to 7 million
lines gives 3.41e5 seconds or about 95 hours of serial computer
time---large, but doable.  Adding in the human time required to
deal with the inevitable bugs and incompatibilities, it becomes
clear that switching compilers is a major undertaking that should
not be attempted more often than once a year or so.

The alternative, though, dealing with a multitude of different
modules each compiled under slightly different conditions, sounds
to me like an even greater nightmare.  Imagine a code that only
works when module A is compiled with version 1.0, module B only
works under 2.3, and so on.  Much better to switch compilers very
seldom.  If you MUST work that way, though, note that you would
not expect the ordering methods to change with every incremental
release.  Changes like that would constitute a major compiler
revision, and would happen only rarely.

You can still recompile and test incrementally if you maintain
separate test suites for each significant module of the code.  If
the only test is to run a single 7 million line program and see if
it smokes, your project is doomed from the start.  (1/2 :-) )

Again, most users don't work in this type of environment.  A
monolithic code should be written in a very stable language to
minimize revisions.  (Fortran 66 comes to mind. :-)  The price is
not using the most up-to-date tools.  C++ just isn't old enough
yet to be very stable.  If I suggested changing the meaning of
a Fortran format statement, I'd be hung from the nearest tree,
and I'd deserve it, too.

|> [...]

|> >Finally, the issue of communication over comm links strikes me as
|> >very similar to that of file storage.  If compatibility is essential,
|> >design the protocol yourself; don't expect the compiler to do it for
|> >you.  Pack exactly the bits you want to send into a string of bytes,
|> >and send that.  You wouldn't expect to send structures from a Mac
|> >to a Cray and have them mean anything, so why expect to be able to
|> >send structures from an ATT-compiled program to a GNU-compiled
|> >program?  If you want low-level compatibility, write low-level code
|> >to provide it, but don't handicap the compiler writers.
|>
|> Same comments apply.  In a widespread worldwide network of communicating
|> hardware, lack of a standard removes the option to send structures intact.
|> One choice (let compiler writers have free rein for their ingenuity in
|> packing structures for size/speed) removes another choice (let programmers
|> have free rein for their ingenuity in accomplishing speedy and effective
|> communications).  Somebody loses in each case, and I see the losses on
|> the user side to far outweigh in cost and importance the losses on the
|> compiler vendor side.

I think Stephen Spackman's suggestion of standardizing the stream
protocol, but not the internal storage management, is the proper
way to go here.

|> Then again, I write application code, not compilers, which could
|> conceivably taint my ability to make an unbiased call in this case. ;-)

Hey, I'm a user too!  I do numerical analysis and fluid mechanics.
What I do want is the best tools available for doing my job.  If
stability were a big concern I'd work in Fortran---C++ is considered
pretty radical around here.  I think the present language is a
big improvement over alternatives, but it still has a way to go.
If we clamp down on the INTERNAL details of the compiler now, we
just shut the door on possible future improvements, and the action
will move on to the next language (D, C+=2, or whatever).  C++
just isn't old enough yet for us to put it out to pasture.

As a compromise, why don't we add to the language the option of
specifying every detail of structure layout---placement as well
as ordering.  This will satisfy users who need low-level control
over structures, without forcing every user to painfully plot
out every structure.  Just don't make it the default; most people
don't need this capability, and instead should be given the best
machine code the compiler can generate.
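
In the meantime, anyone who needs that control can get it by hand,
along these lines (a sketch; the class, offsets, and field sizes are
invented, and host byte order is still assumed by the memcpy calls):

    // Manual layout control: the object is a raw byte array with
    // offsets fixed in the source, so neither compiler reordering
    // nor padding can move anything.
    #include <cstring>
    #include <cstdint>

    class FixedHeader {
        unsigned char bytes[8];   // magic: 0..3, version: 4..5, flags: 6..7
    public:
        std::uint32_t magic() const {
            std::uint32_t v; std::memcpy(&v, bytes + 0, sizeof v); return v;
        }
        void set_magic(std::uint32_t v)  { std::memcpy(bytes + 0, &v, sizeof v); }
        std::uint16_t version() const {
            std::uint16_t v; std::memcpy(&v, bytes + 4, sizeof v); return v;
        }
        void set_version(std::uint16_t v) { std::memcpy(bytes + 4, &v, sizeof v); }
    };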

Louis Howell

#include <std.disclaimer>




Author: peter@objy.objy.com (Peter Moore)
Date: 8 Sep 90 22:46:22 GMT
In article <1990Sep1.131041.15411@zorch.SF-Bay.ORG>,
xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) writes:

<< I will paraphrase, since the >>'s were getting too deep:
       If the standard doesn't mandate structure layout, then compiler
 writers will be free to change structure layout over time and
 render all existing stored binary data obsolete
>>

Now hold on.  There may be arguments for enforcing structure layout,
but this sure isn't one of them. If the different releases of the
compiler change the internal representation of structures, then all old
binaries and libraries will become incompatible.  This is immediately
unacceptable to me, without any secondary worries about old binary
data.  If a vendor tried to do that, I would simple change vendors, and
never come back.  And no sane vendor would try such a change.  The
vendor himself has linkers, debuggers, libraries, and compilers for
other languages that all would change, not to mention thousands of
irate customers burning the vendor in effigy.

There are so many things that a vendor could change that would cause
incompatibility:  calling formats, floating point formats, object file
formats, etc..  The vendor could take 3 years to upgrade to ANSI or
insist on supplying non-standard features that can't be turned off.
Structure layout is just a small part.  By your argument, ANSI should
legislate them all, and that is unreasonable and WAY too restrictive on
implementations.

The standard can never protect you from an incompetent or malicious
vendor.  It can only act as a common ground for well intentioned
vendors and customers to meet.

 Peter Moore
 peter@objy.com




Author: stephen@estragon.uchicago.edu (Stephen P Spackman)
Date: 9 Sep 90 18:04:17 GMT
[Apologies in advance for the temperature level - I think my
thermostat is broke]

In article <1990Sep8.154622@objy.objy.com> peter@objy.objy.com (Peter Moore) writes:
   In article <1990Sep1.131041.15411@zorch.SF-Bay.ORG>,
   xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) writes:

   << I will paraphrase, since the >>'s were getting too deep:
   If the standard doesn't mandate structure layout, then compiler
    writers will be free to change structure layout over time and
    render all existing stored binary data obsolete
   >>

   Now hold on.  There may be arguments for enforcing structure layout,
   but this sure isn't one of them. If the different releases of the
   compiler change the internal representation of structures, then all old
   binaries and libraries will become incompatible.  This is immediately
   unacceptable to me, without any secondary worries about old binary
   data.  If a vendor tried to do that, I would simply change vendors, and
   never come back.  And no sane vendor would try such a change.  The
   vendor himself has linkers, debuggers, libraries, and compilers for
   other languages that all would change, not to mention thousands of
   irate customers burning the vendor in effigy.

Now you hold on. Since the binary layout of structures is NOT defined,
any code that relies on it is BROKEN. Your old binaries, if they are
properly written, are not "damaged" by a binary representation
upgrade, any more than installing a new machine with a different
architecture on your local net breaks existing applications: if the
applications WEREN'T broken, they still aren't, because they do not
rely on undefined behaviour.

As for the libraries, I'm sure the vendor will be glad to supply new
copies of the ones he provides with the upgrade (and I assure you that
his recompiling them will not be half such a chore as you imply), and
your own you can rebuild with make(1).

Furthermore, you notice that your "solution" is insane: changing
vendors regularly will make the "problem" happen more often and more
severely, unless you ultimately succeed in finding one who has a
strict bug-maintenance policy and effectively never upgrades his
product. A more sane solution would be never to install upgrades, and
just learn to work around the limitations you encounter (like not
being able to generate code for a new architecture, for example).

   The standard can never protect you from an incompetent or malicious
   vendor.  It can only act as a common ground for well intentioned
   vendors and customers to meet.

Or, in this case, competent and well-intentioned vendors who try to
keep up with the technology.

I don't know if I'll ever end up writing commercial compilers, but if
I do, Mr. Moore, please do us both a favour and don't buy them.
Because you can *bet* that more than just binary layouts are going to
change as the optimisation library is extended.

stephen p spackman  stephen@estragon.uchicago.edu  312.702.3982