Topic: Signed integral arithmetic overflow sh
Author: sj@aracnet.com (Scott Johnson)
Date: 1996/09/03
In article <m320glnn93.fsf@gabi-soft.fr>, J. Kanze <kanze@gabi-soft.fr> wrote:
>clamage@taumet.eng.sun.com (Steve Clamage) writes:
>
>> In article 96Aug30210403@slsvhrt.lts.sel.alcatel.de, kanze@lts.sel.alcatel.de (James Kanze US/ESC 60/3/141 #40763) writes:
>> >In article <199608300840.KAA00419@mwt616.at.mdv.de> Andreas Krueger
>> ><andreas.krueger@it-mannesmann.de> writes:
>> >
>> >|> This is, imho, inconsistent with the general philosophy of
>> >|> the language. When converting unsigned to signed, on signed
>> >|> "out of bounds", an implementation may generate any value it
>> >|> pleases, but: "The show must go on!"
>> >
>> >There has been some discussion of this in comp.std.c. I had always
>> >thought that overflow was undefined behavior in C, but the gentlemen
>> >there convinced me that I was wrong, and that, in fact, the standard
>> >requires pretty much what you are asking for.
>>
>> I wonder. The section on conversions says if you convert a value to a
>> signed type which cannot represent that value, the results are
>> implementation-defined. The section on expressions says that if an
>> exception (mathematically undefined or not representable by the type)
>> occurs, the results are undefined. In the latter case, "undefined"
>> means anything can happen and the implementation doesn't have to
>> document it.
>
>Correct. I was being careless in describing what had been discussed in
>comp.std.c. I think that it is pretty well agreed that overflow during
>an arithmetic operation on signed integral values results in undefined
>behavior. Prior to that discussion, it had been my (unfounded)
>opinion that this was also true when converting an unsigned integral
>type to signed. The argument was that the C standard says "the result
>is implementation defined", with emphasis on the word result. Thus, for
>example, according to the experts in comp.std.c, there must be a result;
>i.e.: the program cannot core dump, for example, in such cases.
My take on it is that overflows should be dealt with however the
underlying processor deals with them. The DWP does not say this, of
course...but imagine if the standard were to REQUIRE specific behavior,
such as throwing an exception. Lots of architectures don't support
detection of this in hardware; every subtraction would need to be followed
by an explicit check for overflow. Likewise, what if the standard required
two's complement modulus, and code were compiled for a machine which used
one's complement, sign-magnitude, or some other goofier system of numbers
(anybody for residue math? :) )? That requirement would be expensive to
emulate in software.
The above ignores completely the C compatibility issues.
If you want to force particular semantics...write a class.
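Something along these lines, say (a rough sketch only -- the names are
mine, and only addition is shown; the point is that the range test comes
*before* the operation, so the test itself can never overflow):

    #include <limits.h>

    class Overflow {} ;                 //  thrown on overflow

    class CheckedLong
    {
    public:
        CheckedLong( long v = 0 ) : val( v ) {}
        friend CheckedLong operator+( CheckedLong a , CheckedLong b )
        {
            //  check against the representable range before adding,
            //  so the signed addition below cannot overflow
            if ( ( b.val > 0 && a.val > LONG_MAX - b.val )
              || ( b.val < 0 && a.val < LONG_MIN - b.val ) )
                throw Overflow() ;
            return CheckedLong( a.val + b.val ) ;
        }
        long value() const { return val ; }
    private:
        long val ;
    } ;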
>> >On the other hand, I do have a very pertinent standards related
>> >question: in an expression of the form "new T[ n ]", suppose that "n" is
>> >a valid value for size_t. Can this expression result in undefined
>> >behavior? (One would hope not, but I'm willing to bet that the first
>> >thing most implementations do is multiply n by sizeof( T ). Without
>> >checking for overflow:-).)
>> I would say that would be an implementation error. When you make a
>> request for memory, either it can be satisfied or not. The
>> library should not fall apart in trying to figure out whether it
>> can satisfy the request.
>Then I guess I should send in a bug report to Sun:-). The following
>program core dumps when compiled with Sun CC 4.1 (and all other
>compilers I could get my hands on):
>
> #include <iostream.h>
> #include <stddef.h>
>
> struct C { char a[ 4 ] ; } ;
>
> int
> main()
> {
>     size_t x = 0x20000000 ;
>     try
>     {
>         C* p = new C[ 2 * x + 1 ] ;
>         p[ x ].a[ 0 ] = 0 ;
>     } catch ( ... )
>     {
>         cerr << "Not enough memory" << endl ;
>     }
>     return 0 ;
> }
>
>I'd be surprised if there are many compilers which get this right.
>Obviously, the constants should be adjusted. `x' should be assigned
>(max(size_t)+1)/4. Calculated by hand, of course, since max(size_t)+1
>is very likely to give 0 if you let the compiler do it. (Anyone out
>there need a compiler tester. I'm looking for a job:-).)
Did you check and see whether your attempt to write into the array
caused the core dump, or new itself? I'd suspect the latter.
Scott
--
/--------------------------------------------------------------------------\
|Scott Johnson -- Professional (sometimes) SW Engineer and all-purpose Geek|
|I don't speak for nobody but myself, which everyone else is thankful for |
\--------------------------------------------------------------------------/
[ comp.std.c++ is moderated. To submit articles: try just posting with ]
[ your news-reader. If that fails, use mailto:std-c++@ncar.ucar.edu ]
[ FAQ: http://reality.sgi.com/employees/austern_mti/std-c++/faq.html ]
[ Policy: http://reality.sgi.com/employees/austern_mti/std-c++/policy.html ]
[ Comments? mailto:std-c++-request@ncar.ucar.edu ]
Author: kanze@lts.sel.alcatel.de (James Kanze US/ESC 60/3/141 #40763)
Date: 1996/09/04
In article <50hmr1$t2c@shelob.aracnet.com> sj@aracnet.com (Scott
Johnson) writes:
|> In article <m320glnn93.fsf@gabi-soft.fr>, J. Kanze <kanze@gabi-soft.fr> wrote:
|> >clamage@taumet.eng.sun.com (Steve Clamage) writes:
|> >
|> >> In article 96Aug30210403@slsvhrt.lts.sel.alcatel.de, kanze@lts.sel.alcatel.de (James Kanze US/ESC 60/3/141 #40763) writes:
|> >> >In article <199608300840.KAA00419@mwt616.at.mdv.de> Andreas Krueger
|> >> ><andreas.krueger@it-mannesmann.de> writes:
|> My take on it is that overflows should be dealt with however the
|> underlying processor deals with them.
If you mean that the intent is to allow the implementation to do this,
then you are right. I don't think that it was the intent to require
such; I think the intent was more to allow alternatives. Thus, even if
the underlying hardware doesn't support checking, a good implementation
will, with an option to turn it off if performance becomes an issue.
|> The DWP does not say this, of
|> course...but imagine if the standard were to REQUIRE specific behavior,
|> such as throwing an exception. Lots of architectures don't support
|> detection of this in hardware; every subtraction would need to be followed
|> by an explicit check for overflow. Likewise, what if the standard required
|> two's complement modulus, and code were compiled for a machine which used
|> one's complement, sign-magnitude, or some other goofier system of numbers
|> (anybody for residue math? :) )? That requirement would be expensive to
|> emulate in software.
Except that the two's complement modulus is required when converting
signed to unsigned.
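For example, the following is fully defined, whatever the representation
the hardware uses (my own two line illustration, not a quote from the
standard):

    unsigned int u = -1 ;   //  defined: u == UINT_MAX, i.e. -1 reduced
                            //  modulo 2^N -- even on a one's complement
                            //  or sign-magnitude machine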
|> The above ignores completely the C compatibility issues.
In this area, I think that the C and the C++ standard say basically the
same thing.
|> If you want to force particular semantics...write a class.
This is expensive. The compiler has lots of possibilities to optimize
the checks it might generate. This is much harder if the checks are in
user-written code. (Not impossible, of course. But in the case of
compiler-generated checks, the compiler has the metaknowledge of the
intent and the actual semantics. In user code, it must glean the
semantics by analysing the code.)
|> >> >On the other hand, I do have a very pertinent standards related
|> >> >question: in an expression of the form "new T[ n ]", suppose that "n" is
|> >> >a valid value for size_t. Can this expression result in undefined
|> >> >behavior? (One would hope not, but I'm willing to bet that the first
|> >> >thing most implementations do is multiply n by sizeof( T ). Without
|> >> >checking for overflow:-).)
|> >> I would say that would be an implementation error. When you make a
|> >> request for memory, either it can be satisfied or not. The
|> >> library should not fall apart in trying to figure out whether it
|> >> can satisfy the request.
|> >Then I guess I should send in a bug report to Sun:-). The following
|> >program core dumps when compiled with Sun CC 4.1 (and all other
|> >compilers I could get my hands on):
|> >
|> > #include <iostream.h>
|> > #include <stddef.h>
|> >
|> > struct C { char a[ 4 ] ; } ;
|> >
|> > int
|> > main()
|> > {
|> >     size_t x = 0x20000000 ;
|> >     try
|> >     {
|> >         C* p = new C[ 2 * x + 1 ] ;
|> >         p[ x ].a[ 0 ] = 0 ;
|> >     } catch ( ... )
|> >     {
|> >         cerr << "Not enough memory" << endl ;
|> >     }
|> >     return 0 ;
|> > }
|> >
|> >I'd be surprised if there are many compilers which get this right.
|> >Obviously, the constants should be adjusted. `x' should be assigned
|> >(max(size_t)+1)/4. Calculated by hand, of course, since max(size_t)+1
|> >is very likely to give 0 if you let the compiler do it. (Anyone out
|> >there need a compiler tester. I'm looking for a job:-).)
|> Did you check and see whether your attempt to write into the array
|> caused the core dump, or new itself? I'd suspect the latter.
Of course, the latter. So what? I asked the implementation to give me
0x40000001 C's. By not throwing an exception, it told me it had done
so. So I access one right in the middle. Where is the error in my
code?
The only possible error I can see in the above code is that overflow
during the evaluation of the new expression caused undefined behavior.
As far as I can see, the only time a program is allowed to core dump is
if there is undefined behavior. So where is the undefined behavior
here? Justify by citing a recent draft of the working papers (or the C
standard or the ARM, for that matter). I'm not saying that it shouldn't
be undefined, simply that the current wording doesn't make it so.
As was pointed out to me in email, and as I mentioned in another
posting, there is not even any overflow in this example. size_t is
required to be an unsigned type, and unsigned arithmetic is defined. On
a 32 bit machine, the result of multiplying 0x40000001 by sizeof(C)
(==4) is 4. The standard probably needs some words to the effect that
the expression "new T[n]" is undefined if
n > numeric_limits< size_t >::max() / sizeof(T).
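In the meantime, a careful user can make the check himself before
calling new[]. A sketch (the function name is my own invention, and
returning a null pointer on overflow is just one possible convention;
one could equally well throw):

    #include <stddef.h>

    struct C { char a[ 4 ] ; } ;

    C* checked_new_C( size_t n )
    {
        //  refuse any count whose size in bytes is not representable
        //  in a size_t; otherwise the implicit multiplication by
        //  sizeof( C ) inside the new expression would wrap around
        if ( n > (size_t)( -1 ) / sizeof( C ) )
            return 0 ;
        return new C[ n ] ;
    }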
I think that one might also say that the above program has violated an
implementation limit, in that it has attempted to define an object (a
C[...]) larger than the maximum size allowed for an object. Unless I'm
mistaken, however, violating an implementation limit requires a
diagnostic. I think that this was the case in C. Looking at the
paragraphs in intro.compliance in the draft standard, I get the
impression that violating an implementation limit in C++ is undefined
behavior. I don't think that this is generally acceptable; if the
implementation only supports a maximum of 257 cases in a switch, and I
have 258, I want a diagnostic, and not, e.g., the compiler just throwing
some of the cases away and choosing another one as the default (a behavior
that I have already experienced).
--
James Kanze Tel.: (+33) 88 14 49 00 email: kanze@gabi-soft.fr
GABI Software, Sarl., 8 rue des Francs-Bourgeois, F-67000 Strasbourg, France
Consulting, design, and development in object-oriented software --
-- Seeking work in a French-speaking region
Author: clamage@taumet.eng.sun.com (Steve Clamage)
Date: 1996/08/30
In article 96Aug30210403@slsvhrt.lts.sel.alcatel.de, kanze@lts.sel.alcatel.de (James Kanze US/ESC 60/3/141 #40763) writes:
>In article <199608300840.KAA00419@mwt616.at.mdv.de> Andreas Krueger
><andreas.krueger@it-mannesmann.de> writes:
>
>|> This is, imho, inconsistent with the general philosophy of
>|> the language. When converting unsigned to signed, on signed
>|> "out of bounds", an implementation may generate any value it
>|> pleases, but: "The show must go on!"
>
>There has been some discussion of this in comp.std.c. I had always
>thought that overflow was undefined behavior in C, but the gentlemen
>there convinced me that I was wrong, and that, in fact, the standard
>requires pretty much what you are asking for.
I wonder. The section on conversions says if you convert a value to a
signed type which cannot represent that value, the results are
implementation-defined. The section on expressions says that if an
exception (mathematically undefined or not representable by the type)
occurs, the results are undefined. In the latter case, "undefined"
means anything can happen and the implementation doesn't have to
document it.
>|> In my opinion, there should be a rule in the standard which
>|> says something to this effect:
>
>|> "When you do signed integral arithmetic and it overflows,
>|> all bets are off regarding the result value, but the program
>|> will continue to work."
>
>This is quasi-impossible.
You could say that overflow is allowed in signed arithmetic, and
the results are implementation-defined. But I agree with James
that once your result is known to be garbage, it seems silly
to REQUIRE that the program plunge ahead regardless.
Anecdote: Many years ago an Algol compiler was installed at a site where
I worked, and the Fortran programmers in one group were required to
switch to Algol. One programmer in particular didn't want to change, and
gleefully showed how the Algol implementation was broken because his
Fortran programs failed with runtime errors when converted to Algol.
It turned out that his Fortran programs created floating-point overflow,
which a Fortran compiler was forbidden to diagnose, but which an Algol
compiler was required to diagnose. He had been getting invalid results
all along without knowing it.
>On the other hand, I do have a very pertinent standards related
>question: in an expression of the form "new T[ n ]", suppose that "n" is
>a valid value for size_t. Can this expression result in undefined
>behavior? (One would hope not, but I'm willing to bet that the first
>thing most implementations do is multiply n by sizeof( T ). Without
>checking for overflow:-).)
I would say that would be an implementation error. When you make a
request for memory, either it can be satisfied or not. The
library should not fall apart in trying to figure out whether it
can satisfy the request.
---
Steve Clamage, stephen.clamage@eng.sun.com
Author: kanze@gabi-soft.fr (J. Kanze)
Date: 1996/09/02
clamage@taumet.eng.sun.com (Steve Clamage) writes:
> In article 96Aug30210403@slsvhrt.lts.sel.alcatel.de, kanze@lts.sel.alcatel.de (James Kanze US/ESC 60/3/141 #40763) writes:
> >In article <199608300840.KAA00419@mwt616.at.mdv.de> Andreas Krueger
> ><andreas.krueger@it-mannesmann.de> writes:
> >
> >|> This is, imho, inconsistent with the general philosophy of
> >|> the language. When converting unsigned to signed, on signed
> >|> "out of bounds", an implementation may generate any value it
> >|> pleases, but: "The show must go on!"
> >
> >There has been some discussion of this in comp.std.c. I had always
> >thought that overflow was undefined behavior in C, but the gentlemen
> >there convinced me that I was wrong, and that, in fact, the standard
> >requires pretty much what you are asking for.
>
> I wonder. The section on conversions says if you convert a value to a
> signed type which cannot represent that value, the results are
> implementation-defined. The section on expressions says that if an
> exception (mathematically undefined or not representable by the type)
> occurs, the results are undefined. In the latter case, "undefined"
> means anything can happen and the implementation doesn't have to
> document it.
Correct. I was being careless in describing what had been discussed in
comp.std.c. I think that it is pretty well agreed that overflow during
an arithmetic operation on signed integral values results in undefined
behavior. Prior to that discussion, it had been my (unfounded)
opinion that this was also true when converting an unsigned integral
type to signed. The argument was that the C standard says "the result
is implementation defined", with emphasis on the word result. Thus, for
example, according to the experts in comp.std.c, there must be a result;
i.e.: the program cannot core dump, for example, in such cases.
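To illustrate the distinction (my own sketch; the function name and the
assumption of a 32 bit int are mine):

    #include <limits.h>

    void demo()
    {
        unsigned int u = 0xFFFFFFFF ;
        int i = u ;     //  conversion: the value doesn't fit in a
                        //  32 bit int, so the result is implementation-
                        //  defined -- but there must *be* a result,
                        //  and execution continues
        int j = INT_MAX ;
        ++j ;           //  arithmetic: signed overflow is undefined
                        //  behavior -- anything may happen, including
                        //  a trap
    }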
> >On the other hand, I do have a very pertinent standards related
> >question: in an expression of the form "new T[ n ]", suppose that "n" is
> >a valid value for size_t. Can this expression result in undefined
> >behavior? (One would hope not, but I'm willing to bet that the first
> >thing most implementations do is multiply n by sizeof( T ). Without
> >checking for overflow:-).)
>
> I would say that would be an implementation error. When you make a
> request for memory, either it can be satisfied or not. The
> library should not fall apart in trying to figure out whether it
> can satisfy the request.
Then I guess I should send in a bug report to Sun:-). The following
program core dumps when compiled with Sun CC 4.1 (and all other
compilers I could get my hands on):
    #include <iostream.h>
    #include <stddef.h>

    struct C { char a[ 4 ] ; } ;

    int
    main()
    {
        size_t x = 0x20000000 ;
        try
        {
            C* p = new C[ 2 * x + 1 ] ;
            p[ x ].a[ 0 ] = 0 ;
        } catch ( ... )
        {
            cerr << "Not enough memory" << endl ;
        }
        return 0 ;
    }
I'd be surprised if there are many compilers which get this right.
Obviously, the constants should be adjusted. `x' should be assigned
(max(size_t)+1)/4. Calculated by hand, of course, since max(size_t)+1
is very likely to give 0 if you let the compiler do it. (Anyone out
there need a compiler tester. I'm looking for a job:-).)
Note that the equivalent program in C, using malloc and explicitly
multiplying by sizeof( C ), definitely invokes undefined behavior.
--
James Kanze (+33) 88 14 49 00 email: kanze@gabi-soft.fr
GABI Software, Sarl., 8 rue des Francs Bourgeois, 67000 Strasbourg, France
Consulting in industrial computing --
-- Consulting in industrial data processing
Author: kanze@gabi-soft.fr (J. Kanze)
Date: 1996/09/03
I have received the following interesting response to my posting from
William Aitken, with permission to post it. I've added my comments, as
if it had appeared normally in the group.
William E. Aitken writes:
> In article <m320glnn93.fsf@gabi-soft.fr> you write:
> >Then I guess I should send in a bug report to Sun:-). The following
> >program core dumps when compiled with Sun CC 4.1 (and all other
> >compilers I could get my hands on):
> >
> > #include <iostream.h>
> > #include <stddef.h>
> >
> > struct C { char a[ 4 ] ; } ;
> >
> > int
> > main()
> > {
> >     size_t x = 0x20000000 ;
> >     try
> >     {
> >         C* p = new C[ 2 * x + 1 ] ;
> >         p[ x ].a[ 0 ] = 0 ;
> >     } catch ( ... )
> >     {
> >         cerr << "Not enough memory" << endl ;
> >     }
> >     return 0 ;
> > }
> >
> >I'd be surprised if there are many compilers which get this right.
> >Obviously, the constants should be adjusted. `x' should be assigned
> >(max(size_t)+1)/4. Calculated by hand, of course, since max(size_t)+1
> >is very likely to give 0 if you let the compiler do it. (Anyone out
> >there need a compiler tester. I'm looking for a job:-).)
> >
> >Note that the equivalent program in C, using malloc and explicitly
> >multiplying by sizeof( C ), definitely invokes undefined behavior.
> Where are you getting the undefined behavior from? In the C
> program where you do a malloc(sizeof(C) * (2 * x + 1)) there doesn't
> seem to be any. Since x is unsigned, and 2 is an int, 2 * x is
> unsigned. Then, since 1 is an int, 2 * x + 1 is unsigned. sizeof(C)
> is of type size_t, which is required to be an unsigned type. Unless
> size_t is unsigned long, the multiplication
>
> sizeof (C) * (2 * x + 1)
>
> is unsigned int multiply. If it is unsigned
> long, the multiplication is an unsigned long multiply.
> Either way, there is no opportunity for overflow,
> since unsigned arithmetic is defined to wrap around
> rather than overflow.
Correct so far. The malloc is "defined" to allocate 4 bytes. Which
means that accessing a byte 0x80000000 beyond the returned address, as
in the following expression, is undefined behavior. (You're right,
though, in that I had overlooked the fact that size_t must be unsigned,
and so the arithmetic itself is defined. In this case, not usefully
defined, but that can be considered the programmer's fault.)
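(Concretely, on a 32 bit machine with a 32 bit size_t, and using the
struct C from the program above:)

    size_t x = 0x20000000 ;
    size_t n = 2 * x + 1 ;              //  0x40000001: defined, since
                                        //  unsigned arithmetic is
                                        //  modulo 2^32
    size_t bytes = sizeof( C ) * n ;    //  4 * 0x40000001 == 0x100000004,
                                        //  reduced mod 2^32: bytes == 4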
Another way to look at it: we both agree that the above program (at
least when modified to use malloc) will core dump, without a compiler
diagnostic. A core dump is not defined in the C/C++ standard, and so
can only be a result of undefined behavior (or a compiler error).
> About the only way to get undefined behavior like this
> in C is when size_t is unsigned int, sizeof(long) > sizeof(int),
> and you multiply the sizeof C by a long. I would argue that
> such a design is somewhat perverse, but it is also probably rather common.
Agreed. In most 16 bit 80x86 compilers, size_t is unsigned int, so if I
modify the above, changing the type of x to long instead of size_t, we
get undefined behavior in the malloc expression.
> Of course, the absence of undefined behavior doesn't make the
> program any less buggy, but at least it's buggy in well defined ways.
It's well defined that it has undefined behavior. Just not where I
expected it. There are, from a standards point of view, only two types
of errors: undefined behavior, and a diagnosable error. There is no
diagnosable error; if there were no undefined behavior, then the program
would have to terminate with a successful return code (which is the only
visible behavior in this case). It won't.
> C++ has two options. Either we can make behavior undefined if
> new[] is used to allocate an array with too many elements --- that is,
> we can codify the existing state of the art. Or we can require that
> implementations check for this condition and handle it as an
> allocation failure. Unfortunately, I am of the opinion that
> there is currently a hole in the standard here. All the text concerned
> with failure of allocation is concerned with possible failures of
> the allocation function itself, not failures that precede its call.
> I would agree that high quality implementations shouldn't silently
> allocate random amounts of memory, but am not convinced that doing
> so clearly violates the standard.
I agree. Although I generally push for more safety, in this case, I
would have no trouble accepting undefined behavior. It's worth noting
that *IF* the standard insists on an allocation failure (which is what a
strict reading of the current wording would probably require), then the
generated code practically has to call ::operator new() with an
outrageously large value if it detects overflow due to the
multiplication by sizeof(T), since the user may have replaced the new
handler (say to throw an exception derived from bad_alloc) or ::operator
new itself. (IMHO: the only valid alternative to undefined behavior is
to say that in such cases, the implementation will call ::operator new
with the value (size_t)( -1 ), and that all attempts to actually
acquire that many bytes must fail.)
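Spelled out, the code the compiler would have to emit for "new T[ n ]"
under that reading looks roughly like the following pseudo-source (my
own sketch; I'm not claiming that any existing implementation does
this):

    size_t bytes ;
    if ( n > (size_t)( -1 ) / sizeof( T ) )
        bytes = (size_t)( -1 ) ;    //  saturate: a request this large
                                    //  must always fail...
    else
        bytes = n * sizeof( T ) ;
    void* raw = ::operator new( bytes ) ;
                                    //  ...but it must still go through
                                    //  ::operator new, since the user
                                    //  may have replaced it or
                                    //  installed a new handler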
It's also worth noting that although the standard doesn't say so, an
attempt to allocate something like "C a[ 0x20000000 ]" as a local
variable, for example, will also result in undefined behavior (a core
dump in Unix, but possibly overwriting arbitrary data on a system
without memory protection) in all implementations I know of. Offhand, I can think of
no words in either the C standard or the C++ standard which allow an
application to take such liberties. In fact, unless I've missed
something, taken literally, both of these standards require an
implementation to support infinite recursion.
I'm not sure in fact how this should be treated in the standard. One
could add language to the clause which defines a function call in the
expressions chapter, stating that a function call implicitly uses
resources, and that if the function call would require more resources
than available, the implementation is free to generate undefined
behavior. The problem with this is that no program which contains a
function call could then be strictly conforming, which I don't find
particularly satisfactory.
I think that we are approaching the point of diminishing returns. We
know that, in fact, there are no conforming implementations, since all
implementations contain at least one error which violates the standard.
And as Steve Clamage has pointed out in the past, a standard cannot
mandate that the implementation actually be useful. Perhaps we need the
concept of a "usefully conforming" implementation, except that there is
no way to determine if a given compiler meets the criteria.
> It is (somewhat) interesting to note that the naive implementation
> of this check (compare the requested number of elements with a
> constant determined at compile time by dividing the maximum size_t
> value by the size of the element type) makes a check that the
> requested number of elements be positive free (except when the
> element size is 1). This is also true (I think) of implementations that
> do the multiplication first and then check for overflow. So
> if we are going to make it explicit that compilers must detect this case, we
> should also require them to detect negative array bounds, rather than
> making them triggers for undefined behavior.
Good point. Consider, however (on a 32 bit machine):
    new char[ -2147483648 ] ;
    new char[ 0x80000000u ] ;
I think that it is actually possible for me to configure my machine to
support arrays this big, provided I have the disk space available. And
it's only 2 Gigabytes, not a lot for modern systems.
(I'm not sure what the above example should be proving. But I find it
interesting. And it may cause problems with your proposal about
checking for negative; the compiler should only generate the check if
the expression has a signed type. Or maybe not even then. ::operator
new takes a size_t, and the conversion of -2147483648 to size_t is well
defined in this case.)
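(To spell that last point out, again assuming a 32 bit machine and the
struct C from above -- my own illustration:)

    int n = -1 ;
    size_t count = (size_t)n ;      //  well defined: 0xFFFFFFFF on a
                                    //  32 bit machine
    bool too_big = count > (size_t)( -1 ) / sizeof( C ) ;
                                    //  true: the same comparison that
                                    //  guards against overflow also
                                    //  rejects the negative bound, with
                                    //  no separate check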
> Please feel free to distribute this reply to others or to the group.
> The only reason I am responding by mail is that the news setup here makes
> following up to moderated newsgroups too difficult, and because
> I am not really certain that this discussion is of truly general interest
> to the comp.std.c++ community.
>
> --- Bill.
> --
> William E. Aitken | Formal verification is the
> email: aitken@halcyon.com | future of computer science ---
> Snail: 8500 148th Ave NE #H1026 Redmond WA | Always has been, always will be.
> ===============================================================================
>
--
James Kanze (+33) 88 14 49 00 email: kanze@gabi-soft.fr
GABI Software, Sarl., 8 rue des Francs Bourgeois, 67000 Strasbourg, France
Consulting in industrial computing --
-- Consulting in industrial data processing