Topic: why'd they skip short
Author: "Stephen Howe" <SPAMGUARDsjhowe@dial.pipex.co.uk>
Date: Tue, 19 Dec 2000 17:52:43 GMT
"Greg Brewer" <nospam.greg@brewer.net> wrote in message
news:913ku2$2c6t$1@news.hal-pc.org...
> I would love to have 7S to go with 7, 7L, 7F, and 7LL. I would also like to
> know why short was left out of that notation!
I have a problem with these.
If we have
long t;
:
cout << t;
and t is changed from long to short, we can simply recompile; we do not have
to worry about the type, the compiler will figure it out. In contrast, if
that was
printf("t= %ld\n", t);
we would have to march all around our code changing %ld to %hd. That is one
of the benefits of C++ streams.
Now consider lines of code that are
t += 7L;
With such a change, we similarly have to remove the L if the context demands
it, to be consistent. To my mind, these suffixes are as bad as printf()
notation.
Stephen Howe
---
[ comp.std.c++ is moderated. To submit articles, try just posting with ]
[ your news-reader. If that fails, use mailto:std-c++@ncar.ucar.edu ]
[ --- Please see the FAQ before posting. --- ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html ]
[ Note that the FAQ URL has changed! Please update your bookmarks. ]
Author: "Greg Brewer" <nospam.greg@brewer.net>
Date: Tue, 19 Dec 2000 18:15:21 GMT
"Stephen Howe" <SPAMGUARDsjhowe@dial.pipex.co.uk> wrote in message
news:91mj9v$sat$1@lure.pipex.net...
>
> "Greg Brewer" <nospam.greg@brewer.net> wrote in message
> news:913ku2$2c6t$1@news.hal-pc.org...
>
> > I would love to have 7S to go with 7, 7L, 7F, and 7LL. I would also like to
> > know why short was left out of that notation!
> I have a problem with these.
> Now consider lines of code that are
>
> t += 7L;
>
> With such a change, we similarly have to remove the L if the context
> demands it, to be consistent. To my mind, these suffixes are as bad as printf()
> notation.
I agree and disagree. I agree that this is subject to miscoding. However,
I disagree in that when properly used, it provides the compiler with useful
information in short-hand form. Consider the following,
long days35 = 35*24*60*60;
in 32-bit code, this provides the number of seconds in a 35-day period. In
16-bit code, though, the intermediate arithmetic overflows the 16-bit int
and days35 gets some other value. A quick change to
long days35 = 35L*24L*60L*60L;
has no problem in either 32-bit or 16-bit compilation. Even when your
example is taken into consideration, changing the type of t from long to
short results in a compiler warning that you do not get when using printf.
Moreover, the answer you get is ultimately correct. However, when written
as simply
t += 7;
you still get a compiler warning when t is short and you have a 32-bit
environment. However,
t += 7S;
will get you a correct result and no warning for all types of t above char
in both 16 bit and 32 bit compiles.
Greg
---
Author: James.Kanze@dresdner-bank.com
Date: Wed, 13 Dec 2000 16:23:46 GMT
In article <newscache$kdgg5g$pue$1@firewall.thermoteknix.co.uk>,
"Ken Hagan" <K.Hagan@thermoteknix.co.uk> wrote:
> Secondly, there are folks out there who think the current behaviour
> for floating point types is an abomination. Consider multiplying two
> floats together (on an IEEE machine).
> // assume f1 and f2 are floats
> float f = f1*f2;
> double d = f1*f2;
> In the above case, "d" is guaranteed to be accurate, since doubles
> have more than twice the precision, and much larger range. At least,
> it used to be. Then someone decided that the orthogonality of the
> type system was more important than fitness for purpose, and decreed
> that henceforth "f1*f2" would be *evaluated* at float precision and
> then extended to double for the assignment. Not only does this allow
> inaccurate results and overflows which did not previously occur, but
> on certain popular CPUs (x86 and 68k spring to mind) it incurs such
> a huge performance penalty that no vendor actually does it.
It's not required, as far as I know. Whether an implementation
actually supports float arithmetic or not is implementation defined.
The reason for allowing the behavior is that on certain
implementations, float arithmetic IS significantly faster than double.
> A similar argument can be deployed for the case of two shorts on
> a 32-bit machine.
> // assume s1 and s2 are shorts and int is twice as wide
> int i = s1*s2;
> Can this overflow?
No, but adding two ints and assigning the result to a long can, even with
16 bit ints and 32 bit longs.
--
James Kanze mailto:kanze@gabi-soft.de
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
Ziegelhüttenweg 17a, 60598 Frankfurt, Germany Tel. +49(069)63198627
Sent via Deja.com
http://www.deja.com/
---
Author: James.Kanze@dresdner-bank.com
Date: Wed, 13 Dec 2000 17:12:55 GMT
In article <91364v$ark$1@mach.thp.univie.ac.at>,
jthorn@galileo.thp.univie.ac.at (Jonathan Thornburg) wrote:
> >The
> >PDP-11 (and others) couldn't do single precision floating point,
> >either
> Didn't the original pdp-11 floating point instruction sets have both
> single and double precision? I thought I recalled seeing both in a
> 11/{5,10,35,40}-vintage pdp-11 processor handbook...
I seem to remember (but I could be confusing it with another machine,
it's been so long ago) that the PDP-11 used a stack based floating
point unit, with only double registers, somewhat like the current
Intel FP. There was a load and a store for floats, but the internal
calculation was always in double. And that the only way to force it
to float was to store and reload after each operation.
--
James Kanze mailto:kanze@gabi-soft.de
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
Ziegelhüttenweg 17a, 60598 Frankfurt, Germany Tel. +49(069)63198627
Sent via Deja.com
http://www.deja.com/
---
Author: "Ken Hagan" <K.Hagan@thermoteknix.co.uk>
Date: Thu, 14 Dec 2000 14:06:45 GMT
Regarding the evaluation of float expressions at float precision...
<James.Kanze@dresdner-bank.com> wrote...
>
> It's not required, as far as I know. Whether an implementation
> actually supports float arithmetic or not is implementation defined.
> The reason for allowing the behavior is that on certain
> implementations, float arithmetic IS significantly faster than double.
The nearest thing to a standard I can lay my hands on this evening is
the C9X draft rationale. It says (6.3.1.8.2)
"The values of floating operands and of the results of floating
expressions may be represented in greater precision and range
than that required by the type; the types are not changed
thereby."
which I take to indicate that you are right. (nibbles humble pie)
Still, this may not be good news for folks trying to write portable
floating point code. If I write
double d = 2.0 * float(f1*f2);
then does the cast force the reduced precision, under the as-if rule,
or is the compiler allowed to ignore it, under the above clause?
---
Author: James.Kanze@dresdner-bank.com
Date: Thu, 14 Dec 2000 18:46:20 GMT
In article <newscache$k9pi5g$an8$1@firewall.thermoteknix.co.uk>,
"Ken Hagan" <K.Hagan@thermoteknix.co.uk> wrote:
> Regarding the evaluation of float expressions at float precision...
> <James.Kanze@dresdner-bank.com> wrote...
> > It's not required, as far as I know. Whether an implementation
> > actually supports float arithmetic or not is implementation
> > defined. The reason for allowing the behavior is that on certain
> > implementations, float arithmetic IS significantly faster than
> > double.
> The nearest thing to a standard I can lay my hands on this evening
> is the C9X draft rationale. It says (6.3.1.8.2)
> "The values of floating operands and of the results of floating
> expressions may be represented in greater precision and range
> than that required by the type; the types are not changed
> thereby."
> which I take to indicate that you are right. (nibbles humble pie)
> Still, this may not be good news for folks trying to write portable
> floating point code. If I write
> double d = 2.0 * float(f1*f2);
> then does the cast force the reduced precision, under the as-if
> rule, or is the compiler allowed to ignore it, under the above
> clause?
I'm not sure in the case of a cast. There may be a rule somewhere
that says that casting must perform as if the value were assigned to
the corresponding type. (The wording for static_cast in the C++
standard says this.)
But standard C/C++ don't give many guarantees with regards to floating
point anyway. In particular, what the paragraph you quote says also
gives the implementation the right to use larger precision than
double. So:
double d = 1.2 ;
d *= 2.0 ;
assert( d == 2.0 * 1.2 ) ; // May fail!!!
The reason, as usual, is performance. (Note that Java originally
required exact results, but then relaxed the requirement, because one
of the processors where this makes a real performance difference is
the Intel.)
The problem is further complicated by the fact that using larger
intermediate results may actually improve the numeric results of the
naïve user.
--
James Kanze mailto:kanze@gabi-soft.de
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
Ziegelhüttenweg 17a, 60598 Frankfurt, Germany Tel. +49(069)63198627
Sent via Deja.com
http://www.deja.com/
---
Author: Christian Bau <christian.bau@isltd.insignia.com>
Date: Fri, 15 Dec 2000 16:36:50 GMT
Ken Hagan wrote:
>
> The nearest thing to a standard I can lay my hands on this evening is
> the C9X draft rationale. It says (6.3.1.8.2)
>
> "The values of floating operands and of the results of floating
> expressions may be represented in greater precision and range
> than that required by the type; the types are not changed
> thereby."
>
> which I take to indicate that you are right. (nibbles humble pie)
> Still, this may not be good news for folks trying to write portable
> floating point code. If I write
>
> double d = 2.0 * float(f1*f2);
>
> then does the cast force the reduced precision, under the as-if rule,
> or is the compiler allowed to ignore it, under the above clause?
I think there are explicit rules for cast and assignment. If f1 and f2
are float, then the result of f1*f2 could have even higher precision
than double, but the cast MUST change it back to float with no extra
precision or range.
---
Author: James.Kanze@dresdner-bank.com
Date: Mon, 11 Dec 2000 17:43:07 GMT
In article <976316263.26113.6.nnrp-14.d4e5bde1@news.demon.co.uk>,
"Mike Dimmick" <mike@dimmick.demon.co.uk> wrote:
> "Greg Brewer" <nospam.greg@brewer.net> wrote in message
> news:90oh6v$1c5u$1@news.hal-pc.org...
> > I was reading Randy Meyers' column in the Dec CUJ on "The New
> > C". I'm figuring many of the issues addressed here need to be
> > addressed in C++. Near the end of the article, he talks about
> > constants and says "a decimal integer constant ... has the first
> > type from this list that can represent its values: int, long, or
> > long long."
> > I'm curious why short was omitted. I constantly get warnings
> > because my shorts get promoted to ints too easily. Take the code,
> > short i = 0;
> > i = i + 7; // warning
> > If seven were a short, there wouldn't be a problem.
> The current promotion rules are quite well ingrained, and the rule
> is that anything shorter than an int gets promoted to an int when
> any arithmetic operation is performed. I believe the aim was to
> perform all operations (that might possibly overflow) at maximum
> precision without requiring the user to do anything. However, the
> thinking is somewhat flawed, as two operands the same size can cause
> an overflow.
The *aim* was simply to correspond to what machines actually did.
Most machines at the time C was being developed couldn't do short
arithmetic. So the "feature" was ingrained into the language. The
PDP-11 (and others) couldn't do single precision floating point,
either, so the original C compiler automatically promoted everything
to double, too. In the case of integral arithmetic, this is dubious
reasoning, since the results of doing integer arithmetic on int or
short will generally be the same if the results are assigned to a
short.
In practice, I'd say that a C compiler which warned about implicit
narrowing conversions would be close to unusable, since the standard C
idiom for character input typically involves assigning an int to a
char. In order to be in any way useful, the warning must track the
expression enough to eliminate cases where the sources were actually
narrow, were small constants, or derived from getc et al. after a
comparison to EOF.
> However, we now have processor architectures that can perform
> multiple operations on shorter-than-a-register data in parallel (I'm
> thinking specifically of Intel's MMX instructions, but I'm sure
> there are many other examples). So perhaps it would be better to
> have the integer conversions the same as the floating ones; all
> operations are performed at the same precision, which is the same as
> that of the longest operand, or that of the target, if there is one.
> The compiler should emit a warning for a narrowing conversion.
The problem isn't changed, since 7 has type int, and 1.2 has type
double. You also need some way of specifying that you want a short
7.
The alternative is to define some sort of rules for determining the
type of 7 according to context.
--
James Kanze mailto:kanze@gabi-soft.de
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
Ziegelhüttenweg 17a, 60598 Frankfurt, Germany Tel. +49(069)63198627
Sent via Deja.com http://www.deja.com/
---
Author: Jack Klein <jackklein@spamcop.net>
Date: Tue, 12 Dec 2000 14:33:52 GMT
On Fri, 8 Dec 2000 14:55:55 GMT, "Greg Brewer"
<nospam.greg@brewer.net> wrote in comp.std.c++:
> I was reading Randy Meyers' column in the Dec CUJ on "The New C". I'm
> figuring many of the issues addressed here need to be addressed in C++.
> Near the end of the article, he talks about constants and says "a decimal
> integer constant ... has the first type from this list that can represent
> its values: int, long, or long long."
>
> I'm curious why short was omitted. I constantly get warnings because my
> shorts get promoted to ints too easily. Take the code,
> short i = 0;
> i = i + 7; // warning
> If seven were a short, there wouldn't be a problem.
>
> Any insights into this?
>
> Greg
That would not help. You could create a short like this:
i = i + (short)7; /* or short(7) */
But in C and C++ no expression ever operates on anything shorter than
an int. Consider:
short a = 1, b = 2, c;
c = a + b;
There are no ints as the expression is written, but a and b are both
widened to int, these two ints are added to produce an int result, and
this int value is then converted to a short and assigned to c. If the
result of the addition is outside the range that can be represented in
a short the result is implementation-defined.
That sort of overflow is not possible using the values 1 and 2, but on
an implementation where int has a larger range of values than short:
#include <climits>
/* ... */
short a = SHRT_MAX, b = SHRT_MAX, c;
c = a + b;
This is not undefined behavior, but the value of c is
implementation-defined and so might vary from one compiler to another.
Jack Klein
--
Home: http://jackklein.home.att.net
---
Author: jthorn@galileo.thp.univie.ac.at (Jonathan Thornburg)
Date: Tue, 12 Dec 2000 14:43:17 GMT
In article <912k1d$6a1$1@nnrp1.deja.com>,
<James.Kanze@dresdner-bank.com> wrote:
>Most machines at the time C was being developed couldn't do short
>arithmetic. So the "feature" was ingrained into the language.
In article <kusn0lINNh74@spim.mips.com>
(comp.lang.c, comp.std.c, and comp.arch, 17 April 1992),
John Mashey <mash@sgi.com> dated the introduction of `long':
# 1) long was introduced during late 1975 / early 1976. [My Sixth Edition
# manual, May 1975, has no trace of long ... except for inserted manual
# pages gotten off the research machine, labeled 4/25/76 that show long used
# for tell(II), times(II), etc. I don't remember exactly when Dennis put
# long in, but 1Q76 seems about right.
Didn't `short' appear at the same time as `long'?
John Mashey went on to say
# 2) Recall the world of that time: UNIX basically ran on PDP-11s, with up
# to 248KB of memory ... sometimes supporting as many as 24 users.
# We mostly had 11/45s; we got our first 11/70 in 2Q76. BTL PY had one of
# the *larger* collections of UNIX systems in one place - we had 2 11/45s
# and 1 11/70 and at that time, it was one of the very few places where you
# could get access to a UNIX machine unless you begged time on research,
# or were working on a project to be delivered on a UNIX machine.
This suggests that `short' arithmetic would have been well-supported
by typical C hardware in that time frame, since on a pdp-11, short = int
typically.
>The
>PDP-11 (and others) couldn't do single precision floating point,
>either
Didn't the original pdp-11 floating point instruction sets have
both single and double precision? I thought I recalled seeing both
in a 11/{5,10,35,40}-vintage pdp-11 processor handbook...
--
-- Jonathan Thornburg <jthorn@thp.univie.ac.at>
http://www.thp.univie.ac.at/~jthorn/home.html
Universitaet Wien (Vienna, Austria) / Institut fuer Theoretische Physik
"There's no such thing as a simple cache bug." --Rob Pike
---
Author: James Dennett <james@evtechnology.com>
Date: Tue, 12 Dec 2000 14:43:39 GMT
James.Kanze@dresdner-bank.com wrote:
>
> In article <976316263.26113.6.nnrp-14.d4e5bde1@news.demon.co.uk>,
> "Mike Dimmick" <mike@dimmick.demon.co.uk> wrote:
>
> > "Greg Brewer" <nospam.greg@brewer.net> wrote in message
> > news:90oh6v$1c5u$1@news.hal-pc.org...
> > > I was reading Randy Meyers' column in the Dec CUJ on "The New
> > > C". I'm figuring many of the issues addressed here need to be
> > > addressed in C++. Near the end of the article, he talks about
> > > constants and says "a decimal integer constant ... has the first
> > > type from this list that can represent its values: int, long, or
> > > long long."
>
> > > I'm curious why short was omitted. I constantly get warnings
> > > because my shorts get promoted to ints too easily. Take the code,
> > > short i = 0;
> > > i = i + 7; // warning
> > > If seven were a short, there wouldn't be a problem.
>
> > The current promotion rules are quite well ingrained, and the rule
> > is that anything shorter than an int gets promoted to an int when
> > any arithmetic operation is performed. I believe the aim was to
> > perform all operations (that might possibly overflow) at maximum
> > precision without requiring the user to do anything. However, the
> > thinking is somewhat flawed, as two operands the same size can cause
> > an overflow.
>
> The *aim* was simply to correspond to what machines actually did.
> Most machines at the time C was being developed couldn't do short
> arithmetic. So the "feature" was ingrained into the language. The
> PDP-11 (and others) couldn't do single precision floating point,
> either, so the original C compiler automatically promoted everything
> to double, too. In the case of integral arithmetic, this is dubious
> reasoning, since the results of doing integer arithmetic on int or
> short will generally be the same if the results are assigned to a
> short.
>
> In practice, I'd say that a C compiler which warned about implicit
> narrowing conversions would be close to unusable, since the standard C
> idiom for character input typically involves assigning an int to a
> char.
It may be more than luck which led you to say "_close_ to unusable"
there. Such compilers exist, and such warnings are occasionally useful,
e.g., when porting code with assumptions of 32-bit long to a 64-bit
architecture.
IIRC, I came across one compiler which produced a warning for any use
of += on a short, hence forcing us to write such abominations as
s = static_cast<short>(s + t);
instead of
s += t;
in order to comply with (IMO reasonable) coding standards requiring
the elimination of all compiler warnings. Another solution for such
compilers can be to use int, but there was presumably some reason for
choosing short in the first place. (If memory serves, this actually
happened most often when playing about with bytes in network protocols.)
> In order to be in any way useful, the warning must track the
> expression enough to eliminate cases where the sources were actually
> narrow, were small constants, or derived from getc et al. after a
> comparison to EOF.
That would be nice, I agree.
-- James Dennett <jdennett@acm.org>
---
Author: "Greg Brewer" <nospam.greg@brewer.net>
Date: Tue, 12 Dec 2000 14:46:11 GMT
<James.Kanze@dresdner-bank.com> wrote in message
news:912k1d$6a1$1@nnrp1.deja.com...
> In article <976316263.26113.6.nnrp-14.d4e5bde1@news.demon.co.uk>,
> The problem isn't changed, since 7 has type int, and 1.2 has type
> double. You also need some way of specifying that you want a short
> 7.
>
> The alternative is to define some sort of rules for determining the
> type of 7 according to context.
I would love to have 7S to go with 7, 7L, 7F, and 7LL. I would also like to
know why short was left out of that notation!
Greg
---
Author: "Ken Hagan" <K.Hagan@thermoteknix.co.uk>
Date: Tue, 12 Dec 2000 14:47:46 GMT
"Mike Dimmick" <mike@dimmick.demon.co.uk> wrote...
>
> However, we now have processor architectures that can perform
> multiple operations on shorter-than-a-register data in parallel
> (I'm thinking specifically of Intel's MMX instructions, but I'm
> sure there are many other examples). So perhaps it would be
> better to have the integer conversions the same as the floating
> ones; all operations are performed at the same precision, which
> is the same as that of the longest operand, or that of the target,
> if there is one. The compiler should emit a warning for a
> narrowing conversion.
Firstly, you will find that Intel's MMX instructions have strict
alignment requirements on their arguments, and I think it is very
unlikely that a compiler could use these instructions on ordinary
code. If you have extraordinary code, then you already have the
option of declaring a function and writing it in assembly language.
Secondly, there are folks out there who think the current behaviour
for floating point types is an abomination. Consider multiplying
two floats together (on an IEEE machine).
// assume f1 and f2 are floats
float f = f1*f2;
double d = f1*f2;
In the above case, "d" is guaranteed to be accurate, since doubles
have more than twice the precision, and much larger range. At least,
it used to be. Then someone decided that the orthogonality of the
type system was more important than fitness for purpose, and decreed
that henceforth "f1*f2" would be *evaluated* at float precision and
then extended to double for the assignment. Not only does this allow
inaccurate results and overflows which did not previously occur, but
on certain popular CPUs (x86 and 68k spring to mind) it incurs such a
huge performance penalty that no vendor actually does it.
A similar argument can be deployed for the case of two shorts on
a 32-bit machine.
// assume s1 and s2 are shorts and int is twice as wide
int i = s1*s2;
Can this overflow?
---
Author: christian.bau@isltd.insignia.com (Christian Bau)
Date: Tue, 12 Dec 2000 16:36:42 GMT
In article <3A353A3A.E79E7DCE@evtechnology.com>, James Dennett
<james@evtechnology.com> wrote:
> It may be more than luck which led you to say "_close_ to unusable"
> there. Such compilers exist, and such warnings are occasionally useful,
> e.g., when porting code with assumptions of 32-bit long to a 64-bit
> architecture.
>
> IIRC, I came across one compiler which produced a warning for any use
> of += on a short, hence forcing us to write such abominations as
> s = static_cast<short>(s + t);
> instead of
> s += t;
> in order to comply with (IMO reasonable) coding standards requiring
> the elimination of all compiler warnings. Another solution for such
> compilers can be to use int, but there was presumably some reason for
> choosing short in the first place. (If memory serves, this actually
> happened most often when playing about with bytes in network protocols.)
I am in the lucky situation of being the user of such a compiler :-(
I think there is a reasonable argument against such a warning:
On an implementation with 16-bit int, given a declaration int a, b, c; the
assignment a = b + c; would produce no warning. Yet if the mathematical
value of b + c is >= 32768 the addition overflows, which is undefined
behavior; in any case, I will not get the mathematically correct result.
Now if I have a declaration short a, b, c; and an assignment a = b + c;
the set of possible values is the same as with an implementation with 16
bit int, and exactly the same combinations of values will produce
mathematically incorrect result. The only difference is that if int has 17
or more bits then the addition is correct, and the assignment will be
implementation defined behavior. But the result is the same: Incorrect
result in exactly the same cases. So why give a warning in one case and
not in the other case, when exactly the same combination of values produce
results that the programmer (probably) did not want?
Moreover, the problem itself is there whether I use short, int, long int
or long long int: Adding two values of type T and storing the result into
an object of type T might overflow at some point. And finally, the
static_cast that was used to shut up the compiler is of course a bug
waiting to happen: if the environment of the program changes so that the
quantity stored in s can become greater than 32767, and a maintenance
programmer changes the type of s to long or int, failing to remove the
static_cast means a subtle bug is introduced. I wonder if any compilers
are clever enough to give a warning for an assignment <lvalue of type
int> = static_cast<short>(<expression of type int>).
---
Author: "Greg Brewer" <nospam.greg@brewer.net>
Date: Fri, 8 Dec 2000 14:55:55 GMT
I was reading Randy Meyers' column in the Dec CUJ on "The New C". I'm
figuring many of the issues addressed here need to be addressed in C++.
Near the end of the article, he talks about constants and says "a decimal
integer constant ... has the first type from this list that can represent
its values: int, long, or long long."
I'm curious why short was omitted. I constantly get warnings because my
shorts get promoted to ints too easily. Take the code,
short i = 0;
i = i + 7; // warning
If seven were a short, there wouldn't be a problem.
Any insights into this?
Greg
---
Author: "Mike Dimmick" <mike@dimmick.demon.co.uk>
Date: Sun, 10 Dec 2000 00:10:54 GMT
"Greg Brewer" <nospam.greg@brewer.net> wrote in message
news:90oh6v$1c5u$1@news.hal-pc.org...
> I was reading Randy Meyers' column in the Dec CUJ on "The New C". I'm
> figuring many of the issues addressed here need to be addressed in C++.
> Near the end of the article, he talks about constants and says "a decimal
> integer constant ... has the first type from this list that can represent
> its values: int, long, or long long."
>
> I'm curious why short was omitted. I constantly get warnings because my
> shorts get promoted to ints too easily. Take the code,
> short i = 0;
> i = i + 7; // warning
> If seven were a short, there wouldn't be a problem.
The current promotion rules are quite well ingrained, and the rule is that
anything shorter than an int gets promoted to an int when any arithmetic
operation is performed. I believe the aim was to perform all operations
(that might possibly overflow) at maximum precision without requiring the
user to do anything. However, the thinking is somewhat flawed, as two
operands the same size can cause an overflow.
However, we now have processor architectures that can perform multiple
operations on shorter-than-a-register data in parallel (I'm thinking
specifically of Intel's MMX instructions, but I'm sure there are many other
examples). So perhaps it would be better to have the integer conversions
the same as the floating ones; all operations are performed at the same
precision, which is the same as that of the longest operand, or that of the
target, if there is one. The compiler should emit a warning for a narrowing
conversion.
The type of the literal '7' is int. If you'd written '7L', it would be a
long. Be thankful that you don't have to do this merely in order to store
'7' in a long variable.
--
Mike Dimmick
---