Topic: Allowing floating point, string literals, and UDTs for template parameters
Author: kanze@alex.gabi-soft.fr (James Kanze)
Date: Sun, 4 May 2003 21:44:17 +0000 (UTC)
Simon@sfbone.fsnet.co.uk ("Simon F Bone") writes:
[...]
|> > What has been pretty well established, I think, is that it is
|> > easier to get more accurate results, and above all, to know just
|> > how accurate your results are, with powers of two.
|> Well, I think (in essence ;-)) that how accurate you *know* your
|> results are is what determines their accuracy...
Quite my opinion, at any rate. (On the other hand, there seem to be
people who consider that if it doesn't crash...)
|> Anyway, there is little point in my trying to pretend to be an
|> expert on this. The paper I referred to says it all far better
|> than I could.
Sounds like something we have in common. My knowledge of floating
point arithmetic is just sufficient for me to know that I shouldn't be
using it myself.
--
James Kanze mailto:kanze@gabi-soft.fr
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
11 rue de Rambouillet, 78460 Chevreuse, France Tel. +33 1 41 89 80 93
---
Author: kanze@gabi-soft.de (James Kanze)
Date: Wed, 30 Apr 2003 18:45:28 +0000 (UTC)
francis@robinton.demon.co.uk (Francis Glassborow) wrote in message
news:<kf2MzXBU5rr+EwMC@robinton.demon.co.uk>...
> In article <C1yra.1$7k7.2022@news.ecrc.de>, cody <deutronium@web.de>
> writes
> >Can a constant like 1.23456789 be treated differently on different
> >compilers? Is 1.23456789 always the same in binary? Internally,
> >floating point is based on decimal powers, so 1.23456789 should
> >always be the same, no matter which compiler was used, shouldn't it?
> No reason that all compilers will produce exactly the same bit
> pattern. BTW internally coding is based on powers of two.
On some machines. IBM mainframes still use base 16.
And of course, the size of the mantissa field also varies.
--
James Kanze GABI Software mailto:kanze@gabi-soft.fr
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
11 rue de Rambouillet, 78460 Chevreuse, France, Tél. : +33 (0)1 30 23 45 16
---
Author: ron@sensor.com ("Ron Natalie")
Date: Wed, 30 Apr 2003 18:48:57 +0000 (UTC)
""cody"" <deutronium@web.de> wrote in message news:nnNra.1$WE1.698@news.ecrc.de...
> > No reason that all compilers will produce exactly the same bit pattern.
> > BTW internally coding is based on powers of two.
>
>
> i thought a floating point number is stored this way:
>
> mantissa * (10 ^ exponent)?
>
You are wrong in most cases. Most machines use a binary format.
The most common of these is one that is compliant with the IEEE
754 Floating Point standard (just about every processor on the
market these days).
The floating point type is divided into a binary mantissa (which has
an implicit leading 1 that is not stored), a sign bit, and a binary
(base-2) exponent. The exponent is stored "biased" (that is, it's
an unsigned quantity where the most negative value is 0).
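For the curious, here is a minimal sketch of pulling those three fields
out of a double. It assumes the host really does use the 64-bit IEEE 754
binary format described above and that a 64-bit unsigned integer type is
available (unsigned long long, a common extension).

    #include <cstdio>
    #include <cstring>

    int main()
    {
        double d = 6.5;                       // 1.625 * 2^2
        unsigned long long bits;
        std::memcpy(&bits, &d, sizeof bits);  // copy out the raw bit pattern

        unsigned sign = static_cast<unsigned>(bits >> 63);
        unsigned biased_exponent =
            static_cast<unsigned>((bits >> 52) & 0x7FF);          // bias is 1023
        unsigned long long mantissa = bits & 0xFFFFFFFFFFFFFULL;  // implicit 1 not stored

        std::printf("sign=%u exponent=%d mantissa=%#llx\n",
                    sign, static_cast<int>(biased_exponent) - 1023, mantissa);
        return 0;
    }

For 6.5 this prints sign 0, unbiased exponent 2, and the stored fraction
bits for 0.625.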
---
Author: Simon@sfbone.fsnet.co.uk ("Simon F Bone")
Date: Wed, 30 Apr 2003 18:49:01 +0000 (UTC)
"cody" <deutronium@web.de> wrote in message
news:nnNra.1$WE1.698@news.ecrc.de...
> > No reason that all compilers will produce exactly the same bit pattern.
> > BTW internally coding is based on powers of two.
>
>
> i thought a floating point number is stored this way:
>
> mantissa * (10 ^ exponent)?
>
Nope, at least not usually. I think IBM mainframes might use
powers of ten, but that's because they were mostly targeting
users like banks that wanted currency data processed. It isn't
a good idea to use powers of ten anyway.
You should look online for an article "What every computer
scientist should know about floating-point arithmetic", by David
Goldberg. I think it was published by the ACM.
In essence, powers of two are more accurate.
HTH,
Simon Bone.
---
Author: francis@robinton.demon.co.uk (Francis Glassborow)
Date: Wed, 30 Apr 2003 19:04:02 +0000 (UTC)
In article <nnNra.1$WE1.698@news.ecrc.de>, cody <deutronium@web.de>
writes
>> No reason that all compilers will produce exactly the same bit pattern.
>> BTW internally coding is based on powers of two.
>
>
>i thought a floating point number is stored this way:
>
> mantissa * (10 ^ exponent)?
Actually the representation is implementation-defined, but I believe that
most implementations use a power-of-two exponent internally; doing it
otherwise would seem to make internal manipulations harder than
necessary for no real benefit.
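One way to see what a given implementation actually uses, without
guessing, is to ask <limits>. This is just a sketch: radix reports the
base of the exponent (2 on IEEE 754 hardware, 16 on the old IBM
hex-float format) and digits the mantissa width in that base.

    #include <iostream>
    #include <limits>

    int main()
    {
        std::cout << "radix:   " << std::numeric_limits<double>::radix     << '\n'
                  << "digits:  " << std::numeric_limits<double>::digits    << '\n'
                  << "epsilon: " << std::numeric_limits<double>::epsilon() << '\n';
        return 0;
    }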
--
Francis Glassborow ACCU
64 Southfield Rd
Oxford OX4 1PA +44(0)1865 246490
All opinions are mine and do not represent those of any organisation
---
Author: kanze@gabi-soft.de (James Kanze)
Date: Fri, 2 May 2003 14:37:23 +0000 (UTC)
Simon@sfbone.fsnet.co.uk ("Simon F Bone") wrote in message
news:<b8p0fp$89o$1@news6.svr.pol.co.uk>...
> "cody" <deutronium@web.de> wrote in message
> news:nnNra.1$WE1.698@news.ecrc.de...
> > > No reason that all compilers will produce exactly the same bit
> > > pattern. BTW internally coding is based on powers of two.
> > i thought a floating point number is stored this way:
> > mantissa * (10 ^ exponent)?
> Nope, at least not usually. I think IBM mainframes might use powers of
> ten,
Until a couple of years ago, IBM mainframes used powers of 16. (There
is also very good hardware support for BCD, but it typically isn't
accessible from C or C++.) Since then, they have supported both the
traditional base-16 format and IEEE, although I seem to recall that in
our tests at the time, the IEEE format was significantly slower.
> but that's because they were mostly targeting users like banks that
> wanted currency data processed. It isn't a good idea to use powers of
> ten anyway.
> You should look online for an article "What every computer scientist
> should know about floating-point arithmetic", by David Goldberg. I
> think it was published by the ACM.
> In essence, powers of two are more accurate.
In essence, all are 100% accurate for their definition of the
operations:-). The only real problem is that according to their
definition, floating point arithmetic doesn't obey very many of the laws
of real arithmetic -- addition isn't associative, for example.
What has been pretty well established, I think, is that it is easier to
get more accurate results, and above all, to know just how accurate your
results are, with powers of two.
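A minimal sketch of the associativity point, assuming IEEE 754 double
precision: each intermediate sum is rounded to the nearest representable
value, so the grouping changes the result.

    #include <iostream>

    int main()
    {
        double a = 1.0e16, b = -1.0e16, c = 1.0;

        std::cout << (a + b) + c << '\n';  // prints 1: a + b is exactly 0
        std::cout << a + (b + c) << '\n';  // prints 0: b + c rounds back to -1.0e16
        return 0;
    }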
--
James Kanze GABI Software mailto:kanze@gabi-soft.fr
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
11 rue de Rambouillet, 78460 Chevreuse, France, Tél. : +33 (0)1 30 23 45 16
---
Author: Simon@sfbone.fsnet.co.uk ("Simon F Bone")
Date: Fri, 2 May 2003 15:42:10 +0000 (UTC)
James Kanze <kanze@gabi-soft.de> wrote in message
news:d6651fb6.0305020631.413471c0@posting.google.com...
> Simon@sfbone.fsnet.co.uk ("Simon F Bone") wrote in message
> news:<b8p0fp$89o$1@news6.svr.pol.co.uk>...
> > "cody" <deutronium@web.de> wrote in message
> > news:nnNra.1$WE1.698@news.ecrc.de...
> > > > No reason that all compilers will produce exactly the same bit
> > > > pattern. BTW internally coding is based on powers of two.
>
> > > i thought a floating point number is stored this way:
>
> > > mantissa * (10 ^ exponent)?
>
> > Nope, at least not usually. I think IBM mainframes might use powers of
> > ten,
>
> Until a couple of years ago, IBM mainframes used powers of 16. (There
> is also very good hardware support for BCD, but it typically isn't
> accessible from C or C++.) Since then, they support both the
> tranditional, base 16 format and IEEE, although I seem to recall that in
> our tests at the time, the IEEE format was significantly slower.
>
Yeah, I was getting mixed up thinking of BCD support.
Sorry for any confusion.
> > but that's because they were mostly targeting users like banks that
> > wanted currency data processed. It isn't a good idea to use powers of
> > ten anyway.
>
> > You should look online for an article "What every computer scientist
> > should know about floating-point arithmetic", by David Goldberg. I
> > think it was published by the ACM.
>
> > In essence, powers of two are more accurate.
>
> In essence, all are 100% accurate for their definition of the
> operations:-). The only real problem is that according to their
> definition, floating point arithmetic doesn't obey very many of the laws
> of real arithmetic -- addition isn't associative, for example.
>
> What has been pretty well established, I think, is that it is easier to
> get more accurate results, and above all, to know just how accurate your
> results are, with powers of two.
>
Well, I think (in essence ;-)) that how accurate you *know* your results
are is what determines their accuracy...
Anyway, there is little point in my trying to pretend to be an expert on
this. The paper I referred to says it all far better than I could.
Simon Bone
---
Author: anthony.williamsNOSPAM@anthonyw.cjb.net (Anthony Williams)
Date: Tue, 29 Apr 2003 05:54:48 +0000 (UTC)
"terjes."@chello.no (Terje Sletteb=F8) writes:
> template<double v>
> struct test
> {
>     static const double value = v;
> };
>
> The same extensions could also enable new uses for metaprogramming,
> such as compile-time computation with floating point values, or string
> literals, including string manipulation.
>
> None of this requires dynamic initialisation; it may be specified in
> such a way that the data is stored in the data segment.
Floating point values are not compile-time constants and cannot be, as
the precise behaviour of operations on floating point types is not
defined by the standard. Is test<1.0> the same instantiation as
test<1.01> or test<1.0000001> or test<1.00000000000000000001>? On some
platforms these numbers are the same, on some not. It might be
different between the compiler's platform and the target platform.
As for compile-time computation, what is the result of
1.0000000000000001 * 1.0000000000000001? Is it the same as any of the
above?
Allowing compile-time computation with floating point numbers requires
that the compiler can answer these questions. Currently it is possible
for the compiler to store the text representation of floating point
constants in the executable, and generate code for the target platform
that converts these to numbers at runtime. Under such an implementation,
the compiler cannot know the answer to these questions unless it has a
complete emulation layer for the floating point operations on the target
system.
String literals are another matter, and are considerably simpler, since
the semantics of operations on string literals are well-defined.
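To make the instantiation question concrete, here is a small sketch,
assuming IEEE 754 double precision on the host: the last literal below
rounds to exactly 1.0, the others do not, and a representation with more
(or fewer) mantissa digits could group them differently.

    #include <iostream>

    int main()
    {
        std::cout << std::boolalpha
                  << (1.0 == 1.01)                   << '\n'   // false
                  << (1.0 == 1.0000001)              << '\n'   // false
                  << (1.0 == 1.00000000000000000001) << '\n';  // true: rounds to 1.0
        return 0;
    }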
Anthony
--
Anthony Williams
Senior Software Engineer, Beran Instruments Ltd.
Remove NOSPAM when replying, for timely response.
---
Author: deutronium@web.de ("cody")
Date: Tue, 29 Apr 2003 17:19:07 +0000 (UTC)
> Floating point values are not compile-time constants and cannot be, as
> the precise behaviour of operations on floating point types is not
> defined by the standard. Is test<1.0> the same instantiation as
> test<1.01> or test<1.0000001> or test<1.00000000000000000001>? On some
> platforms these numbers are the same, on some not. It might be
> different between the compiler's platform and the target platform.
OK, that would be the reason to disallow floating point expressions as
compile-time expressions. But why generally forbid floating point
initialization constants in header files?
Can a constant like 1.23456789 be treated differently on different
compilers? Is 1.23456789 always the same in binary?
Internally, floating point is based on decimal powers, so 1.23456789
should always be the same, no matter which compiler was used, shouldn't it?
--
cody
Freeware Tools, Games and Humour
http://www.deutronium.de.vu
[noncommercial and no fucking ads]
---
Author: francis@robinton.demon.co.uk (Francis Glassborow)
Date: Tue, 29 Apr 2003 18:08:06 +0000 (UTC)
In article <C1yra.1$7k7.2022@news.ecrc.de>, cody <deutronium@web.de>
writes
>Can a constant like 1.23456789 be treated differently on different
>compilers? Is 1.23456789 always the same in binary?
>Internally, floating point is based on decimal powers, so 1.23456789
>should always be the same, no matter which compiler was used, shouldn't it?
No reason that all compilers will produce exactly the same bit pattern.
BTW internally coding is based on powers of two.
--
Francis Glassborow ACCU
64 Southfield Rd
Oxford OX4 1PA +44(0)1865 246490
All opinions are mine and do not represent those of any organisation
---
Author: deutronium@web.de ("cody")
Date: Wed, 30 Apr 2003 15:51:02 +0000 (UTC)
> No reason that all compilers will produce exactly the same bit pattern.
> BTW internally coding is based on powers of two.
I thought a floating point number is stored this way:
mantissa * (10 ^ exponent)?
--
cody
Freeware Tools, Games and Humour
http://www.deutronium.de.vu
[noncommercial and no fucking ads]
---
Author: "terjes."@chello.no (=?ISO-8859-1?Q?Terje_Sletteb=F8?=)
Date: Mon, 28 Apr 2003 12:43:34 +0000 (UTC)
francis.glassborow@ntlworld.com (Francis Glassborow) wrote in message news:<7TILQ8DgMno+Ew2v@robinton.demon.co.uk>...
>Subject: Re: Why only integral const statics inside a class?
>
> In article <Wlmoa.5381$b71.84095@news4.e.nsc.no>, Espen Ruud Schultz
> <default@nospam.invalid> writes
> >You can argue all you want about why static const integrals should be able
> >to be initialized inside a class. But I haven't seen a single point on why
> >a static const float shouldn't be...
>
> Consistency seems fine, but in fact we have nasty problems concerned
> with dynamic initialisation so we would be unlikely to be able to
> support a general license for in class initialisers for static const
> objects. The question is how to specify a clear unambiguous rule for
> when it is allowed. We have one that seems to cover the useful cases. Do
> you wish to formulate an alternative that covers a wider range? That
> would require justification
There are some cases where it could be useful to be able to have
in-class initialisation of non-integral values; they do tend to rely
on another possible extension, though: template non-type parameters of
floating point, and possibly compound (POD), types. This extension for
template parameters is discussed in "C++ Templates" by
Josuttis/Vandevoorde.
This could allow one to get rid of some cases which now require
macros, or manual code duplication, and I think you'll agree that this
is a good reason. :)
An example is if you have classes like this:
struct A
{
    static const double value;
};
const double A::value = ...;
struct B
{
    static const double value;
};
const double B::value = ...;
Recognising the duplication, you may want to generate the classes from
a template, but, no, you can't, as you can't pass the floating point
constant as a template parameter, or initialise the static double in
the class. If you could, you could have done this:
template<double v>
struct test
{
    static const double value = v;
};
Instead, you now have to do something like the following, to achieve
the same:
#define TEST(name, v)               \
struct name                         \
{                                   \
    static const double value;      \
};                                  \
const double name::value = v;
This is not an academic issue; I have code just like this, and it
ain't pretty, but that's how it has to be done, if you want to be able
to easily construct new such classes.
An alternative is to pass a _reference_ to a double, which is allowed.
That requires the object to have external linkage, though, so you
still end up with some boilerplate code:
extern const double valueA=...;
typedef test<valueA> testA;
In other words, you can't specify the value at the template
instantiation.
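For completeness, here is a sketch of what that reference workaround
looks like end to end; the names ref_test and pi_value are illustrative.
The point is that the double has to be a named object with external
linkage rather than a literal written at the point of instantiation.

    #include <iostream>

    template <const double& v>
    struct ref_test
    {
        static double value() { return v; }
    };

    // The referenced object must have external linkage.
    extern const double pi_value = 3.14159265358979;

    int main()
    {
        typedef ref_test<pi_value> pi_test;   // ref_test<3.14> would be ill-formed
        std::cout << pi_test::value() << '\n';
        return 0;
    }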
The same goes for allowing passing string literals as template
parameters. Various ways this may be implemented are discussed in "C++
Templates".
The same extensions could also enable new uses for metaprogramming,
such as compile-time computation with floating point values, or string
literals, including string manipulation.
None of this requires dynamic initialisation; it may be specified in
such a way that the data is stored in the data segment.
Regards,
Terje
---