Topic: limits


Author: falk.tannhauser@crf.canon.fr (Falk Tannhäuser)
Date: Mon, 12 Jul 2004 21:32:38 +0000 (UTC)
Gabriel Dos Reis wrote:
>
> falk.tannhauser@crf.canon.fr (Falk Tannhäuser) writes:
> | How about this one:
> |
> |   template<double x> struct foo          { typedef void  bar; };
> |
> |   template<> struct foo<0.0>             { typedef int   bar; };
> |   template<> struct foo<1.0 - 1.0/41*41> { typedef char* bar; };
> |
> | Should it compile or not? (On my compiler, 41 is the smallest positive
> | integer 'n' for which '1.0 - 1.0/n*n != 0.0'). Should it be
> | Implementation Defined Behaviour? Undefined Behaviour?
>
> My point in the previous message is that _if_ the compiler is not
> required to emulate floating point arithmetic at compile time, _then_
> the next reasonable thing to do is to consider the foundational idea
> of ODR and use token stream to decide when two specializations are same
> -- in a sense, that is more or less how templates are specified.  From
> that perspective, the two specializations foo<0.0> and foo<1.0 -
> 1.0/41*41> are not the same, irrespective of whether after reduction,
> the arguments are semantically equivalent or not.

In this case, foo<0.0> would also be different from foo<6.55957 * 0.0> or
foo<1.95583 - 1.95583>? It seems that this would make floating point template
parameters behave quite differently from integer ones (and make them probably
much less useful than the latter)! Or maybe in such simple, obvious cases as
I gave above, the expressions should be simplified, as long as the result
can be shown not to depend on the machine precision (I can't imagine
a machine on which '6.55957 * 0.0 != 0.0', or let's say, '6.0 * 7.0 != 42.0').

Falk

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: gdr@cs.tamu.edu (Gabriel Dos Reis)
Date: Tue, 13 Jul 2004 20:29:02 +0000 (UTC)
falk.tannhauser@crf.canon.fr (Falk Tannhäuser) writes:

| Gabriel Dos Reis wrote:
| >
| > falk.tannhauser@crf.canon.fr (Falk Tannhäuser) writes:
| > | How about this one:
| > |
| > |   template<double x> struct foo          { typedef void  bar; };
| > |
| > |   template<> struct foo<0.0>             { typedef int   bar; };
| > |   template<> struct foo<1.0 - 1.0/41*41> { typedef char* bar; };
| > |
| > | Should it compile or not? (On my compiler, 41 is the smallest positive
| > | integer 'n' for which '1.0 - 1.0/n*n != 0.0'). Should it be
| > | Implementation Defined Behaviour? Undefined Behaviour?
| >
| > My point in the previous message is that _if_ the compiler is not
| > required to emulate floating point arithmetic at compile time, _then_
| > the next reasonable thing to do is to consider the foundational idea
| > of ODR and use token stream to decide when two specializations are same
| > -- in a sense, that is more or less how templates are specified.  From
| > that perspective, the two specializations foo<0.0> and foo<1.0 -
| > 1.0/41*41> are not the same, irrespective of whether after reduction,
| > the arguments are semantically equivalent or not.
|
| In this case, foo<0.0> would also be different from foo<6.55957 * 0.0> or
| foo<1.95583 - 1.95583>?

Yes.

| It seems that this would make floating point template
| parameters behave quite differently from integer ones (and make them probably
| much less useful than the latter)!

Floating point inherently behaves differently, absent a requirement
that compilers emulate the target's floating point arithmetic at compile time.

| Or maybe in such simple, obvious cases as
| I gave above, the expressions should be simplified, as long as the result
| can be shown not to depend on the machine precision (I can't imagine
| a machine on which '6.55957 * 0.0 != 0.0', or let's say, '6.0 * 7.0 != 42.0').

It is easy to give specific examples where one may know what
should happen.  What we need is a general, simple rule, not a long list of
special cases -- this is one of the weakest aspects of the standard.
The description of templates, for example, in the current standard
is too much a series of long switch statements; machines have no problem
handling that, but it is much harder for humans.  The outcome is
predictable: one could manage to get the first 67 cases right, but I
doubt one would not forget the 68th.  Let alone the interactions between
those switch statements.  Look at the list of defects about templates.
One may argue that they were the most experimental feature, but I
suspect that the switch-statement-oriented description is not
unrelated to that.

--
                                                        Gabriel Dos Reis
                                                         gdr@cs.tamu.edu
  Texas A&M University -- Computer Science Department
 301, Bright Building -- College Station, TX 77843-3112






Author: qrczak@knm.org.pl ("Marcin 'Qrczak' Kowalczyk")
Date: Tue, 6 Jul 2004 19:56:34 +0000 (UTC)
On Tue, 06 Jul 2004 18:36:25 +0000, Bo Persson wrote:

> The problem isn't just <limits>, it is constant expressions. Once I get
> std::limits<long double>::max as a compile time constant, of course I
> want to use it too:
>
> const long double pi = 3.14xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxL;
> const long double half_pi = pi / 2.0L;

The latter is not a compile-time constant, so the initialization happens
at runtime. What is the problem?

> const long double selector = std::limits<long double>::max / (42.05L *
> half_pi);
>
> template<>
> void f<selector>()

Nobody proposes floating point values as template parameters! It has
nothing to do with limits being constants rather than functions.

--
   __("<         Marcin Kowalczyk
   \__/       qrczak@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/







Author: gdr@cs.tamu.edu (Gabriel Dos Reis)
Date: Wed, 7 Jul 2004 03:29:03 +0000 (UTC)
qrczak@knm.org.pl ("Marcin 'Qrczak' Kowalczyk") writes:

| On Tue, 06 Jul 2004 18:36:25 +0000, Bo Persson wrote:
|
| > The problem isn't just <limits>, it is constant expressions. Once I get
| > std::limits<long double>::max as a compile time constant, of course I
| > want to use it too:
| >
| > const long double pi = 3.14xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxL;
| > const long double half_pi = pi / 2.0L;
|
| The latter is not a compile-time constant, so the initialization happens
| at runtime.

Why would you stop there?  There is no requirement that compile-time
constants should only be literals.  If you find the current definition
limited, your suggestion isn't really an improvement.  It is just an
instance of special-casing that introduces more confusion than
anything else.

[...]

| Nobody proposes floating point values as template parameters!

Yes, but that looks to me like a logical step.  If you give me a compile-time
constant and I can't use it as a template-argument, you had better call it
something different!

--
                                                        Gabriel Dos Reis
                                                         gdr@cs.tamu.edu
  Texas A&M University -- Computer Science Department
 301, Bright Building -- College Station, TX 77843-3112






Author: kuyper@wizard.net (James Kuyper)
Date: Wed, 7 Jul 2004 06:32:25 +0000 (UTC)
bop@gmb.dk ("Bo Persson") wrote in message news:<2kpri7F4tps7U1@uni-berlin.de>...

> "James Kuyper" <kuyper@wizard.net> skrev i meddelandet
> news:8b42afac.0407031802.c241626@posting.google.com...
>
>> "Prateek R Karandikar" <kprateek88@yahoo.com> wrote in message

..

>> There's no reason why floating constants couldn't be manipulated at
>> compile time, and many implementations will perform compile-time
>> floating point calculations as an optimization, if the target floating
>> point environment is expected to match the compilation environment.
>
> But manipulation of floating point expressions would require ALL
> implementations to emulate the target precision, even if they are not
> the least equivalent.


How do you derive that requirement? There's no requirement that
different implementations of C++ targeted at the same platform have to
produce the same results from floating point calculations. If floating
point constants were allowed as template non-type arguments, a failure
to produce the same result on different implementations would
seriously inhibit linking together code compiled with different
implementations. But that's an issue which falls outside the scope of
the C++ standard.

> Would you for example expect a GCC cross compiler to be able to emulate
> all target platforms on every possible host platform? That would bring
> porting to a new level!


I didn't say that the result would be portable; merely that it could
be done.






Author: "Marcin 'Qrczak' Kowalczyk" <qrczak@knm.org.pl>
Date: Wed, 7 Jul 2004 12:18:57 CST
On Wed, 07 Jul 2004 03:29:03 +0000, Gabriel Dos Reis wrote:

> | The latter is not a compile-time constant, so the initialization happens
> | at runtime.
>
> Why would you stop there?  There is no requirement that a compile-time
> constant should only be literals.  If you find the current definition
> limited, your suggestion isn't really an improvement.

It is an improvement for integer types. It doesn't have to be one for
floating point types too, and it's not the fault of this proposal that
floating point can't benefit from it as well (floating point has different
rules for compile-time constants).

> | Nobody proposes floating point values as template parameters!
>
> Yes, but that looks to me a logical step.

No. It is all motivated by integer types only. The argument that it
doesn't at the same time change the situation for FP types is silly.
So what? It does for integer types, and doesn't make the FP situation
worse, so it's OK.

--
   __("<         Marcin Kowalczyk
   \__/       qrczak@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/






Author: francis@robinton.demon.co.uk (Francis Glassborow)
Date: Wed, 7 Jul 2004 21:02:24 +0000 (UTC)
In article <pan.2004.07.06.18.42.59.865073@knm.org.pl>, Marcin 'Qrczak'
Kowalczyk <qrczak@knm.org.pl> writes
>Nobody proposes floating point values as template parameters! It has
>nothing to do with limits being constants rather than functions.

You sure about your first assertion? And it strikes me that C99's
hexadecimal floating point representation might be worth consideration.

--
Francis Glassborow      ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects






Author: dsp@bdal.de ("Daniel Krügler (ne Spangenberg)")
Date: Thu, 8 Jul 2004 19:58:20 +0000 (UTC)
Good morning Francis Glassborow,

Francis Glassborow schrieb:

> In article <pan.2004.07.06.18.42.59.865073@knm.org.pl>, Marcin
> 'Qrczak' Kowalczyk <qrczak@knm.org.pl> writes
>
>> Nobody proposes floating point values as template parameters! It has
>> nothing to do with limits being constants rather than functions.
>
> You sure about your first assertion? And it strikes me that C99's
> hexadecimal floating point representation might be worth consideration.

I remember that "the" template book of David Vandevoorde and Nicolai Josuttis
mentions the possibility of allowing floating point nontype template parameters.
The ideas in the book go beyond the idea of hexadecimal floating point
representations.

Actually, I have always been very suspicious of this relaxation of the current
rules, and I would appreciate any explanatory statement which also takes into
account the basic difference between floating point numbers and integral
values, namely the inexact arithmetic of floating point numbers.

Would these new rules imply additional relaxation of the ICE rules and allow
more floating point interaction in ICEs?

Furthermore, while the portability of current integral nontype templates is
limited only by the implementation-defined limits of the corresponding integral
types, which is usually not a problem (OK, I see some existing problems
concerning enumeration types as nontype template params because of the lack of
knowledge of the underlying integral type), floating point numbers have the
**additional** number-of-digits issue (aka "precision"), which complicates
matters. This leads to situations like the following:

template <float F> struct A{};

typedef A<1.0f> A1;
typedef A<1.00000001f> A2;

typedef bool IsSame[boost::is_same<A1, A2>::value ? -1 : 1];

Thanks for your ideas,

Daniel Krügler






Author: gdr@cs.tamu.edu (Gabriel Dos Reis)
Date: Fri, 9 Jul 2004 05:44:01 +0000 (UTC)
"Marcin 'Qrczak' Kowalczyk" <qrczak@knm.org.pl> writes:

| On Wed, 07 Jul 2004 03:29:03 +0000, Gabriel Dos Reis wrote:
|
| > | The latter is not a compile-time constant, so the initialization happens
| > | at runtime.
| >
| > Why would you stop there?  There is no requirement that a compile-time
| > constant should only be literals.  If you find the current definition
| > limited, your suggestion isn't really an improvement.
|
| It is an improvement for integer types.

Your suggestion that triggered this subthread was:

   > But manipulation of floating point expressions would require ALL
   > implementations to emulate the target precision, even if they are not
   > the least equivalent.

   For limits being constants it would not be necessary to emulate the whole
   FP arithmetic, but *only* to be able to write the necessary constants as
   literals. A compiler must be able to process FP literals anyway.

I'm puzzled by your current message in connection with the above.  You
seem to believe that a constant-expression or compile-time constant
should only be a literal.  But that isn't the case.

--
                                                        Gabriel Dos Reis
                                                         gdr@cs.tamu.edu
  Texas A&M University -- Computer Science Department
 301, Bright Building -- College Station, TX 77843-3112






Author: gdr@cs.tamu.edu (Gabriel Dos Reis)
Date: Fri, 9 Jul 2004 05:44:01 +0000 (UTC)
kuyper@wizard.net (James Kuyper) writes:

[...]

| > Would you for example expect a GCC cross compiler to be able to emulate
| > all target platforms on every possible host platform? That would bring
| > porting to a new level!
|
|
| I didn't say that the result would be portable; merely that it could
| be done.

So, you don't expect foo<1.0/3> to designate the same type across platforms?

--
                                                        Gabriel Dos Reis
                                                         gdr@cs.tamu.edu
  Texas A&M University -- Computer Science Department
 301, Bright Building -- College Station, TX 77843-3112






Author: kuyper@wizard.net (James Kuyper)
Date: Fri, 9 Jul 2004 21:36:21 +0000 (UTC)
gdr@cs.tamu.edu (Gabriel Dos Reis) wrote in message news:<m3n02ahct5.fsf@merlin.cs.tamu.edu>...
> kuyper@wizard.net (James Kuyper) writes:
>
> [...]
>
> | > Would you for example expect a GCC cross compiler to be able to emulate
> | > all target platforms on every possible host platform? That would bring
> | > porting to a new level!
> |
> |
> | I didn't say that the result would be portable; merely that it could
> | be done.
>
> So, you don't expect foo<1.0/3> to designate the same type across platforms?

I don't expect 1.0/3 to represent the same value across platforms, so
why should I expect foo<1.0/3> to be the same type (if it were made
legal)?






Author: non-existent@iobox.com ("Sergey P. Derevyago")
Date: Sat, 10 Jul 2004 05:36:42 +0000 (UTC)
"Daniel Krügler (ne Spangenberg)" wrote:
> floating point numbers have the **additional** number-of-digits issue (aka
> "precision"), which complicates matters. This leads to situations, where we
> have to consider situations like the following:
>
> template <float F> struct A{};
>
> typedef A<1.0f> A1;
> typedef A<1.00000001f> A2;
>
> typedef bool IsSame[boost::is_same<A1, A2>::value ? -1 : 1];
>
 IMHO we should not ban floating point template parameters for this reason,
because we have a similar problem with integers.

 Here is a snippet from my Defect Report
http://groups.google.com/groups?selm=3C0F49A3.4E733C4F%40iobox.com
(disappeared somewhere in the committee):

It is not clear whether the following program is well-formed:
-----------------------------------8<-----------------------------------
template <int TI> struct A { };

template<> struct A<4294967291> { };  // specialization 1
template<> struct A<-5> { };          // specialization 2

int main() { }
-----------------------------------8<-----------------------------------
Note that on some implementations int(4294967291)==-5 so the second
specialization can collide with the first one.
--
         With all respect, Sergey.               http://ders.angen.net/
         mailto : ders at skeptik.net






Author: usenet-nospam@nmhq.net (Niklas Matthies)
Date: Sat, 10 Jul 2004 05:36:58 +0000 (UTC)
On 2004-07-09 05:44, Gabriel Dos Reis wrote:
> kuyper@wizard.net (James Kuyper) writes:
:
>| > Would you for example expect a GCC cross compiler to be able to emulate
>| > all target platforms on every possible host platform? That would bring
>| > porting to a new level!
>|
>| I didn't say that the result would be portable; merely that it could
>| be done.
>
> So, you don't expect foo<1.0/3> to designate the same type across
> plateforms?

That's not really the problem, IMHO. (No more than foo<INT_MAX>, e.g.,
can be expected to be the same on all platforms.)

The real problem is that in

   template <double x> struct X
   {
      static double f() { return x; }
   };

   double g() { return 1.0/3; }

   double y = X<1.0/3>::f();
   double z = g();

y and z may be initialized with quite different values when the
template parameter is calculated by the compiler (on architecture A)
while the expression in g() is evaluated at runtime (on architecture B).

-- Niklas Matthies






Author: kuyper@wizard.net (James Kuyper)
Date: Sun, 11 Jul 2004 05:32:37 +0000 (UTC)
non-existent@iobox.com ("Sergey P. Derevyago") wrote in message news:<40EE75DB.5E3AAE9C@iobox.com>...
> "Daniel Krügler (ne Spangenberg)" wrote:
> > floating point numbers have the **additional** number-of-digits issue (aka
> > "precision"), which complicates matters. This leads to situations, where we
> > have to consider situations like the following:
> >
> > template <float F> struct A{};
> >
> > typedef A<1.0f> A1;
> > typedef A<1.00000001f> A2;
> >
> > typedef bool IsSame[boost::is_same<A1, A2>::value ? -1 : 1];
> >
>  IMHO we should not ban floating point template parameters for this reason
> because we have the similar problem with integers.
>
>  Here is a snippet from my Defect Report
> http://groups.google.com/groups?selm=3C0F49A3.4E733C4F%40iobox.com
> (disappeared somewhere in the committee):
>
> It is not clear whether the following program is well-formed:
> -----------------------------------8<-----------------------------------
> template <int TI> struct A { };
>
> template<> struct A<4294967291> { };  // specialization 1
> template<> struct A<-5> { };          // specialization 2
>
> int main() { }
> -----------------------------------8<-----------------------------------
> Note that on some implementations int(4294967291)==-5 so the second
> specialization can collide with the first one.

There's a key difference here. For each of the integer types, there's
a large and useful set of values for which it is guaranteed that there
will be no problems.

For floating point numbers, this problem comes up with essentially
every value.






Author: kuyper@wizard.net (James Kuyper)
Date: Sun, 11 Jul 2004 05:32:37 +0000 (UTC)
usenet-nospam@nmhq.net (Niklas Matthies) wrote in message news:<slrncet4s6.2bms.usenet-nospam@nmhq.net>...
..
> That's not really the problem, IMHO. (Not more than e.g. foo<INT_MAX>
> cannot be expected to be the same on all platforms.)
>
> The real problem is that in
>
>    template <double x> struct X
>    {
>       static double f() { return x; }
>    };
>
>    double g() { return 1.0/3; }
>
>    double y = X<1.0/3>::f();
>    double z = g();
>
> y and z may be initialized with quite different values when the
> template parameter is calculated by the compiler (on architecture A)
> while the expression in g() is evaluated at runtime (on architecture B).

I think that if such constructs were legalized, the standard could and
should require a compiler running on architecture A to emulate
architecture B for those purposes, when cross-compiling for
architecture B. If it can't perform such emulation, it should generate
machine code that defers the evaluation until run-time. I can't
imagine how this could be prohibitively difficult to do - if the
compiler knows enough about architecture B to cross-compile for it,
shouldn't it be possible to emulate it?






Author: gdr@cs.tamu.edu (Gabriel Dos Reis)
Date: Sun, 11 Jul 2004 05:32:39 +0000 (UTC)
kuyper@wizard.net (James Kuyper) writes:

| gdr@cs.tamu.edu (Gabriel Dos Reis) wrote in message news:<m3n02ahct5.fsf@merlin.cs.tamu.edu>...
| > kuyper@wizard.net (James Kuyper) writes:
| >
| > [...]
| >
| > | > Would you for example expect a GCC cross compiler to be able to emulate
| > | > all target platforms on every possible host platform? That would bring
| > | > porting to a new level!
| > |
| > |
| > | I didn't say that the result would be portable; merely that it could
| > | be done.
| >
| > So, you don't expect foo<1.0/3> to designate the same type across platforms?
|
| I don't expect 1.0/3 to represent the same value across platforms, so
| why should I expect foo<1.0/3> to be the same type (if it were made
| legal)?

That is a viewpoint.  Another one is that foo<1.0/3> is the same token
stream across platforms, therefore, at compile time should represent
the same entity (that viewpoint is, for example, the foundation of the
ODR rule). C++ took special steps to allow foo<2 + 2> to be the same
as foo<4>, but that decision reflects a semantic equality, after
reduction.  It is an optimization.  An optimization does not change
semantics.

--
                                                        Gabriel Dos Reis
                                                         gdr@cs.tamu.edu
  Texas A&M University -- Computer Science Department
 301, Bright Building -- College Station, TX 77843-3112






Author: falk.tannhauser@crf.canon.fr (Falk Tannhäuser)
Date: Mon, 12 Jul 2004 12:41:55 +0000 (UTC)
Gabriel Dos Reis wrote:
>
> kuyper@wizard.net (James Kuyper) writes:
> | I don't expect 1.0/3 to represent the same value across platforms, so
> | why should I expect foo<1.0/3> to be the same type (if it were made
> | legal)?
>
> That is a viewpoint.  Another one is that foo<1.0/3> is the same token
> stream across platforms, therefore, at compile time should represent
> the same entity (that viewpoint is, for example, the foundation of the
> ODR rule). C++ took special steps to allow foo<2 + 2> to be the same
> as foo<4>, but that decision reflects a semantic equality, after
> reduction.  It is an optimization.  An optimization does not change
> semantics.

How about this one:

  template<double x> struct foo          { typedef void  bar; };

  template<> struct foo<0.0>             { typedef int   bar; };
  template<> struct foo<1.0 - 1.0/41*41> { typedef char* bar; };

Should it compile or not? (On my compiler, 41 is the smallest positive
integer 'n' for which '1.0 - 1.0/n*n != 0.0'). Should it be
Implementation Defined Behaviour? Undefined Behaviour?

Falk






Author: non-existent@iobox.com ("Sergey P. Derevyago")
Date: Mon, 12 Jul 2004 15:30:05 +0000 (UTC)
James Kuyper wrote:
> > -----------------------------------8<-----------------------------------
> > template <int TI> struct A { };
> >
> > template<> struct A<4294967291> { };  // specialization 1
> > template<> struct A<-5> { };          // specialization 2
> >
> > int main() { }
> > -----------------------------------8<-----------------------------------
> > Note that on some implementations int(4294967291)==-5 so the second
> > specialization can collide with the first one.
>
> There's a key difference here. For each of the integer types, there's
> a large and useful set of values for which it is guaranteed that there
> will be no problems.
>
> For floating point numbers, this problem comes up with essentially
> every value.
>
 The same is true of floating point arithmetic in general: it's
imprecise and the precision varies radically between implementations. But no
one is going to throw it out of C++, are they?

 My point is: there exist cases where floating point template parameters
are of great value and the workarounds are ugly. And professionals are aware
enough of the inherent limitations of floating point arithmetic to use it
thoughtfully.
--
         With all respect, Sergey.               http://ders.angen.net/
         mailto : ders at skeptik.net






Author: gdr@cs.tamu.edu (Gabriel Dos Reis)
Date: Mon, 12 Jul 2004 16:55:12 +0000 (UTC)
falk.tannhauser@crf.canon.fr (Falk Tannhäuser) writes:

| Gabriel Dos Reis wrote:
| >=20
| > kuyper@wizard.net (James Kuyper) writes:
| > | I don't expect 1.0/3 to represent the same value across platforms, =
so
| > | why should I expect foo<1.0/3> to be the same type (if it were made
| > | legal)?
| >=20
| > That is a viewpoint.  Another one is that foo<1.0/3> is the same toke=
n
| > stream across platforms, therefore, at compile time should represent
| > the same entity (that viewpoint is, for example, the foundation of th=
e
| > ODR rule). C++ took special steps to allow foo<2 + 2> to be the same
| > as foo<4>, but that decision reflects a semantic equality, after
| > reduction.  It is an optimization.  An optimization does not change
| > semantics.
|=20
| How about this one:
|=20
|   template<double x> struct foo          { typedef void  bar; };
|=20
|   template<> struct foo<0.0>             { typedef int   bar; };
|   template<> struct foo<1.0 - 1.0/41*41> { typedef char* bar; };
|=20
| Should it compile or not? (On my compiler, 41 is the smallest positive
| integer 'n' for which '1.0 - 1.0/n*n !=3D 0.0'). Should it be
| Implementation Defined Behaviour? Undefined Behaviour?

My point in the previous message is that _if_ the compiler is not
required to emulate floating point arithmetic at compile time, _then_
the next reasonable thing to do is to consider the foundational idea
of ODR and use token stream to decide when two specializations are same
-- in a sense, that is more or less how templates are specified.  From
that perspective, the two specializations foo<0.0> and foo<1.0 -
1.0/41*41> are not the same, irrespective of whether after reduction,
the arguments are semantically equivalent or not.

--
                                                        Gabriel Dos Reis
                                                         gdr@cs.tamu.edu
  Texas A&M University -- Computer Science Department
 301, Bright Building -- College Station, TX 77843-3112






Author: francis@robinton.demon.co.uk (Francis Glassborow)
Date: Mon, 12 Jul 2004 17:14:16 +0000 (UTC)
In article <40F24710.71A8FE71@iobox.com>, Sergey P. Derevyago
<non-existent@iobox.com> writes
>       The same is true of floating point arithmetic as a whole: it's
>imprecise and the precision varies radically between implementations. But no
>one is going to throw it out of C++, are they?

However the new decimal float proposals open up extra potential. I think
in this case we do have the possibility for requiring an extensive range
of values to be portably representable.


--
Francis Glassborow      ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects






Author: kuyper@wizard.net (James Kuyper)
Date: Sun, 4 Jul 2004 02:32:45 +0000 (UTC)
"Prateek R Karandikar" <kprateek88@yahoo.com> wrote in message news:<cc17k5$mvi@odah37.prod.google.com>...
..
>> I infer from <limits> and your
>> comments that floating points constants are not portable or not as
>
>
> safe as
>
>> values returned via functions.  Why is that?
>
>
>
> It is not about safety or portability. ICEs can be manipulated at
> compile-time, floating constants cannot be (AFAIK). So static const
> with in-class definitions are allowed only for integral types.


There's no reason why floating constants couldn't be manipulated at
compile time, and many implementations will perform compile-time
floating point calculations as an optimization, if the target floating
point environment is expected to match the compilation environment.
ICE's are given a special significance that doesn't apply to floating
point constants, and that's partly a matter of portability. In
general, whether or not two floating point expressions compare equal
depends upon the precise implementation-specific details of how
they are evaluated.

The other reason ICE's are given special treatment is simpler: the
special treatment applies mostly in contexts where fractional values
would be meaningless, such as array bounds, bit-field lengths, and
enumerator initializers. However, that doesn't apply to case
expressions or non-type template arguments; in both of those contexts
a non-integer constant could have been meaningful, and it was a matter
of deliberate choice to prohibit their use in those contexts.






Author: bop@gmb.dk ("Bo Persson")
Date: Mon, 5 Jul 2004 16:15:48 +0000 (UTC)
"James Kuyper" <kuyper@wizard.net> skrev i meddelandet
news:8b42afac.0407031802.c241626@posting.google.com...
> "Prateek R Karandikar" <kprateek88@yahoo.com> wrote in message
news:<cc17k5$mvi@odah37.prod.google.com>...
> ..
> >> I infer from <limits> and your
> >> comments that floating points constants are not portable or not as
> >
> >
> > safe as
> >
> >> values returned via functions.  Why is that?
> >
> >
> >
> > It is not about safety or portability. ICEs can be manipulated at
> > compile-time, floating constants cannot be (AFAIK). So static const
> > with in-class definitions are allowed only for integral types.
>
>
> There's no reason why floating constants couldn't be manipulated at
> compile time, and many implementations will perform compile-time
> floating point calculations as an optimization, if the target floating
> point environment is expected to match the compilation environment.

But manipulation of floating point expressions would require ALL
implementations to emulate the target precision, even if they are not
the least equivalent.

Would you for example expect a GCC cross compiler to be able to emulate
all target platforms on every possible host platform? That would bring
porting to a new level!


Bo Persson







Author: qrczak@knm.org.pl ("Marcin 'Qrczak' Kowalczyk")
Date: Mon, 5 Jul 2004 22:03:57 +0000 (UTC)
On Mon, 05 Jul 2004 16:15:48 +0000, Bo Persson wrote:

> But manipulation of floating point expressions would require ALL
> implementations to emulate the target precision, even if they are not
> the least equivalent.

For limits being constants it would not be necessary to emulate the whole
FP arithmetic, but *only* to be able to write the necessary constants as
literals. A compiler must be able to process FP literals anyway.

--
   __("<         Marcin Kowalczyk
   \__/       qrczak@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/






Author: gdr@cs.tamu.edu (Gabriel Dos Reis)
Date: Tue, 6 Jul 2004 03:27:42 +0000 (UTC)
qrczak@knm.org.pl ("Marcin 'Qrczak' Kowalczyk") writes:

| On Mon, 05 Jul 2004 16:15:48 +0000, Bo Persson wrote:
|
| > But manipulation of floating point expressions would require ALL
| > implementations to emulate the target precision, even if they are not
| > the least equivalent.
|
| For limits being constants it would not be necessary to emulate the whole

They are constant at runtime; that does not mean they are constant at
compile time.  Before the program actually gets executed, there may be
a need to set appropriate flags to get things right; for example, there
might be a need to link against external library support (software
emulation).  There is more to FP than just parsing 1.0e293.

| FP arithmetic, but *only* to be able to write the necessary constants as
| literals. A compiler must be able to process FP literals anyway.

For a given definition of "process".

--
                                                        Gabriel Dos Reis
                                                         gdr@cs.tamu.edu
  Texas A&M University -- Computer Science Department
 301, Bright Building -- College Station, TX 77843-3112






Author: jdennett@acm.org (James Dennett)
Date: Tue, 6 Jul 2004 05:12:29 +0000 (UTC)
Marcin 'Qrczak' Kowalczyk wrote:
> On Mon, 05 Jul 2004 16:15:48 +0000, Bo Persson wrote:
>
>
>>But manipulation of floating point expressions would require ALL
>>implementations to emulate the target precision, even if they are not
>>the least equivalent.
>
>
> For limits being constants it would not be necessary to emulate the whole
> FP arithmetic, but *only* to be able to write the necessary constants as
> literals. A compiler must be able to process FP literals anyway.

Are you sure it can't defer FP processing to runtime?

The only way I can think of to force a compiler
to use FP numbers at compile time is to cast them
immediately to an integral type, which doesn't require
the compiler to ever "really" deal with FP.  Even
static initialization of FP constants could be done
by calling a library function to parse a string, without
breaking the C++ model (though it would be odd to use
this dynamic mechanism for static initialization, I can't
see that it violates the standard).

-- James






Author: bop@gmb.dk ("Bo Persson")
Date: Tue, 6 Jul 2004 18:36:25 +0000 (UTC)
""Marcin 'Qrczak' Kowalczyk"" <qrczak@knm.org.pl> skrev i meddelandet
news:pan.2004.07.05.19.22.53.940315@knm.org.pl...
> On Mon, 05 Jul 2004 16:15:48 +0000, Bo Persson wrote:
>
> > But manipulation of floating point expressions would require ALL
> > implementations to emulate the target precision, even if they are not
> > the least equivalent.
>
> For limits being constants it would not be necessary to emulate the whole
> FP arithmetic, but *only* to be able to write the necessary constants as
> literals. A compiler must be able to process FP literals anyway.
>

The problem isn't just <limits>, it is constant expressions. Once I get
std::numeric_limits<long double>::max as a compile time constant, of course I
want to use it too:

const long double pi = 3.14xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxL;
const long double half_pi = pi / 2.0L;

const long double selector = std::numeric_limits<long double>::max / (42.05L *
half_pi);

template<>
void f<selector>()
{

}


Would this be required to work? Even when cross compiling for the
Pentium 6 with 128 bit SSE5 technology?


Bo Persson







Author: dont.spam.me@pool.domainsite.com (Xenos)
Date: Wed, 30 Jun 2004 23:00:23 +0000 (UTC)
"Daniel Krügler (ne Spangenberg)" <dsp@bdal.de> wrote in message
news:40E253FE.6060208@bdal.de...
> Hello Xenos,
>
> The reason for this is, that std::numeric_limits is supposed to be
> useful for any scalar type. But not
> all those types are integrals or enumerations, which would allow
> in-class-initialization. The alternative
> (i.e. normal static data members) wouldn't help you either in this
> situation.
>
> If you need ICE's, I propose to the class template integer_traits from
> the boost library
> (http://www.boost.org/), which provides the missing members
> const_min/const_max. (You can
> view it as an corresponding extension of std::numeric_limits)
>
> Hope that helps,
>
Yes, some.  What does ICE stand for?  I infer from <limits> and your
comments that floating point constants are not portable or not as safe as
values returned via functions.  Why is that?

Thanks for your time.








Author: Gabriel Dos Reis <gdr@cs.tamu.edu>
Date: Thu, 1 Jul 2004 00:23:37 CST
dont.spam.me@pool.domainsite.com (Xenos) writes:

| What was the rational for defining the max/min for types in <limits> as
| functions?

That question has been asked many times on this group and on
comp.lang.c++.moderated. I don't know the exact rationale, but I
can imagine reasons why I would prefer a function (provided we have
"constant-valued" functions); for example, the order-of-initialization
problem is bypassed in the case of numeric_limits<MyBignum>.

My hope is that the "constant-valued function" proposal would, as a
by-product, turn those into compile-time constants -- therefore
removing that embarrassment.  (There are others that would be cured too.)

--
                                                        Gabriel Dos Reis
                                                         gdr@cs.tamu.edu
  Texas A&M University -- Computer Science Department
 301, Bright Building -- College Station, TX 77843-3112






Author: Seungbeom Kim <musiphil@bawi.org>
Date: Thu, 1 Jul 2004 12:14:53 CST
Daniel Krügler (ne Spangenberg) wrote:

> Hello Xenos,
>
> Xenos schrieb:
>
>> What was the rational for defining the max/min for types in <limits> as
>> functions?  I usually have to resort to using the macros in <climits>
>> because I need a compile-time constant.
>>
>>
> The reason for this is, that std::numeric_limits is supposed to be
> useful for any scalar type. But not
> all those types are integrals or enumerations, which would allow
> in-class-initialization. The alternative
> (i.e. normal static data members) wouldn't help you either in this
> situation.

There's no reason why they should always be either in-class-initialized
constant expressions or functions; things could have been better if they
had been defined as static const member variables, with ICEs only for
those types that allow in-class initialization.
Is there anything that I'm missing here?

--
Seungbeom Kim






Author: "Prateek R Karandikar" <kprateek88@yahoo.com>
Date: 1 Jul 2004 20:15:11 GMT
> > Hello Xenos,
> >
> > The reason for this is, that std::numeric_limits is supposed to be
> > useful for any scalar type. But not
> > all those types are integrals or enumerations, which would allow
> > in-class-initialization. The alternative
> > (i.e. normal static data members) wouldn't help you either in this
> > situation.
> >
> > If you need ICE's, I propose to the class template integer_traits
from
> > the boost library
> > (http://www.boost.org/), which provides the missing members
> > const_min/const_max. (You can
> > view it as an corresponding extension of std::numeric_limits)
> >
> > Hope that helps,
> >
> Yes, some.  What does ICE stand for?

ICE stands for Integral Constant Expression. Informally, they are
expressions of some integral or enumeration type whose value is known
at compile-time. ("compile-time constant" from your 1st post)

>  I infer from <limits> and your
> comments that floating points constants are not portable or not as
safe as
> values returned via functions.  Why is that?

It is not about safety or portability. ICEs can be manipulated at
compile-time, floating constants cannot be (AFAIK). So static const
with in-class definitions are allowed only for integral types.

> Thanks for you time.

Welcome.

--                                    --
Abstraction is selective ignorance.
-Andrew Koenig
--                                    --






Author: David Abrahams <dave@boost-consulting.com>
Date: 2 Jul 2004 05:45:01 GMT
Seungbeom Kim <musiphil@bawi.org> writes:

Daniel Krügler (ne Spangenberg) wrote:
>
>> Hello Xenos,
>> Xenos schrieb:
>>
>>> What was the rational for defining the max/min for types in <limits> as
>>> functions?  I usually have to resort to using the macros in <climits>
>>> because I need a compile-time constant.
>>>
>>>
>> The reason for this is, that std::numeric_limits is supposed to be
>> useful for any scalar type. But not
>> all those types are integrals or enumerations, which would allow
>> in-class-initialization. The alternative
>> (i.e. normal static data members) wouldn't help you either in this
>> situation.
>
> There's no reason why they should always be either in-class-initialized
> constant expressions or functions; things could have been better if they
> had been defined to be static const member variables, and only for those
> types which allow in-class-initialization they could have been ICEs.
> Is there anything that I'm missing here?

Apparently some implementations can change their floating-point
representation *at runtime*, and they wanted to be able to find out
the *current* minimum/maximum possible values.

:(

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com






Author: dont.spam.me@pool.domainsite.com (Xenos)
Date: Tue, 29 Jun 2004 21:44:02 +0000 (UTC)
What was the rationale for defining the max/min for types in <limits> as
functions?  I usually have to resort to using the macros in <climits>
because I need a compile-time constant.

Thanks.








Author: tslettebo@hotmail.com (Terje Slettebø)
Date: 30 Jun 2004 18:50:04 GMT
dont.spam.me@pool.domainsite.com (Xenos) wrote in message news:<cbslj3$edf6@cui1.lmms.lmco.com>...
> What was the rational for defining the max/min for types in <limits> as
> functions?  I usually have to resort to using the macros in <climits>
> because I need a compile-time constant.

Also, if you use Boost, you can use the integer_traits template, which
augments std::numeric_limits with const_min and const_max members,
for the same reason (http://www.boost.org/libs/integer/index.html).

Regards,

Terje






Author: kprateek88@yahoo.com (Prateek R Karandikar)
Date: Wed, 30 Jun 2004 18:48:39 +0000 (UTC)
dont.spam.me@pool.domainsite.com (Xenos) wrote in message news:<cbslj3$edf6@cui1.lmms.lmco.com>...
> What was the rational for defining the max/min for types in <limits> as
> functions?  I usually have to resort to using the macros in <climits>
> because I need a compile-time constant.
>
> Thanks.

I'm not sure, but I think this might be the reason: the interface of
numeric_limits was kept identical for all the types for which it is
specialized and for the unspecialized version (I wonder why?). max and min
for the floating types are not ICEs, so they were made functions.
Also, I wonder why the unspecialized numeric_limits is not empty or
undefined (not in the sense of UB, but in the sense of declared but
not defined), instead of having a whole bunch of meaningless
declarations:
template<class T> class numeric_limits {
public:
    static const bool is_specialized = false;
    static T min() throw();
    static T max() throw();
    static const int digits = 0;
    static const int digits10 = 0;
    static const bool is_signed = false;
    static const bool is_integer = false;
    static const bool is_exact = false;
    static const int radix = 0;
    static T epsilon() throw();
    static T round_error() throw();
    static const int min_exponent = 0;
    static const int min_exponent10 = 0;
    static const int max_exponent = 0;
    static const int max_exponent10 = 0;
    static const bool has_infinity = false;
    static const bool has_quiet_NaN = false;
    static const bool has_signaling_NaN = false;
    static const float_denorm_style has_denorm = denorm_absent;
    static const bool has_denorm_loss = false;
    static T infinity() throw();
    static T quiet_NaN() throw();
    static T signaling_NaN() throw();
    static T denorm_min() throw();
    static const bool is_iec559 = false;
    static const bool is_bounded = false;
    static const bool is_modulo = false;
    static const bool traps = false;
    static const bool tinyness_before = false;
    static const float_round_style round_style = round_toward_zero;
};

Also, the specializations for integral types have members relevant
only to floating types. Why is that?

--                                    --
Abstraction is selective ignorance.
-Andrew Koenig
--                                    --






Author: "Daniel Krügler (ne Spangenberg)" <dsp@bdal.de>
Date: 30 Jun 2004 19:00:06 GMT
Hello Xenos,

Xenos schrieb:

>What was the rational for defining the max/min for types in <limits> as
>functions?  I usually have to resort to using the macros in <climits>
>because I need a compile-time constant.
>
>
The reason for this is that std::numeric_limits is supposed to be
useful for any scalar type. But not
all those types are integrals or enumerations, which would allow
in-class initialization. The alternative
(i.e. normal static data members) wouldn't help you in this
situation either.

If you need ICE's, I suggest the class template integer_traits from
the Boost library
(http://www.boost.org/), which provides the missing members
const_min/const_max. (You can
view it as a corresponding extension of std::numeric_limits.)

Hope that helps,

Daniel Krügler
