Topic: X to the Y power, how?


Author: dougm@rice.edu (Doug Moore)
Date: 1 Mar 92 02:13:21 GMT
In article <22305@alice.att.com>, ark@alice.att.com (Andrew Koenig) writes:
|> Joe Buck's proposal for using @ for exponentiation goes quite far, but
|> omits one important detail: what is the value of x@y?
|>
|> The corresponding questions are much easier to answer for simpler operations,
|> especially in light of things like the IEEE floating-point standard.  But that
|> standard does not define an exponentiation operation, perhaps because it is so difficult.

That seems like a challenge I should accept.  Surely I can model a C++
standard definition for the *meaning* of x@y (not my favorite symbol
choice, btw) based upon the C++ standard definitions of x+y, x-y, x*y,
and x/y.

But there are none.  In sections 5.6 and 5.7 of the manual, we have:

"The binary * operator indicates multiplication."
"The binary / operator yields the quotient...."
"The result of the + operator is the sum of the operands."
"The result of the - operator is the difference of the operands."

And that's all the ARM says about the meanings of the basic operators
for floating quantities.  Why?  Because the emphasis in C++ is always
on accepting whatever answer the implementation can give us cheaply.
In other contexts, I have argued for a tightening of definitions for
certain operations in integer arithmetic that are not defined by the
standard, and the argument against me has always been that the spirit
of C demands that we sacrifice consistency and implementation
independence for speed.

It seems unreasonable to set a high standard for this definition of
exponentiation when no other arithmetic operator in C++ is defined
with such care.  It should be sufficient to say

"The result of the @ operator is the first operand raised to the power
of the second."

It doesn't even matter if it's particularly accurate.  People want
this operator for syntactic reasons, not because they can't calculate
exponentials already.  If x@y always returned 0.0, it'd still be
appreciated by numerical programmers who could overload it to behave
the way they wanted in their Double class, their Complex class, or
their Matrix class.
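
For instance, a Matrix class might give the operator the obvious meaning of
repeated multiplication.  Since no such operator exists today, a named pow()
stands in for it in this little sketch (the class is hypothetical and
deliberately tiny):

#include <stdio.h>

// Hypothetical 2x2 matrix class -- just enough to show the kind of overload
// a numerical programmer would write if an exponentiation operator existed.
struct Mat2 {
    double a, b, c, d;                  // row-major entries
};

Mat2 operator*(const Mat2& x, const Mat2& y) {
    Mat2 r;
    r.a = x.a*y.a + x.b*y.c;  r.b = x.a*y.b + x.b*y.d;
    r.c = x.c*y.a + x.d*y.c;  r.d = x.c*y.b + x.d*y.d;
    return r;
}

// What "m @ n" (n >= 0) would presumably mean for such a class.
Mat2 pow(const Mat2& m, int n) {
    Mat2 r = { 1.0, 0.0, 0.0, 1.0 };    // identity
    for (int i = 0; i < n; i++) r = r * m;
    return r;
}

int main() {
    Mat2 fib = { 1.0, 1.0, 1.0, 0.0 };
    Mat2 f8 = pow(fib, 8);              // prints 34 21 / 21 13
    printf("%g %g / %g %g\n", f8.a, f8.b, f8.c, f8.d);
    return 0;
}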

Doug Moore
(dougm@cs.rice.edu)

P. S. -
a@b loses because gdb already uses @ for something very useful.  I
vote for a~b.




Author: pat@frumious.uucp (Patrick Smith)
Date: Sun, 1 Mar 1992 04:13:53 GMT
In article <22305@alice.att.com> ark@alice.UUCP () writes:
|Joe Buck's proposal for using @ for exponentiation goes quite far, but
|omits one important detail: what is the value of x@y?
|
|That sounds like a trivial question -- it's just exponentiation, right? --
|but it is actually far from trivial.  For example, if x@y is implemented
|as exp(y*log(x)), the result will surely be inaccurate.  So what, if
|anything, should the user be entitled to assume?  For example:

I think this detail can safely be ignored.  As far as I can tell,
neither the ARM nor the ANSI C standard says (almost) anything about
the exact values returned by other floating point operations.
I just spent a few minutes searching the standard; I couldn't find
anything about the result of multiplication more restrictive than

   The result of the binary * operator is the product
   of the operands.

(X3.159 3.3.5, page 47; see also ARM 5.6, page 72).

So why should exponentiation be more precisely defined?

(There is one place where ANSI C specifies very accurate results:

   When a value of integral type is converted to floating type,
   if the value being converted is in the range of values that
   can be represented but cannot be represented exactly, the
   result is either the nearest higher or nearest lower value,
   chosen in an implementation-defined manner.

in X3.159 3.2.1.3 on page 36; see also ARM 4.4, page 34.
I find it curious that this one operation is so stringently
restricted, while at the same time there's nothing preventing
2.0 * 3.0 == 17.0.)

| Is x@2 guaranteed to be equal to x@2.0?  To x*x?

No.  And  x*3 is not guaranteed to be equal to x+x+x.

If you do a floating point calculation two different ways, you
can't expect to get the same answer both times.  It's been a long
time since I did any Fortran programming, but at the time this
statement was also true in Fortran.  (Have any of the Fortran
standards changed this?)
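
Here is a small demonstration of the point, assuming IEEE doubles (the two
sums typically come out different in the last place):

#include <stdio.h>

// The same three numbers summed in two different orders; the rounding
// happens at different points, so the results may differ.
int main() {
    double a = (0.1 + 0.2) + 0.3;
    double b = 0.1 + (0.2 + 0.3);
    printf("(0.1 + 0.2) + 0.3 = %.17g\n", a);
    printf("0.1 + (0.2 + 0.3) = %.17g\n", b);
    printf("equal? %s\n", a == b ? "yes" : "no");
    return 0;
}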


It may well be that people doing serious numerical programming
won't want to use C++ unless the accuracy of floating point
operations is more carefully specified.  But if so, it needs to
be done for all the floating point operations, not just exponentiation.

|--
|    --Andrew Koenig
|      ark@europa.att.com


--
Patrick Smith
uunet.ca!frumious!pat
pat%frumious.uucp@uunet.ca




Author: hopper@penguin.micro.umn.edu (Eric Hopper)
Date: 1 Mar 92 18:58:16 GMT
In <1992Mar1.021321.17871@rice.edu> dougm@rice.edu (Doug Moore) writes:
 .... Lots of stuff deleted

>P. S. -
>a@b loses because gdb already uses @ for something very useful.  I
>vote for a~b.

 ~ is already used as the 1's complement operator. You could conceivably
set it up to have a different definition as a binary operator though, like *.
I would still be a little leery of this. It isn't likely to break anything, but
it still looks funny to a C person like me.

Have fun,
hopper@donald.cs.umn.edu   (or hopper@mermaid.micro.umn.edu  until March)
    _                     /)                         * I went insane to   *
   / ') ______  ____  o  //  __.  __  o ____. . _    * preserve my sanity *
  (__/ / / / <_/ / <_<__//__(_/|_/ (_<_(_) (_/_/_)_  * for later.         *
                       />                            * -- Ford Prefect    *
                      </                            /**********************/




Author: ark@alice.att.com (Andrew Koenig)
Date: 1 Mar 92 14:59:17 GMT
In article <1992Mar1.021321.17871@rice.edu> dougm@rice.edu (Doug Moore) writes:

> And that's all the ARM says about the meanings of the basic operators
> for floating quantities.  Why?  Because the emphasis in C++ is always
> on accepting whatever answer the implementation can give us cheaply.

And since most implementations cannot give us exponentiation cheaply,
C does not include it at all.
--
    --Andrew Koenig
      ark@europa.att.com




Author: dougm@rice.edu (Doug Moore)
Date: Mon, 2 Mar 1992 00:01:25 GMT
In article <22310@alice.att.com>, ark@alice.att.com (Andrew Koenig) writes:
|> And since most implementations cannot give us exponentiation cheaply,
|> C does not include it at all.

That's a pretty good argument for leaving an exponentiation operator out
of C.  But C++ has operator overloading.  Programmers might reasonably
want to overload an exponentiation operator ~ for their String class:

String("ho")~3 == String("hohoho")

or for their Regexp or Integer or Rational or Complex or Matrix
classes.  That's not a particularly imaginative list, and if a binary
~ operator with the right precedence existed, programmers would no
doubt come up with dozens of other good (and hundreds of bad :->)
and imaginative uses for it.
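
To make the String example concrete with today's language, here is a toy
sketch with operator* standing in for the missing binary ~ (the class is
hypothetical and kept deliberately small):

#include <stdio.h>
#include <string.h>

class String {
    char buf[256];                          // fixed buffer keeps the sketch short
public:
    String(const char* s) { strncpy(buf, s, 255); buf[255] = '\0'; }
    const char* c_str() const { return buf; }
    friend String operator*(const String& s, int n);
    friend int operator==(const String& a, const String& b);
};

String operator*(const String& s, int n) {  // "s repeated n times"
    String r("");
    for (int i = 0; i < n; i++) strncat(r.buf, s.buf, 255 - strlen(r.buf));
    return r;
}

int operator==(const String& a, const String& b) {
    return strcmp(a.buf, b.buf) == 0;
}

int main() {
    if (String("ho") * 3 == String("hohoho"))
        printf("it works: %s\n", (String("ho") * 3).c_str());
    return 0;
}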

Don't define it for floating quantities.  Don't define it for primitive
types at all, if you wish.  This is a syntax issue, not a semantics
issue.  The cost of adding a little syntax to the language must be
very low.  Give programmers the ability to express themselves in the
way that seems most natural to them, with an operator that corresponds
to a familiar mathematical notation.

Doug Moore
(dougm@cs.rice.edu)
fighting inertia




Author: fjh@magnum.cs.mu.OZ.AU (Fergus James HENDERSON)
Date: Mon, 2 Mar 1992 12:01:04 GMT
jbuck@forney.berkeley.edu (Joe Buck) writes:

>6.  Why not use ** as in Fortran?

>It would break "int main(int argc,char **argv)".  We can't make ** a
>token.

Joe Buck's proposal is in general very well thought out, and deserves to be
praised.

However, I am SICK TO DEATH of the MANY people posting INCORRECT statements to
the effect that "well, using '**' would be nice, but it is impossible".

It is QUITE possible to provide a simple interface that allows ** to be used
both for exponentiation and for double-indirection.

For example, the following program:

 #include "exponent.h"
 #include <iostream.h>

 int main(int argc, char **argv) {
  printf("Arg count: %d\n", argc);
  printf("Program name: %s\n", argv[0]);

  printf("Yes this REALLY will work: %f\n",
   2.0 ** 3.0 );
 }

will compile and execute giving the desired output.

This requires NO CHANGE TO THE LANGUAGE. All it requires is the appropriate
code in exponent.h, which I leave as an exercise to the reader (only because
I have posted it once already).

[Flame Off]

 Fergus.

--
Fergus Henderson             fjh@mundil.cs.mu.oz.au
This .signature VIRUS is a self-referential statement that is true - but
you will only be able to consistently believe it if you copy it to your own
signature file!




Author: dougm@rice.edu (Doug Moore)
Date: Mon, 2 Mar 1992 15:46:51 GMT
In article <9206222.13299@mulga.cs.mu.OZ.AU>, fjh@magnum.cs.mu.OZ.AU (Fergus James HENDERSON) writes:
|> ... I am SICK TO DEATH of the MANY people posting INCORRECT statements to
|> the effect that "well, using '**' would be nice, but it is impossible".
|>
|> It is QUITE possible to provide a simple interface that allows ** to be used
|> both for exponentiation and for double-indirection.

Okay.  So, if I've overloaded both unary and binary operator * for a
class Foo, and I've overloaded the exponentiation operator ** as well,
what is the meaning of:

Foo a,b,c;
a = b**c;  // b * (*c)?   b (**) c?

Doug Moore
(dougm@cs.rice.edu)




Author: jbuck@forney.berkeley.edu (Joe Buck)
Date: 2 Mar 92 19:54:31 GMT
In article <1992Feb28.223749.2358@wdl.loral.com> mab@wdl39.wdl.loral.com (Mark A Biggar) writes:
> [ commenting on my proposal ]
>Why is (-2.0)@-2 undefined?  Isn't it equal to -0.25?
>I thought there was a problem only with fractional exponents of negative
>numbers, not negative integral exponents.

You are correct; the restriction is an error.  The negative exponent is
only a problem when both arguments are integers, because the result is
not in the range space of the function (non-integral).

--
Joe Buck jbuck@ohm.berkeley.edu




Author: dow@idtg.UUCP (Keith Dow)
Date: 2 Mar 92 20:05:54 GMT
 What is the problem with saying x@y = pow(x,y); ?

The problem was solved once, wasn't it?  Or is pow(x,y) a dog?







Author: vorwald@oasys.dt.navy.mil (John Vorwald)
Date: 2 Mar 92 22:06:02 GMT
In comp.lang.c++, dougm@rice.edu (Doug Moore) writes:
>In article <9206222.13299@mulga.cs.mu.OZ.AU>, fjh@magnum.cs.mu.OZ.AU (Fergus James HENDERSON) writes:
>|> ... I am SICK TO DEATH of the MANY people posting INCORRECT statements to
>|> the effect that "well, using '**' would be nice, but it is impossible".
>|>
>|> It is QUITE possible to provide a simple interface that allows ** to be used
>|> both for exponentiation and for double-indirection.
>
>Okay.  So, if I've overloaded both unary and binary operator * for a
>class Foo, and I've overloaded the exponentiation operator ** as well,
>what is the meaning of:
>
>Foo a,b,c;
>a = b**c;  // b * (*c)?   b (**) c?
>
>Doug Moore
>(dougm@cs.rice.edu)

I'm a beginner, so don't throw flames, but
generally, I would not define an operator between a pointer and an
object.  So the only possible interpretation is

 a =  (b) (**) (c)

Since a, b, and c are of the same class.  There would be a problem if
(*c) pointed to an object of class Foo, but then Foo is a unique class
to begin with.  I think the issue of having exponentiation is primarily
concerned with somewhat normal classes dealing with numbers, i.e. matrices,
vectors, complex numbers, real numbers, integers...

Does anyone have a real example where ** cannot be correctly interpreted
by letting the unary operator * have precedence if the operand
is of pointer type?

  a = b **** c     is either
  a = b ** (*c)    c is a pointer to a class where ** is defined
or a = b * (**c)   c is a pointer to pointers of a class where * is
     defined.
  One of these choices has to be true, not both.  If both are true, then
the operators * and ** have been overloaded such that they are
indistinguishable.  Is there a situation where * and ** cannot
be separated out?  (Besides program or structure errors, e.g.  Matrix(int)
and Matrix(int i, int j=1) cannot be separated.)

  In the above example, if b is a matrix, and c is a matrix, one could
argue that a matrix raised to a vector can be defined
    a = b ** (*c)
and a matrix can be multiplied by a scalar
    a = b * (**c)

  But this error can be attributed to poor class definition, and caught by
the compiler as indistinguishable code.

  I would like to see an honest evaluation of the potential to incorporate
** as an operator having the same precedence as .* or ->*.  This would
keep indirection (*) at a higher precedence, and be above the precedence
of * / % + -.

These are the thoughts of a beginner (and a poor speller)
John Vorwald
vorwald@oasys.dt.navy.mil




Author: sarima@tdatirv.UUCP (Stanley Friesen)
Date: 3 Mar 92 00:24:46 GMT
In article <1992Feb28.223749.2358@wdl.loral.com> mab@wdl39.wdl.loral.com (Mark A Biggar) writes:
|
|Why is (-2.0)@-2 undefined?  Isn't it equal to -0.25?

No, it is Complex(0, 0.25).
[or 0.25i in another notation].
--
---------------
uunet!tdatirv!sarima    (Stanley Friesen)





Author: sarima@tdatirv.UUCP (Stanley Friesen)
Date: 3 Mar 92 00:25:42 GMT
Ignore my last post, I must have been asleep.
--
---------------
uunet!tdatirv!sarima    (Stanley Friesen)





Author: fjh@mundil.cs.mu.OZ.AU (Fergus James HENDERSON)
Date: Wed, 4 Mar 1992 09:12:33 GMT
I must correct my previous posting:

>For example, the following program:

> #include "exponent.h"
  #include <stdio.h>

> int main(int argc, char **argv) {
>  printf("Arg count: %d\n", argc);
>  printf("Program name: %s\n", argv[0]);

>  printf("Yes this REALLY will work: %f\n",
>   2.0 ** 3.0 );
> }

Unfortunately this will _not_ work (moral: never post code to the net
without running it first, and especially not when you have a high fever and
are not thinking well).

The corrected program needs
  Double x = 3.0;  // you can #define Double double if you
     // really want, but it's pretty nasty.
  printf("Yes THIS really will work: %f\n",
   2.0 ** x );
or
  printf("Yes THIS really will work: %f\n",
   2.0 ** Double(3.0) );

This is because C++ (well, my compiler: I haven't checked the ARM) does not
do the same search for argument conversions when invoking operators operating on
built-in types as it does when invoking functions operating on built-in types.

My original exponent.h declared a unary operator, DoubleExponent operator*(Double x).
If you replace it with DoubleExponent indirection(Double x)
and use the expression 2.0 * indirection(3.0),
which I thought was only syntactically different from 2.0 * (*3.0),
then the compiler will correctly find the conversion from double to Double,
but the original expression 2.0 ** 3.0 gives an "Invalid Indirection" error.

Does anybody know why there is this difference between operators and functions?

Thanks, and sorry about my previous post,
 Fergus.

P.S.
Jim Adcock has provided sample code for exponent.h in another post on this
thread.
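
For the record, here is a minimal sketch of the idea with made-up class names
(the real exponent.h differs in detail):

#include <math.h>
#include <stdio.h>

class DoubleExponent {                    // proxy produced by unary *
public:
    double e;
    DoubleExponent(double v) : e(v) {}
};

class Double {                            // wrapper needed for the trick to fire
public:
    double v;
    Double(double x) : v(x) {}
    DoubleExponent operator*() const { return DoubleExponent(v); }   // unary *
};

// "base * (*x)" -- i.e. base ** x -- turns into pow(base, x)
double operator*(double base, const DoubleExponent& p) { return pow(base, p.e); }

int main() {
    Double x = 3.0;
    printf("2.0 ** x = %f\n", 2.0 ** x);  // parses as 2.0 * (*x), prints 8.000000
    return 0;
}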

--
Fergus Henderson             fjh@mundil.cs.mu.oz.au
This .signature VIRUS is a self-referential statement that is true - but
you will only be able to consistently believe it if you copy it to your own
signature file!




Author: robert@kea.am.dsir.govt.nz (Robert Davies)
Date: Fri, 6 Mar 92 09:19:00 GMT
This is a further note on a proposal for the exponentiation (=power)
operator (= Fortran's **) to be added to C++.

In this submission I am making a first try to write the story.

I follow Joe Buck in using  @  as the symbol for the operator. However I do
have sympathy with the person who suggested that  ~  is a more suitable
symbol. Its shape suggests "superscript follows", although most people would
think of it as meaning approximately equal to or equivalent to. And of course it
already is working double time in C++. I think which symbol is best should be
discussed in parallel with a discussion on the semantics of the operator and
on whether a good enough story can be written.

I think the semantics and syntax can be sorted out to a satisfactory state,
and without a lot of difficulty.

The problem is that you can't add everyone's favourite operator - unless you
start writing APL++.

As I understand Bjarne, he wanted a language that would fit into small
manuals, small machines and small heads. (I think he succeeded with the first
two, but am not sure about the third). As part of this, he rejected the
Algol68 facility of allowing almost any identifier to be defined as an
operator (plus allowing you to set its order of precedence).

Hence we are definitely limited in the number of operators we can have. For
example, I don't see us having a special symbol for kronecker products of
matrices or having the symbols that logicians use.

So why is exponentiation a special case? It seems to me that the case needs
to be made on the following:


A.  The size of the (potential) user community that would use the facility
B.  Frequency of usage of the new operator
C.  The improvements in the programs through the use of the new operator
D.  Possible other applications
E.  Compatibility with language philosophy
F.  Negative impacts
G.  Ease of implementation
H.  Effect on the pockets of compiler writers (since they are represented on
       the standards committee)


A. Users of exponentiation

The main users of the exponentiation operator will be in the scientific
and engineering computing community. This includes physics, chemistry,
statistics, numerical analysis, operational research, econometrics etc. [more
examples?]

C++ language is being increasingly used in these areas.

Evidence for this is the existence of at least two commercial C++ libraries,
the Rogue Wave math library and Dyad M++ [does anyone know the sales figures?],
which emphasise classes relevant to science and engineering
applications; the interest in C++ packages; and the "numerical methods in C++"
project.

[Does anyone have any idea what fraction of C++ users would be in the
science/engineering area?]

There has already been a move towards the use of C for large scientific
systems such as SAS. In fact C++ has substantial advantages over C for this
work and we can expect writers of such packages to use C++ when they are
satisfied that the language is sufficiently stable and reliable.

C++ is particularly suitable for scientific/engineering programming when
compared with Fortran 77 (Fortran 90?) and Pascal.

C++ provides easy control of memory (with suitable libraries) and compile time
checking of function arguments (both failings of Fortran 77; the first is
also a failing of Pascal). Following the ANSI version of C, we can expect
scientific programmers to have the control over float and double that they
need. C++'s use of references for passing function arguments means that users
can avoid some of the mysteries and inelegance of pointers associated with C.
Yet C++ maintains much of the efficiency of C. I think Fortran will still win
in straight speed trials on the things Fortran is good at, but even this might
change as new libraries get written and C++ compilers and modern computers
come to terms with each other.

But the real advantage of C++ for scientific programming is in the use of
classes and objects.

Science and engineering deal with complicated interacting data structures.
Some of these are mathematical concepts such as complex numbers, matrices, or
elements in a finite element mesh. Others are describing real things such as a
section of pipe in the plumbing of a geothermal power station or a machine in
a production line. One can set up the data structures needed to describe these
classes of objects and then the formulae that show how they behave and
interact.

For objects like matrices it is particularly convenient to use the operators
like * and + to describe multiplication and addition, since the formulae in
one's program look a lot like the ones written in the specification; the
chances of error are much reduced and the program is much more transparent.

More important, the messy details of an object can be written as part of the
definition of a class, tested and then forgotten as you get on with the rest
of the program. Where you have objects that differ in a few respects you can
use inheritance so you can program the common parts only once and the
differing parts separately. For example a curved pipe and a straight pipe in
our geothermal system have the same parameters describing the steam flow and
the pressures at the ends but the formulae relating these differ.


B. Frequency of use.

I don't think the usage is extremely large, just a few percent of the lines in
my programmes seem to need the operator. [How often do other people use it?]
The exponentiation operator does not occur nearly as often as the regular +
and * operators: but where it does occur it has to be correct. And in some
cases it is difficult to carry out detailed checking. So we must reduce the
possibility of error. My usage would be more if I was sure the compiler would
recognise repeated expressions and build up powers in a sensible way in
expressions involving several powers.


C. Improvements

Some people have strong feelings about the need for an exponentiation
operator. I quote from Press, Flannery, Teukolsky and Vetterling in their
respected book "Numerical Recipes in C", pp 14 and 23 "... the slowness of C's
penetration into scientific computing has been due to deficiencies in the
language that computer scientists have been (we think, stubbornly) slow to
recognize. Examples are the lack of a good way to raise numbers to small
integer powers ...", "The omission of this operator from C is perhaps the
language's most galling insult to the scientific programmer".

Despite objects and classes, scientists and engineers still have to deal with
complicated formulae involving numbers raised to powers. Some of these are not
too bad - just a few terms; some are generated by a symbolic programming
language and take many lines. Both may well involve powers of numbers. Very
often they are just simple integer powers. As with formulae involving matrices
etc, we want the formula in the program to look as close as possible to the
one in the book. We don't want functions with brackets. This is all in the
interests of efficient and accurate programming. That is what C++ is for.

Here are some lines from the geothermal plumbing problem and from a recent
project. The lines don't occur together.

With present day C++:

inline real square(real x)  { return x*x; }
inline real cube(real x)  { return x*x*x; }
inline real fourthpower(real x)  { return square(x*x); }

F1 = 0.8 * sin(0.5 * theta) * (1.0 - square(beta)) / fourthpower(beta);
F3 = 2.6 * sin(0.5 * theta) * square(1.0 - square(beta)) / fourthpower(beta);
F5 = sin(0.5 * theta) * (1.0 - square(beta))
   * (3.4 - 2.6 * square(beta)) / fourthpower(beta);
CR = 0.0027462 * LD * pow(P,0.3882) * square(D) / pow(T,0.737);

term = 2.0 * cube(y); y = square(y);
x *= (1.0 - exp(-0.5 * tausq * square(u)));
return pow(2.0,(sum1 / 4.0)) / (pi * square(axl));
sum1 += ncj * square(x / y) + nj * (square(x) / y + log1(-x, FALSE ));
*F = 0.5 * ( square(M(0,3)) + square(M(0,4)) ) * (n-5) / r(0,0);


With the addition of the exponentiation operator:

F1 = 0.8 * sin(0.5 * theta) * (1.0 - beta @ 2) / beta @ 4;
F3 = 2.6 * sin(0.5 * theta) * (1.0 - beta @ 2) @ 2 / beta @ 4;
F5 = sin(0.5 * theta) * (1.0 - beta @ 2) * (3.4 - 2.6 * beta @ 2) / beta @ 4;
CR = 0.0027462 * LD * P @ 0.3882 * D @ 2 / T @ 0.737;

term = 2.0 * y @ 3; y = y @ 2;
x *= (1.0 - exp(-0.5 * tausq * u @ 2));
return 2.0 @ (sum1 / 4.0) / (pi * axl @ 2);
sum1 += ncj * (x / y) @ 2 + nj * (x @ 2 / y + log1(-x, FALSE ));
*F = 0.5 * ( M(0,3) @ 2 + M(0,4) @ 2 ) * (n-5) / r(0,0);

The lines are not all that complicated, and yet I think the ones with @ (I
wish it was **, but that is just what I am used to) are easier to write and
easier to check against what is on the piece of paper.

I would expect the squares, cubes, up to 12th powers to be compiled inline,
and I would hope the compiler would recognise repeated powers in an expression
and build up the higher powers from the lower powers.

Some people have much more complicated expressions including one covering many
lines generated by symbolic manipulation programs [examples please]. They
won't want to translate the powers to functional form but probably won't be
too upset changing ** to @.


D. Other applications.

Because taking powers is such a common operation there will be numerous
situations when we want to define the power operation for other classes.

The one I am most interested in is symbolic manipulation. No, I am not trying
to write Macsyma in C++. But I would like to do some simple manipulation,
particularly taking derivatives as part of a maximising or non-linear solving
program. Such formulae will very often involve powers. I haven't worked out
details yet but I know something like this will be possible.

Dummy X;

Function F = X @ 4 + 3.0 * X @ 3 - 2.0 * X @ 2 + 5.0;
Function G = F.derivative(X);
double f = F.solve(5.0);


E. Language philosophy.

I can't see any problem.


F. Negative impacts.

The compiler will be just a little longer and, I suppose, a few microseconds
slower. There will be no impact on existing code.


G. Ease of implementation.

It uses well-known technology from existing compilers so there shouldn't be a
big problem [any comments on this?].


H. Financial considerations.

I think there is a substantial body of users who will be wondering about
switching to C++ from Fortran. Several of the companies selling C++ compilers
don't also have Fortran compilers so each switch is money in one of their
pockets.

I think there are a number of things that C++ has to do to attract these
users:

Get some good libraries together - underway.

Include the exponentiation operator - this proposal.

Sort out the float/double problem - done or underway?

Include complex - done.

Improve the handling of arrays in C - done or being done?

Get an equivalent of the DATA statement in Fortran - another proposal.







Author: hendrik@vedge.UUCP (Hendrik Boom)
Date: 5 Mar 92 16:33:57 GMT
In article <omci1INN8np@agate.berkeley.edu> jbuck@forney.berkeley.edu (Joe Buck) writes:
. Group 2:
. long double operator@(long double,long int);
. long double operator@(long double,unsigned int);
. long double operator@(long double,int);
. double operator@(double,long int);
. double operator@(double,unsigned int);
. double operator@(double,int);
. float operator@(float,long int);
. float operator@(float,unsigned int);
. float operator@(float,int);
.

. For group 1 (both arguments of floating type), if the first argument
. is negative the result is undefined.  For group 2 (floating base
. raised to integral power), the first may be negative as long as
. the second argument is zero or positive.  If both arguments are
. negative the result is undefined.  For group 3, the result of a
. negative second argument is undefined.

You goofed on group 2; negative float to negative integer power is
perfectly well-defined, and even returns a float.
x @ (-i) = 1 / (x @ i) as long as x is nonzero.

By the way, 0 @ 0 is usually defined to be 1 when the zeros are
integers; usually undefined when real or complex.
--
Try one or more of the following addresses to reply.
 hendrik@vedge.uucp
 iros1!vedge!hendrik




Author: jimad@microsoft.com (Jim ADCOCK)
Date: 06 Mar 92 17:52:57 GMT
In article <omci1INN8np@agate.berkeley.edu> jbuck@forney.berkeley.edu (Joe Buck) writes:
|In article <22292@alice.att.com>, bs@alice.att.com (Bjarne Stroustrup) writes:
||> That is the rub. We don't know. To the best of my knowledge nobody has
||> bothered to think up and describe a specific proposal in detail and evaluate
||> its implications. Naturally, we want an exponentiation operator, naturally
||> we want lots of things, we all have wish lists. The problem is that we can't
||> have everything we want and that having to be precise about what we want
||> might spoil some of the fun.
|
|OK, I decided to take that as a challenge.  I don't find it at all
|difficult to be precise about the exponentiation operator.
|
|Here is the start of a more formal proposal.  I welcome feedback and
|comments, and if it seems that people are interested it could be made
|into an official proposal.

I think this looks pretty interesting.  Certainly getting some numerical
extensions into C++ is better than having a Numerical C group creating
extensions incompatible with C++.

Perhaps Bjarne's committee can consider this proposal?  -- Or alternatively,
if this is not good enough, perhaps they can give us an example of a
language extension proposal that has passed their standards?





Author: sakkinen@jyu.fi (Markku Sakkinen)
Date: 6 Mar 92 07:09:31 GMT
In article <1992Mar2.154651.12261@rice.edu> dougm@rice.edu (Doug Moore) writes:
>
>In article <9206222.13299@mulga.cs.mu.OZ.AU>, fjh@magnum.cs.mu.OZ.AU (Fergus James HENDERSON) writes:
>|> ... I am SICK TO DEATH of the MANY people posting INCORRECT statements to
>|> the effect that "well, using '**' would be nice, but it is impossible".
>|>
>|> It is QUITE possible to provide a simple interface that allows ** to be used
>|> both for exponentiation and for double-indirection.
>
>Okay.  So, if I've overloaded both unary and binary operator * for a
>class Foo, and I've overloaded the exponentiation operator ** as well,
>what is the meaning of:
>
>Foo a,b,c;
>a = b**c;  // b * (*c)?   b (**) c?
>
>Doug Moore
>(dougm@cs.rice.edu)

This is a non-problem.  According to the lexical rules, the longest
possible token in a left-to-right scan must always be identified,
thus "b (**) c".

However, it seems that double indirection would get into trouble
if '**' were to be introduced as a binary operator. The reason is
that syntax rules are considered only after lexical analysis, so:
1. 'func (int**);'   input
2. 'func' '(' 'int' '**' ')' ';'   tokens
3. illegal use of binary operator
People will not be eager to change '**argv' everywhere
to '* *argv' or '* * argv' to escape this problem.

Is this analysis correct?

----------------------------------------------------------------------
Markku Sakkinen (sakkinen@jytko.jyu.fi)
       SAKKINEN@FINJYU.bitnet (alternative network address)
Department of Computer Science and Information Systems
University of Jyvaskyla (a's with umlauts)
PL 35
SF-40351 Jyvaskyla (umlauts again)
Finland
----------------------------------------------------------------------




Author: harrison@sunwhere.dab.ge.com (Gregory Harrison)
Date: 7 Mar 92 21:02:11 GMT
Good luck.  I certainly appreciate having good and user-friendly numerical
capabilities in C and C++.  I use these languages for DSP and used to have
to force C into yielding to complex numbers (C++ is a nice improvement).

Greg Harrison




Author: jbuck@forney.berkeley.edu (Joe Buck)
Date: 28 Feb 1992 22:19:45 GMT
In article <22292@alice.att.com>, bs@alice.att.com (Bjarne Stroustrup) writes:
|> That is the rub. We don't know. To the best of my knowledge nobody has
|> bothered to think up and describe a specific proposal in detail and evaluate
|> its implications. Naturally, we want an exponentiation operator, naturally
|> we want lots of things, we all have wish lists. The problem is that we can't
|> have everything we want and that having to be precise about what we want
|> might spoil some of the fun.

OK, I decided to take that as a challenge.  I don't find it at all
difficult to be precise about the exponentiation operator.

Here is the start of a more formal proposal.  I welcome feedback and
comments, and if it seems that people are interested it could be made
into an official proposal.

Draft Proposal to add an exponentiation operator to C++

Version 1
February 28, 1992
Joseph T. Buck (jbuck@ohm.berkeley.edu)

The portion of the grammar described on p. 72 of the ARM is revised as
follows:

exp-expression:
 pm-expression
 pm-expression @ exp-expression

multiplicative-expression:
 exp-expression
 multiplicative-expression * exp-expression
 multiplicative-expression / exp-expression
 multiplicative-expression % exp-expression

The revision introduces a new operator, @, which is right-associative
and binds more tightly than the multiplication/division operators
(*, /, and %), and more loosely than .*, ->*, cast operators, or
unary operators.  The grammar change basically sticks a new production
in between pm-expression and multiplicative-expression.

The exponentiation operator, @, groups right-to-left.  The operands
of @ must have arithmetic type.
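
 Commentary: for illustration, the grammar above gives the following
 groupings (the @ examples are of course hypothetical syntax under this
 proposal):

  a / b @ c @ d   means   a / (b @ (c @ d))   // tighter than /, right-assoc
  2 * x @ 3       means   2 * (x @ 3)
  -x @ 2          means   (-x) @ 2            // unary minus binds tighter
  x @ p->*q       means   x @ (p ->* q)       // ->* binds tighter still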

The "usual arithmetic conversions" of section 4.5 are used, with the
following exception:

If the first argument is of one of the types long double, double, or
float, and the second argument is of integral type, integral promotion
(ch 4.1) takes place on the second argument, but no change is made to
the first argument.

 Commentary: we wish to allow -2.0@i, but not -2.0@x, where i
 is of integral type and x is of floating type.
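
 As a concrete illustration with today's library pow() (the behaviour of
 the domain-error case is implementation-dependent; IEEE systems typically
 give NaN):

#include <math.h>
#include <stdio.h>

// With an integral exponent a negative base is fine; with a fractional
// exponent it is a domain error for pow() (errno EDOM, typically NaN).
int main() {
    printf("pow(-2.0, 3.0) = %g\n", pow(-2.0, 3.0));   // -8, well-defined
    printf("pow(-2.0, 0.5) = %g\n", pow(-2.0, 0.5));   // domain error
    return 0;
}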

The effect of these rules is to allow the following combinations of
arguments and result types:

 NOTE: I know that the language does not permit overloading
 on builtin arguments.  I am showing the "simulated prototypes"
 for the argument combinations that can occur.

Group 1:
long double operator@(long double,long double);
double operator@(double,double);
float operator@(float,float);

Group 2:
long double operator@(long double,long int);
long double operator@(long double,unsigned int);
long double operator@(long double,int);
double operator@(double,long int);
double operator@(double,unsigned int);
double operator@(double,int);
float operator@(float,long int);
float operator@(float,unsigned int);
float operator@(float,int);

Group 3:
long int operator@(long int,long int);
unsigned int operator@(unsigned int,unsigned int);
int operator@(int,int);

For all three groups, the result when both arguments are zero
is undefined.

For group 1 (both arguments of floating type), if the first argument
is negative the result is undefined.  For group 2 (floating base
raised to integral power), the first may be negative as long as
the second argument is zero or positive.  If both arguments are
negative the result is undefined.  For group 3, the result of a
negative second argument is undefined.

 Commentary: "undefined" means that an arithmetic exception
 may occur, or that the result may be garbage, for example,
 0@0 might return 1, 0, or an error.

 I'm choosing "undefined", rather than specifying exceptions,
 to be consistent with how the ARM deals with things like
 division by zero.

-------------------------------------------------------------------

"class complex" is not part of the standard at this point.  Should
it be standardized, the following operator overloads might be used:

complex operator@(complex b,complex e) {
 return exp(e*log(b));
}

// optional: might be desirable because log(double) is cheaper
complex operator@(double b,complex e) {
 return exp(e*log(b));
}

-------------------------------------------------------------------

Now for the objections and questions:

1.  But -1@0.5 is complex(0,1).  Why shouldn't that result be returned?

It is inconsistent with the language for the type of the result to
depend on the value of the arguments.  If a user wants to deal with
complex results, it's not difficult to specify complex classes.  Every
other operator in the language restricts the result to be no more
general than the type of its arguments; e.g. 3/2 = 1, not 1.5, even
though 1.5 is "correct".

2.  But nevertheless, complex(0,1) is the right answer.  Users won't
accept this.

Experience with Fortran suggests otherwise.  The semantics I've chosen
generally agree with Fortran, and yet everything fits nicely with C++
conventions.

3.  You're trying to make my favorite language more like Fortran, and
Fortran is an inferior language.

You're not going to get a lot of the people who use Fortran to convert to
C++ as long as you make life difficult for them.  It's not just a matter
of syntax; most compilers are simply going to generate worse code if there
isn't an exponentiation operator, unless the users write their code more
carefully than we can expect.  Fortran is inferior in many ways, but there
is a reason why it's still used for large scientific problems, and the
exponentiation operator is one of the reasons (vectorizability is another,
but that's another argument).

4.  Why don't you just use the "usual arithmetic conversions"?  Why the
exception?

Because users will be unhappy if X@2 raises an exception for negative
X, when it's perfectly well-defined.

5.  Why isn't it enough to use "pow", especially since you can overload it
to define pow(double,double), pow(double,int), and pow(int,int)?

Most scientific codes contain large amounts of exponentiation; raising
a real base to an integer power where the integer is known at compile
time is common, but real or integer unknown exponents are also common.
Evaluation of polynomials (where x@1, x@2, ... are used in sequence) is
a common operation.  Forcing the use of pow(...) has several harmful effects:

 The code size increases and complicated expressions become more
 difficult to read.  C/C++ programmers who don't think this is
 a problem haven't seen scientific codes, where even with the
 exponentiation operator, expressions can require several lines
 to write.

 Strength reduction becomes much more difficult to apply, unless
 pow(...) is made a special function known by the compiler.

 Optimizations commonly applied in Fortran compilers when the same
 base appears with several constant exponents are also more difficult.

6.  Why not use ** as in Fortran?

It would break "int main(int argc,char **argv)".  We can't make ** a
token.

7.  How about ^, ^^, or #?

^ is taken (exclusive-or).

I could live with ^^, but there is a relation between & and && and |
and || that might suggest a completely different meaning for ^^ to some
(a logical rather than a bitwise exclusive-or).

# doesn't suggest the right thing to me; also it might confuse some
preprocessor implementations if it appeared as the first nonblank
character on a line.

8.  It will be hard to implement.

I disagree.  The modification to the grammar is simple.  The single
exception to the "usual arithmetic conversions" rule is not difficult
to check for.  A quick and dirty implementation can simply insert
function calls for the various templates shown above, and their number
can be reduced in many environments; strength reduction for common
cases (exponent is a small constant integer) is very easy to apply.
Compared to, say, implementing exceptions, the work required for this
change will be trivial.
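
For instance, the constant-integer case reduces to a handful of multiplies;
the following is only a sketch of the kind of expansion a compiler (or, today,
a library writer) could produce:

#include <stdio.h>

// Integer power built from O(log n) multiplies (exponentiation by squaring).
inline double ipow(double x, unsigned n) {
    double result = 1.0;
    while (n > 0) {
        if (n & 1) result *= x;   // take in the current bit of the exponent
        x *= x;                   // square the base
        n >>= 1;
    }
    return result;
}

int main() {
    printf("ipow(3.0, 5) = %g\n", ipow(3.0, 5));     // 243
    printf("ipow(2.0, 10) = %g\n", ipow(2.0, 10));   // 1024
    return 0;
}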

9.  But for implementations like cfront, which produce C, the lack of
an exponentiation operator in C is a problem.

Just as overloaded operators for classes are turned into function calls
unless they are inlined, a cfront-like implementation could inline
some cases (turning x@2 into x*x, say) and generate function calls
for the rest.  cfront already generates function calls for user-defined
operators; @ could be treated the same way even when used on builtin
types.

10.  What is the effect on existing programs?

None.  "@" is not used in the current grammar; no existing program
will break.

11.  Are there any other benefits or impact?

A new operator is available for overloading, with an intuitive meaning
("at") that may suggest natural uses for some classes.

If we attract more Fortran users they might ask us to implement
EQUIVALENCE next. :-)




--
Joe Buck jbuck@ohm.berkeley.edu




Author: mab@wdl39.wdl.loral.com (Mark A Biggar)
Date: Fri, 28 Feb 1992 22:37:49 GMT
In article <omci1INN8np@agate.berkeley.edu> jbuck@forney.berkeley.edu (Joe Buck) writes:
>In article <22292@alice.att.com>, bs@alice.att.com (Bjarne Stroustrup) writes:
[group 2 is pow(float,int) etc.]
>..For group 2 (floating base
>raised to integral power), the first may be negative as long as
>the second argument is zero or positive.  If both arguments are
>negative the result is undefined.

Why is (-2.0)@-2 undefined?  Isn't it equal to -0.25?
I thought there was a problem only with fractional exponents of negative
numbers, not negative integral exponents.
--
Mark Biggar
mab@wdl1.wdl.loral.com





Author: ark@alice.att.com (Andrew Koenig)
Date: 29 Feb 92 15:58:37 GMT
Joe Buck's proposal for using @ for exponentiation goes quite far, but
omits one important detail: what is the value of x@y?

That sounds like a trivial question -- it's just exponentiation, right? --
but it is actually far from trivial.  For example, if x@y is implemented
as exp(y*log(x)), the result will surely be inaccurate.  So what, if
anything, should the user be entitled to assume?  For example:

 Is x@2 guaranteed to be equal to x@2.0?  To x*x?

 Is x@3 guaranteed equal to x*x*x?  What if x*x*x is not
 the best possible approximation to the infinite-precision
 cube of x?  This problem becomes worse for higher powers.

 If x>1 and y>z, is it always true that x@y >= x@z?

 If the mathematically exact value of x@y can be exactly
 represented as a floating-point number, is that the result
 you must get?

The corresponding questions are much easier to answer for simpler
operations, especially in light of things like the IEEE floating-point
standard.  But that standard does not define an exponentiation operation,
perhaps because it is so difficult.
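
To see how delicate this is, a quick check along these lines (assuming IEEE
doubles and a typical math library; the three computations often disagree in
the final bits, though the exact outcome depends on the library):

#include <math.h>
#include <stdio.h>

int main() {
    double x = 1.1;
    double direct  = x * x * x;             // two roundings
    double vialogs = exp(3.0 * log(x));     // three roundings, plus exp/log error
    double vialib  = pow(x, 3.0);           // whatever the library delivers
    printf("x*x*x         = %.17g\n", direct);
    printf("exp(3*log(x)) = %.17g\n", vialogs);
    printf("pow(x,3)      = %.17g\n", vialib);
    printf("all equal? %s\n", (direct == vialogs && direct == vialib) ? "yes" : "no");
    return 0;
}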

Of course, people have looked at all these issues for other languages.
The point is that a complete proposal for C++ must at least survey
what other languages have done and give a cogent argument for a
particular approach.
--
    --Andrew Koenig
      ark@europa.att.com