Topic: Power operator for C++


Author: matt@physics.berkeley.edu (Matt Austern)
Date: 13 Jul 92 15:47:45 GMT
In article <1992Jul08.222013.1959@microsoft.com> jimad@microsoft.com (Jim Adcock) writes:

> 3) For better or for worse, C and C++ have developed a particular
> "style and feel" and extensions need to fit into that existing
> "style and feel"
>
> 4) When new C++ programmers year after year all express
> disappointment and/or surprise over the lack of some "obvious" C++
> feature, those programmers are probably correct and their thoughts
> on these issues should probably be taken seriously.

I just wanted to call attention to these two points in Jim Adcock's
article, which, I think, should both be kept in mind when we discuss
any extensions to C++.  It's particularly interesting that he put them
next to each other, since there is quite definitely a tension between
them.

On the one hand, we don't want to turn C++ into a completely different
language; on the other hand, though, there is a danger that if we
become too familiar with C++, and not familiar enough with other
languages, then we will become blind to its omissions and weaknesses,
and unquestioningly assume that they are just the natural way of
things.  I have seen many Fortran programmers who are like that: they
are simply unable to understand the difference between a data
structure and a "common block," because common blocks are all that
they know.  We should make sure not to become equally parochial.
--
Matthew Austern              I dreamt I was being followed by a roving band
(510) 644-2618               of young Republicans, all wearing the same suit,
matt@physics.berkeley.edu    taunting me and shouting, "Politically correct
austern@theorm.lbl.gov       multiculturist scum!"... They were going to make
austern@lbl.bitnet           me kiss Jesse Helms's picture when I woke up.




Author: matt@physics.berkeley.edu (Matt Austern)
Date: 6 Jul 92 14:28:39
In article <1992Jul5.133125.5575@lth.se> dag@control.lth.se (Dag Bruck) writes:

> I ask someone with access to a lot of numerical code (perhaps written
> in FORTRAN) to do us a favour: gather some statistics that can tell us
> how common exponentiation is compared to other operators, such as,
> addition, subtraction, multiplication and division.  Compared to
> square root?

I just did a bit of grepping on a moderate-sized (30,000 lines)
Fortran program, called Papageno.  I suspect that it is fairly typical
of code used in particle physics.

Some statistics:
 Operator   Number of lines containing operator
 --------   -----------------------------------
  total     32000
  +         10711
  *          6805
  -          6253
  /          3382
  **          1481

The result, then, is that exponentiation is about half as common in
this program as division.  To me, at least, this qualifies as
"common".  Certainly nobody would suggest that a division operator is
unnecessary just because it is used only half as much as subtraction!

(Many lines use exponentiation more than once; in total, the operator
** is used 4496 times in Papageno.)

Looking over the code, by the way, it seems that the vast majority of
the time, ** is used to raise a quantity to an integral power.  (I
didn't gather statistics; that would require more effort than a simple
grep!)  This suggests that pow(double,double) is not a good
substitute; a better substitute would be pow(double,int), which is not
in the standard library.
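A sketch of what such a pow(double,int) overload might look like; the name
ipow and the binary-exponentiation strategy are my own illustration, not
anything in the standard library:

```cpp
// Sketch of the pow(double, int) overload Matt suggests (the name
// ipow is mine; nothing like it is in the standard library).
// Binary exponentiation: O(log n) multiplications for x**n.
double ipow(double x, int n) {
    unsigned long long e = (n < 0) ? -(long long)n : (long long)n;
    double result = 1.0;
    while (e != 0) {
        if (e & 1) result *= x;   // fold in the lowest exponent bit
        x *= x;                   // square for the next bit
        e >>= 1;
    }
    return (n < 0) ? 1.0 / result : result;
}
```

For integer exponents this also avoids the test `float(int(b))==b` that a
general floating-point power routine would need.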




Author: maxtal@extro.ucc.su.OZ.AU (John (MAX) Skaller)
Date: Tue, 7 Jul 1992 05:48:25 GMT
In article <23127@alice.att.com> bs@alice.att.com (Bjarne Stroustrup) writes:
>
>Here is an (incomplete) list of proposals for which the committee has received
>"official" written proposals:

 Thank you for posting this list.

You have already saved me time because someone else proposed something
I was thinking about and it has been rejected.

I would like to know more about the items I have marked with **> below.

>
> templates (accepted)
> exception handling (accepted)
> relaxation of virtual function return rules (accepted)
> overloading of dot
> renaming/generalized overriding (withdrawn/rejected)
> run-time type identification
**> name space control (modules)
> etc. concurrency support extensions
> named arguments ala Ada (withdrawn/rejected)
**> member initializers within class declarations
**> etc. template extensions (constraints, overloading, etc.)
**> ~const (explicit separation of the notions of logical and bitwise constness)
**> separate overloading based on rvalue/lvalue distinction
> operator new/delete for arrays
> more general default arguments
> restricted pointers (a modern version of noalias)
**> assertions
> representation using restricted European character set (accepted)
> extended characters sets (unicode, etc.)
> added facilities for hiding private data (rejected)
> etc. proposals for relaxing, strengthening, and changing the access protection rules
> overloading based on enumerations
>
>For several of the issues the committee has received more than one - usually
>mutually exclusive - proposal.
>
>If you want details you can either become an observer and get the papers or look
>at the series of reports from the ANSI meetings that appear regularly in C++
>publications. Each proposal involves several pages and the major ones many long
>papers so I can't easily present them here.
>
>In addition the net is humming with proposals for an exponentiation operator,
>user defined operators, etc. many of which I suspect will eventually emerge
>as formal proposals.
>
>In my opinion accepting any large subset of these proposed features would lead
>to chaos.

 C++ is already chaos :-)

>Yet the simple fact that it is easier to say 'yes' than 'no,' the
>fact that proponents invariably will devote more time and effort on a proposal
>than opponents, that a feature once voted in will never be voted out again,
>and that the acceptance of one proposal is invariably used as an argument
>for accepting others ("if THAT proposal was good enough then MINE is good
>enough and it is UNFAIR to reject it") seems to make it quite difficult to
>avoid the acceptance of most of these.
>
>The committee is struggling to develop a procedure that ensures a reasonable,
>fair, and timely evaluation of individual proposals and also keep the language
>as a whole in mind. This is not easy. Two techniques have been developed:
>
> Evaluation based on the criteria published in the "How to Write a C++
> Language Extension Proposal for ANSI-X3J16/ISO-WG21" letter from the
> extensions working group, and
>
> secret ballot on the order of consideration of proposals in the
> extensions working group.
>

 How about posting the proposals to comp.iso.c++??

 Then some of the work could be done by the time it comes
 to committee.

>We also try to produce written analyses of proposals to help evaluation but
>to date we have not managed to evolve a completely satisfactory method for
>producing such evaluations or a satisfactory format for such evaluations

 This would also be a problem for the moderator(s) of
 comp.iso.c++.

>(for example, there is a strong tendency for proponents of a proposal to
>be the only ones who volunteer for doing evaluations). In the absence of
>improved evaluation procedures I suspect the committee will accept at least
>one proposal per meeting (three a year) for some years.

 Net based proposals tend to get evaluated by many people
with different ideas.

>
>To balance the enthusiasm for extensions I note that every proposed extension
>has been strongly opposed on technical grounds by someone both within the
>ANSI/ISO committee and without. People tend to hold strong opinions on language
>issues.
>

 That is one thing we may all agree on!
--
;----------------------------------------------------------------------
        JOHN (MAX) SKALLER,         maxtal@extro.ucc.su.oz.au
 Maxtal Pty Ltd, 6 MacKay St ASHFIELD, NSW 2131, AUSTRALIA
;--------------- SCIENTIFIC AND ENGINEERING SOFTWARE ------------------




Author: sasa@version.kiev.ua
Date: Tue, 07 Jul 92 22:40:47 +0300
        Hello!

    I am sorry, but I missed the start of this discussion.  I think
the main subject is "how do we make C++ more usable for numerical
programming?".  May I speak on this subject?

    I think adding a "power operator" isn't the only question.  How
about adding complex values as a predefined type (like "int", "double",
etc.)?  After that, the question about "^" goes away, because "XOR"
is undefined for "complex".  My proposal follows.


    Add some new predefined types to C++ - "numerical", "long
numerical" and "complex" - with a full set of operators.  Then the
contents of a standard library must be defined (as <stdio.h> is, for
example).

    For these types "XOR" and "NOT" are undefined.  Because of that,
we can assign their tokens to "power" and "root".  For compatibility,
we can assign "^" to "power"; then "~" must be "root".

    This fits fully within the C++ "overloading ideology".  For
example, nearly all class libraries redefine "+", "+=", and other
operators for their own classes, and this would simply add to the
list.  C++ programmers are not afraid of that.

    Second, a one-character operator is nicer, especially in
assignment form: compare "/%=" and "~=".

    Of course, this would need additional facilities in <errno.h>, etc.


        How about this idea?

P.S.  Where can I find "Joe Buck's version 2 proposal of 8 March 1992"?
--
Sasa Derazhne | sasa%version.kiev.ua@USSR.EU.net | Fido:   2:463/16
Kiev, Ukraine | sasa@version.kiev.ua (For /USSR) |  +7-044-417-2175






Author: steve@taumet.com (Steve Clamage)
Date: Tue, 7 Jul 1992 20:36:18 GMT
eric@tfs.com (Eric Smith) writes:


>Using operator@ adds a character to the language.  Is that one of the
>objections to it?

Yes.  There are already 9 characters in C/C++ from the ASCII set
 [  ]  {  }  \  |  ^  ~  #
which do not exist in one national character set or another.  These
cause endless problems for those nations.  Some of these characters
are also missing from EBCDIC, used on IBM systems even in America.

Trigraphs, invented by the ANSI C committee, are not a completely
satisfactory solution.  As a consequence, some other character
sequences have been proposed to represent these characters, or
operators using them, in *addition* to trigraphs.

Adding yet another character not commonly available would further
fan the flames of discontent.  It would take some pretty powerful
arguments to add yet another character.
--

Steve Clamage, TauMetric Corp, steve@taumet.com
Vice Chair, ANSI C++ Committee, X3J16




Author: matt@physics.berkeley.edu (Matt Austern)
Date: 7 Jul 92 23:16:41 GMT
In article <0.131005894782@kiae.su> sasa@version.kiev.ua writes:

>     I think the adding a "power operator"  isn't single question.  How
> about adding "complex-values" as predefined type (as "int",  "double",
> etc.)? After this, the question about "^"  is nothing,  because  "XOR"
> is undefined for "complex".

Unfortunately, overloading operator^ doesn't work.  An operator always
has the same precedence, no matter how it is overloaded, and the
precedence of operator^ is utterly wrong for exponentiation.  The
expression 3*x^5 + 2*x^17 will not be interpreted the way you would
want!

I don't think that any language changes are needed for using complex
numbers; it seems to me that classes are a fine solution to that.  (Of
course, if a class library is ever standardized, I would suggest that
complex numbers would be a fine candidate for inclusion in that
standard class library, along with, for example, strings, arrays, and
linked lists.)




Author: maxtal@extro.ucc.su.OZ.AU (John (MAX) Skaller)
Date: Wed, 8 Jul 1992 13:46:44 GMT
In article <1992Jul7.203618.18074@taumet.com> steve@taumet.com (Steve Clamage) writes:
>eric@tfs.com (Eric Smith) writes:
>
>
>>Using operator@ adds a character to the language.  Is that one of the
>>objections to it?
>
>Yes.  There are already 9 characters in C/C++ from the ASCII set
> [  ]  {  }  \  |  ^  ~  #
>which do not exist in one national character set or another.  These
>cause endless problems for those nations.  Some of these characters
>are also missing from EBCDIC, used on IBM systems even in America.
>
 Worse, Borland for one uses the @ character in its name
mangling scheme.

 Let's stick with the existing character set.

 BTW: what is the status of '\'?  It is not part of the
language proper, but it has specific meanings in strings.






Author: bs@alice.att.com (Bjarne Stroustrup)
Date: 8 Jul 92 21:45:52 GMT


 > BTW: what is the status of '\'. It is not in the
 > language proper, but has specific meanings in strings.

Backslash is one of the characters used for a letter in many European
languages.  Its character position is set aside for that purpose by ISO.
It would thus add to the problems of "internationalization" to use
backslash for anything significant.

PS: See page 371 of the ARM for a brief example of these character set problems.




Author: jimad@microsoft.com (Jim Adcock)
Date: 08 Jul 92 22:20:13 GMT
In article <23127@alice.att.com> bs@alice.att.com (Bjarne Stroustrup) writes:
|We must work within some formal framework exactly to
|avoid the fears of arbitrariness that have been voiced

while recognizing that merely working within a formal framework is insufficient
to avoid arbitrariness, nor is it a justification for arbitrariness.

|I note that no form of decision making is perfect but that the
|relatively democratic process specified by ANSI and ratified by
|the somewhat more formal ISO voting by national representatives
|is similar to that devised in other contexts to ensure a balance
|between openness and stability. Anyway, please note that the ANSI
|process is not controlled by the members of the ANSI C++ committee.

Nor is "democracy" or voting systems in one of their myriad forms an
excuse for arbitrariness.

Many aspects of the ANSI C++ committee are under the control of the
C++ committee.  Examples recently mentioned by committee members include
the rules for who gets copies of the papers and how, the private mail
systems in use by the committee, the methods for determining what items
get addressed when and how, what working group issues get routed to and
why, joining the ANSI and ISO efforts, etc.

|The ANSI process gives votes to anyone turning up without any questions
|asked about interests, abilities, or nationality. It gives that vote on
|the second and subsequent meeting a member attends

I believe there is also a rule that you have to attend two out of the
last three meetings?

|Maybe accepting many new features is good? Most people seems to agree that
|some features should be added. Yet, again most people pay at least lip service
|to minimalism and make rude jokes about creeping featurism. What do people
|really think? Is the easiest path - simply accept as many reasonable proposals
|as possible after a minimal cleanup - also the best? Alternatively, should
|all extensions be avoided and all our energy channeled into "standardizing
|current practice?" (can that be done?) Can anyone think of criteria to apply
|in between these extremes? That is, what makes one proposal acceptable and
|another unacceptable?

I can only state my personal opinions on this question:

1) Not every change that has been suggested is an extension.  Changes
that extend the language should be considered differently than changes
that either don't extend the language or possibly reduce the language.
Changes that extend the possibilities of things that programmers can
do are not necessarily extensions to the language -- such changes might
equally likely be *reductions* in the language.  *Reductions* in the language
-- removing special cases, for example -- should be viewed particularly
favorably.  The onus should be on proving the *worth* of special cases,
not on removing them.

2) "All things being equal," making the language more orthogonal
is preferable to leaving it unorthogonal, because
leaving it unorthogonal makes the language bigger and increases
the surface area that programmers have to learn.  For example,
adding the initialization syntax:

 Foo foo(bar);

made the language bigger, and then later adding:

 int i(bar);

made it at least somewhat smaller.
IMHO the second change is an improvement and a *reduction* in the language
and further has the *advantage* of giving programmers the *choice*
to express themselves in the manner they see fit.

3)  For better or for worse, C and C++ have developed a particular
"style and feel" and extensions need to fit into that existing "style and feel"

4) When new C++ programmers year after year all express disappointment
and/or surprise over the lack of some "obvious" C++ feature, those programmers
are probably correct and their thoughts on these  issues should probably be
taken seriously.

5) Having a C++ standard is NOT an important goal.  Learning something,
reaching some consensus, and improving C++ along the way ARE important goals.
When C++ has a standard, then it's probably time to move on to something
else, just as C++ began really happening just as ANSI-C was finishing
standardization.

6) Common uses of programming "hacks" are not "features of the language,"
but rather show errors in the language that need to be fixed, as do
common uses of the preprocessor.




Author: dougm@titan.cs.rice.edu (Doug Moore)
Date: Sun, 5 Jul 1992 05:38:57 GMT
Draft Proposal to add an exponentiation operator to C++
(a revision of Joe Buck's version 2 proposal of 8 March 1992)
(that he wouldn't entirely agree with)

Revise the ARM as follows:

====Insert a new section between sections 5.5 and 5.6:====

The exponential operators  *%, /%, and %% group right-to-left.

exp-expression:
        pm-expression
        pm-expression *% exp-expression
        pm-expression /% exp-expression
        pm-expression %% exp-expression

The operands of *%, /%, and %% must have arithmetic type.

These operators each group right-to-left.  The usual arithmetic
conversions (see 4.5) are performed on the operands and determine the
type of the result.

The binary *% operator indicates exponentiation; i.e., it returns the
result of raising the first operand to the power represented by the
second operand.  If the second operand is 0, the result is one.

The binary /% operator yields the root, and the binary %% operator
yields the remainder from the extraction of the integral n-th root of
the first expression, where n is the value of the second.  If the
second operand of /% or %% is 0 the result is undefined.  If the
result of a /% b is defined, then (a/%b)*%b + a%%b is equal to a and
a%%b is nonnegative.

For the binary operators *%, /% and %%, the result is undefined if the
first operand is negative and the value of the second is not an
integer, or if the first operand is zero and the second is negative.
If the type of the result is integral and the second operand is
negative, then a*%b and a/%b have value 0.

 [Note] To preserve the identity (floatval op intval ==
 floatval op float(intval)), it is necessary to have the
 exponentiation operator for floats check whether its second
 argument is integer valued.  Relative to the cost of
 exponentiation, the cost of the test float(int(b))==b seems a
 small one.
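The proposed /% (integral n-th root) and %% (remainder) semantics can be
sketched as plain functions for the nonnegative case; the names iroot,
irem, and ipow_l are mine, and the proposal's negative-base cases such as
(-8)/%3 are deliberately left out of this sketch:

```cpp
// Sketch of the proposal's /% (integral n-th root) and %% (remainder)
// for a >= 0, n > 0 only; iroot/irem/ipow_l are illustrative names.
// Invariant stated by the proposal: (a/%b)*%b + a%%b == a, i.e.
// ipow_l(iroot(a,n), n) + irem(a,n) == a with irem(a,n) >= 0.
long ipow_l(long x, long n) {
    long r = 1;
    while (n-- > 0) r *= x;
    return r;
}

long iroot(long a, long n) {
    long r = 0;
    while (ipow_l(r + 1, n) <= a) ++r;   // largest r with r**n <= a
    return r;
}

long irem(long a, long n) { return a - ipow_l(iroot(a, n), n); }
```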

====Amend the productions of section 5.6 as follows:====
multiplicative-expression:
        exp-expression
        multiplicative-expression *% exp-expression
        multiplicative-expression /% exp-expression
        multiplicative-expression %% exp-expression

==Add to the list of assignment operators in section 5.17 the following:==

*%=  /%=   %%=

-------------------------------------------------------------------

Objections and questions:

1.  C++ is not Fortran; let the numerical programmers use that.

In greater and greater numbers, they don't want to.  They recognize
the limitations of Fortran better than anyone.  By taking small steps
to bring the numerical community into the C++ community, we get more
and better books, better optimizing compilers, and a bigger
marketplace for C++ expertise.  The benefits outweigh the costs.

2.  Why isn't it enough to use "pow", especially since you can overload it
to define pow (double, double), pow (double, int), and pow (int, int)?

Why have operators at all?  Because used properly they match our own
mathematical notations and make programs more readable.

3.  Why not use ** as in Fortran?

It would break "int main (int argc, char **argv)".

4.  How about !, ^, ^^, #, @, or ~ ?

! can be used as a binary operator, but not in the assignment form !=.

^ is taken (exclusive-or).  It has the wrong precedence and associativity
to be used as an overloaded operator signifying exponentiation.

There is a relation between & and && and | and || that might suggest a
completely different meaning for ^^ to some (a logical rather than a
bitwise exclusive-or).

# would produce unexpected results in a preprocessor definition like

#define power(a,b) (a#b)

~ is currently a unary operator but not a binary operator, and it
would be possible to use it for exponentiation instead.  But it may
not appear on some non U. S. keyboards.

@ is not a character used in C++ tokens now.  It would likely require
the generation of a new trigraph sequence for those with non U.S.
keyboards to deal with.

5.  Well, why *% then?  The use of % is so ... bizarre.

I was using *^ at first.  That choice appeared to require of some non
U.S. users the use of a trigraph, as (a *??' b).  I thought having
everyone write (a *% b) would be better.

6.  Why have the root taking "/%" operator?

To permit the taking of cube and similar roots that cannot be
expressed in terms of integral values and *%.  For example,

(-8)*%(1./3)

is undefined, but

(-8)/%3

is -2.

7.  Why have the %% operator?

Because it's a quantity computed as a byproduct of root taking anyway,
so why not use it?  Because it completes the analogy to the workings
of operators *, / and %.  Because it's another operator to overload.

8.  You implied above that 0*%0 == 1.  Did you mean to - because this
is not right.

I meant to.  First, saying that 0*%0 is undefined suggests only that
you have an incomplete definition.  We can define it as we wish if
such a definition is useful in practical programs.  So, consider a
function that evaluates sparse polynomials:

double pEval(double x, double* coeff, int* pwr, int n)
{
  double sum = 0.;
  for (int i = 0; i < n; ++i)
    sum  += coeff[i] * x *% pwr[i];
  return sum;
}

Without defining 0*%0 == 1, this reasonable-seeming function can fail.
Is there a similar reasonable-seeming program that fails unless 0*%0
takes on some other value?

9.  Why not let the programmer overload operators like these on the
builtin types?

Because name clashes between two libraries that overloaded *%
differently on doubles would be a nightmare.

10.  What is the effect on existing programs?

None.

11. This is a waste of time.  None of these proposals will ever get adopted.

Well, maybe.  The ANSI/ISO standardization process will probably
ignore this sort of issue until compiler writers start developing
their own exponentiation operator extensions.  If we get a coherent
proposal and enough momentum, we can bang on compiler vendors to get
the proposal implemented.

The GNU people have demonstrated an inclination to extend in the past.
For example, the GNU C++ compiler features builtin, overloadable max
and min operators (>? and <?).

How much would you pay for a GNU C++ compiler that had exponentiation
operators?  I'd pay five dollars.  That's not enough to do it, of
course.  What's it worth to you to get a popular C++ compiler to add a
power operator, so that makers of other C++ compilers feel competitive
pressure to do the same?  If people were willing to put up the money
to match the talk, exponentiation operators *would* appear in at least
one compiler*.

Doug Moore
dougm@cs.rice.edu

* This is a conjecture.  I don't work for Cygnus, who are the most
likely candidates to do it for some unknown amount of money.




Author: maxtal@extro.ucc.su.OZ.AU (John (MAX) Skaller)
Date: Sun, 5 Jul 1992 11:21:53 GMT
In article <DOUGM.92Jul4233857@titan.cs.rice.edu> dougm@titan.cs.rice.edu (Doug Moore) writes:
>
>Draft Proposal to add an exponentiation operator to C++
>(a revision of Joe Buck's version 2 proposal of 8 March 1992)
>(that he wouldn't entirely agree with)
>
>The operands of *%, /%, and %% must have arithmetic type.
>
>These operators each group right-to-left.  The usual arithmetic
>conversions (see 4.5) are performed on the operands and determine the
>type of the result.

 Two alternatives. Hypothetical arguments only ...

 1) Already discussed, a *% n should not promote n to float.
Possible arguments: checking if a real number is integral is mathematically
inconsistent in constructive mathematics. a*%b where a<0 should
always be an error. If n>0 it is never an error. The conversion
to float followed by a test degrades efficiency and reliability.

 2) The return type should be determined by the first argument
only. The result of the power function is the original number raised
to some power: the type of the power is not relevant in determining
the type of the result.
>
>The binary *% operator indicates exponentiation; i. e. it returns the
>result of raising the first operand to the power represented by the
>second operand.  If the second operand is 0, the result is one.
>
>For the binary operators *%, /% and %%, the result is undefined if the
>first operand is negative and the value of the second is not an
>integer, or if the first operand is zero and the second is negative.
>If the type of the result is integral and the second operand is
>negative, then a*%b and a/%b have value 0.
>
> [Note] To preserve the identity (floatval op intval ==
> floatval op float(intval)), it is necessary to have the
> exponentiation operator for floats check whether its second
> argument is integer valued.  Relative to the cost of
> exponentiation, the cost of the test float(int(b))==b seems a
> small one.
>

 The cost is greater for a *% 2.




Author: dag@control.lth.se (Dag Bruck)
Date: 5 Jul 92 13:31:25 GMT
A fundamental question is how great is the need for a power operator.
If this is a common operation in numerical code, I would regard that
as a strong argument for a power operator.  If it is not, that would
be strong argument against.

I ask someone with access to a lot of numerical code (perhaps written
in FORTRAN) to do us a favour: gather some statistics that can tell us
how common exponentiation is compared to other operators, such as,
addition, subtraction, multiplication and division.  Compared to
square root?

It would also be helpful if you could describe your application a
little, and also your view if the code you have evaluated is typical
for certain applications.


    -- Dag




Author: hbf@durin.uio.no (Hallvard B Furuseth)
Date: 5 Jul 92 16:17:00 GMT
In article <DOUGM.92Jul4233857@titan.cs.rice.edu> dougm@titan.cs.rice.edu (Doug Moore) writes:

> The exponential operators  *%, /%, and %% group right-to-left.

AArgh!  Trying to get the committee to insert one operator is hard
enough, and you want three!

I don't think integral roots are usual enough to warrant an operator.
Somewhere a line must be drawn where you must use a function instead of
an operator.  That is, unless you allow the user to define any operator
he wishes with some special syntax, i.e. something like
 int operator \root "*%" (int, int);
meaning that operator \root has the same precedence as *%, and you could
then say, e.g., "i * j \root 3 + 2".


> (...)      The usual arithmetic
> conversions (see 4.5) are performed on the operands and determine the
> type of the result.

I have seen two discussions of power operators recently, and after all
these words most participants still do not seem to realize one detail:
The type of the result should be (more or less) that of the 1st operand.
(Or like pow(): always floating.  I hate the idea but it is certainly
consistent.)

To be exact, I think the type should be given thus:
* If arg2 is integral: As arg1 after integral promotions.
* Maybe forbid integral~floating.
* Otherwise, perform the usual arithmetic conversions.

Just *PLEASE* think slightly about it:
* unsigned~anything cannot become negative.
* short~long would certainly get slightly less risk of overflow if
  promoted to long~long, but since even (long)2~(char)127 will give
  overflow on most machines, it's not very useful to worry about
  integral overflow due to large 2nd arg anyway.
* int~double is most likely the programmer making an error.
* the range of the result has nothing to do with the range (or
  precision) of 2nd arg's type.
* float~integral is defined as multiplying arg1 (or 1/arg1) with
  itself repeatedly.  The precision stays that of arg1, and the range is
  as useless to worry about as short~long.
* True, the value of int~negative-int is nonintegral.  But if arg2 is a
  variable the compiler won't know whether it's integral, so to allow
  for that it would have to convert to floating even for int~int.
  Of course it could recognize if arg2 is a negative integral CONSTANT,
  but heaven forbid different return type on (i~-2) and (j=-2, i~j)!

(Maybe even float~double should return arg1's type.  I don't know if
exponentiation is precise enough to make the extra precision from the
conversion useful.  After all, theoretically the precision of a
nonintegral operation is no greater than the least precise operand.)


> 8.  You implied above that 0*%0 == 1.  Did you mean to - because this
> is not right.
>
> double pEval(double x, double* coeff, int* pwr, int n)
> {
>   double sum = 0.;
>   for (int i = 0; i < n; ++i)
>     sum  += coeff[i] * x *% pwr[i];
>   return sum;
> }
>
> Without defining 0*%0 == 1, this reasonable-seeming function can fail.
> Is there a similar reasonable-seeming program that fails unless 0*%0
> takes on some other value?

Certainly.  Anything which builds on the fact that 0*%a == 0 instead of
a*%0 == 1 [when the expression is valid].

double pEvalVec(double* x, double* coeff, int* pwr, int n)
{
  double sum = 0.;
  for (int i = 0; i < n; ++i)
    sum  += coeff[i] * x[i] *% pwr[i];
  return sum;
}

Undefined result (in mathematics) means that the "desired" value depends
on the context.  You could just as well claim that 0/0 should be 1
because then (fabs(a/b - 1.0) < epsilon) will be a good check for
whether two floating numbers are approximately equal.  Sure that's a
nice algorithm, but there are other algorithms in this world.
--

Hallvard




Author: jbuck@forney.berkeley.edu (Joe Buck)
Date: 5 Jul 92 20:27:38 GMT
Raw View
In article <1992Jul5.133125.5575@lth.se> dag@control.lth.se (Dag Bruck) writes:
>A fundamental question is how great is the need for a power operator.
>If this is a common operation in numerical code, I would regard that
>as a strong argument for a power operator.  If it is not, that would
>be strong argument against.

The vast majority of the world's numerical code is still done in Fortran.
The power operator is EXTREMELY common in numerical code.  It is very
common to raise a real value to an integer power, or to a power that is
a constant known at compile-time.  For this reason, every Fortran compiler
since the very first one did strength reduction on the ** operator.

>I ask someone with access to a lot of numerical code (perhaps written
>in FORTRAN) to do us a favour: gather some statistics that can tell us
>how common exponentiation is compared to other operators, such as,
>addition, subtraction, multiplication and division.  Compared to
>square root?

Most Fortran programmers don't write SQRT(X), they write X**0.5.  They
also write X**2 rather than X*X.  I think, though, that these should be
counted as a use of **, because the use of this operator was natural to
the programmer.  But once you grant this, you'll see that ** is going
to be extremely common: how common is the operation of forming the
square in numerical code?

In any case, a reasonable place to start would be the large public-domain
math libraries that are available in many places.



--
Joe Buck jbuck@ohm.berkeley.edu




Author: jbuck@forney.berkeley.edu (Joe Buck)
Date: 5 Jul 92 20:44:55 GMT
Raw View
In article <DOUGM.92Jul4233857@titan.cs.rice.edu> dougm@titan.cs.rice.edu (Doug Moore) writes:
>
>Draft Proposal to add an exponentiation operator to C++
>(a revision of Joe Buck's version 2 proposal of 8 March 1992)
>(that he wouldn't entirely agree with)

That's an understatement!  The exponentiation operator has a very strong
precedent: 30 years of use in Fortran, to the point where scientific
programmers' most bitter complaint about C and C++ has always been the
lack of an exponentiation operator.  You take a proposal that is direct
and to the point and dilute it by adding two new operators that don't
have a precedent in another language; you just made up their semantics.
To be frank, I feel kind of insulted; by associating your proposal with
mine, you drag down my proposal.

>The binary /% operator yields the root, and the binary %% operator
>yields the remainder from the extraction of the integral n-th root of
>the first expression, where n is the value of the second.  If the
>second operand of /% or %% is 0 the result is undefined.

There hasn't been a need for your root operator in Fortran, because
compilers perform strength reduction (which is the whole reason for
wanting an operator: yes, it is *possible* to do strength reduction by
making the pow function magic, but no one currently does, and I doubt that
anyone will, because programmers will write code assuming that this
strength reduction is not present).  X**0.5 is successfully converted by
almost any Fortran compiler to a call to square root; if the operator were
added to C++, the same property would hold.  I don't think there is any
computational advantage to doing this strength reduction to higher roots,
and programmers would have no problem writing X@(1.0/N) (if @ were the
exponentiation operator) for your X /% N.

Your %% operator is just strange.  It is not a commonly used operation.
It does not deserve an operator.
--
Joe Buck jbuck@ohm.berkeley.edu




Author: cflatter@nrao.edu (Chris Flatters)
Date: Sun, 5 Jul 1992 23:19:41 GMT
Raw View
On the frequency of use of ** in FORTRAN.

I just did some checking on the use of the `power' operator in the
Astronomical Image Processing System (AIPS).  This is a pretty large
(> 600k lines) FORTRAN-based package for the reduction of
radioastronomical data.  Numerical operations include Fourier
transforms and fitting components to images.  FFT's probably dominate.

In the whole package, which we can conservatively estimate to comprise
about 500k lines of code excluding comments (comments are a bit thin in
AIPS) the string ' ** ' shows up on 478 lines of code.  SQRT shows up
on 949 lines of code.  A few occurrences of either may occur in
comments and formatted I/O.

Many of the numerically intensive routines in AIPS have been isolated
(a legacy of the days when array processors were important).  In the
numerically intensive code ' ** ' occurs on only 44 lines, SQRT occurs
on 219 lines and ' + ' occurs on 5230 lines.

I suspect that the relative rareness of the power operator in the
numerically intensive code is probably due to programmers hand
optimizing expressions like X**2 to X*X (in my experience, FORTRAN
programmers will usually do this if X is a short expression).

Examination of a sample of the non-numeric code that uses the power
operator shows that much of it is involved in driving image display
and graphics output devices.

I would conclude that exponentiation is a rather rare operation as far
as our numerical application code goes.  Trigonometric operations are
far more common (SIN appears on 518 lines in the numerically intensive
Q routines).  I doubt that the lack of an exponentiation operator
would make much impact on our code (particularly if overloaded versions
of pow() -- float pow(float, float), double pow(double, double), int
pow(int, int), float pow(float, int), double pow(double, int) --
were available).

Mileage will vary for other applications.  An interesting case to look
at would be a fluid dynamics code: many supercomputer applications
deal with simulations of fluid flow.

 Chris Flatters
 cflatter@nrao.edu




Author: dougm@titan.cs.rice.edu (Doug Moore)
Date: Mon, 6 Jul 1992 04:37:29 GMT
Raw View
>>>>> On 5 Jul 92 16:17:00 GMT, hbf@durin.uio.no (Hallvard B Furuseth) said:
 Doug> The exponential operators  *%, /%, and %% group right-to-left.

 Hallvard> AArgh!  Trying to get the committee to insert one operator is hard
 Hallvard> enough, and you want three!

Hey, I can negotiate.  I'll settle for two :-)

Besides, if you read to the end, you'd see that I don't think anyone
is going to get the committee to insert anything that hasn't been
implemented in a C++ compiler somewhere already.

 Hallvard> I don't think integral roots are usual enough to warrant an operator.
 Hallvard> Someplace a line must be drawn where you must use a function instead of
 Hallvard> an operator.
I suspect that square roots get taken a lot.  But otherwise, you are
probably right.

 Doug> (...)      The usual arithmetic
 Doug> conversions (see 4.5) are performed on the operands and determine the
 Doug> type of the result.

 Hallvard> I have seen two discussions of power operators recently, and after all
 Hallvard> these words post participants still do not seem to realize one detail:
 Hallvard> The type of the result should be (more or less) that of the 1st operand.
 Hallvard> (Or like pow(): always floating.  I hate the idea but it is certainly
 Hallvard> consistent.)
The detail that "participants still do not seem to realize" is not a
fact, but an opinion.  I reserve the right to disagree.

 Hallvard> To be exact, I think the type should be given thus:
 Hallvard> * If arg2 is integral: As arg1 after integral promotions.
 Hallvard> * Maybe forbid integral!floating.
 Hallvard> * Otherwise, perform the usual arithmetic conversions.
I respect your opinion.  But you are proposing a behavior very
different from that of any other C++ operator.

 Doug> 8.  You implied above that 0*%0 == 1.  Did you mean to - because this
 Doug> is not right.
 Doug>
 Doug> double pEval(double x, double* coeff, int* pwr, int n)
 Doug> {
 Doug>   double sum = 0.;
 Doug>   for (int i = 0; i < n; ++i)
 Doug>     sum  += coeff[i] * x *% pwr[i];
 Doug>   return sum;
 Doug> }
 Doug>
 Doug> Without defining 0*%0 == 1, this reasonable-seeming function can fail.
 Doug> Is there a similar reasonable-seeming program that fails unless 0*%0
 Doug> takes on some other value?

 Hallvard> Certainly.  Anything which builds on the fact that 0*%a == 0 instead of
 Hallvard> a*%0 == 1 [when the expression is valid].

 Hallvard> double pEvalVec(double* x, double* coeff, int* pwr, int n)
 Hallvard> {
 Hallvard>   double sum = 0.;
 Hallvard>   for (int i = 0; i < n; ++i)
 Hallvard>     sum  += coeff[i] * x[i] *% pwr[i];
 Hallvard>   return sum;
 Hallvard> }

Is this function meant as an example of one that "expects" 0*%0 to be
0?  I fail to see how.  Unless the programmer wrote
 // I expect 0*%0 to be 0
I wouldn't know what she had in mind.

I admit that the example I gave was one constructed specifically to
make my point.  Here is another example that I didn't write, from the
work a student of mine is doing (on lighting in computer graphics).
This is code that I have not modified:

       double inten =
  diffuse * cosine1 +
  specular * ((cosine2 == 0 && specularFactor == 0) ?
       1 : pow(cosine2,specularFactor));

Clearly, he could write simpler code if he could rely on pow(0,0) being 1.

Since I don't consider your "constructed" example legitimate, and you
can consider my "constructed" example illegitimate as well if you
wish, how about finding any *real* code anywhere that would be made
simpler if 0*%0 could be relied upon to have some value other than
one?

 Hallvard> Undefined result (in mathematics) means that the "desired" value depends
 Hallvard> on the context.  You could just as well claim that 0/0 should be 1
 Hallvard> because then (fabs(a/b - 1.0) < epsilon) will be a good check for
 Hallvard> whether two floating numbers are approximately equal.  Sure that's a
 Hallvard> nice algorithm, but there are other algorithms in this world.
 Hallvard> --

Yeah, but I didn't make that particular claim, so I don't see that it
applies to this discussion.

Doug Moore
(dougm@cs.rice.edu)




Author: dougm@titan.cs.rice.edu (Doug Moore)
Date: Mon, 6 Jul 1992 05:04:32 GMT
Raw View
>>>>> On 5 Jul 1992 20:44:55 GMT, jbuck@forney.berkeley.edu (Joe Buck) said:
 Joe> NNTP-Posting-Host: forney.berkeley.edu

 Joe> In article <DOUGM.92Jul4233857@titan.cs.rice.edu> dougm@titan.cs.rice.edu (Doug Moore) writes:
>
>Draft Proposal to add an exponentiation operator to C++
>(a revision of Joe Buck's version 2 proposal of 8 March 1992)
>(that he wouldn't entirely agree with)

 Joe> That's an understatement!  The exponentiation operator has a very strong
 Joe> precedent: 30 years of use in Fortran, to the point where scientific
 Joe> programmers' most bitter complaint about C and C++ has always been the
 Joe> lack of an exponentiation operator.  You take a proposal that is direct
 Joe> and to the point
In your opinion.
 Joe> and dilute it by adding two new operators that don't
 Joe> have a precedent in another language; you just made up their semantics.
Yes, I did.  So?  I can't do that?
 Joe> To be frank, I feel kind of insulted; by associating your proposal with
 Joe> mine, you drag down my proposal.

I apologize for understating.  I certainly did not intend to imply
your approval of anything I wrote.  I merely wished to give some
credit, since I used some of your words.

If you wish to take offense where none was meant, that is your business.
I was not particularly concerned with "dragging down" your proposal; I
had come to believe that you were no longer pursuing it.  Evidently, I
was mistaken.

Why don't you post the latest version of your proposal then?  Add your
ideas to the mix.  I will continue to add mine, without apology.

Doug Moore
(dougm@cs.rice.edu)




Author: dougm@titan.cs.rice.edu (Doug Moore)
Date: 6 Jul 1992 05:27:21 GMT
Raw View
>>>>> On Sun, 5 Jul 1992 11:21:53 GMT, maxtal@extro.ucc.su.OZ.AU (John (MAX) Skaller) said:
 MAX>  1) Already discussed, a *% n should not promote n to float.
 MAX> Possible arguments: checking if a real number is integral is mathematically
 MAX> inconsistent in constructive mathematics. a*%b where a<0 should
 MAX> always be an error. If n>0 it is never an error. The conversion
 MAX> to float followed by a test degrades efficiency and reliability.

Your claim that the conversion to float, followed by a test, degrades
efficiency is probably correct.  How much is unclear.  However, I am
surprised how stoically everyone seems to accept that

b = n, (a op b == a op n)

which has always been true for ints exactly representable as floats,
will become false when op is the power operator as proposed by most.
That there is a difference between (-1.) op (2.) and (-1.) op (2) is
unprecedented.

 MAX>  2) The return type should be determined by the first argument
 MAX> only. The result of the power function is the original number raised
 MAX> to some power: the type of the power is not relevant in determining
 MAX> the type of the result.

So what is (2) op (2.5)?  5?

Doug Moore
(dougm@cs.rice.edu)




Author: daveg@synaptics.com (Dave Gillespie)
Date: 6 Jul 92 04:55:30 GMT
Raw View
In article <1992Jul5.112153.9009@ucc.su.OZ.AU> maxtal@extro.ucc.su.OZ.AU (John (MAX) Skaller) writes:
>    Two alternatives. Hypothetical arguments only ...
>
>    1) Already discussed, a *% n should not promote n to float.
>   Possible arguments: checking if a real number is integral is mathematically
>   inconsistent in constructive mathematics. a*%b where a<0 should
>   always be an error. If n>0 it is never an error. The conversion
>   to float followed by a test degrades efficiency and reliability.
>
>    2) The return type should be determined by the first argument
>   only. The result of the power function is the original number raised
>   to some power: the type of the power is not relevant in determining
>   the type of the result.
>
>   > [Note] To preserve the identity (floatval op intval ==
>   > floatval op float(intval)), it is necessary to have the
>   > exponentiation operator for floats check whether its second
>   > argument is integer valued.  Relative to the cost of
>   > exponentiation, the cost of the test float(int(b))==b seems a
>   > small one.
>   >
>
>    The cost is greater for a *% 2.

I think the more serious problem would be if the standard allowed
systems where a "long" had more significant bits than a "double".
(I don't know offhand if it does.)  If a "long" can always be cast
to a "double" without losing significance, it wouldn't really hurt
to coerce "n" to double.

Assuming it is safe to do the coercion, I don't think it needs to
degrade efficiency.  A compiler is perfectly able to compile a*%n
as a call to a pow(double,int) routine, provided that this routine
returns the same answer that pow(double,double) would return on
a*%double(n).

In particular, there would be no extra cost for a *% 2, since any
decent compiler would inline this as a special case.  Having the
standard state that the compiler is supposed to imagine itself
first converting 2 to a float doesn't change this fact.

By the way, I'd vote for either ~ or ^^ over *%, just because
the former are more pleasing to the eye and more likely to be
understandable to the new reader.  The argument that ~ is unavailable
on some keyboards loses some weight due to the fact that ~ is already
used two different ways in the language.  And the argument against
^^, while it does have some merit, might be outweighed by the
benefits to be had from a clear, pretty operator.

(I do like the *%, /%, %% symmetry, though!)

        -- Dave
--
Dave Gillespie
  daveg@synaptics.com, uunet!synaptx!daveg
  or: daveg@csvax.cs.caltech.edu




Author: maxtal@extro.ucc.su.OZ.AU (John (MAX) Skaller)
Date: Mon, 6 Jul 1992 06:22:57 GMT
Raw View
In article <1992Jul5.133125.5575@lth.se> dag@control.lth.se (Dag Bruck) writes:
>
>A fundamental question is how great is the need for a power operator.
>If this is a common operation in numerical code, I would regard that
>as a strong argument for a power operator.  If it is not, that would
>be strong argument against.
>
>I ask someone with access to a lot of numerical code (perhaps written
>in FORTRAN) to do us a favour: gather some statistics that can tell us
>how common exponentiation is compared to other operators, such as,
>addition, subtraction, multiplication and division.  Compared to
>square root?
>
>It would also be helpful if you could describe your application a
>little, and also your view if the code you have evaluated is typical
>for certain applications.
>
>
>    -- Dag

 I have a program which calculates gaussian dispersion of
toxic materials in air.  In the bits that do the calculation work,
here are the scores in terms of line counts
[I did grep token file | wc]:

Lines total: 172
+  27
-  24
*  25
/  32
%  2
=,==,etc 79
pow  13
fsqr  12 // inline to square number
sqrt  3

IMHO if you cheat a bit and add the last three you get 28, second
only to use of '/'.  I'd be happy to post the code if that was
desired [minus flames on style :-)]





--
;----------------------------------------------------------------------
        JOHN (MAX) SKALLER,         maxtal@extro.ucc.su.oz.au
 Maxtal Pty Ltd, 6 MacKay St ASHFIELD, NSW 2131, AUSTRALIA
;--------------- SCIENTIFIC AND ENGINEERING SOFTWARE ------------------




Author: maxtal@extro.ucc.su.OZ.AU (John (MAX) Skaller)
Date: Mon, 6 Jul 1992 06:33:34 GMT
Raw View
In article <HBF.92Jul5171700@durin.uio.no> hbf@durin.uio.no (Hallvard B Furuseth) writes:
>In article <DOUGM.92Jul4233857@titan.cs.rice.edu> dougm@titan.cs.rice.edu (Doug Moore) writes:
>
>> The exponential operators  *%, /%, and %% group right-to-left.
>
>AArgh!  Trying to get the committee to insert one operator is hard
>enough, and you want three!

 ANYONE ON THE COMMITTEE---is this true :-)

>I have seen two discussions of power operators recently, and after all
>these words post participants still do not seem to realize one detail:
>The type of the result should be (more or less) that of the 1st operand.
>(Or like pow(): always floating.  I hate the idea but it is certainly
>consistent.)

 Yes.

 It is not the same as C.  BUT: C is wrong anyhow.
Everyone knows that the result of int*int is long!

 So float ~ double should be float, not double.

--
;----------------------------------------------------------------------
        JOHN (MAX) SKALLER,         maxtal@extro.ucc.su.oz.au
 Maxtal Pty Ltd, 6 MacKay St ASHFIELD, NSW 2131, AUSTRALIA
;--------------- SCIENTIFIC AND ENGINEERING SOFTWARE ------------------




Author: eric@tfs.com (Eric Smith)
Date: 6 Jul 92 07:26:03 GMT
Raw View
In article <137n07INNoev@agate.berkeley.edu> jbuck@forney.berkeley.edu (Joe Buck) writes:
>and programmers would have no problem writing X@(1.0/N) (if @ were the

Using operator@ adds a character to the language.  Is that one of the
objections to it?  I haven't been following the thread, but it seems to
me that operator^^ might be more acceptable to more people, partly
because it is slightly suggestive of the function, and partly because
it doesn't add a character to the language.  Also it fits in nicely
with the C scheme of doubling operator symbols to give them different
meanings.  For example compare operator< with operator<< and notice
that their meanings are entirely unrelated to each other.




Author: bs@alice.att.com (Bjarne Stroustrup)
Date: 6 Jul 92 18:13:02 GMT
Raw View

There have been a lot of messages about language extensions,
the ANSI/ISO process, etc., posted lately including some guesses
about my opinions on such topics. Let me try to clarify one or
two things at the risk of adding gasoline to the fire.

I consider the ANSI/ISO standardization process slow, infuriating,
and necessary. We need a C++ standard and I see no better way of
getting one. We must work within some formal framework exactly to
avoid the fears of arbitrariness that have been voiced - and (a
point heavily emphasized by the ANSI officials) to avoid legal
problems.

I note that no form of decision making is perfect but that the
relatively democratic process specified by ANSI and ratified by
the somewhat more formal ISO voting by national representatives
is similar to that devised in other contexts to ensure a balance
between openness and stability. Anyway, please note that the ANSI
process is not controlled by the members of the ANSI C++ committee.
If you don't like them you'll have to lobby the US congress, not
the C++ committee. I don't really know who to lobby if you don't
like the ISO rules.

The ANSI process gives votes to anyone turning up without any questions
asked about interests, abilities, or nationality. It gives that vote on
the second and subsequent meeting a member attends (I believe that the
one meeting delay is there to ensure a minimal familiarity with the rules
and to avoid major fluctuations in committee composition and behavior -
that is only conjecture though) and that only one member from a company
can vote (clearly a rule to ensure that large companies, say AT&T or IBM,
cannot stack the committee).
I accept these arguments only partially, but see them as reasons for
approaching extensions with caution. I think that C++ can be improved through
judicious extension and seriously damaged through overenthusiastic acceptance
of a large number of extensions. My opinion is probably best summarized by
"If we accept all the GOOD proposals, C++ will fail under the added weight
and complexity." Note that this is a very uncomfortable and difficult position
because everyone seems to have extensions that they would like to have in the
language, and many can't understand why their particular proposal isn't given
priority.

Here is an (incomplete) list of proposals for which the committee has received
"official" written proposals:

 templates (accepted)
 exception handling (accepted)
 relaxation of virtual function return rules (accepted)
 overloading of dot
 renaming/generalized overriding (withdrawn/rejected)
 run-time type identification
 name space control (modules)
 etc. concurrency support extensions
 named arguments ala Ada (withdrawn/rejected)
 member initializers within class declarations
 etc. template extensions (constraints, overloading, etc.)
 ~const (explicit separation of the notions of logical and bitwise constness)
 separate overloading based on rvalue/lvalue distinction
 operator new/delete for arrays
 more general default arguments
 restricted pointers (a modern version of noalias)
 assertions
 representation using restricted European character set (accepted)
 extended character sets (unicode, etc.)
 added facilities for hiding private data (rejected)
 etc. proposals for relaxing, strengthening, and changing the access protection rules
 overloading based on enumerations

For several of the issues the committee has received more than one - usually
mutually exclusive - proposal.

If you want details you can either become an observer and get the papers or look
at the series of reports from the ANSI meetings that appear regularly in C++
publications.  Each proposal involves several pages and the major ones many long
papers, so I can't easily present them here.

In addition, the net is humming with proposals for an exponentiation operator,
user-defined operators, etc., many of which I suspect will eventually emerge
as formal proposals.

In my opinion, accepting any large subset of these proposed features would lead
to chaos.  Yet the simple facts that it is easier to say 'yes' than 'no,' that
proponents will invariably devote more time and effort to a proposal than
opponents, that a feature once voted in will never be voted out again,
and that the acceptance of one proposal is invariably used as an argument
for accepting others ("if THAT proposal was good enough then MINE is good
enough and it is UNFAIR to reject it") seem to make it quite difficult to
avoid accepting most of them.

The committee is struggling to develop a procedure that ensures a reasonable,
fair, and timely evaluation of individual proposals while also keeping the
language as a whole in mind.  This is not easy.  Two techniques have been developed:

 Evaluation based on the criteria published in the "How to Write a C++
 Language Extension Proposal for ANSI-X3J16/ISO-WG21" letter from the
 extensions working group, and

 secret ballot on the order of consideration of proposals in the
 extensions working group.

We also try to produce written analyses of proposals to help evaluation, but
to date we have not managed to evolve a completely satisfactory method for
producing such evaluations, or a satisfactory format for them
(for example, there is a strong tendency for the proponents of a proposal to
be the only ones who volunteer to do evaluations).  In the absence of
improved evaluation procedures I suspect the committee will accept at least
one proposal per meeting (three a year) for some years.

To balance the enthusiasm for extensions I note that every proposed extension
has been strongly opposed on technical grounds by someone both within the
ANSI/ISO committee and without. People tend to hold strong opinions on language
issues. Rejecting a proposal usually annoys a group of people - so does accepting
one.

As on the net, attitudes to extensions on the ANSI/ISO committee vary
greatly: some are for "small" extensions because they are relatively easy and
manageable; others are against "small" extensions because they tend to have only
local impact and don't affect programming style in general.  The complementary
attitudes to "large" extensions are also found.  A few are against all extensions;
a few seem to like just about every extension.  One attitude that I am glad I
haven't seen there is support for one's own extensions and opposition to all others.

I think that every extension evaluation comes down to a judgement call:
if programs of a certain kind are important and a certain style of programming
leads to better programs, then an extension supporting that style would be of
real help to some part of the C++ community.  If not, then the extension would
be spurious featurism.  And if so, how many such styles can C++ support directly?

Maybe accepting many new features is good?  Most people seem to agree that
some features should be added.  Yet most people also pay at least lip service
to minimalism and make rude jokes about creeping featurism.  What do people
really think?  Is the easiest path - simply accepting as many reasonable proposals
as possible after a minimal cleanup - also the best?  Alternatively, should
all extensions be avoided and all our energy channeled into "standardizing
current practice"?  (Can that be done?)  Can anyone think of criteria to apply
in between these extremes?  That is, what makes one proposal acceptable and
another unacceptable?



PS. While writing this I saw Mike Miller's excellent postings on the issues
surrounding standardization and extensions. I hope that helps assure people
that the sentiments and reasoning you hear from committee members on the net
and in various publications are the same as you would have heard had you been a
fly on the wall at the extensions discussions at ANSI/ISO meetings.