Topic: Is this really unspecified behavior?


Author: "Momchil Velikov" <momchil.velikov@gmail.com>
Date: Mon, 12 Dec 2005 10:29:03 CST
Andrei Alexandrescu (See Website For Email) wrote:
> There remain the "not-so-obvious" opportunities for optimization, such
> as those that reuse registers etc. By my assertion number (2)
> ("optimizing compiler") I am clarifying that an optimizing compiler can
> still evaluate things in the order they please as long as the
> left-to-right semantics are unaffected.

The question is whether the compiler is able to deduce the
existence of evaluation orders different from the strict left-to-right
order that preserve its semantics. I'd expect conservative
assumptions, based on incomplete knowledge of the program,
to cause the compiler to miss alternatives.

On the other hand, not imposing evaluation orders conveys
important information to the compiler - that the expression
does not depend on the evaluation order, even if this is not
evident to the compiler by other means.

~velco

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: hyrosen@mail.com (Hyman Rosen)
Date: Mon, 12 Dec 2005 16:44:38 GMT
Andrei Alexandrescu (See Website For Email) wrote:
> 1. Evaluate expr0 resulting in a function f
> 2. For each i in 1..n in this order, evaluate argi resulting in a value vi
> 3. Invoke f(v1, v2, ..., vn)

Step 2 is wrong. What we actually want there is
   2. For each i in 1..n in this order, construct parameter i
      of the function using argi. If an exception is thrown
      during the construction of any parameter, the previous
      parameters are destructed in reverse order of construction.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: "Momchil Velikov" <momchil.velikov@gmail.com>
Date: Mon, 12 Dec 2005 10:44:56 CST
"Andrei Alexandrescu (See Website For Email)" wrote:
> Momchil Velikov wrote:
> > "Andrei Alexandrescu (See Website For Email)" wrote:
> >
> >>So the jury is still out on finding cases (that are not source-level
> >>optimizable in an obvious way) in which a specified order of argument
> >>evaluation forces the compiler to generate pessimized code.
> >
> >   How about finding cases in which the order of evaluation is not
> > enforceable at the source level in an obvious way ?
>
> Not sure I understand.

Just presenting the opposite view. I'm not sure proponents of the
unspecified evaluation order should be put in a defensive position,
as your posting suggests.

> For the call (expr0)(arg1, arg2, ..., argn) the
> evaluation algorithm should be as if the following happens:
>
> 1. Evaluate expr0 resulting in a function f
> 2. For each i in 1..n in this order, evaluate argi resulting in a value vi
> 3. Invoke f(v1, v2, ..., vn)
>
> It's a pity that the intended semantics can't be easily expressed as a
> source-to-source transformation. (The problem is that rvalue and lvalue
> expressions would lead to different types of temporaries.)

Then maybe *this* is the problem to solve. Is it related to the
"forwarding problem"
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2002/n1385.htm ?

~velco

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: SeeWebsiteForEmail@moderncppdesign.com ("Andrei Alexandrescu (See Website For Email)")
Date: Tue, 13 Dec 2005 05:16:25 GMT
Hyman Rosen wrote:
> Andrei Alexandrescu (See Website For Email) wrote:
>
>> 1. Evaluate expr0 resulting in a function f
>> 2. For each i in 1..n in this order, evaluate argi resulting in a
>> value vi
>> 3. Invoke f(v1, v2, ..., vn)
>
>
> Step 2 is wrong. What we actually want there is
>   2. For each i in 1..n in this order, construct parameter i
>      of the function using argi. If an exception is thrown
>      during the construction of any parameter, the previous
>      parameters are destructed in reverse order of construction.

I assumed destruction as being a normal outcome of value creation.
That's a language invariant.

Andrei

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: SeeWebsiteForEmail@moderncppdesign.com ("Andrei Alexandrescu (See Website For Email)")
Date: Tue, 13 Dec 2005 05:16:36 GMT
Momchil Velikov wrote:
> Andrei Alexandrescu (See Website For Email) wrote:
>
>>There remain the "not-so-obvious" opportunities for optimization, such
>>as those that reuse registers etc. By my assertion number (2)
>>("optimizing compiler") I am clarifying that an optimizing compiler can
>>still evaluate things in the order they please as long as the
>>left-to-right semantics are unaffected.
>
>
> The question is whether the compiler is able to deduce the
> existence of evaluation orders different from the strict left-to-right
> order that preserve its semantics. I'd expect conservative
> assumptions, based on incomplete knowledge of the program,
> to cause the compiler to miss alternatives.

Compilers do that extensively already. The past ten years have seen more
and more aggressive reordering by compilers, and there's no sign of it
slowing down.

> On the other hand, not imposing evaluation orders conveys
> important information to the compiler - that the expression
> does not depend on the evaluation order, even if this is not
> evident to the compiler by other means.

I think in the wake of current developments, the assertion above has
turned from a certainty into an anachronistic speculation that needs
to be revisited.


Andrei

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: SeeWebsiteForEmail@moderncppdesign.com ("Andrei Alexandrescu (See Website For Email)")
Date: Tue, 13 Dec 2005 05:16:41 GMT
Momchil Velikov wrote:
>>It's a pity that the intended semantics can't be easily expressed as a
>>source-to-source transformation. (The problem is that rvalue and lvalue
>>expressions would lead to different types of temporaries.)
>
>
> Then maybe *this* is the problem to solve. Is it related to the
> "forwarding problem"
> http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2002/n1385.htm ?

That's being addressed by the rvalue proposal. Solving it won't take
care of defining order of evaluation of function arguments.


Andrei

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: "Momchil Velikov" <momchil.velikov@gmail.com>
Date: Tue, 13 Dec 2005 09:37:06 CST
"Andrei Alexandrescu (See Website For Email)" wrote:
> Momchil Velikov wrote:
> >>It's a pity that the intended semantics can't be easily expressed as a
> >>source-to-source transformation. (The problem is that rvalue and lvalue
> >>expressions would lead to different types of temporaries.)
> >
> > Then maybe *this* is the problem to solve. Is it related to the
> > "forwarding problem"
> > http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2002/n1385.htm ?
>
> That's being addressed by the rvalue proposal. Solving it won't take
> care of defining order of evaluation of function arguments.

Sorry, I didn't understand. Do you mean that solving this
issue won't allow an evaluation-order-defining source-to-source
transformation?

~velco

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: hyrosen@mail.com (Hyman Rosen)
Date: Tue, 13 Dec 2005 15:38:02 GMT
Andrei Alexandrescu (See Website For Email) wrote:
> I assumed destruction as being a normal outcome of value creation.
> That's a language invariant.

But it should still be made clear that step two involves
binding the function parameters in order, not just
accumulating a set of values (and references) to be passed
to the function. Evaluating an expression used as an
argument and constructing a function parameter aren't so
obviously the same that it can go without saying. Remember,
one thing that this process is meant to fix is the
f(auto_ptr, auto_ptr) problem.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: dave@boost-consulting.com (David Abrahams)
Date: Wed, 14 Dec 2005 05:55:35 GMT
Hyman Rosen <hyrosen@mail.com> writes:

> Andrei Alexandrescu (See Website For Email) wrote:
>> I assumed destruction as being a normal outcome of value
>> creation. That's a language invariant.
>
> But it should still be made clear that step two involves
> binding the function parameters in order, not just
> accumulating a set of values (and references) to be passed
> to the function. Evaluating an expression used as an
> argument and constructing a function parameter aren't so
> obviously the same that it can go without saying. Remember,
> one thing that this process is meant to fix is the
> f(auto_ptr, auto_ptr) problem.

It's interesting that this same discussion has been going on
simultaneously on one of the committee mailing lists.  Let me just
point out that the general form of that problem is insoluble:

  // safe under left-to-right ordering?
  f(g(), new T);

As a matter of fact it isn't safe, if f has default arguments.
Leaving that aside, will users be reluctant to make this transformation:

  f(new T, g())

Does one of those look safer to you?

IMO the right solution for the f(auto_ptr, auto_ptr) problem is to add
a library function

  auto_ptr_new<T>(arg1, ... argN)

or, in an MPL-enabled world,

  new_<auto_ptr<_> >(arg1, ... argN)
  new_<shared_ptr<_> >(arg1, ... argN)
  new_<unique_ptr<_> >(arg1, ... argN)

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: SeeWebsiteForEmail@moderncppdesign.com ("Andrei Alexandrescu (See Website For Email)")
Date: Wed, 14 Dec 2005 05:55:38 GMT
Momchil Velikov wrote:
> "Andrei Alexandrescu (See Website For Email)" wrote:
>
>>Momchil Velikov wrote:
>>
>>>>It's a pity that the intended semantics can't be easily expressed as a
>>>>source-to-source transformation. (The problem is that rvalue and lvalue
>>>>expressions would lead to different types of temporaries.)
>>>
>>>Then maybe *this* is the problem to solve. Is it related to the
>>>"forwarding problem"
>>>http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2002/n1385.htm ?
>>
>>That's being addressed by the rvalue proposal. Solving it won't take
>>care of defining order of evaluation of function arguments.
>
>
> Sorry, I didn't understand. Do you mean that solving this
> issue won't allow an evaluation-order-defining source-to-source
> transformation?

No.

A source-to-source transformation would have made it easier for me to
define order of evaluation by translating C++ to equivalent C++. That
would have made writing my post easier, but it's not essential for
making steps towards defining the order of evaluation.


Andrei

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: hyrosen@mail.com (Hyman Rosen)
Date: Wed, 14 Dec 2005 15:32:35 GMT
David Abrahams wrote:
> Let me just point out that the general form
> of that problem is insoluble:
>   // safe under left-to-right ordering?
>   f(g(), new T);
> As a matter of fact it isn't safe, if f has default arguments.
> Leaving that aside, will users be reluctant to make this transformation:
>   f(new T, g())
> Does one of those look safer to you?

I'm sorry, but I don't understand what you mean.

Under my proposed new regime, function parameters
will be constructed from arguments in the call in
strict left-to-right order, and the arguments will
be evaluated in strict left-to-right order. If
constructing a parameter throws an exception,
previously constructed parameters are destructed
in reverse order. So in the examples, if the new T
argument is for an auto_ptr<T> parameter, then both
versions are equally safe. In the first case, if
g() throws then new T will never be called, and in
the second case if g() throws then the auto_ptr<T>
parameter of f will be destructed and thus the
new T pointer will be freed.

So please explain why you think that the general
case is unsafe, or why people will have to change
parameter order.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: "Momchil Velikov" <momchil.velikov@gmail.com>
Date: Wed, 14 Dec 2005 09:31:40 CST
"Andrei Alexandrescu (See Website For Email)" wrote:
> Momchil Velikov wrote:
> > "Andrei Alexandrescu (See Website For Email)" wrote:
> >>Momchil Velikov wrote:
> >>>>It's a pity that the intended semantics can't be easily expressed as a
> >>>>source-to-source transformation. (The problem is that rvalue and lvalue
> >>>>expressions would lead to different types of temporaries.)
> >>>
> >>>Then maybe *this* is the problem to solve. Is it related to the
> >>>"forwarding problem"
> >>>http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2002/n1385.htm ?
> >>
> >>That's being addressed by the rvalue proposal. Solving it won't take
> >>care of defining order of evaluation of function arguments.
> >
> > Sorry, I didn't understand. Do you mean that solving this
> > issue won't allow an evaluation-order-defining source-to-source
> > transformation?
>
> No.
>
> A source-to-source transformation would have made it easier for me to
> define order of evaluation by translating C++ to equivalent C++. That
> would have made writing my post easier, but it's not essential for
> making steps towards defining the order of evaluation.

I'm under the impression we attach different meanings to "defining the
order of evaluation". While I mean a source-to-source transformation,
which a programmer employs whenever (s)he wants to impose a concrete
order of evaluation, you seem to use it to refer to a change in the
C++ language specification.

To restate the question, will the rvalue proposal [1] enable a
programmer to perform source-to-source transformations
whenever (s)he wants to specify a concrete evaluation order
of subexpressions and function arguments?

~velco

[1] I guess by "rvalue proposal" you mean this
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2004/n1690.html

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: SeeWebsiteForEmail@moderncppdesign.com ("Andrei Alexandrescu (See Website For Email)")
Date: Wed, 14 Dec 2005 22:10:48 GMT
David Abrahams wrote:
> It's interesting that this same discussion has been going on
> simultaneously on one of the committee mailing lists.  Let me just
> point out that the general form of that problem is insoluble:
>
>   // safe under left-to-right ordering?
>   f(g(), new T);
>
> As a matter of fact it isn't safe, if f has default arguments.
> Leaving that aside, will users be reluctant to make this transformation:
>
>   f(new T, g())
>
> Does one of those look safer to you?
>
> IMO the right solution for the f(auto_ptr, auto_ptr) problem is to add
> a library function
>
>   auto_ptr_new<T>(arg1, ... argN)

I'd be interested in understanding the solution in principle, even if it
would be too late to push it for standardization.

The code f(new T, g()) can leak because the result of new is bound to a
temporary that never reaches f. I believe that issue is not deeply
linked to the order of evaluation. Defining the order of evaluation
would leave that case alone.

However, I believe defining the order of evaluation would solve
f(auto_ptr<T>, auto_ptr<T>). This is because the language preserves the
invariant that any value that was created will be destroyed.

Too much effort for the compiler? It's not, really, and looking at the
rules for constructing arrays gives good insights. So, right now, when
you write:

T * p = new T[n];

there's a lot going on. The semantics of the code is really:

T * p;
{
   size_t __i = 0;
   on_scope_failure {
     while (__i > 0) {
       p[--__i].T::~T();
     }
   }
   for (; __i != n; ++__i) {
     new(p + __i) T();
   }
}

(On this occasion I've also shown how nice coding with on_scope_xxx is
:o). It's not much different from try/catch (...) in this case,
though.)

We can generalize this idea to the creation of function parameter lists.


Andrei

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: SeeWebsiteForEmail@moderncppdesign.com ("Andrei Alexandrescu (See Website For Email)")
Date: Wed, 14 Dec 2005 22:10:53 GMT
Andrei Alexandrescu (See Website For Email) wrote:
 > Too much effort for the compiler? It's not, really, and looking at the
 > rules for constructing arrays gives good insights. So, right now, when
 > you write:
 >
 > T * p = new T[n];
 >
 > there's a lot going on. The semantics of the code is really:
 >
 > T * p;
 > {
 >   size_t __i = 0;
 >   on_scope_failure {
 >     while (__i > 0) {
 >       p[--__i].T::~T();
 >     }
 >   }
 >   for (; __i != n; ++__i) {
 >     new(p + __i) T();
 >   }
 > }

Oops, I meant:

T * p = static_cast<T *>(operator new[](n * sizeof(T)));
{
   size_t __i = 0;
   on_scope_failure {
     while (__i > 0) {
       p[--__i].T::~T();
     }
     operator delete[](p);
   }
   for (; __i != n; ++__i) {
     new(p + __i) T();
   }
}

...with the appropriate magic of calling T::operator new and T::operator
delete if T defines those. By the way, is there a portable way of
writing that in source code? That is, call T::operator new if it's
defined, otherwise call ::operator new.

One more correction to the same post. When I said:

"The code f(new T, g()) can leak because the result of new is bound to a
temporary that never reaches f. I believe that issue is not deeply
linked to the order of evaluation. Defining the order of evaluation
would leave that case alone."

... I referred to the case when f takes a raw pointer to T as its first
argument.


Andrei

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: dave@boost-consulting.com (David Abrahams)
Date: Thu, 15 Dec 2005 00:42:32 GMT
hyrosen@mail.com (Hyman Rosen) writes:

> David Abrahams wrote:
>> Let me just point out that the general form
>> of that problem is insoluble:
>>   // safe under left-to-right ordering?
>>   f(g(), new T);
>> As a matter of fact it isn't safe, if f has default arguments.
>> Leaving that aside, will users be reluctant to make this transformation:
>>   f(new T, g())
>> Does one of those look safer to you?
>
> I'm sorry, but I don't understand what you mean.
>
> Under my proposed new regime, function parameters
> will be constructed from arguments in the call in
> strict left-to-right order, and the arguments will
> be evaluated in strict left-to-right order. If
> constructing a parameter throws an exception,
> previously constructed parameters are destructed
> in reverse order. So in the examples, if the new T
> argument is for an auto_ptr<T> parameter, then both
> versions are equally safe. In the first case, if
> g() throws then new T will never be called, and in
> the second case if g() throws then the auto_ptr<T>
> parameter of f will be destructed and thus the
> new T pointer will be freed.

f doesn't have an auto_ptr<T> parameter.  It takes two pointers.

> So please explain why you think that the general
> case is unsafe, or why people will have to change
> parameter order.

It's not about "having to" change.  It's about whether you'll notice
that the change affects safety.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: hyrosen@mail.com (Hyman Rosen)
Date: Thu, 15 Dec 2005 02:19:34 GMT
David Abrahams wrote:
> f doesn't have an auto_ptr<T> parameter.  It takes two pointers.
>
> It's not about "having to" change.  It's about whether you'll notice
> that the change affects safety.

I'm still confused. If I write
     T *p = new T; g();
or
     g(); T *p = new T;
then in the first case I'll leak memory if g() throws,
and in the second case I won't. But what does this have
to do with safety? Obviously, if I use raw pointers then
my code is subject to resource leaks from exceptions that
happen afterwards.

The point is that with a properly specified order of
evaluation, code which tries to be safe really can be
safe, as opposed to the f(auto_ptr, auto_ptr) mess that
we have now. Code that doesn't try to be safe is going
to get only slightly safer, in that it won't be subject
to the compiler's arbitrary reorderings, but latent
resource leaks aren't going to go away by themselves.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: "Bob Bell" <belvis@pacbell.net>
Date: Wed, 14 Dec 2005 22:10:27 CST
Hyman Rosen wrote:
> David Abrahams wrote:
> > f doesn't have an auto_ptr<T> parameter.  It takes two pointers.
> >
> > It's not about "having to" change.  It's about whether you'll notice
> > that the change affects safety.
>
> I'm still confused. If I write
>      T *p = new T; g();
> or
>      g(); T *p = new T;
> then in the first case I'll leak memory if g() throws,
> and in the second case I won't. But what does this have
> to do with safety?

It's not just a memory leak; it's an object that isn't destroyed. If
the destructor has important side-effects, then it could be a big
problem.

> Obviously, if I use raw pointers then
> my code is subject to resource leaks from exceptions that
> happen afterwards.
>
> The point is that with a properly specified order of
> evaluation, code which tries to be safe really can be
> safe, as opposed to the f(auto_ptr, auto_ptr) mess that
> we have now.

It strikes me that the phrase "code which tries to be safe really can
be safe" applies to the situation today as well. Here's some code that
"tries to be safe" today:

   std::auto_ptr<int> p1(new int(0));
   std::auto_ptr<int> p2(new int(0));

   function_taking_two_auto_ptrs(p1, p2);

> Code that doesn't try to be safe is going
> to get only slightly safer, in that it won't be subject
> to the compiler's arbitrary reorderings, but latent
> resource leaks aren't going to go away by themselves.

To me, this sounds like "code that doesn't try to be safe won't be
safe", which again sounds like the situation we have today. The
advantage of well-defined order of evaluation is that it may make it
simpler to reason about what happens when, but Dave's example shows
that it won't eliminate safety gotchas from expression evaluation.

For the record, I'm not taking a position one way or the other on
whether order of evaluation should be nailed down. I'm comfortable with
things the way they are, but I think this has been an interesting
discussion.

Bob

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail@moderncppdesign.com>
Date: Thu, 15 Dec 2005 01:37:16 CST
Bob Bell wrote:
> It strikes me that the phrase "code which tries to be safe really can
> be safe" applies to the situation today as well. Here's some code that
> "tries to be safe" today:
>
>    std::auto_ptr<int> p1(new int(0));
>    std::auto_ptr<int> p2(new int(0));
>
>    function_taking_two_auto_ptrs(p1, p2);

Hmmm... I think Hyman had a different definition of "code that tries to
be safe". The meaning would be "code that doesn't manipulate at any
moment bald pointers". You see, the thing is that the code

function_taking_two_auto_ptrs(auto_ptr<int>(new int),
                              auto_ptr<int>(new int))

does not expose at any moment any bald pointer, yet it can fail for very
ocult reasons. Of course if we extend the definition of "code that tries
to be safe" appropriately, we can do a lot of things safely :o). It will
just take a lot of effort and rules to memorize.

So the bottom line is that the case of functions taking several auto_ptrs is
peculiar -- it's unsafe in spite of the user not doing anything unsafe
at any moment.

>>Code that doesn't try to be safe is going
>>to get only slightly safer, in that it won't be subject
>>to the compiler's arbitrary reorderings, but latent
>>resource leaks aren't going to go away by themselves.
>
>
> To me, this sounds like "code that doesn't try to be safe won't be
> safe", which again sounds like the situation we have today. The
> advantage of well-defined order of evaluation is that it may make it
> simpler to reason about what happens when, but Dave's example shows
> that it won't eliminate safety gotchas from expression evaluation.

Maybe there could be another case in the language in which there is a
leak in spite of the user always playing it safe. I can't think of any
other than the one above. Maybe there are a few more; can anyone think
of any? In any case, eliminating this one annoying quirk would be great.


Andrei

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: David Abrahams <dave@boost-consulting.com>
Date: Thu, 15 Dec 2005 23:58:24 CST
hyrosen@mail.com (Hyman Rosen) writes:

> David Abrahams wrote:
>> f doesn't have an auto_ptr<T> parameter.  It takes two pointers.
>> It's not about "having to" change.  It's about whether you'll notice
>> that the change affects safety.
>
> I'm still confused. If I write
>     T *p = new T; g();
> or
>     g(); T *p = new T;
> then in the first case I'll leak memory if g() throws,
> and in the second case I won't. But what does this have
> to do with safety? Obviously, if I use raw pointers then
> my code is subject to resource leaks from exceptions that
> happen afterwards.

Right.

> The point is that with a properly specified order of
> evaluation, code which tries to be safe really can be
> safe, as opposed to the f(auto_ptr, auto_ptr) mess that
> we have now.

It's like Bob says.  Code that tries to be safe can be safe today.  It
just has to try the right way.  And that will be true tomorrow, even
if we specify order of evaluation.

The _fundamental_ problem with the auto_ptr case above isn't an
unspecified order of evaluation.  It's allowing/encouraging the casual
user to expose unmanaged resources in the first place.

> Code that doesn't try to be safe is going
> to get only slightly safer, in that it won't be subject
> to the compiler's arbitrary reorderings, but latent
> resource leaks aren't going to go away by themselves.

Exactly my point.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: dave@boost-consulting.com (David Abrahams)
Date: Fri, 16 Dec 2005 05:59:13 GMT
"Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail@moderncppdesign.com> writes:

> Bob Bell wrote:
>> It strikes me that the phrase "code which tries to be safe really can
>> be safe" applies to the situation today as well. Here's some code that
>> "tries to be safe" today:
>>    std::auto_ptr<int> p1(new int(0));
>>    std::auto_ptr<int> p2(new int(0));
>>    function_taking_two_auto_ptrs(p1, p2);
>
> Hmmm... I think Hyman had a different definition of "code that tries to
> be safe". The meaning would be "code that doesn't manipulate at any
> moment bald pointers". You see, the thing is that the code
>
> function_taking_two_auto_ptrs(auto_ptr<int>(new int),
>                               auto_ptr<int>(new int))
>
> does not expose at any moment any bald pointer,

Of course

     new int

exposes a bald pointer.  That's *precisely* the root cause of the
problem.  That should be encapsulated in

     new_<auto_ptr<int> >()

It's even less typing.

> yet it can fail for very ocult reasons.
                           ^^^^^
cute.

> Of course if we extend the definition of "code that tries to be
> safe" appropriately, we can do a lot of things safely :o). It will
> just take a lot of effort and rules to memorize.

You can't extend the definition before defining it :)

> So bottom line is that the case of functions taking several
> auto_ptrs is peculiar -- it's unsafe in spite of the user not doing
> anything unsafe at any moment.

So you seem to be starting with the axiom that

   foo( auto_ptr<T>( new T ), auto_ptr<T>( new T ) )

isn't "unsafe at any moment."  But of course it is completely unsafe
or we wouldn't be having this discussion.  And you find that to
conflict with your axiom.  So that line of reasoning seems circular to
me.

All that said, I can see an argument for your position.  I think you'd
like exception-safety to be "context free," so that if expression1 and
expression2 are each exception-safe, then some expression3 composed of
expression1 and expression2 is also exception-safe.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: SeeWebsiteForEmail@moderncppdesign.com ("Andrei Alexandrescu (See Website For Email)")
Date: Fri, 16 Dec 2005 12:33:20 GMT
Raw View
David Abrahams wrote:
> "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail@moderncppdesign.com> writes:
>>Hmmm... I think Hyman had a different definition of "code that tries to
>>be safe". The meaning would be "code that doesn't manipulate at any
>>moment bald pointers". You see, the thing is that the code
>>
>>function_taking_two_auto_ptrs(auto_ptr<int>(new int), auto_ptr<int>(new
>>int))
>>
>>does not expose at any moment any bald pointer,
>
>
> Of course
>
>      new int
>
> exposes a bald pointer.  That's *precisely* the root cause of the
> problem.  That should be encapsulated in
>
>      new_<auto_ptr<int> >()
>
> It's even less typing.

Well I guess it depends on how we define "expose". For example, I
believe the code below doesn't:

auto_ptr<int> sp(new int);

The programmer calls new to create a temporary pointer that's
immediately passed to a class that manages it. No stars in sight, no
trouble, end of story. If you claim the line above does expose a bald
pointer, you have a different definition, our criteria don't compare,
end of discussion. If we agree that the code above is sensible, then I
claim it is sensible that also an unnamed temporary:

auto_ptr<int>(new int)

oughtn't leak memory, and I'd also claim that it's exactly because of
unspecified order of evaluation of function arguments that:

extern void f(auto_ptr<int>, auto_ptr<int>);
f(auto_ptr<int>(new int), auto_ptr<int>(new int));

might leak. There's no other context in which the leak is possible that
I can imagine.

> So you seem to be starting with the axiom that
>
>    foo( auto_ptr<T>( new T ), auto_ptr<T>( new T ) )
>
> isn't "unsafe at any moment."  But of course it is completely unsafe
> or we wouldn't be having this discussion.  And you find that to
> conflict with your axiom.  So that line of reasoning seems circular to
> me.

I'm starting simply with the desideratum that said line *oughtn't be*
unsafe at any moment. I desire that because there are no other contexts
in which auto_ptr<T>( new T ) could leak.

> All that said, I can see an argument for your position.  I think you'd
> like exception-safety to be "context free," so that if expression1 and
> expression2 are each exception-safe, then some expression3 composed of
> expression1 and expression2 is also exception-safe.

That sounds like a very nice formalization of a worthy goal. And I
believe that function argument evaluation is the only instance where
that goal is unrealized.


Andrei

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: "Momchil Velikov" <momchil.velikov@gmail.com>
Date: Mon, 19 Dec 2005 21:01:21 CST
Raw View
Hyman Rosen wrote:
> David Abrahams wrote:
> > What's the biggest problem?
>
> As I have said many times, programming languages are a means
> for driving the actions of a computer. As such, the actions
> that a program specifies should be unambiguous.

Strictly speaking, this is not true. It's the *effects* that should be
unambiguous.  The compiler must be allowed to choose
whatever actions are most appropriate for producing the
programmer-specified effects, for the simple reason that it
generally knows better.

> If you would
> like for your programming language to have the ability to
> state that a set of actions should be carried out in an
> arbitrary rather than in a defined order, then this should be
> an explicit construct within the language, so that this
> ambiguity is manifestly clear to the readers of the program.

Or, alternatively, if one would like a set of effects to take
place in a specific order, then there should be an
explicit construct within the language, with the added bonus
that it would allow specifying arbitrary fixed order, unlike the
case of having a single default one.

~velco

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: dave@boost-consulting.com (David Abrahams)
Date: Tue, 20 Dec 2005 04:03:01 GMT
Raw View
hyrosen@mail.com (Hyman Rosen) writes:

> David Abrahams wrote:
>> What's the biggest problem?
>
> As I have said many times, programming languages are a means
> for driving the actions of a computer. As such, the actions
> that a program specifies should be unambiguous.

So do we need to make the behavior of

   unsigned int x = 1;
   x << 33;
   std::cout << x << std::endl;

unambiguous?  If not, what distinguishes the cases that ought to be
unambiguous from those that don't need to be?

> If you would like for your programming language to have the ability
> to state that a set of actions should be carried out in an arbitrary
> rather than in a defined order, then this should be an explicit
> construct within the language, so that this ambiguity is manifestly
> clear to the readers of the program.

For high-performance code where there are no side-effects present but
that fact is hidden from the compiler, I think that would make
writing efficient programs needlessly cumbersome.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: "Andrew Koenig" <ark@acm.org>
Date: Mon, 19 Dec 2005 22:07:49 CST
Raw View
""Andrei Alexandrescu (See Website For Email)""
<SeeWebsiteForEmail@moderncppdesign.com> wrote in message
news:IrqFvJ.D85@beaver.cs.washington.edu...
> Hyman Rosen wrote:

>> As I have said many times, programming languages are a means
>> for driving the actions of a computer. As such, the actions
>> that a program specifies should be unambiguous. If you would
>> like for your programming language to have the ability to
>> state that a set of actions should be carried out in an
>> arbitrary rather than in a defined order, then this should be
>> an explicit construct within the language, so that this
>> ambiguity is manifestly clear to the readers of the program.

> Very nice! And to that I'd add: "Programs must be written for people to
> read, and only incidentally for machines to execute" - Abelson and
> Sussman.

There is a subtle misconception in the first paragraph above, which I didn't
realize until I started thinking about Dijkstra's remarks on the subject.

In his book ``A Discipline of Programming,'' he defines a little programming
language that, among other notions, includes an "if" statement that looks
somewhat like this:

    if
        <condition 1> -> action 1 []
        <condition 2> -> action 2 []
        <condition 3> -> action 3 []
            ...
    fi

The idea is that if none of the conditions are true, the program is in error
(!).  If exactly one of the conditions is true, the corresponding action is
executed.  If more than one condition is true, *one* of the actions that
corresponds to one of the true conditions is executed.  It is
*indeterminate* which one it is.

Dijkstra claims that this indeterminacy in *language* definition is a good
thing because it allows programmers to avoid overspecification in *program*
definition.  As an example, he shows something along the following lines,
which sets y to the absolute value of x:

    if
        x >= 0 -> y := x []
        x <= 0 -> y := -x
    fi

The point here is that you do not care which of these branches is executed
if x is equal to zero.  To prove this program correct, you need to prove
that each branch's guard (i.e. x>=0 or x<=0) causes y to be set to the
absolute value of x, and that at least one of the guards is always true.  If
you care which of the guards is used when they are both true, then you are
overspecifying the problem.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: ark@acm.org ("Andrew Koenig")
Date: Tue, 20 Dec 2005 04:06:50 GMT
Raw View
"Hyman Rosen" <hyrosen@mail.com> wrote in message
news:200512161412.jBGECcuC061966@horus.isnic.is...

> What I would like is for
>     void foo(auto_ptr<T>, auto_ptr<T>);
>     foo(new T, new T);
> also not to be unsafe at any moment.

That would certainly be nice.  However, defining order of evaluation is
neither necessary nor sufficient to guarantee that behavior.

What it seems to me would be necessary is a guarantee that each argument be
evaluated completely, and the corresponding parameter constructed, before
moving on to the next argument.  The arguments (and corresponding
parameters) could, it seems to me, be evaluated in any order.

The reason that defining order of evaluation is not sufficient is that it
could be defined to require both arguments to be evaluated *before* binding
either parameter to its argument, in which case memory exhaustion would
assuredly result in a leak.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: howard.hinnant@gmail.com (Howard Hinnant)
Date: Tue, 20 Dec 2005 04:05:46 GMT
Raw View
In article <IrqFrw.D6G@beaver.cs.washington.edu>,
 SeeWebsiteForEmail@moderncppdesign.com ("Andrei Alexandrescu (See
 Website For Email)") wrote:

> > Then explain why.  Educate us.  Leaving me (and the rest of the world)
> > guessing why isn't effective communication.  It is simply fud.  If I
> > read this reply and don't understand it, then perhaps others aren't
> > understanding too (although I freely admit that it is possible I am the
> > only idiot reading this).
>
> Ok, ok, ok, I meant no offense. Sorry!

No offense taken.  Just happy to have you clarify:

> I understand a sequence point would suffice for exception safety. But
> I'd advocate just going all the way and mandating the order of
> evaluation, to the end of fewer bugs and better, more portable programs.

So once you have more than one designer of something, it is all about
compromise.

The aerodynamics people want thinner, smoother wings.  The structural
guys want big thick wings shaped like I-beams.  The fuel storage group
just wants to stick two gas tanks on either side of the fuselage.  The
weight and balance group doesn't really even want the wings in the first
place.  At the end of the day, it has to fly without falling apart or
crashing.

The compiler experts are telling us that they need some freedom in this
area for optimization purposes.  (Some of the) exception safety experts
are saying too much freedom leads to error-prone, fragile code.  Having
grown up with C, personally I'm very used to the fact that:

f(i, ++i);

is not well defined.  It is something I can live with, and over the
decades it has come to seem an acceptable fact of life despite the fact
that it might horrify a Java programmer.

However, the fact that the pattern:

f(acquire(release()));

is safe, while the pattern:

f(acquire(release()), do_something_unrelated());

isn't safe, I find scary even for C++ programmers.  And the minimum
compromise we need to get this puppy off the ground is the sequence
point.

-Howard

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: ark@acm.org ("Andrew Koenig")
Date: Tue, 20 Dec 2005 04:07:48 GMT
Raw View
""Andrei Alexandrescu (See Website For Email)""
<SeeWebsiteForEmail@moderncppdesign.com> wrote in message
news:IrqFrw.D6G@beaver.cs.washington.edu...

> I understand a sequence point would suffice for exception safety. But I'd
> advocate just going all the way and mandating the order of evaluation, to
> the end of fewer bugs and better, more portable programs.

Merely mandating the order of execution doesn't get rid of the
exception-safety problems, *unless* you happen to pick an order that amounts
to having a sequence point between arguments.  In other words, you might
mandate that in

    f(new T, new T);

the two instances of "new T" are evaluated left to right, and *then* the
corresponding parameters are bound to them (also left to right), and you'd
still have resource-leak problems.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: hickin@nortel.com ("John Hickin")
Date: Tue, 20 Dec 2005 04:08:15 GMT
Raw View
""Andrei Alexandrescu (See Website For Email)""
<SeeWebsiteForEmail@moderncppdesign.com> wrote in message
news:IrqFvJ.D85@beaver.cs.washington.edu...
> Hyman Rosen wrote:
> > David Abrahams wrote:
> >
> >> What's the biggest problem?
> >
> >
> > As I have said many times, programming languages are a means
> > for driving the actions of a computer. As such, the actions
> > that a program specifies should be unambiguous. If you would
> > like for your programming language to have the ability to
> > state that a set of actions should be carried out in an
> > arbitrary rather than in a defined order, then this should be
> > an explicit construct within the language, so that this
> > ambiguity is manifestly clear to the readers of the program.
>
> Very nice! And to that I'd add: "Programs must be written for people to
> read, and only incidentally for machines to execute" - Abelson and
Sussman.
>
>
> Andrei
>

With the existing rules I believe that it is possible to write programs for
people to read, so I think that, while it is a nice -- even eminently
readable -- statement, it really should not carry too much weight in the
debate.

OTOH, I think that complete specification will be useful for those who want
to write program generators. Within 10 years nobody will be writing C++ (or
Java) -- it will all be code generated directly from specifications. For
those who believe that complete specification will kill optimization
opportunities, there will be ample new opportunities tweaking model
compilers.


Regards, John.


---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: SeeWebsiteForEmail@moderncppdesign.com ("Andrei Alexandrescu (See Website For Email)")
Date: Tue, 20 Dec 2005 05:52:33 GMT
Raw View
David Abrahams wrote:
> SeeWebsiteForEmail@moderncppdesign.com ("Andrei Alexandrescu (See Website For Email)") writes:
>>Very nice! And to that I'd add: "Programs must be written for people
>>to read, and only incidentally for machines to execute" - Abelson and
>>Sussman.
>
>
> Oh, I'm _very_ big on that one.  I don't think decorating all the
> high-performance numerics code with some "allow unordered evaluation"
> construct is a good way to keep them readable, though.

I work with (and hack into) a high-performance math package (the
QuickNet neural network library) on a daily basis. For that library,
unordered evaluation would not be useful.

Please show examples taken from real high-performance numeric code where
unordered evaluation would make a difference.


Andrei

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail@moderncppdesign.com>
Date: Mon, 19 Dec 2005 23:58:37 CST
Raw View
David Abrahams wrote:
> SeeWebsiteForEmail@moderncppdesign.com ("Andrei Alexandrescu (See Website For Email)") writes:
>
> Not to mention that fancy_component<T> might not take ownership of the
> handle at all.

It doesn't matter. That answer is entirely missing my point.

>>That makes all the difference.
>>
>>The important part is that it has the choice.
>
>
> What choice, please?

It's simple. You said that:

// Example 1
f( fancy_component<T>( create_handle<T>() ), g() );

does not look safe to you because "there's a lot going on in that line".
  To reveal that "a lot going on" is not the reason for which the line
above is unsafe, and to also reveal that it's exactly the unspecified
order of evaluation that's the culprit, I replied that:

// Example 2
f( fancier_component<T>( fancy_component<T>( create_handle<T>() ) ) );

is safe although "there's a lot going on in that line", too.

Now allow me to explain things a bit in an attempt to clear any past or
future misunderstanding.

Example 1 is unsafe because no matter how programmers are implementing
the functions, it could leak.

Example 2 is safe if the functions are properly implemented and respect
each other's contract.

So while Example 1 is rotten by definition without even looking past
that one line of code, Example 2 is valid, provided of course the call
sequence makes sense.

That's why I was saying that the code in Example 2 "has a choice" of
being correct.

I think it's super clear now that the only problem is with the
unspecified order of evaluation.

>>Compare that with your example.
>
> Which one, please?

Example 2 with Example 1 above.

>>And, no comment to my other examples?
>
>
> They all look the same to me: they all do a bare "new" and thus expose
> the user to handling a raw pointer at some point or other.

Yet they don't suffer from leaks. I thought I made my point very clear: I
showed a number of examples that are safe, yet are very similar to the
unsafe example.

> Just as
> you'd like an expression to be safely usable in any context if it can
> be used safely alone, I'd like all expressions in common use to be
> safe regardless of the context they're placed in.  "new T" is not like
> that: you'd better be really careful where and how you do it.
> Mandating evaluation order will only go a small distance toward fixing
> that problem.

Fine. I'd also add, mandating new_<unique_ptr<T> > will only go a small
distance toward fixing the problems created by leaving the execution
order unspecified :o).

>>>>Leaks are not the biggest problem of unspecified order of
>>>>evaluation. They are a pretty good showcase, though.
>>>
>>>What's the biggest problem?
>>
>>Their being a gratuitous source of bugs, incompatibilities, and
>>nonportability.
>
>
> What kind of bugs and incompatibilities do you get that make this a
> big problem?

I don't, because I've learned my lesson the hard way. Why should it be
the same for others?

Bugs appear because code seems to work under certain conditions when
evaluation is made out of order. Incompatibilities appear when some code
works with a compiler but not with another.

> I'm very suspicious of any crusade to eliminate all low-level
> differences between compilers.  Smells like -- no offense intended;
> I'm just identifying where my reaction comes from -- Java hype to me.
> Do you also want to mandate standard sizes for short, int, long, etc.?

I'm very suspicious of any crusade to maintain the status quo at all
costs :o). I only think it's very healthy to revisit assumptions that
might have been invalidated by progress in compiler technology and
hardware. I believe that the assumption that leaving order of evaluation
unspecified improves performance has been rendered anachronistic.

> That said, I'm not closed-minded about this; I just need to be
> convinced ;-) So far it seems like something we *could* do, that would
> render existing compilers nonconforming in a fundamental way, create
> backward compatibility problems, break existing (nonportable) code,
> and consume valuable core language drafting time.

Well it's not that bad. Existing compilers will work for existing code.
Existing standard-conforming code must not depend on the order of
evaluation. Ergo, order of evaluation doesn't matter for existing
standard code. Ergo, old compilers can compile old code no problem.

> I'm just not
> convinced the benefits justify the costs yet, especially when the
> biggest problems *I've* seen identified so far can be solved more
> completely with a library and (IMO painless) changes to common
> programming practice.

I think the practice of playing it safe is worthwhile, but I don't think
the f(auto_ptr<T>, auto_ptr<T>) case is the biggest problem. It just happens to be
particularly glaring :o).


Andrei

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail@moderncppdesign.com>
Date: Mon, 19 Dec 2005 23:58:46 CST
Raw View
Andrew Koenig wrote:
> ""Andrei Alexandrescu (See Website For Email)""
> <SeeWebsiteForEmail@moderncppdesign.com> wrote in message
> news:IrqFrw.D6G@beaver.cs.washington.edu...
>
>
>>I understand a sequence point would suffice for exception safety. But I'd
>>advocate just going all the way and mandating the order of evaluation, to
>>the end of fewer bugs and better, more portable programs.
>
>
> Merely mandating the order of execution doesn't get rid of the
> exception-safety problems, *unless* you happen to pick an order that amounts
> to having a sequence point between arguments.  In other words, you might
> mandate that in
>
>     f(new T, new T);
>
> the two instances of "new T" are evaluated left to right, and *then* the
> corresponding parameters are bound to them (also left to right), and you'd
> still have resource-leak problems.

True; that's what I meant by specifying order of evaluation. In my mind
"evaluation" included whatever conversions are needed to fit the
function signature - IOW, "binding" would be the binding in computer
science (have the formal argument names refer to the actual arguments).
Thanks for clarifying that.


Andrei

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: howard.hinnant@gmail.com (Howard Hinnant)
Date: Tue, 20 Dec 2005 05:57:52 GMT
Raw View
In article <87r7896z8e.fsf@boost-consulting.com>,
 dave@boost-consulting.com (David Abrahams) wrote:

> howard.hinnant@gmail.com (Howard Hinnant) writes:
>
> > In article <87bqzfp5t2.fsf@boost-consulting.com>,
> >  dave@boost-consulting.com (David Abrahams) wrote:
> >
> >> > Smart pointer factory functions in C++0X sound great.  Let's have
> >> > them (I hope to see your proposal soon).
> >>
> >> EWG or LWG?
> >
> > LWG please.  If you see core issues also involved (perhaps variadic
> > templates, or rvalue reference?), I'll make sure the EWG knows they have
> > more motivation for this core issue.  Workarounds (if possible) for lack
> > of core issues are appreciated.
>
> Sure, that's easy enough.  Ping me after the first week of January,
> though, if you *really* want it; it's likely to fall off the radar
> otherwise.

I've entered it on my calendar. :-)

> > evaluate_in_any_order
> > {
> >     auto t1(fancy_component<T>( create_handle<T>() ));
> >     auto t2(g());
>
> [don't use tabs :)]

<chuckle>  Thanks for the reminder.  You caught me (yet again).

> > that the world would be significantly safer than it is today
>
> I'm just not convinced of how significant it is, yet.  People will
> still be commonly trafficing in unmanaged resources, and we won't have
> fixed that.

Fair enough.  See below...

In article <871x098em5.fsf@boost-consulting.com>,
 dave@boost-consulting.com (David Abrahams) wrote:

> They all look the same to me: they all do a bare "new" and thus expose
> the user to handling a raw pointer at some point or other.

<nod> Maybe our examples have been too limited.

Exposing a bare "new" may not be the only danger.  Consider something
like:

template <class T>
void
X<T>::might_transfer_ownership(std::unique_ptr<T>&& up,
                              int condition1, int condition2)
{
    // ...
    if (really_want_to_transfer(condition1, condition2))
    {
        process(acquire(up), and_process(condition1, condition2));
    }
    // ...
}

Now on analysis of this member function we decide:

1.  If we don't want to transfer ownership, up should keep ownership.
2.  If we do want to transfer ownership, then we should take it, even
    if acquire() fails.

The member function acquire(std::unique_ptr& up) serves more than just
this use case and is set up to take ownership of up, unless the acquire
function itself fails, in which case up retains ownership.  However
there is also an acquire overload taking a T* which retains ownership of
the pointer whether or not acquire fails.  We were inspired to this
design by std::tr1::shared_ptr, whose constructors have identical
semantics: shared_ptr(T* p) always takes ownership of p, even on
failure, while shared_ptr(auto_ptr<T>& p) only takes ownership from p
if the ctor succeeds.

So if acquire(up) fails, having up retain ownership is not what we want
in this case.  The above code is wrong for us.  We instead would like:

template <class T>
void
X<T>::might_transfer_ownership(std::unique_ptr<T>&& up,
                              int condition1, int condition2)
{
    // ...
    if (really_want_to_transfer(condition1, condition2))
    {
        process(acquire(up.release()),
                and_process(condition1, condition2));
    }
    // ...
}

Now, whether or not the acquire fails, we've grabbed ownership of
up.get() because we met the conditions that we really wanted to transfer
ownership (really_want_to_transfer() evaluated true).

This is advanced level code.  It has been well designed and well thought
through.  It has emulated some of our best from tr1 and for good
reasons.  And yet if and_process() might throw an exception, it is still
flawed.  No "new" in sight.  Careful thought has occurred with respect
to resource ownership even under exceptional conditions.  The exception
safety aspect has even been tested by throwing at every possible point,
and inspecting the state.  All tests passed.  And yet it is still wrong.

Imho, if we're still wrong after such careful thought, design and
testing, and you can't easily "grep" for this, then C++ has a defect.
Maybe you or I or a few other experts could spot the problem in this
design.  But most people won't.  This is scary.

-Howard

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: SeeWebsiteForEmail@moderncppdesign.com ("Andrei Alexandrescu (See Website For Email)")
Date: Tue, 20 Dec 2005 05:57:59 GMT
Raw View
Howard Hinnant wrote:
> So once you have more than one designer of something, it is all about
> compromise.
>
> The aerodynamics people want thinner, smoother wings.  The structural
> guys want big thick wings shaped like I-beams.  The fuel storage group
> just wants to stick two gas tanks on either side of the fuselage.  The
> weight and balance group doesn't really even want the wings in the first
> place.  At the end of the day, it has to fly without falling apart or
> crashing.

Of course. I agree with all that, but IMHO with this one it's something
more like, "Wings must be made of duraluminum because dural is better
than all other materials." That's an assumption that can be invalidated
by advance in materials.

> The compiler experts are telling us that they need some freedom in this
> area for optimization purposes.

You see, this is what I'm not sure at all anymore. The two compiler
experts that I know told me it's not a relevant source of optimization.
So so far we're a few people just passing anecdotes and hearsay around,
most of which is 20 years old. And even though this is comp.std.c++, not
one person, expert or not, came with an example showing with numbers
that the assumption I'm questioning is still valid.

> (Some of the) exception safety experts
> are saying too much freedom leads to error prone, fragile code.  Having
> grown up with C, personally I'm very use to the fact that:
>
> f(i, ++i);
>
> is not well defined.  It is something I can live with, and over the
> decades it has come to seem an acceptable fact of life despite the fact
> that it might horrify a Java programmer.

Honest, in the spirit of out-of-the-box thinking, we should still be
horrified, particularly when (because?) it's gratuitous.

> However, the fact that the pattern:
>
> f(acquire(release()));
>
> is safe, while the pattern:
>
> f(acquire(release()), do_something_unrelated());
>
> isn't safe, I find scary even for C++ programmers.  And the minimum
> compromise we need to get this puppy off the ground is the sequence
> point.

Yah, point taken. Requiring a sequence point might actually be a very
smart political-technical move :o).


Andrei

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: Hyman Rosen <hyrosen@mail.com>
Date: Tue, 20 Dec 2005 09:48:07 CST
Raw View
Andrew Koenig wrote:
> Dijkstra claims that this indeterminacy in *language* definition is a good
> thing because it allows programmers to avoid overspecification in *program*
> definition.

Which is all fine and good if you believe that the programmer
has actually considered the possibility that more than one
guard is true. I suppose that's the case if you're proving
that your code is correct (and you don't make a mistake in
doing that), but lots of people don't do that in C++. I think
it's much more likely for two guards to be true because the
programmer has made an error rather than actually not caring
which branch is executed.






Author: jm@bourguet.org (Jean-Marc Bourguet)
Date: Tue, 20 Dec 2005 15:47:06 GMT
Raw View
"Andrew Koenig" <ark@acm.org> writes:

> [Dijkstra's guarded commands language] If you care which of the
> guards is used when they are both true, then you are overspecifying
> the problem.

Note: I've used "unspecified behaviour" for all kinds of non-reproducible,
non-portable behaviour, not in its more precise meaning in the standard.

I've quite a lot of sympathy for the idea that overspecifying is bad
for a language (that idea was at the core of my research work when I
was still at university years ago, and the fact that source code is
nearly always overspecifying is one of my favourite arguments against
those who think that source code is all you need to understand a
piece of code), but I'm not sure that it is true for a programming
language which targets programming in the large.

My main points are the following:

   - keeping things unspecified is better than overspecifying when you
     do it *voluntarily*.  Doing it involuntarily is worse than having
     something overspecified.

   - keeping things voluntarily unspecified is good at the
     specification stage, as it leaves breathing space for the
     implementation.  But at the implementation stage, leaving them
     unspecified opens a lot of opportunities for problems, especially
     if it is done unknowingly.

Most of the time, unspecified behaviour is there unknowingly.  For
the better programmers, because they were concerned with higher
level thinking when writing that piece of code.  For the worse,
because they don't master the language well enough to recognize the
unspecified behaviour even when it is pointed out.

So unspecified behaviour is there unknowingly; we want a process to
detect it.  What we do is reuse the processes we use to ensure that
the defined behaviour is the one we wanted.  The most common are
testing and code review -- automatic or done by people.  First note
that we are now putting two goals in the same process, and those goals
will conflict.  But I'll just continue to consider the aim of finding
unspecified behaviour.

But testing doesn't find unspecified behaviour in a reliable way; it
shows up only when it happens not to do what was assumed.  Murphy will
strike: some of the unspecified behaviours you have will manifest their
unspecified status at the worst time.  In front of an important
customer asking "what would happen if..." during a demo, or after
deployment when a security patch for the runtime library is installed.

Static analysis tools (automatic code review) are of two kinds.  The
first are bundled with compilers as "warnings".  The common practice is
to silence some of them and ignore the others -- i.e. not even ask the
compiler for them -- because there are too many false positives.  For
the first group, one may wonder why they are not errors in the first
place; the others are useless.  The second kind of tool is used less
often and shares with the ignored warnings the property of having a
lot of false positives, limiting its usefulness.

Code review by people is probably the best way to detect unspecified
behaviour.  But there are three factors working against detection:
1) we read what we assume, 2) we are dragged into higher level
thinking, and 3) most reviewers don't recognize the unspecified
behaviour even when it is pointed out, and the code review becomes a
lesson in one arcane aspect of C++.


I've no time to expand further on programming languages as
implementation languages and as specification languages.  I'll try to
make a second post.

--
Jean-Marc






Author: hyrosen@mail.com (Hyman Rosen)
Date: Tue, 20 Dec 2005 15:47:39 GMT
Raw View
Andrew Koenig wrote:
> That would certainly be nice.  However, defining order of evaluation is
> neither necessary nor sufficient to guarantee that behavior.
>
> What it seems to me would be necessary is a guarantee that each argument be
> evaluated completely, and the corresponding parameter constructed, before
> moving on to the next argument.

Yes, I know. But this is a long and repetitious thread,
and it's tiresome to have to say it each time. I'm using
"defined order" as a shorthand. As you say, it means
evaluating expressions in a defined order, having side
effects happen as soon as they're evaluated, and binding
arguments to parameters as they're evaluated.

 > The arguments (and corresponding parameters) could,
 > it seems to me, be evaluated in any order.

Only if the goal was fixing the f(auto, auto) problem and
nothing else. But that's not the main goal, that's a side
effect of achieving the main goal. The main goal is to make
C++ expression evaluation a fully deterministic and defined
process so that code is maximally portable and unambiguous,
because there is no value in leaving some programming language
semantics unspecified.

In fact, just look at your first paragraph above. Clearly,
decades of exposure to C and C++ have warped and deformed
your brain to such an extent that when you look at a plain
old function call, you instead see a quantum haze of alternate
universes reflecting all the different orders of evaluation
which can exist until the compiler collapses the state into a
single possibilty :-)

Wouldn't it be nice if you could save other people from that?






Author: hyrosen@mail.com (Hyman Rosen)
Date: Tue, 20 Dec 2005 15:48:14 GMT
Raw View
Momchil Velikov wrote:
> This is strictly not true.
> It's the *effects* that should be unambiguous.

The effects are all we can observe from the
program, so I take your comment as making a
distinction without a difference.

> if one would like a set of effects to take place
> in a specific order, then there should be an
> explicit construct within the language, with the
> added bonus that it would allow specifying arbitrary
> fixed order, unlike the case of having a single default
> one.

Yes, I agree with you completely. That construct shall be
called an "expression", and will permit specifying arbitrary
fixed order through a positional mechanism - in 'f() + g()',
f() shall be evaluated before g() and in 'g() + f()', g()
shall be evaluated before 'f()'.






Author: hyrosen@mail.com (Hyman Rosen)
Date: Tue, 20 Dec 2005 15:48:51 GMT
Raw View
David Abrahams wrote:
> So do we need to make the behavior of
>    unsigned int x = 1;
>    x << 33;
>    std::cout << x << std::endl;
> unambiguous?

Yes, that's required by the standard, 1.9/2:
     Certain aspects ... are ... implementation-defined
     (for example, sizeof(int)). ... Each implementation
     shall include documentation describing its
     characteristics and behavior in these respects.

This means that a programmer will know, for the
implementation being used, exactly what the code
above will do. It would be better if this behavior
was the same across implementations, but at least
it's specified.

> For high-performance code where there are no side-effects present but
> that fact is hidden from the compiler, I think that would make
> writing efficient programs needlessly cumbersome.

If high-performance code can tolerate the needless
cumbersomeness of having a sequence of statements
execute in a defined order, then it can tolerate the
same thing in an expression. This hypothetical high-
performance code and smart/stupid compiler is a myth
anyway, but people confused by order of evaluation
issues are all too real, as we constantly see on the
newsgroup.






Author: SeeWebsiteForEmail@moderncppdesign.com ("Andrei Alexandrescu (See Website For Email)")
Date: Wed, 21 Dec 2005 04:02:43 GMT
Raw View
Hyman Rosen wrote:
> Andrew Koenig wrote:
>  > The arguments (and corresponding parameters) could,
>  > it seems to me, be evaluated in any order.
>
> Only if the goal was fixing the f(auto, auto) problem and
> nothing else. But that's not the main goal, that's a side
> effect of achieving the main goal. The main goal is to make
> C++ expression evaluation a fully deterministic and defined
> process so that code is maximally portable and unambiguous,
> because there is no value in leaving some programming language
> semantics unspecified.

I agree. That brings me to another gratuitous source of problems. The
language should finally fix the postfix ++ and -- to give a guarantee.
As of now,

a = b++;

has undefined behavior if a and b alias the same lvalue. There's no
particular reason for which things have to be that way anymore. The
PDP-7 had some postincrement assembler instruction that did something
like that, and it was very advisable to use it because it was fast. I
don't think we still need to cling to such anachronisms. The semantics
of postfix operator++ should be those of a function:

int increment(int & i) {
   int old(i);
   ++i;
   return old;
}

that does the postincrement. Then, writing a = b++ would be equivalent
to a = increment(b); which has defined semantics, and I doubt there will
be any loss in efficiency with today's compilers.


Andrei






Author: dave@boost-consulting.com (David Abrahams)
Date: Wed, 21 Dec 2005 04:02:55 GMT
Raw View
hyrosen@mail.com (Hyman Rosen) writes:

> David Abrahams wrote:
>> So do we need to make the behavior of
>>    unsigned int x = 1;
>>    x << 33;
>>    std::cout << x << std::endl;
>> unambiguous?
>
> Yes, that's required by the standard, 1.9/2:
>     Certain aspects ... are ... implementation-defined
>     (for example, sizeof(int)). ... Each implementation
>     shall include documentation describing its
>     characteristics and behavior in these respects.
>
> This means that a programmer will know, for the
> implementation being used, exactly what the code
> above will do. It would be better if this behavior
> was the same across implementations, but at least
> it's specified.
>
>> For high-performance code where there are no side-effects present but
>> that fact is hidden from the compiler, I think that would make
>> writing efficient programs needlessly cumbersome.
>
> If high-performance code can tolerate the needless
> cumbersomeness of having a sequence of statements
> execute in a defined order, then it can tolerate the
> same thing in an expression.

That dog don't hunt.

  If a computer can tolerate accessing the disk every time a new
  program starts, it can also tolerate it for every statement in a
  program?

Also, statements say "sequential" to the reader and writer of the
code, at least to programmers familiar with the rules today.  It's not
a limitation on expressiveness to have statements evaluated in order.
On the other hand, even if you change the rules I doubt that I would
*ever* knowingly write code that depended on evaluation order within
an expression.  Expressions just don't look ordered to me.

> This hypothetical high-performance code and smart/stupid compiler is
> a myth anyway, but people confused by order of evaluation issues are
> all too real, ...

Okay, maybe I'm one of them, and you can clear it up.  Andrew Koenig
described why evaluation order might matter to a compiler
(http://groups.google.com/group/comp.std.c++/msg/54716a09f97cc000).
Do you have an argument against his post?

Also, it seems obvious to me that in some cases at least, evaluating
the arguments in the same order they need to be pushed onto the stack
could be important:

      g( f1(), f2(), f3() )

      call  f3
      push  r0
      call  f2
      push  r0
      call  f1
      push  r0
      call  g

If you force an ordering that conflicts with the calling convention,
you end up with

      sub   sp, #3
      call  f1
      mv    r0, sp+1
      call  f2
      mv    r0, sp+2
      call  f3
      mv    r0, sp+3
      call  g

What am I missing?

Arguments about how the compiler can reorder code at will when it can
detect that the order doesn't matter don't carry a lot of weight with
me, since the compiler can't know anything about what happens behind
the scenes of a non-inlined function (without whole program analysis
of course).

> ... as we constantly see on the newsgroup.

It doesn't seem like a very common occurrence to me.  How often does it
happen?

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com






Author: SeeWebsiteForEmail@moderncppdesign.com ("Andrei Alexandrescu (See Website For Email)")
Date: Wed, 21 Dec 2005 04:02:46 GMT
Raw View
Hyman Rosen wrote:
> Andrew Koenig wrote:
>
>> Dijkstra claims that this indeterminacy in *language* definition is a
>> good thing because it allows programmers to avoid overspecification in
>> *program* definition.
>
>
> Which is all fine and good if you believe that the programmer
> has actually considered the possibility that more than one
> guard is true. I suppose that's the case if you're proving
> that your code is correct (and you don't make a mistake in
> doing that), but lots of people don't do that in C++. I think
> it's much more likely for two guards to be true because the
> programmer has made an error rather than actually not caring
> which branch is executed.

One thing is that Dijkstra's suggested idiom didn't really catch on in
real languages. The only one I know of is
http://search.cpan.org/~trey/Commands-Guarded-0.01/Guarded.pm.


Andrei






Author: "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail@moderncppdesign.com>
Date: Tue, 20 Dec 2005 22:03:28 CST
Raw View
Andrew Koenig wrote:
> ""Andrei Alexandrescu (See Website For Email)""
> <SeeWebsiteForEmail@moderncppdesign.com> wrote in message
> news:IrqFrw.D6G@beaver.cs.washington.edu...
>
>
>>I understand a sequence point would suffice for exception safety. But I'd
>>advocate just going all the way and mandating the order of evaluation, to
>>the end of fewer bugs and better, more portable programs.
>
>
> Merely mandating the order of execution doesn't get rid of the
> exception-safety problems, *unless* you happen to pick an order that amounts
> ot having a sequence point between arguments.  In other words, you might
> mandate that in
>
>     f(new T, new T);
>
> the two instances of "new T" are evaluated left to right, and *then* the
> corresponding parameters are bound to them (also left to right), and you'd
> still have resource-leak problems.

True; that's what I meant by specifying order of evaluation. In my mind
"evaluation" included whatever conversions are needed to fit the
function signature - IOW, "binding" would be the binding in computer
science (have the formal argument names refer to the actual arguments).
Thanks for clarifying that.


Andrei






Author: "Momchil Velikov" <momchil.velikov@gmail.com>
Date: Tue, 20 Dec 2005 23:50:43 CST
Raw View
Hyman Rosen wrote:
> Momchil Velikov wrote:
> > This is strictly not true.
> > It's the *effects* that should be unambiguous.
>
> The effects are all we can observe from the
> program, so I take your comment as making a
> distinction without a difference.

Theoretically. But as we all know, theory and practice
are the same only in theory; in practice they differ.
Thus a language specification is necessarily concerned
with the possibility of implementing it in an efficient
(for various values of "efficient") manner.

> > if one would like a set of effects to take place
> > in a specific order, then there should be an
> > explicit construct within the language, with the
> > added bonus that it would allow specifying arbitrary
> > fixed order, unlike the case of having a single default
> > one.
>
> Yes, I agree with you completely. That construct shall be
> called an "expression", and will permit specifying arbitrary
> fixed order through a positional mechanism - in 'f() + g()',
> f() shall be evaluated before g() and in 'g() + f()', g()
> shall be evaluated before 'f()'.

This construct is already taken.  Why break it, leaving no
alternative? Especially when there already exists a sequencing
construct?

~velco






Author: "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail@moderncppdesign.com>
Date: Wed, 21 Dec 2005 02:07:04 CST
Raw View
David Abrahams wrote:
>>This hypothetical high-performance code and smart/stupid compiler is
>>a myth anyway, but people confused by order of evaluation issues are
>>all too real, ...
>
>
> Okay, maybe I'm one of them, and you can clear it up.  Andrew Koenig
> described why evaluation order might matter to a compiler
> (http://groups.google.com/group/comp.std.c++/msg/54716a09f97cc000).
> Do you have an argument against his post?

Executing the expression that takes the most registers first is better
if the other expression leaves a lot of live registers (registers that
will be read later). In the case of evaluating a function's arguments,
there are no live registers remaining after pushing the result on the
stack, so I'm unclear on why the technique would be desirable or more
efficient.

> Also, it seems obvious to me that in some cases at least, evaluating
> the arguments in the same order they need to be pushed onto the stack
> could be important:
>
>       g( f1(), f2(), f3() )
>
>       call  f3
>       push  r0
>       call  f2
>       push  r0
>       call  f1
>       push  r0
>       call  g
>
> If you force an ordering that conflicts with the calling convention,
> you end up with
>
>       sub   sp, #3
>       call  f1
>       mv    r0, sp+1
>       call  f2
>       mv    r0, sp+2
>       call  f3
>       mv    r0, sp+3
>       call  g
>
> What am I missing?

The fact that that code passes through a ton of optimizing
transformations before it's ever executed?

At the macro level, see for example
http://64.233.167.104/search?q=cache:fDzJHL3l3ccJ:www.codeproject.com/cpp/calling_conventions_demystified.asp+intel+fastest+calling+convention&hl=en

I quote: "How fast is this calling convention, comparing to __cdecl and
__stdcall? Find out for yourselves. Set the compiler option /Gr, and
compare the execution time. I didn't find __fastcall to be any faster
than other calling conventions, but you may come to different conclusions."

(N.B. __fastcall places arguments in registers, __cdecl puts them on
the stack RTL and has the caller clean the stack up, __stdcall puts
them on the stack RTL and has the callee clean the stack up.)

> Arguments about how the compiler can reorder code at will when it can
> detect that the order doesn't matter don't carry a lot of weight with
> me, since the compiler can't know anything about what happens behind
> the scenes of a non-inlined function (without whole program analysis
> of course).

If the functions aren't inlined, the compiler cannot decide whether
there's any gain from calling them in one sequence or another, so
sticking with left-to-right would be as sensible as anything else.


Andrei






Author: francis@robinton.demon.co.uk (Francis Glassborow)
Date: Wed, 21 Dec 2005 13:28:32 GMT
Raw View
In article <87irtjy5sy.fsf@boost-consulting.com>, David Abrahams
<dave@boost-consulting.com> writes
>> ... as we constantly see on the newsgroup.
>
>It doesn't seem like a very common occurrence to me.  How often does it
>happen?

And I frequently see abuses of English such as incorrect use of an
apostrophe, misspelling and incorrect grammar. That does not lead me to
suggest we need to change English, but it does lead me to complain about
poor educational standards.

Should we change the C++ Standard to allow 'void main(){' or teach
programmers to write correct code? What should we do to fix problems
with the inexact nature of floating point arithmetic? In each case the
answer is better education.

Of course we all have our problem nits. Mine is that I remain convinced
that making overflow undefined behaviour for signed integer arithmetic
devalues the concept of undefined behaviour. Well it would if we
improved education so that most programmers knew of the problem:-)


--
Francis Glassborow      ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects






Author: francis@robinton.demon.co.uk (Francis Glassborow)
Date: Wed, 21 Dec 2005 13:28:45 GMT
Raw View
In article <Iru7EG.1Fqu@beaver.cs.washington.edu>, "Andrei Alexandrescu
(See Website For Email)" <SeeWebsiteForEmail@moderncppdesign.com> writes
>Executing the expression that takes most register first is better if
>the other expression leaves a lot of live registers (registers that
>will be read later). In the case of evaluating a function's arguments,
>there are no live registers remaining after pushing the result on the
>stack, so I'm unclear on why the technique would be desirable or more
>efficient.

What is this stack you are pushing results onto? Just because many
compilers pass arguments that way is no reason to suppose they always
will. Why shouldn't a compiler pass the value of a complex in two
registers, or a quaternion in four if it can spare them?

You see, your objection seems to me to be based on a specific way of
passing arguments, and AFAIR C++ makes no requirements on the way that
is achieved.

Note that this is even more the case where a function is inlined.


--
Francis Glassborow      ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects






Author: francis@robinton.demon.co.uk (Francis Glassborow)
Date: Wed, 21 Dec 2005 13:29:17 GMT
Raw View
In article <Irt826.1zD@beaver.cs.washington.edu>, "Andrei Alexandrescu
(See Website For Email)" <SeeWebsiteForEmail@moderncppdesign.com> writes
>I agree. That brings me to another gratuitous source of problems. The
>language should finally fix the postfix ++ and -- to give a guarantee.
>As of now,
>
>a = b++;
>
>has undefined behavior if a and b alias the same lvalue. There's no
>particular reason for which things have to be that way anymore. The
>PDP-7 had some postincrement assembler instruction that did something
>like that, and it was very advisable to use it because it was fast. I
>don't think we still need to cling to such anachronisms. The semantics
>of postfix operator++ should be those of a function:
>
>int increment(int & i) {
>  int old(i);
>  ++i;
>  return old;
>}
>
>that does the postincrement. Then, writing a = b++ would be equivalent
>to a = increment(b); which has defined semantics, and I doubt there
>will be any loss in efficiency with today's compilers.
>
Should we mandate things in the Standard that will make development of
new technology possibly harder? We get fixated on a specific family of
hardware architectures and lapse into thinking that is the way it must
be. But what happens if we return to hardware that supports direct
arithmetic on memory?

I do not know the answer, but I am pretty certain that the concept of
sequence points would be important in such a case. We do need to address
the issue of sequence points because the WG14 experience of trying to
write a specification for them demonstrates that they are far from as
clear and well specified as we like to think.

--
Francis Glassborow      ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects






Author: "Momchil Velikov" <momchil.velikov@gmail.com>
Date: Wed, 21 Dec 2005 07:28:22 CST
Raw View
Andrei Alexandrescu (See Website For Email) wrote:
> David Abrahams wrote:
> >>This hypothetical high-performance code and smart/stupid compiler is
> >>a myth anyway, but people confused by order of evaluation issues are
> >>all too real, ...
> >
> > Okay, maybe I'm one of them, and you can clear it up.  Andrew Koenig
> > described why evaluation order might matter to a compiler
> > (http://groups.google.com/group/comp.std.c++/msg/54716a09f97cc000).
> > Do you have an argument against his post?
>
> Executing the expression that takes most register first is better if the
> other expression leaves a lot of live registers (registers that will be
> read later).

Executing the subexpression which takes more temporaries first results
in fewer temporaries used, because while evaluating the other
subexpression, one has to keep the result of the first one.  I've never
heard of a criterion for evaluation order involving the number of live
variables; on the contrary, I'd rather think this number is orthogonal
to any order.

[ I'd appreciate references to the relevant papers, though,
 so I have a chance to correct this possibly wrong opinion of mine.]

> In the case of evaluating a function's arguments, there are
> no live registers remaining after pushing the result on the stack, so
> I'm unclear on why the technique would be desirable or more efficient.

 f(a); /* ``a'' is live here */ b = a;

> I quote: "How fast is this calling convention, comparing to __cdecl and
> __stdcall? Find out for yourselves. Set the compiler option /Gr, and
> compare the execution time. I didn't find __fastcall to be any faster
> than other calling conventons, but you may come to different conclusions."

I'm not sure what conclusions can be drawn from this, quote, article,
unquote. What conclusions can be drawn from microbenchmarks(?) of a
register argument passing convention on an architecture criminally
deprived of registers?

> > Arguments about how the compiler can reorder code at will when it can
> > detect that the order doesn't matter don't carry a lot of weight with
> > me, since the compiler can't know anything about what happens behind
> > the scenes of a non-inlined function (without whole program analysis
> > of course).
>
> If the functions aren't inlined, the compiler can not decide whether
> there's any gain from calling them in one sequence or another, so
> sticking with left-to-right would be as sensible as anything else.

Yes, the compiler *can* decide whether there's any gain even if the
functions are not inlined.  For example, GCC keeps track of the
register usage of each function, as do the SGI compilers (which even
make this information available to the linker).  Information about the
register usage of the callee can and does affect register allocation
in the caller, and thus the evaluation order.

Even without detailed register usage information, register allocation
in the caller can be affected by knowing whether the called function
is a leaf or not.

Even the number of parameters of the function can be used to prefer
one order or another in the absence of other information.

And there are a number of (more or less) whole program optimizing
compilers too.

~velco






Author: Hyman Rosen <hyrosen@mail.com>
Date: Wed, 21 Dec 2005 07:36:53 CST
Raw View
Momchil Velikov wrote:
> This construct is already taken.

It is not. There are unspecified semantics for the
superficially similar construct that appears in C++.

> Why break it leaving no alternative?

Agreed. Fortunately, specifying order of evaluation
does not involve breaking anything.






Author: hyrosen@mail.com (Hyman Rosen)
Date: Fri, 23 Dec 2005 05:51:05 GMT
Raw View
David Abrahams wrote:
> That dog don't hunt.
>   If a computer can tolerate accessing the disk every time a new
>   program starts, it can also tolerate it for every statement in a
>   program?

The operation of accessing the disk does not look
similar to executing statements or expressions,
but 'f(a(), b());' looks very much like 'a(); b();'.

Furthermore, most programming style guides recommend
breaking up complex expressions using temporary variables
in order to make the code more readable. I don't think I
have ever seen a style guide recommending doing the
opposite in order to increase efficiency by affording the
compiler more opportunities to optimize.

> Also, statements say "sequential" to the reader and writer of the
> code, at least to programmers familiar with the rules today.  It's not
> a limitation on expressiveness to have statements evaluated in order.
> On the other hand, even if you change the rules I doubt that I would
> *ever* knowingly write code that depended on evaluation order within
> an expression.  Expressions just don't look ordered to me.

Yes, the "we've always done it this way" argument.
I'm convinced that this argument is the one that
most drives opponents of defined evaluation order.
There's obviously no answer to this except to keep
persevering and slowly chip away, until the moderns
gain the majority. The Java example gives hope while
we wait.

> What am I missing?

That the call instructions, not even counting what the
called routines do, completely overwhelm the timing of
this code.

That modern processors do much behind the scenes work,
with caches, speculative execution, parallel units, and
all sorts of other stuff. That means that looking at the
two different sets of instructions doesn't tell you much
about which is more efficient.

That there is no reason that C++ compilers need to use a
calling convention whereby parameters look as if they've
been pushed right-to-left.

> It doesn't seem like a very common occurrence to me.
> How often does it happen?

Well, here are a few from the moderated groups. I shudder
to think what the unmoderated ones are like.

<http://groups.google.com/group/comp.std.c++/tree/browse_frm/thread/5c45cc27443187fc/a2edcf55416b7d3a?rnum=1&hl=en&q=%22order%22+group%3Acomp.std.c%2B%2B&_done=%2Fgroup%2Fcomp.std.c%2B%2B%2Fbrowse_frm%2Fthread%2F5c45cc27443187fc%2F03c903b1e42dddec%3Flnk%3Dst%26q%3D%22order%22+group%3Acomp.std.c%2B%2B%26rnum%3D6%26hl%3Den%26#doc_a2edcf55416b7d3a>
<http://groups.google.com/group/comp.lang.c++.moderated/tree/browse_frm/thread/954b22ca0e49beda/a6c7682a09c18a2c?rnum=1&hl=en&q=%22sequence+point%22+group%3Acomp.lang.c%2B%2B.moderated&_done=%2Fgroup%2Fcomp.lang.c%2B%2B.moderated%2Fbrowse_frm%2Fthread%2F954b22ca0e49beda%2F5276e200ed8b642f%3Flnk%3Dst%26q%3D%22sequence+point%22+group%3Acomp.lang.c%2B%2B.moderated%26rnum%3D3%26hl%3Den%26#doc_a6c7682a09c18a2c>
<http://groups.google.com/group/comp.lang.c++.moderated/tree/browse_frm/thread/93f8b03d69ce1f1/9fd9ff147eb84dfd?rnum=171&hl=en&q=%22sequence+point%22+group%3Acomp.lang.c%2B%2B.moderated&_done=%2Fgroup%2Fcomp.lang.c%2B%2B.moderated%2Fbrowse_frm%2Fthread%2F93f8b03d69ce1f1%2Fbabb029dc1191f71%3Flnk%3Dst%26q%3D%22sequence+point%22+group%3Acomp.lang.c%2B%2B.moderated%26rnum%3D4%26hl%3Den%26#doc_7f1c19db71c178f6>
<http://groups.google.com/group/comp.lang.c++.moderated/tree/browse_frm/thread/5cfa8a73e45306d8/bd1e860cb64c18ee?rnum=1&hl=en&q=%22sequence+point%22+group%3Acomp.lang.c%2B%2B.moderated&_done=%2Fgroup%2Fcomp.lang.c%2B%2B.moderated%2Fbrowse_frm%2Fthread%2F5cfa8a73e45306d8%2F82bf3375236a32b5%3Flnk%3Dst%26q%3D%22sequence+point%22+group%3Acomp.lang.c%2B%2B.moderated%26rnum%3D6%26hl%3Den%26#doc_82bf3375236a32b5>
<http://groups.google.com/group/comp.lang.c++.moderated/tree/browse_frm/thread/bd1f8111a1182882/d20bc8452c3bfdd1?rnum=1&hl=en&q=%22sequence+point%22+group%3Acomp.lang.c%2B%2B.moderated&_done=%2Fgroup%2Fcomp.lang.c%2B%2B.moderated%2Fbrowse_frm%2Fthread%2Fbd1f8111a1182882%2F6c733cbd6849d512%3Flnk%3Dst%26q%3D%22sequence+point%22+group%3Acomp.lang.c%2B%2B.moderated%26rnum%3D7%26hl%3Den%26#doc_d20bc8452c3bfdd1>



---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: jpotter@lhup.edu (John Potter)
Date: Fri, 23 Dec 2005 05:50:00 GMT
Raw View
On Wed, 21 Dec 2005 02:07:04 CST, "Andrei Alexandrescu (See Website For
Email)" <SeeWebsiteForEmail@moderncppdesign.com> wrote:

> Executing the expression that takes most register first is better if the
> other expression leaves a lot of live registers (registers that will be
> read later). In the case of evaluating a function's arguments, there are
> no live registers remaining after pushing the result on the stack, so
> I'm unclear on why the technique would be desirable or more efficient.

What stack?  On my RISC box the arguments are passed in registers and
might be pushed on a software stack by the callee.  There are lots of
live registers when the call is made.

John

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: hyrosen@mail.com (Hyman Rosen)
Date: Fri, 23 Dec 2005 05:52:26 GMT
Raw View
Francis Glassborow wrote:
> And I frequently see abuses of English such as incorrect use of an
> apostrophe, misspelling and incorrect grammar. That does not lead me to
> suggest we need to change English, but it does lead me to complain about
> poor educational standards.

As I'm now in the process of helping my 5-year old son
learn to read, changing English is starting to seem a
lot more attractive. Explaining why the same letter
combinations are pronounced differently in different
words makes him look at me with great suspicion.

> Should we change the C++ Standard to allow 'void main(){' or teach
> programmers to write correct code?

Note that 'void main()' does not cause any actual problems.
Either the compiler accepts it or it doesn't, but it doesn't
make the code behave any differently one way or the other.

> What should we do to fix problems with the inexact nature
> of floating point arithmetic? In each case the answer is
> better education.

And in this case, the problem is inherent in the nature of
the beast. There is nothing that can be done technically to
deal with it - arbitrary precision arithmetic is orders of
magnitude slower, for example.

Order of evaluation difficulty, on the other hand, is a
problem caused solely by the poor definition of programming
languages. It can be fixed in compilers without affecting
users or making them change the way they do things. It
leaves correct programs correct, and additionally corrects
programs with unspecified or undefined behavior.
It's all good.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: SeeWebsiteForEmail@moderncppdesign.com ("Andrei Alexandrescu (See Website For Email)")
Date: Fri, 23 Dec 2005 05:53:48 GMT
Raw View
Francis Glassborow wrote:
> In article <87irtjy5sy.fsf@boost-consulting.com>, David Abrahams
> <dave@boost-consulting.com> writes
>
>>> ... as we constantly see on the newsgroup.
>>
>>
>> It doesn't seem like a very common occurrence to me.  How often does it
>> happen?
>
>
> And I frequently see abuses of English such as incorrect use of an
> apostrophe, misspelling and incorrect grammar. That does not lead me to
> suggest we need to change English, but it does lead me to complain about
> poor educational standards.

Well well well, natural language vs. programming language... a risky way
of comparing.

> Should we change the C++ Standard to allow 'void main(){' or teach
> programmers to write correct code? What should we do to fix problems
> with the inexact nature of floating point arithmetic? In each case the
> answer is better education.

But void main() is a whole different animal. It's a statically checked
error.

> Of course we all have our problem nits. Mine is that I remain convinced
> that making overflow undefined behaviour for signed integer arithmetic
> devalues the concept of undefined behaviour. Well it would if we
> improved education so that most programmers knew of the problem:-)

By and large, I believe that all instances of undefined behavior (pardon
my American spelling) ought to be critically revisited for the next
round of standardization. There might be instances where behavior was
too hard to check statically, too hard to generate efficient code for,
or simply considered unimportant in the 1970s - instances that became
anachronistic.


Andrei

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: SeeWebsiteForEmail@moderncppdesign.com ("Andrei Alexandrescu (See Website For Email)")
Date: Fri, 23 Dec 2005 05:54:49 GMT
Raw View
Momchil Velikov wrote:
> Andrei Alexandrescu (See Website For Email) wrote:
>>Executing the expression that takes most register first is better if the
>>other expression leaves a lot of live registers (registers that will be
>>read later).
>
> Executing the subexpression which takes more temporaries
> first results in fewer temporaries used, because while evaluating
> the other subexpression, one has to keep the result of the
> first one.  I've never heard of a criterion for evaluation order
> involving the number of live variables; just the opposite, I'd rather
> think this number is orthogonal to any order.

It does look like registers should be used early and greedily while
evaluating an expression. See "Register Allocation by Priority-based
Coloring" (Chow & Hennessy, ACM SIGPLAN Notices, 1984), and I quote: "We
came up with the view that a register allocator should strive to make
maximal use of the available registers over each region of the code, and
that the process should continue until either all registers are used up
or there is no more good candidate to allocate."

However, when evaluating *a function's arguments*, after they're put on
the stack, the number of live variables left behind is zero.

>>In the case of evaluating a function's arguments, there are
>>no live registers remaining after pushing the result on the stack, so
>>I'm unclear on why the technique would be desirable or more efficient.
>
>
>  f(a); /* ``a'' is live here */ b = a;

Nonono, that's an entirely different issue. The parameter that has been
passed (by value) to the function is not live anymore. The variable that
has created that parameter continues, of course, to be live.

This brings us to another important source of optimization - common
subexpression elimination. It could be argued that this example could be
better optimized if RTL evaluation was ok:

f(e1, e2, e3);
x = e1;

(ek are arbitrarily complex expressions). In the RTL case, the result of
e1 is readily available and requires only one register to save. In the
LTR case, it's possible that e2 and e3 are too complicated and use up
all of the registers. On the other hand, many compilers spill all
registers upon a function call, so it doesn't matter anyway :o). All in
all, I doubt there's going to be any weeping widow mourning over this
one potential loss of optimization.

>>I quote: "How fast is this calling convention, comparing to __cdecl and
>>__stdcall? Find out for yourselves. Set the compiler option /Gr, and
>>compare the execution time. I didn't find __fastcall to be any faster
>>than other calling conventons, but you may come to different conclusions."
>
>
> I'm not sure what conclusions can be drawn
> from this, quote, article, unquote? What conclusions can be drawn
> from microbenchmarks(?) of a register argument passing convention
> on an architecture criminally deprived of registers?

Do your research. I've done mine, and found two more things:

http://www.hackcraft.net/cpp/MSCallingConventions/

http://www.tantalon.com/pete/cppopt/compiler.htm

The first is too full of typos to be credible :o). The second says that
fastcall is 2% faster. If placing everything in registers is 2% faster,
then I doubt that placing things on the stack in different order would
make a difference.

>>If the functions aren't inlined, the compiler can not decide whether
>>there's any gain from calling them in one sequence or another, so
>>sticking with left-to-right would be as sensible as anything else.
>
>
> Yes, the compiler *can* decide whether there's any gain even if the
> functions are not inlined.  For example, GCC keeps track of the
> register usage of each function, as do the SGI compilers (which even
> make this information available to the linker).  Information about the
> callee's register usage can and does affect register allocation in the
> caller, and thus evaluation order.
>
> Even without detailed register usage information, register allocation
> in the caller can be affected by knowing whether the called function
> is a leaf or not.
>
> Even the number of parameters of the function can be used to
> prefer one order or another in the absence of other information.
>
> And there are a number of (more or less) whole program optimizing
> compilers too.

Ehm, fine. At the end of the day, there are no numbers to tell what this
all boils down to. We need numbers.


Andrei

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: hyrosen@mail.com (Hyman Rosen)
Date: Fri, 23 Dec 2005 05:56:43 GMT
Raw View
Francis Glassborow wrote:
> Should we mandate things in the Standard that will make development of
> new technology possibly harder?

So now order of evaluation can't be fixed not only
because of mythical existing optimizers, but because
of mythical not-yet-existing future tech? Well, fie
on that! There's a problem that needs fixing now.
If future tech requires a change, then our future
selves will need to arrange that.

> We do need to address the issue of sequence points
> because the WG14 experience of trying to write a
> specification for them demonstrates that they are
> far from as clear and well specified as we like to think.

Think of what you're saying. WG14 lacks the ability to
specify the meaning of an expression with precisely the
correct amount of ambiguity! That doesn't seem ridiculous
to you? It's going to put in enormous effort to come up
with bizarrely worded standardese that no one will
understand, and which will later turn out to be subtly
wrong. It is just frustrating to consider how obviously
intelligent people are going so astray.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: SeeWebsiteForEmail@moderncppdesign.com ("Andrei Alexandrescu (See Website For Email)")
Date: Fri, 23 Dec 2005 05:55:24 GMT
Raw View
Francis Glassborow wrote:
> In article <Iru7EG.1Fqu@beaver.cs.washington.edu>, "Andrei Alexandrescu
> (See Website For Email)" <SeeWebsiteForEmail@moderncppdesign.com> writes
>
>> Executing the expression that takes most register first is better if
>> the other expression leaves a lot of live registers (registers that
>> will be read later). In the case of evaluating a function's arguments,
>> there are no live registers remaining after pushing the result on the
>> stack, so I'm unclear on why the technique would be desirable or more
>> efficient.
>
>
> What is this stack you are pushing results onto? Just because many
> compilers pass arguments that way is no reason to suppose they will be.
> Why shouldn't a compiler pass the value of a complex in two registers,
> or a quaternion in four if it can spare them?
>
> You see your objection seems to me to be based on a specific way of
> passing arguments and AFAIR C++ makes no requirements on the way that is
> achieved.

I see what you're saying. So, if arguments are passed in registers and
the compiler knows what register requirements each function called has,
it could spend time figuring out which evaluation sequence spills the
fewest registers.

> Note that this is even more the case where a function is inlined.

That's the same as knowing a function's register usage.

So, I think at the end of the day, it's clear that having more freedom
means we can imagine cases where that freedom could be, and is,
exploited. Qualitatively, it's a no-win discussion for me.

However, it became clear that we're at a sheer lack of hard data with
regards to numbers. We pass anecdotes around, me trying to air as many
pompous words as possible in an attempt to suggest that I know what I'm
talking about. :o)

For us to make progress in the discussion, we should focus on two things:

1. Find numbers. I mean, real numbers on real programs with real
compilers. We ought to abandon anecdotes. With anecdotes, the sustainers
of the status quo will always win because there's something that some
compiler (existing or hypothetical) is (or was or might be) doing, that
is (or was or might be or might have been), yielding code with better
performance (by a negligible little, by some amount, or by a lot).

2. Understand, evaluate, and agree on the costs of this scar on the C++
programming language.

If we forget about (2), then the discussion takes a silly turn. Because
right now, it looks as though if someone shows somehow a 0.1% edge with
some compiler and some program, then the discussion is over.

We should think and talk about the benefits of specifying the order of
evaluation. If the benefits are understood, then maybe a little gain
won't seem as big of a deal.


Andrei

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: tannhauser86549spam@free.fr (Falk Tannhäuser)
Date: Fri, 23 Dec 2005 05:55:46 GMT
Raw View
David Abrahams wrote:
> Also, it seems obvious to me that in some cases at least, evaluating
> the arguments in the same order they need to be pushed onto the stack
> could be important:
>
>       g( f1(), f2(), f3() )
>
>       call  f3
>       push  r0
>       call  f2
>       push  r0
>       call  f1
>       push  r0
>       call  g
>
> If you force an ordering that conflicts with the calling convention,
> you end up with
>
>       sub   sp, #3
>       call  f1
>       mv    r0, sp+1
>       call  f2
>       mv    r0, sp+2
>       call  f3
>       mv    r0, sp+3
>       call  g

What would be the run-time penalty of the second code sequence
compared to the first one? Would it make a *measurable* difference,
especially when the functions f1(), f2(), f3() are not completely
trivial themselves? (If they were trivial and if execution time
matters, one would probably let the compiler inline them, giving it
the latitude to reorder so as to match the calling convention.)

Falk

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: bop@gmb.dk ("Bo Persson")
Date: Fri, 23 Dec 2005 05:55:34 GMT
Raw View
"David Abrahams" <dave@boost-consulting.com> skrev i meddelandet
news:87irtjy5sy.fsf@boost-consulting.com...

> Also, it seems obvious to me that in some cases at least, evaluating
> the arguments in the same order they need to be pushed onto the
> stack
> could be important:
>
>      g( f1(), f2(), f3() )
>
>      call  f3
>      push  r0
>      call  f2
>      push  r0
>      call  f1
>      push  r0
>      call  g
>
> If you force an ordering that conflicts with the calling convention,
> you end up with
>
>      sub   sp, #3
>      call  f1
>      mv    r0, sp+1
>      call  f2
>      mv    r0, sp+2
>      call  f3
>      mv    r0, sp+3
>      call  g
>
> What am I missing?

That we might have to change the calling convention?

>
> Arguments about how the compiler can reorder code at will when it
> can
> detect that the order doesn't matter don't carry a lot of weight
> with
> me, since the compiler can't know anything about what happens behind
> the scenes of a non-inlined function (without whole program analysis
> of course).

On the other hand, if the compiler cannot analyze the code properly,
how do we know that it handles the current situation properly?



Bo Persson


---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: virtanea@mustatilhi.cs.tut.fi (Virtanen Antti)
Date: Fri, 23 Dec 2005 05:58:46 GMT
Raw View
On 2005-12-21, Francis Glassborow <francis@robinton.demon.co.uk> wrote:

>>It doesn't seem like a very common occurrence to me.  How often does it
>>happen?
>
> And I frequently see abuses of English such as incorrect use of an
> apostrophe, misspelling and incorrect grammar. That does not lead me to
> suggest we need to change English, but it does lead me to complain about
> poor educational standards.

On the other hand, they do change the grammar occasionally, when given
good enough reason. For example when there are a lot of people using
an incorrect form of some expression. The amount of abuse depends on
the educational standards, but also on the complexity of the grammar.

> Should we change the C++ Standard to allow 'void main(){' or teach
> programmers to write correct code? What should we do to fix problems
> with the inexact nature of floating point arithmetic? In each case the
> answer is better education.

Do you intend to solve all those nasty buffer overflows with better
education? Are they all caused by incompetent programmers and could
be avoided with better education?

Some people have claimed in this thread that there's a big chunk of code
out there which relies on unspecified order of evaluation and for this
reason the compiler vendors don't want the standard to impose certain
rules. It seems that there was a need for better education years ago.
I don't think that need will go anywhere no matter how much we put
resources to education.

Let's look at two programmers: a Beginner and a Guru. The Beginner is certainly
pleased when the compiler warns him about unspecified behavior and ambiguous
statements. The Guru would never write such statements nor rely on unspecified
order of evaluation inside an expression, so such warnings never bother him.
Thus, giving warnings or errors is clearly a positive thing. Where's the
flaw in this reasoning?

I admit that looser specification gives more freedom to optimisations inside
compilers. I don't know how important this freedom really is, but if it's
important, this is a valid argument for leaving things as they are.

--
// Antti Virtanen -//- http://lokori.iki.fi/ -//- 050-4004278

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: "Momchil Velikov" <momchil.velikov@gmail.com>
Date: Thu, 22 Dec 2005 23:53:40 CST
Raw View
Hyman Rosen wrote:
> Momchil Velikov wrote:
> > This construct is already taken.
>
> It is not. There are unspecified semantics for the
> superficially similar construct that appears in C++.

Yes, it is taken: the "expression" construct exists in C++, and
there's nothing in it that would not remain unspecified even
if the order of evaluation were fixed.

> > Why break it leaving no alternative?
>
> Agreed. Fortunately, specifying order of evaluation
> does not involve breaking anything.

Yes, it breaks the language by removing the means to
specify independence from the order of evaluation.

~velco

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: "Christopher Dearlove" <chris.dearlove@baesystems.com>
Date: Thu, 22 Dec 2005 23:51:42 CST
Raw View
Hyman Rosen wrote:
> Fortunately, specifying order of evaluation
> does not involve breaking anything.

Except existing compilers. And hence it would be ten years or more
before we could usefully rely on this even after specification, whilst
in the transition period I suspect there would likely be more resulting
bugs, not fewer. So it wouldn't be, I submit, a painless process.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: eldiener_no_spam_here@earthlink.net (Edward Diener No Spam)
Date: Fri, 23 Dec 2005 15:43:39 GMT
Raw View
Christopher Dearlove wrote:
> Hyman Rosen wrote:
>
>>Fortunately, specifying order of evaluation
>>does not involve breaking anything.
>
>
> Except existing compilers.

It would not break existing compilers; it would change them. I view the
programmer as more important than the compiler implementors when
designing a programming language.

> And hence it would be ten years or more

I am glad you have measured this so accurately.

> before we could usefully rely on this even after specification, whilst
> in the transition period I suspect there would likely be more resulting
> bugs, not fewer. So it wouldn't be, I submit, a painless process.

Then you might as well argue that any change to any language will change
existing compilers and therefore should never be made.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: francis@robinton.demon.co.uk (Francis Glassborow)
Date: Fri, 23 Dec 2005 15:43:42 GMT
Raw View
In article <40tnh4F1c1imcU1@individual.net>, Bo Persson <bop@gmb.dk>
writes
>That we might have to change the calling convention?

If that were the case we would be considering breaking ABIs and that
could be immensely expensive.


--
Francis Glassborow      ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: francis@robinton.demon.co.uk (Francis Glassborow)
Date: Fri, 23 Dec 2005 15:43:48 GMT
Raw View
In article <1135179561.238816.222930@g47g2000cwa.googlegroups.com>,
Momchil Velikov <momchil.velikov@gmail.com> writes
>> Agreed. Fortunately, specifying order of evaluation
>> does not involve breaking anything.
>
>Yes, it breaks the language by removing means to
>specify independence of the order of evaluation.

It also breaks implementations, and ones that have excellent optimisers
may be broken in ways that require a major redesign of the optimiser.
Such things should not be taken lightly.


--
Francis Glassborow      ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: hyrosen@mail.com (Hyman Rosen)
Date: Sat, 24 Dec 2005 14:18:38 GMT
Raw View
Momchil Velikov wrote:
> Yes, it breaks the language by removing means to
> specify independence of the order of evaluation.

But C++ has no such construct now. Given
     void f(), g();
please tell me how I can write a program
which allows the compiler to call these
in an unspecified order?

Oh, wait, I know!
     int call_f() { f(); return 0; }
     int call_g() { g(); return 0; }
     call_f() + call_g();
We can even write templates to help!

I shall have to go through my existing code
looking for such opportunities to assist the
compiler. Or maybe not.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: hyrosen@mail.com (Hyman Rosen)
Date: Sat, 24 Dec 2005 14:18:57 GMT
Raw View
Christopher Dearlove wrote:
> Except existing compilers. And hence it would be ten years or more
> before we could usefully rely on this even after specification, whilst
> in the transition period I suspect there would likely be more resulting
> bugs, not fewer. So it wouldn't be, I submit, a painless process.

This is a variant on the "we've always done it that way"
argument. Any change needs to be implemented by the compiler
vendors before people can use the new specifications. Until
you get a conforming compiler, don't write order-dependent
code.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: hyrosen@mail.com (Hyman Rosen)
Date: Sat, 24 Dec 2005 14:20:00 GMT
Raw View
Francis Glassborow wrote:
> It also breaks implementations, and ones that have excellent optimisers
> may be broken in ways that require a major redesign of the optimiser.

Again, I point out that optimizers are perfectly
capable of dealing with sequences of statements.
It would be hard to characterize as "excellent"
an optimizer which could not do so. Therefore,
it is difficult to see how requiring a particular
sequence of evaluations in an expression would
break an optimizer. After all, I can rewrite an
expression to make the order explicit. If I have
     int a(), b();
     void f(int, int);
     f(a(), b());
then I can do
     const int &va = a();
     const int &vb = b();
     f(va, vb);
If I have (the currently undefined)
     extern int a[], i;
     a[i++] = ++i;
then I can do
     i++;
     int &ra = a[i];
     ++i;
     ra = i;
So unless we have this mythical smart/stupid optimizer
which is brilliant in the first cases and clueless in
the second, nothing is going to break.
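The rewrites above can be checked directly; here is a minimal sketch (the functions `a`, `b` and the `trace` vector are illustrative, standing in for the post's declarations):

```cpp
#include <cassert>
#include <vector>

static std::vector<int> trace;             // records evaluation order
int a() { trace.push_back(1); return 10; }
int b() { trace.push_back(2); return 20; }

// f(a(), b()) leaves the order of a() and b() unspecified today.  Naming
// the intermediate results pins it down: each declaration is its own
// full-expression, sequenced before the next.
int call_in_order() {
    trace.clear();
    const int va = a();   // runs first
    const int vb = b();   // runs second
    return va + vb;       // stands in for f(va, vb)
}
```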






Author: ben-public-nospam@decadentplace.org.uk (Ben Hutchings)
Date: Sat, 24 Dec 2005 14:20:50 GMT
Raw View
David Abrahams <dave@boost-consulting.com> wrote:
> hyrosen@mail.com (Hyman Rosen) writes:
<snip>
>> This hypothetical high-performance code and smart/stupid compiler is
>> a myth anyway, but people confused by order of evaluation issues are
>> all too real, ...
>
> Okay, maybe I'm one of them, and you can clear it up.  Andrew Koenig
> described why evaluation order might matter to a compiler
> (http://groups.google.com/group/comp.std.c++/msg/54716a09f97cc000).
> Do you have an argument against his post?

It *might* matter, but Sethi-Ullman numbering is not the state of the
art in code optimisation.

> Also, it seems obvious to me that in some cases at least, evaluating
> the arguments in the same order they need to be pushed onto the stack
> could be important:
>
>       g( f1(), f2(), f3() )
>
>       call  f3
>       push  r0
>       call  f2
>       push  r0
>       call  f1
>       push  r0
>       call  g
>
> If you force an ordering that conflicts with the calling convention,
> you end up with
>
>       sub   sp, #3
>       call  f1
>       mv    r0, sp+1
>       call  f2
>       mv    r0, sp+2
>       call  f3
>       mv    r0, sp+3
>       call  g
>
> What am I missing?
<snip>

First, AFAIK the only current architecture that has stack-based
calling conventions even for small numbers of function parameters is
x86, and for performance-sensitive code I suspect that will soon be
obsoleted by x86-64, which has register-based calling conventions due
to its larger register set.

Second, "push" is relatively complex and expensive even on x86;
certainly sufficiently so that current compilers already prefer
sp-relative addressing to pushing in some cases when generating code
for a function call.

--
Ben Hutchings
It is easier to write an incorrect program than to understand a correct one.






Author: bop@gmb.dk ("Bo Persson")
Date: Sat, 24 Dec 2005 14:21:16 GMT
Raw View
""Falk Tannh=E4user"" <tannhauser86549spam@free.fr> skrev i meddelandet=20
news:43a9cb8e$0$21226$626a54ce@news.free.fr...
> David Abrahams wrote:
>> Also, it seems obvious to me that in some cases at least, evaluating
>> the arguments in the same order they need to be pushed onto the stack
>> could be important:
>>
>>       g( f1(), f2(), f3() )
>>
>>       call  f3
>>       push  r0
>>       call  f2
>>       push  r0
>>       call  f1
>>       push  r0
>>       call  g
>>
>> If you force an ordering that conflicts with the calling convention,
>> you end up with
>>
>>       sub   sp, #3
>>       call  f1
>>       mv    r0, sp+1
>>       call  f2
>>       mv    r0, sp+2
>>       call  f3
>>       mv    r0, sp+3
>>       call  g
>
> What would be the run-time penalty of the second code sequence
> compared to the first one? Would it make a *measurable* difference,
> especially when the functions f1(), f2(), f3() are not completely
> trivial themselves? (If they were trivial and if execution time
> matters, one would probably let the compiler inline them, giving
> the latitude for reordering as to match the calling convention.)
>

The alternative is to change the calling convention: pass parameters
in registers, in reverse order.

call    f1
mv    r0, r2
call   f2
mv    r0, r1
call   f3
call   g

That's one instruction less than the original.

*All* existing code would have to be recompiled, of course.  :-)


Bo Persson







Author: SeeWebsiteForEmail@moderncppdesign.com ("Andrei Alexandrescu (See Website For Email)")
Date: Sat, 24 Dec 2005 14:21:26 GMT
Raw View
Christopher Dearlove wrote:
> Hyman Rosen wrote:
>
>>Fortunately, specifying order of evaluation
>>does not involve breaking anything.
>
>
> Except existing compilers. And hence it would be ten years or more
> before we could usefully rely on this even after specification, whilst
> in the transition period I suspect there would likely be more resulting
> bugs, not fewer. So it wouldn't be, I submit, a painless process.

That's what happened with changing the scope of the declaration in the
"for" statement. Yet it made it through because many people thought it's
worth it, altough there was no hard need for it (you could always add an
extra scope or #define FOR if (false) ; else for or whatever).

Change ain't easy, but not impossible.

Andrei






Author: bop@gmb.dk ("Bo Persson")
Date: Sat, 24 Dec 2005 14:20:59 GMT
Raw View
"Francis Glassborow" <francis@robinton.demon.co.uk> skrev i
meddelandet news:KrOt0IC9k9qDFw1x@robinton.demon.co.uk...
> In article <40tnh4F1c1imcU1@individual.net>, Bo Persson <bop@gmb.dk>
> writes
>>That we might have to change the calling convention?
>
> If that were the case we would be considering breaking ABIs and that
> could be immensely expensive.

Agree.

But we are talking about a revised language standard. That would
surely break all binary compatibility.

I personally don't see any benefits from the change, I just argue that
it can be done. Andrei convinced me that the change will not
negatively affect my code, so if others believe it's an improvement,
that's fine with me.

There will be some costs, for sure. But as long at the cost is not
that my old code suddenly runs slower, I don't mind if the language
change is accepted. My vote is - abstain.


Bo Persson







Author: chris@phaedsys.org (Chris Hills)
Date: Sat, 24 Dec 2005 14:22:11 GMT
Raw View
In article <wDTqf.8811$3Z.6249@newsread1.news.atl.earthlink.net>, Edward
Diener No Spam <eldiener_no_spam_here@earthlink.net> writes
>Christopher Dearlove wrote:
>> Hyman Rosen wrote:
>>
>>>Fortunately, specifying order of evaluation
>>>does not involve breaking anything.
>>
>>
>> Except existing compilers.
>
>It would not break existing compilers, it would change them.

Which is the same thing?

>I view the
>programmer as more important than the compiler implementors when
>designing a programming language.

Good for you. Many have taken this view and their dead languages can be
found littering the pages of history. They usually died because they
could not convince enough people to write decent tools for the language.

>> And hence it would be ten years or more
>
>I am glad you have measured this so accurately.

Ten years or more is somewhere between 10 and infinity. In other words
not less than 10 years. This is reasonable, as virtually no one has
implemented C99 in the 6 years it has been out (and very few are pushing
for it); people are STILL hanging on to C90 (and even K&R) some 15
years later.

I think at least 10 years is a good estimate as it will require
substantial engineering effort for most compiler writers. These are the
people you think of as "less important". Remind me, how were you going
to persuade them to do all this work?

>> before we could usefully rely on this even after specification, whilst
>> in the transition period I suspect there would likely be more resulting
>> bugs, not fewer. So it wouldn't be, I submit, a painless process.
>
>Then you might as well argue that any change to any language will change
>existing compilers and therefore should never be made.

This could be argued. It is a case of balancing. Is the new feature that
important that users demand it and compiler companies will spend time
and effort on it?

This is a commercial problem as much as an engineering one.  History is
littered with dead languages that are academically good but never made
it commercially.



--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
/\/\/ chris@phaedsys.org      www.phaedsys.org \/\/\
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/








Author: chris@phaedsys.org (Chris Hills)
Date: Sat, 24 Dec 2005 15:31:56 GMT
Raw View
In article <200512231602.jBNG2KuC038844@horus.isnic.is>, Hyman Rosen
<hyrosen@mail.com> writes
>Christopher Dearlove wrote:
>> Except existing compilers. And hence it would be ten years or more
>> before we could usefully rely on this even after specification, whilst
>> in the transition period I suspect there would likely be more resulting
>> bugs, not fewer. So it wouldn't be, I submit, a painless process.
>
>This is a variant on the "we've always done it that way"
>argument.

Not at all. All change requires effort. Those making the effort have to
be convinced that it is worth making or that the change is
even practical to implement.

>Any change needs to be implemented by the compiler
>vendors before people can use the new specifications. Until
>you get a conforming compiler, don't write order-dependent
>code.

Yes. Exactly: you need to convince the compiler writers, who by the way
ARE programmers. So if the changes make sense to "programmers" then they
will make sense to compiler writers.

However, although most commercial compiler writers are some of the better
programmers, that does not stop a multitude of lesser programmers, who don't
really understand the problem or the answers, from asking for things that in
reality are pointless.









Author: "Momchil Velikov" <momchil.velikov@gmail.com>
Date: Sat, 24 Dec 2005 10:08:51 CST
Raw View
Ben Hutchings wrote:
> David Abrahams <dave@boost-consulting.com> wrote:
> > hyrosen@mail.com (Hyman Rosen) writes:
> <snip>
> >> This hypothetical high-performance code and smart/stupid compiler is
> >> a myth anyway, but people confused by order of evaluation issues are
> >> all too real, ...
> >
> > Okay, maybe I'm one of them, and you can clear it up.  Andrew Koenig
> > described why evaluation order might matter to a compiler
> > (http://groups.google.com/group/comp.std.c++/msg/54716a09f97cc000).
> > Do you have an argument against his post?
>
> It *might* matter, but Sethi-Ullman numbering is not the state of the
> art in code optimisation.

And still, Sethi-Ullman numbering is probably the "most ordered"
compared to Aho-Tjiang dynamic programming code generation,
BURG/IBURG-style pattern matching, or the results of instruction scheduling.

~velco






Author: "Momchil Velikov" <momchil.velikov@gmail.com>
Date: Sat, 24 Dec 2005 10:08:41 CST
Raw View
Hyman Rosen wrote:
> Momchil Velikov wrote:
> > Yes, it breaks the language by removing means to
> > specify independence of the order of evaluation.
>
> But C++ has no such construct now.

Sure it has, it is called "expression". It may not be universal,
but it sure is there.

~velco






Author: Edward Diener No Spam <eldiener_no_spam_here@earthlink.net>
Date: Sat, 24 Dec 2005 18:19:55 CST
Raw View
Chris Hills wrote:
> In article <wDTqf.8811$3Z.6249@newsread1.news.atl.earthlink.net>, Edward
> Diener No Spam <eldiener_no_spam_here@earthlink.net> writes
>
>>Christopher Dearlove wrote:
>>
>>>Hyman Rosen wrote:
>>>
>>>
>>>>Fortunately, specifying order of evaluation
>>>>does not involve breaking anything.
>>>
>>>
>>>Except existing compilers.
>>
>>It would not break existing compilers, it would change them.
>
>
> Which is the same thing?

It is the wording you used, as if making a change to the way C++
evaluates currently unspecified behavior is somehow going to cause the
compilers to no longer work at all.

>
>
>>I view the
>>programmer as more important than the compiler implementors when
>>designing a programming language.
>
>
> Good for you. Many have taken this view and their dead languages can be
> found littering the pages of history. They usually died because they
> could not convince enough people to write decent tools for the language.

Lots of dead computer languages become that way because they are
ossified into never changing, as programming models and programmer
techniques get better. Other computer languages, which change fairly
rapidly, garner a great deal of support from programmers and language
implementors, both of whom welcome the better use of the language.

>
>
>>>And hence it would be ten years or more
>>
>>I am glad you have measured this so accurately.
>
>
> Ten years or more is somewhere between 10 and infinity. In other words
> not less than 10 years.

Your claim, that it would take 10 years for a compiler to change so that
it could evaluate expressions in a specified order which are currently
unspecified, is preposterous.

>>Then you might as well argue that any change to any language will change
>>existing compilers and therefore should never be made.
>
>
> This could be argued. It is a case of balancing. Is the new feature that
> important that users demand it and compiler companies will spend time
> and effort on it?

I agree with you here, but I would place users' demands over compiler
companies' willingness. It is after all the programmer who has to use
the language, and it is liking what the language gives them, as far as
corresponding to their model of a good language for a particular domain,
that drives the creation and sale of compiler implementations.

>
> This is a commercial problem as much as an engineering one.

Really. So if we do it for GCC, which is free, then it no longer becomes
a commercial problem? Granted that we do not want C++, or any language,
so arcane and complex in its rules that no company will want to put out
a conforming compiler for it, still companies will put out conforming
versions of a computer language if they think it is beneficial for their
customers and their sales. A well-reasoned and welcome change in the
direction of beneficiality should not cause companies to refuse to
support the change.






Author: francis@robinton.demon.co.uk (Francis Glassborow)
Date: Sun, 25 Dec 2005 12:54:36 GMT
Raw View
>>>>>Fortunately, specifying order of evaluation
>>>>>does not involve breaking anything.

If a specified order of evaluation were to be of such value, why has no
major compiler implementor provided it? There is nothing in the C++
Standard that prohibits an implementation from both specifying and
documenting an order of evaluation, yet, AFAIK, not one of them has done
so.

Could it be that there are real optimisation costs that no one is
willing to pay in the highly competitive world of development tools? Or
perhaps the judgement of the implementors is that it is of no great
commercial value.

I do not know the answers but just raise the questions.

--
Francis Glassborow      ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects






Author: "P.J. Plauger" <pjp@dinkumware.com>
Date: Sun, 25 Dec 2005 11:55:45 CST
Raw View
"Edward Diener No Spam" <eldiener_no_spam_here@earthlink.net> wrote in
message news:Wnlrf.9458$3Z.2475@newsread1.news.atl.earthlink.net...

> Your claim, that it would take 10 years for a compiler to change so that
> it could evaluate expressions in a specified order which are currently
> unspecified, is preposterous.

Perhaps, but I thought the claim was somewhat different. Should either
the C or C++ committee decide next March/April, by unanimous vote,
that their language will most certainly have this feature in its
next revision, it would easily be ten years before most programmers
could depend on a typical compiler supporting it. IOW, it won't
become a practical part of the language for most people for another
decade.

If you think that's preposterous, consider:

-- wide-character functions and I/O in C (approved in 1992,
standardized since 1994 in C, standardized since 1998 in C++,
fully implemented so far only by IBM and Dinkumware last I
looked)

-- defining C headers in namespace std for C++ (approved in 1994,
standardized since 1998, fully implemented so far only by a few
compilers)

-- separate compilation of templates (approved in 1996, standardized
since 1998, implemented so far only by EDG)

-- varargs/restrict/complex/etc. in C (approved circa 1997,
standardized since 1999, implemented so far only by Dinkumware,
EDG, and Sun)

>>>Then you might as well argue that any change to any language will change
>>>existing compilers and therefore should never be made.
>>
>>
>> This could be argued. It is a case of balancing. Is the new feature that
>> important that users demand it and compiler companies will spend time
>> and effort on it?
>
> I agree with you here but I would place user's demands over compiler
> companies' willingness. It is after all the programmer who has to use the
> language, and like what the language gives them as far as corresponding to
> their model of a good language for a particular domain, which drives the
> creation and sale of compiler implementations.

Yep, that was the argument given by the C++ committee on several
occasions when they dismissed vendor warnings. See list above.
You need both programmer desire and vendor willingness to make
a market.

>> This is a commercial problem as much as an engineering one.
>
> Really. So if we do it for GCC, which is free, then it no longer becomes a
> commercial problem ?

No, then it becomes yet another GCC dialect problem. Besides,
free software is but a small component of a software-development
effort, which pays for change in many ways.

>                       Granted that we do not want C++, or any language, so
> arcane and complex in its rules that no company will want to put out a
> conforming compiler for it,

Good thing we don't have that situation now.

>                               still companies will put out conforming
> versions of a computer language if they think it is beneficial for their
> customers and their sales. A well-reasoned and welcome change in the
> direction of beneficiality should not cause companies to refuse to support
> the change.

Right, that's Motherhood when you describe it with warm fuzzy
adjectives. Finding the right balance is rather harder, as the
above list of lapses demonstrates.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com







Author: Hyman Rosen <hyrosen@mail.com>
Date: Tue, 27 Dec 2005 14:49:45 CST
Raw View
P.J. Plauger wrote:
> it won't become a practical part of the language
 > for most people for another decade.

That's not true of all changes to the language.
For example, when template metaprogramming became
popular, it took far less time than that for
compiler vendors to issue upgrades which allowed
for more levels of template instantiation and greater
inlining. So you can't know a priori that specifying
evaluation order will take that long to reach users.

> Finding the right balance is rather harder, as the
> above list of lapses demonstrates.

Since we're not about to have heavenly voices dictate
the One True Way, the best we can do is debate the
issue amongst ourselves and make a decision. And I
wouldn't worry too much about those "lapses". If the
committee adopts a feature that vendors decide not to
implement, the users aren't any worse off than they
were before. If the committee fails to adopt a feature
just out of such worry, then the users never stand a
chance of getting it.






Author: chris@phaedsys.org (Chris Hills)
Date: Tue, 27 Dec 2005 20:48:53 GMT
Raw View
In article <Wnlrf.9458$3Z.2475@newsread1.news.atl.earthlink.net>, Edward
Diener No Spam <eldiener_no_spam_here@earthlink.net> writes
>Chris Hills wrote:
>> In article <wDTqf.8811$3Z.6249@newsread1.news.atl.earthlink.net>, Edward
>> Diener No Spam <eldiener_no_spam_here@earthlink.net> writes
>>
>>>Christopher Dearlove wrote:
>>>
>>>>Hyman Rosen wrote:
>>>>
>>>>
>>>>>Fortunately, specifying order of evaluation
>>>>>does not involve breaking anything.
>>>>
>>>>
>>>>Except existing compilers.
>>>
>>>It would not break existing compilers, it would change them.
>>
>>
>> Which is the same thing?
>
>It is the wording you used, as if making a change to the way C++
>evaluates currently unspecified behavior is somehow going to cause the
>compilers to no longer work at all.

It will break the compilers. If it is unspecified they can do it any way
they like. As soon as you specify it, all those who do not fit your
specification are broken.

You also have the problem that as soon as you decide to implement this
in the standard you are going to have all the commercial vendors pushing
their version and screaming "foul!!!" and commercial bias if you use
someone else's commercial system.


>>>I view the
>>>programmer as more important than the compiler implementors when
>>>designing a programming language.
>>
>>
>> Good for you. Many have taken this view and their dead languages can be
>> found littering the pages of history. They usually died because they
>> could not convince enough people to write decent tools for the language.
>
>Lots of dead computer languages become that way because they are
>ossified into never changing, as programming models and programmer
>techniques get better. Other computer languages, which change fairly
>rapidly, garner a great deal of support from programmers and language
>implementors, both of whom welcome the better use of the language.

You mean as with C, where the ISO panel produced C99 and 98% of the
world's compiler vendors have ignored most of it...

As with ISO BASIC. The ISO BASIC standard is now dead because the
commercial world ignored it.

>>>>And hence it would be ten years or more
>>>
>>>I am glad you have measured this so accurately.
>>
>>
>> Ten years or more is somewhere between 10 and infinity. In other words
>> not less than 10 years.
>
>Your claim, that it would take 10 years for a compiler to change so that
>it could evaluate expressions in a specified order which are currently
>unspecified, is preposterous.

As an ISO C and C++ committee member who works with several compiler
vendors I can tell you it is not preposterous. It is going to take at
least a year for the panels to agree a solution, then you have a year or
two before compiler vendors decide to adopt it. So we are looking at at
least 4 years before it is likely to be available. It then takes a while
for things to filter through.

For example C99 is hardly used and AFAIK there is no single full
implementation of it yet. That started in about 1995... What year are
we in now?

>>>Then you might as well argue that any change to any language will change
>>>existing compilers and therefore should never be made.
>>
>>
>> This could be argued. It is a case of balancing. Is the new feature that
>> important that users demand it and compiler companies will spend time
>> and effort on it?
>
>I agree with you here but I would place user's demands over compiler
>companies' willingness.

Well, the users can only use what the compiler writers produce. So if the
users demand it in sufficient numbers, the compiler writers will do it.

However, you have to remember that the compiler writers are also "users"
and "programmers" as well.

So how come you place the demands of the "users" over those of the skilled
programmers who write the compilers?

>It is after all the programmer

such as the compiler writer...

>who has to use
>the language, and like what the language gives them as far as
>corresponding to their model of a good language for a particular domain,
>which drives the creation and sale of compiler implementations.

So if there is a demand for it then the compiler writers will do it.
This explains why there are no C99 compilers (well, no full
implementations in widespread use).

>
>>
>> This is a commercial problem as much as an engineering one.
>
>Really. So if we do it for GCC, which is free, then it no longer becomes
>a commercial problem ?

Not at all; it means that GCC will be a non-standard compiler with its
own extensions. The commercial problem still remains.

> Granted that we do not want C++, or any language,
>so arcane and complex in its rules that no company will want to put out
>a conforming compiler for it,

as has happened with other languages.

> still companies will put out conforming
>versions of a computer language if they think it is beneficial for their
>customers and their sales.

Yes. So a complex change with no real benefit is going to mean that
the standard diverges from its user base and no one implements it (see
C99 again).

>A well-reasoned and welcome change in the
>direction of beneficiality should not cause companies to refuse to
>support the change.

Yes. So what have you learnt, Grasshopper?


1. If it is a good idea, it will take a decade to come into effect.
2. If you have no support, it will not come into effect.
3. This idea of yours has no support from those who understand its
ramifications.










Author: hyrosen@mail.com (Hyman Rosen)
Date: Fri, 16 Dec 2005 15:52:41 GMT
Raw View
David Abrahams wrote:
> So you seem to be starting with the axiom that
>    foo( auto_ptr<T>( new T ), auto_ptr<T>( new T ) )
> isn't "unsafe at any moment."  But of course it is completely unsafe
> or we wouldn't be having this discussion.  And you find that to
> conflict with your axiom.  So that line of reasoning seems circular to
> me.
>
> All that said, I can see an argument for your position.  I think you'd
> like exception-safety to be "context free," so that if expression1 and
> expression2 are each exception-safe, then some expression3 composed of
> expression1 and expression2 is also exception-safe.

What I would like is for
     void foo(auto_ptr<T>, auto_ptr<T>);
     foo(new T, new T);
also not to be unsafe at any moment.

Your composition explanation isn't really meaningful.
After all, you are *supposed* to write
     auto_ptr<T> p(new T);
so writing the unsafe 'new T' as part of safe code is
expected. The problem with the current language definition
is that the compiler has too much freedom to reorder
expressions, and that leads to unsafe code. You are left
with explaining to people that while
     void bar(auto_ptr<T>);
     bar(new T);
is safe and cannot leak resources, the same is not true for
the case above. That's not a situation that you want to be
in. It badly fails the principle of least astonishment. It
badly fails those people whom you have not gotten to in
time, because it takes a C++ genius to realize that there
is a problem at all.

And best of all, you get the fix for free while you are
getting rid of the rest of the order of evaluation problems.
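The safe idiom the post alludes to — naming the owning smart pointer in its own statement before anything else can intervene — can be sketched as follows, using `std::unique_ptr` as a modern stand-in for `auto_ptr` (the `live` counter and the helper names are illustrative):

```cpp
#include <cassert>
#include <memory>
#include <utility>

static int live = 0;                       // counts currently allocated Ts
struct T {
    T()  { ++live; }
    ~T() { --live; }
};

static void bar(std::unique_ptr<T>) {}     // stands in for bar(auto_ptr<T>)

// Bind the raw allocation to a named owner in its own full-expression, so
// no other argument evaluation can run between 'new T' and the smart
// pointer taking ownership.  Returns the live count afterwards.
int safe_call() {
    {
        std::unique_ptr<T> p(new T);
        bar(std::move(p));
    }
    return live;                           // 0 if nothing leaked
}
```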






Author: howard.hinnant@gmail.com (Howard Hinnant)
Date: Sat, 17 Dec 2005 04:14:40 GMT
Raw View
In article <1134551229.842582.80670@z14g2000cwz.googlegroups.com>,
 "Momchil Velikov" <momchil.velikov@gmail.com> wrote:

> To restate the question, will the rvalue proposal [1] enable a
> programmer to perform source-to-source transformations
> whenever (s)he wants to specify a concrete evaluation order
> of subexpressions and function arguments ?

I'm coming to this discussion late, so I'm not positive I understand
your question.  But as a player in the rvalue proposal, I will attempt
an answer anyway:

The rvalue proposal does not address evaluation order of subexpressions
and function arguments.  All it does is add a new type of reference,
named "rvalue reference", but spelled A&& (as opposed to A&).  The
existing reference type is named "lvalue reference" just to distinguish
it, but its behavior is unchanged.  The rvalue reference behaves just
like the lvalue reference except that you can bind a temporary to a
non-const rvalue reference.  One can overload on the two types of
reference.  The overloading rules have been crafted to address both
"move semantics" and "perfect forwarding".

> [1] I guess by "rvalue proposal" you mean this
> http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2004/n1690.html

Yes, that is one of the papers.  The entire list is:

The originals:
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2002/n1377.htm
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2002/n1385.htm

A review:

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2004/n1690.html

A library impact survey:

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1771.html

Proposed wording for core:

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1855.html
(there will be another minor revision to this one).

Proposed wording for library:

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1856.html
through
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1862.html

Adds rvalue/lvalue overloading on *this:

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1784.htm

-Howard






Author: David Abrahams <dave@boost-consulting.com>
Date: Fri, 16 Dec 2005 22:17:59 CST
Raw View
SeeWebsiteForEmail@moderncppdesign.com ("Andrei Alexandrescu (See Website For Email)") writes:

> David Abrahams wrote:
>> "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail@moderncppdesign.com> writes:
>>> Hmmm... I think Hyman had a different definition of "code that
>>> tries to be safe". The meaning would be "code that doesn't
>>> manipulate at any moment bald pointers". You see, the thing is that
>>> the code
>>>
>>> function_taking_two_auto_ptrs(auto_ptr<int>(new int),
>>> auto_ptr<int>(new int))
>>>
>>> does not expose at any moment any bald pointer,
>> Of course      new int
>> exposes a bald pointer.  That's *precisely* the root cause of the
>> problem.  That should be encapsulated in
>>      new_<auto_ptr<int> >()
>> It's even less typing.
>
> Well I guess it depends on how we define "expose". For example, I
> believe the code below doesn't:
>
> auto_ptr<int> sp(new int);
>
> The programmer calls new to create a temporary pointer that's
> immediately passed to a class that manages it. No stars in sight, no
> trouble, end of story.

It's not the presence of stars in the code that causes the problem.

     struct might_throw
     {
         might_throw(int*) throw(std::exception);
     };

     int f(might_throw);
     int x = f(new int);

> If you claim the line above does expose a bald pointer,

I do.

> you have a different definition, our criteria don't compare, end of
> discussion.

So we can't discuss the validity of your criteria?

> If we agree that the code above is sensible,

It's sensible, because it works.  And it works because of what you
know about the definition of auto_ptr.

> then I claim it is sensible that also an unnamed temporary:
>
> auto_ptr<int>(new int)
>
> oughtn't leak memory,

Sensible, yes.  But if people stop initializing auto_ptr with bare
pointers, it's a non-issue.

> and I'd also claim that it's exactly because of
> unspecified order of evaluation of funtion arguments that:
>
> extern void f(auto_ptr<int>, auto_ptr<int>);
> f(auto_ptr<int>(new int), auto_ptr<int>(new int));
>
> might leak. There's no other context in which the leak is possible
> that I can imagine.

Which leak?  Leaks are obviously possible in other contexts.  I don't
understand how to tell whether any of those are "the leak."

Anyway, I'd rather cure the fundamental leaking problem than fix order
of evaluation, which I deem to be less important.

>> So you seem to be starting with the axiom that
>>    foo( auto_ptr<T>( new T ), auto_ptr<T>( new T ) )
>> isn't "unsafe at any moment."  But of course it is completely unsafe
>> or we wouldn't be having this discussion.  And you find that to
>> conflict with your axiom.  So that line of reasoning seems circular to
>> me.
>
> I'm starting simply with the desideratum that said line *oughtn't be*
> unsafe at any moment. I desire that because there are no other contexts
> in which auto_ptr<T>( new T ) could leak.

Is this a different context?

  template <class T>
  int f( T x, int* y = new int);

  int x = f( auto_ptr<T>( new T ) );


>> All that said, I can see an argument for your position.  I think
>> you'd like exception-safety to be "context free," so that if
>> expression1 and expression2 are each exception-safe, then some
>> expression3 composed of expression1 and expression2 is also
>> exception-safe.
>
> That sounds like a very nice formalization of a worthy goal. And I
> believe that function argument evaluation is the only instance where
> that goal is unrealized.

Oh, maybe that's what you mean by context.

What about this?

     *auto_ptr<int>( new int ) + *auto_ptr<int>( new int );

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com






Author: dave@boost-consulting.com (David Abrahams)
Date: Sat, 17 Dec 2005 04:15:28 GMT
Raw View
hyrosen@mail.com (Hyman Rosen) writes:

> David Abrahams wrote:
>> So you seem to be starting with the axiom that
>>    foo( auto_ptr<T>( new T ), auto_ptr<T>( new T ) )
>>
>> isn't "unsafe at any moment."  But of course it is completely
>> unsafe or we wouldn't be having this discussion.  And you find that
>> to conflict with your axiom.  So that line of reasoning seems
>> circular to me.  All that said, I can see an argument for your
>> position.  I think you'd like exception-safety to be "context
>> free," so that if expression1 and expression2 are each
>> exception-safe, then some expression3 composed of expression1 and
>> expression2 is also exception-safe.
>
> What I would like is for
>     void foo(auto_ptr<T>, auto_ptr<T>);
>     foo(new T, new T);
> also not to be unsafe at any moment.

That won't compile at any moment, so it's perfectly safe ;-)

> Your composition explanation isn't really meaningful.

I disagree.

> After all, you are *supposed* to write
>     auto_ptr<T> p(new T);
> so writing the unsafe 'new T' as part of safe code is
> expected.

a) That's what I'm talking about.  I understand why, if you're
   supposed to write auto_ptr<T> p(new T), you'd like to be able to
   use auto_ptr<T>(new T) in an expression safely.

b) Yes, but I want to change that.  You shouldn't be *supposed* to
   write auto_ptr<T> p(new T).

> The problem with the current language definition
> is that the compiler has too much freedom to reorder
> expressions, and that leads to unsafe code.

I disagree.  The handling of bare unmanaged resources is a more
fundamental reason for the leakage problem.  It's easy to prove that:
if you add evaluation ordering, you are still left with many other
cases where handling unmanaged resources leads to the same kind of
leakage problem.  However, if you take away the use of unmanaged
resources, the leakage problem goes away completely.

> You are left
> with explaining to people that while
>     void bar(auto_ptr<T>);
>     bar(new T);
> is safe and cannot leak resources, the same is not true for
> the case above.

Let's just tell people not to use the builtin "new" except with
extreme care.

> That's not a situation that you want to be in. It badly fails the
> principle of least astonishment. It badly fails those people to whom
> you have not gotten to in time, because it takes a C++ genius to
> realize that there is a problem at all.
>
> And best of all, you get the fix for free while you are
> getting rid of the rest of the order of evaluation problems.

While leaving the unmanaged resource problems alone.  I don't believe
"the rest of the order of evaluation problems" are nearly as serious
as the leakage problems, and I am doubly predisposed towards solving
those because it can be done with a library and no core language
changes.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com






Author: "Momchil Velikov" <momchil.velikov@gmail.com>
Date: Sat, 17 Dec 2005 17:59:35 CST
Raw View
Howard Hinnant wrote:
> In article <1134551229.842582.80670@z14g2000cwz.googlegroups.com>,
>  "Momchil Velikov" <momchil.velikov@gmail.com> wrote:
>
> > To restate the question, will the rvalue proposal [1] enable a
> > programmer to perform source-to-source transformations
> > whenever (s)he wants to specify a concrete evaluation order
> > of subexpressions and function arguments ?
>
> I'm coming to this discussion late, so I'm not positive I understand
> your question.  But as a player in the rvalue proposal, I will attempt
> an answer anyway:
>
> The rvalue proposal does not address evaluation order of subexpressions
> and function arguments.  All it does is add a new type of reference,
> named "rvalue reference", but spelled A&& (as opposed to A&).  The
> existing reference type is named "lvalue reference" just to distinguish
> it, but its behavior is unchanged. The rvalue reference behaves just
> like the lvalue reference except that you can bind a temporary to a
> non-const rvalue reference.

Yes, I understand that.

The context of my question is the following by Andrei Alexandrescu:

> Not sure I understand. For the call (expr0)(arg1, arg2, ..., argn) the
> evaluation algorithm should be as if the following happens:
>
> 1. Evaluate expr0 resulting in a function f
> 2. For each i in 1..n in this order, evaluate argi resulting in a value vi
> 3. Invoke f(v1, v2, ..., vn)
>
> It's a pity that the intended semantics can't be easily expressed as a
> source-to-source transformation. (The problem is that rvalue and lvalue
> expressions would lead to different types of temporaries.)

Again, will the rvalue proposal enable the programmer to perform
source-to-source transformations like the above?

I find such a source-to-source transformation *much* preferable to
the dubious attempts to impose arbitrary restrictions on the compiler
for no good reason.

~velco






Author: howard.hinnant@gmail.com (Howard Hinnant)
Date: Sat, 17 Dec 2005 23:59:38 GMT
Raw View
In article <uwti43m41.fsf@boost-consulting.com>,
 dave@boost-consulting.com (David Abrahams) wrote:

> > The problem with the current language definition
> > is that the compiler has too much freedom to reorder
> > expressions, and that leads to unsafe code.
>
> I disagree.  The handling of bare unmanaged resources is a more
> fundamental reason for the leakage problem.  It's easy to prove that:
> if you add evaluation ordering, you are still left with many other
> cases where handling unmanaged resources leads to the same kind of
> leakage problem.  However, if you take away the use of unmanaged
> resources, the leakage problem goes away completely.

I suspect we all agree that handling unmanaged resources is risky.  But
the problem is that:

f(auto_ptr<T>(new T), g());

looks *very much* like code that is only handling managed resources.
Joe Coder is going to look at that and say:  Good job!  You're safely
handling your resources!

Banning "new T" in C++0X isn't an option.

Smart pointer factory functions in C++0X sound great.  Let's have them
(I hope to see your proposal soon).  But that's only a partial solution.
We also very much need this to be as safe as it looks:

f(auto_ptr<T>(new T), g());

And to get to that point, mandating left-to-right, or right-to-left is
overkill.  We only need to mandate that there is a sequence point
between the argument evaluations.

-Howard






Author: hyrosen@mail.com (Hyman Rosen)
Date: Sun, 18 Dec 2005 01:20:11 GMT
Raw View
Howard Hinnant wrote:
> And to get to that point, mandating left-to-right, or right-to-left is
> overkill.  We only need to mandate that there is a sequence point
> between the argument evaluations.

No. It's sequence points that led to this nonsense in the first place.
Order of evaluation *is* the problem. The fact that it leads to code
which is unsafe for resource handling is only one manifestation of the
confusion and errors it causes.






Author: dave@boost-consulting.com (David Abrahams)
Date: Sun, 18 Dec 2005 01:20:51 GMT
Raw View
howard.hinnant@gmail.com (Howard Hinnant) writes:

> In article <uwti43m41.fsf@boost-consulting.com>,
>  dave@boost-consulting.com (David Abrahams) wrote:
>
>> > The problem with the current language definition
>> > is that the compiler has too much freedom to reorder
>> > expressions, and that leads to unsafe code.
>>
>> I disagree.  The handling of bare unmanaged resources is a more
>> fundamental reason for the leakage problem.  It's easy to prove that:
>> if you add evaluation ordering, you are still left with many other
>> cases where handling unmanaged resources leads to the same kind of
>> leakage problem.  However, if you take away the use of unmanaged
>> resources, the leakage problem goes away completely.
>
> I suspect we all agree that handling unmanaged resources is risky.  But
> the problem is that:
>
> f(auto_ptr<T>(new T), g());
>
> looks *very much* like code that is only handling managed resources.
> Joe Coder is going to look at that and say:  Good job!  You're safely
> handling your resources!
>
> Banning "new T" in C++0X isn't an option.

No, of course it isn't.  But neither is it an option to ban many other
dangerous, but occasionally necessary, constructs.

We could very easily make "new T" something that is hardly ever to be
used directly in "good code," just like, say, "new (p) T" and "~T",
and, for that matter, "delete p" are today.  IMO "new T" should be
a red flag in code reviews.

> Smart pointer factory functions in C++0X sound great.  Let's have
> them (I hope to see your proposal soon).

EWG or LWG?

> But that's only a partial solution.  We also very much need this to
> be as safe as it looks:
>
> f(auto_ptr<T>(new T), g());

Why?  More importantly, why does that look safe to you?

It doesn't look safe to me.  It's a complicated expression involving a
bare unmanaged resource.  Forget for a moment that you already know a
lot about auto_ptr.  The general case is something like:

  f( fancy_component<T>( create_handle<T>() ), g() );

There's a lot going on in that line.

> And to get to that point, mandating left-to-right, or right-to-left is
> overkill.  We only need to mandate that there is a sequence point
> between the argument evaluations.

And my point is that it's not the best way to solve the biggest
problem it purports to solve (leaks).  It's not even a complete
solution to that problem.  It's a big hammer that only hits the edge
of the nail head.  I'm not sure if it bends the nail or drives it in
partway, but it makes me nervous.  My thumb is nearby ;-)

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com






Author: howard.hinnant@gmail.com (Howard Hinnant)
Date: Sun, 18 Dec 2005 01:21:13 GMT
Raw View
In article <1134817134.381520.102980@o13g2000cwo.googlegroups.com>,
 "Momchil Velikov" <momchil.velikov@gmail.com> wrote:

> > Not sure I understand. For the call (expr0)(arg1, arg2, ..., argn) the
> > evaluation algorithm should be as if the following happens:
> >
> > 1. Evaluate expr0 resulting in a function f
> > 2. For each i in 1..n in this order, evaluate argi resulting in a value vi
> > 3. Invoke f(v1, v2, ..., vn)
> >
> > It's a pity that the intended semantics can't be easily expressed as a
> > source-to-source transformation. (The problem is that rvalue and lvalue
> > expressions would lead to different types of temporaries.)
>
> Again, will the rvalue proposal enable the programmer to perform
> source-to-source transformations like the above?

Again, no.  Unless I'm severely misunderstanding your question (which is
of course quite possible).

-Howard






Author: "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail@moderncppdesign.com>
Date: Sat, 17 Dec 2005 23:28:16 CST
Raw View
David Abrahams wrote:
>>But that's only a partial solution.  We also very much need this to
>>be as safe as it looks:
>>
>>f(auto_ptr<T>(new T), g());
>
>
> Why?  More importantly, why does that look safe to you?

Because this:

f(auto_ptr<T>(new T));

is safe. Because this:

x = auto_ptr<T>(new T)->Compute();

is safe. Because this:

PtrVector v[] = { smart_ptr<T>(new T), smart_ptr<T>(new T) };

is safe. Because this:

a = ((sp = auto_ptr<T>(new T)), sp->Compute());

is safe. Because this:

a = (auto_ptr<Lock>(new Lock), DoLockedComputation());

is safe.

> It doesn't look safe to me.  It's a complicated expression involving a
> bare unmanaged resource.  Forget for a moment that you already know a
> lot about auto_ptr.  The general case is something like:
>
>   f( fancy_component<T>( create_handle<T>() ), g() );
>
> There's a lot going on in that line.

Yah, but there's a lot going on in this line, too:

f( fancier_component<T>( fancy_component<T>( create_handle<T>() ) ) );

which is safer than the safest vault in the safest Swiss bank.

[Howard wrote]
>>And to get to that point, mandating left-to-right, or right-to-left is
>>overkill.  We only need to mandate that there is a sequence point
>>between the argument evaluations.

Howard, I can't believe you wrote that :o(.

> And my point is that it's not the best way to solve the biggest
> problem it purports to solve (leaks).  It's not even a complete
> solution to that problem.  It's a big hammer that only hits the edge
> of the nail head.  I'm not sure if it bends the nail or drives it in
> partway, but it makes me nervous.  My thumb is nearby ;-)

Leaks are not the biggest problem of unspecified order of
evaluation. They are a pretty good showcase, though.


Andrei






Author: nagle@animats.com (John Nagle)
Date: Sun, 18 Dec 2005 05:40:16 GMT
Raw View
David Abrahams wrote:

> howard.hinnant@gmail.com (Howard Hinnant) writes:
>
>
>>In article <uwti43m41.fsf@boost-consulting.com>,
>> dave@boost-consulting.com (David Abrahams) wrote:
>>
>>
>>>>The problem with the current language definition
>>>>is that the compiler has too much freedom to reorder
>>>>expressions, and that leads to unsafe code.

What's really going on here is that the sequence
of events from the "new" to the construction of the
auto_ptr is an atomic operation.  Either
it needs to run to completion or it needs to have
no effect.  The way you protect an atomic
operation is to enclose it in a try block.  You
can't do that inside an expression.

Actually, the problem is that the compiler doesn't
know enough about side effects.  Properly,
functions without side effects are reorderable
(and even subject to folding of multiple calls),
while functions with side effects should not
be reordered.

The most elegant solution is that the compiler
should know, for inlines and well-known
library functions, which functions have
no side effects.  (Throwing an exception
is a side effect here.)  Reordering of calls
in the presence of potential side effects
should be prohibited.

If a function is too complex to resolve the
side effect issue or not inline, it's best to
assume that it has side effects.  If it's a
big function, the small speed benefits of reordering
won't matter anyway.

   John Nagle






Author: SeeWebsiteForEmail@moderncppdesign.com ("Andrei Alexandrescu (See Website For Email)")
Date: Sun, 18 Dec 2005 16:23:30 GMT
Raw View
John Nagle wrote:
> The most elegant solution is that the compiler
> should know, for inlines and well-known
> library functions, which functions have
> no side effects.  (Throwing an exception
> is a side effect here.)  Reordering of calls
> in the presence of potential side effects
> should be prohibited.

I think it's not that nice to prevent other library writers and
programmers from benefiting from pure functions.

It would be nicer to offer a qualifier that tells whether the function
is pure. Functions in the standard library would be qualified
accordingly. People who write their own functions can qualify them as
pure and that can be checked during compilation.


Andrei






Author: dave@boost-consulting.com (David Abrahams)
Date: Sun, 18 Dec 2005 18:32:31 GMT
Raw View
"Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail@moderncppdesign.com> writes:

> David Abrahams wrote:
>
>> It doesn't look safe to me.  It's a complicated expression involving a
>> bare unmanaged resource.  Forget for a moment that you already know a
>> lot about auto_ptr.  The general case is something like:
>>   f( fancy_component<T>( create_handle<T>() ), g() );
>> There's a lot going on in that line.
>
> Yah, but there's a lot going on in this line, too:
>
> f( fancier_component<T>( fancy_component<T>( create_handle<T>() ) ) );
>
> which is safer than the safest vault in the safest Swiss bank.

??  You don't know that at all!  What happens in fancy_component<T>?
Is it designed to take ownership of a bare handle immediately, or only
if it doesn't throw?  That makes all the difference.

> [Howard wrote]
>>> And to get to that point, mandating left-to-right, or right-to-left
>>> is overkill.  We only need to mandate that there is a sequence
>>> point between the argument evaluations.
>
> Howard, I can't believe you wrote that :o(.
>
>> And my point is that it's not the best way to solve the biggest
>> problem it purports to solve (leaks).  It's not even a complete
>> solution to that problem.  It's a big hammer that only hits the edge
>> of the nail head.  I'm not sure if it bends the nail or drives it in
>> partway, but it makes me nervous.  My thumb is nearby ;-)
>
>>Leaks are not the biggest problem of unspecified order of
> evaluation. They are a pretty good showcase, though.

What's the biggest problem?

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com






Author: SeeWebsiteForEmail@moderncppdesign.com ("Andrei Alexandrescu (See Website For Email)")
Date: Sun, 18 Dec 2005 22:10:48 GMT
Raw View
David Abrahams wrote:
> "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail@moderncppdesign.com> writes:
>
>
>>David Abrahams wrote:
>>
>>
>>>It doesn't look safe to me.  It's a complicated expression involving a
>>>bare unmanaged resource.  Forget for a moment that you already know a
>>>lot about auto_ptr.  The general case is something like:
>>>  f( fancy_component<T>( create_handle<T>() ), g() );
>>>There's a lot going on in that line.
>>
>>Yah, but there's a lot going on in this line, too:
>>
>>f( fancier_component<T>( fancy_component<T>( create_handle<T>() ) ) );
>>
>>which is safer than the safest vault in the safest Swiss bank.
>
>
> ??  You don't know that at all!  What happens in fancy_component<T>?
> Is it designed to take ownership of a bare handle immediately, or only
> if it doesn't throw?  That makes all the difference.

The important part is that it has the choice. Compare that with your
example. And no comment on my other examples?

>>Leaks are not the biggest problem of unspecified order of
>>evaluation. They are a pretty good showcase, though.
>
> What's the biggest problem?

Their being a gratuitous source of bugs, incompatibilities, and
nonportability.


Andrei






Author: howard.hinnant@gmail.com (Howard Hinnant)
Date: Mon, 19 Dec 2005 05:29:50 GMT
Raw View
In article <IroCFv.1H5p@beaver.cs.washington.edu>,
 "Andrei Alexandrescu (See Website For Email)"
 <SeeWebsiteForEmail@moderncppdesign.com> wrote:

> [Howard wrote]
> >>And to get to that point, mandating left-to-right, or right-to-left is
> >>overkill.  We only need to mandate that there is a sequence point
> >>between the argument evaluations.
>
> Howard, I can't believe you wrote that :o(.

Then explain why.  Educate us.  Leaving me (and the rest of the world)
guessing why isn't effective communication.  It is simply FUD.  If I
read this reply and don't understand it, then perhaps others aren't
understanding it either (although I freely admit that it is possible I
am the only idiot reading this).

For my own responsibility, I will try to explain why I think that only a
sequence point is needed.  Consider:

f(g1(g2()), g3(g4()));

Assumptions:

auto t1(g1(g2()));

and

auto t2(g3(g4()));

are both independent and exception safe.  That is, assuming that one
could write either:

A:

auto t1(g1(g2()));
auto t2(g3(g4()));
f(t1, t2);

or B:

auto t2(g3(g4()));
auto t1(g1(g2()));
f(t1, t2);

and have it mean the same thing.  Then I think a natural expectation is
that:

f(g1(g2()), g3(g4()));

has the exact same semantics of either A or B above.

We do not have this guarantee today (as far as I know), and the chief
victim is exception safety.  I.e. if g1(g2()) has the effect of creating
a resource in g2() and securing it in g1(), then by itself it is
well-formed code.  But if you add it as a temporary parameter to a
function call, then it has the chance of being mutated to something like:

auto t1(g2());
auto t2(g4());
auto t3(g1(t1));
auto t4(g3(t2));
f(t3, t4);

If the call to g4() throws, you're hosed.

> Howard, I can't believe you wrote that :o(.

Please tell me what I'm missing so that I don't continue to spread
misinformation.

-Howard






Author: howard.hinnant@gmail.com (Howard Hinnant)
Date: Mon, 19 Dec 2005 05:29:27 GMT
Raw View
In article <87bqzfp5t2.fsf@boost-consulting.com>,
 dave@boost-consulting.com (David Abrahams) wrote:

> > Smart pointer factory functions in C++0X sound great.  Let's have
> > them (I hope to see your proposal soon).
>
> EWG or LWG?

LWG please.  If you see core issues also involved (perhaps variadic
templates, or rvalue reference?), I'll make sure the EWG knows they have
more motivation for this core issue.  Workarounds (if possible) for
missing core features are appreciated.

> The general case is something like:
>
>   f( fancy_component<T>( create_handle<T>() ), g() );
>
> There's a lot going on in that line.

My point is that if the above line is effectively:

evaluate_in_any_order
{
    auto t1(fancy_component<T>( create_handle<T>() ));
    auto t2(g());
}
f(t1, t2);

that the world would be significantly safer than it is today, while at
the same time allowing compilers to continue with significant
optimizations (admittedly curtailing some that are allowable today).

-Howard






Author: hyrosen@mail.com (Hyman Rosen)
Date: Mon, 19 Dec 2005 05:30:02 GMT
Raw View
David Abrahams wrote:
> What's the biggest problem?

As I have said many times, programming languages are a means
for driving the actions of a computer. As such, the actions
that a program specifies should be unambiguous. If you would
like for your programming language to have the ability to
state that a set of actions should be carried out in an
arbitrary rather than in a defined order, then this should be
an explicit construct within the language, so that this
ambiguity is manifestly clear to the readers of the program.






Author: SeeWebsiteForEmail@moderncppdesign.com ("Andrei Alexandrescu (See Website For Email)")
Date: Mon, 19 Dec 2005 08:11:33 GMT
Raw View
Howard Hinnant wrote:
> In article <IroCFv.1H5p@beaver.cs.washington.edu>,
>  "Andrei Alexandrescu (See Website For Email)"
>  <SeeWebsiteForEmail@moderncppdesign.com> wrote:
>
>
>>[Howard wrote]
>>
>>>>And to get to that point, mandating left-to-right, or right-to-left is
>>>>overkill.  We only need to mandate that there is a sequence point
>>>>between the argument evaluations.
>>
>>Howard, I can't believe you wrote that :o(.
>
>
> Then explain why.  Educate us.  Leaving me (and the rest of the world)
> guessing why isn't effective communication.  It is simply fud.  If I
> read this reply and don't understand it, then perhaps others aren't
> understanding too (although I freely admit that it is possible I am the
> only idiot reading this).

Ok, ok, ok, I meant no offense. Sorry!

I understand a sequence point would suffice for exception safety. But
I'd advocate just going all the way and mandating the order of
evaluation, to the end of fewer bugs and better, more portable programs.


Andrei






Author: SeeWebsiteForEmail@moderncppdesign.com ("Andrei Alexandrescu (See Website For Email)")
Date: Mon, 19 Dec 2005 08:11:40 GMT
Raw View
Hyman Rosen wrote:
> David Abrahams wrote:
>
>> What's the biggest problem?
>
>
> As I have said many times, programming languages are a means
> for driving the actions of a computer. As such, the actions
> that a program specifies should be unambiguous. If you would
> like for your programming language to have the ability to
> state that a set of actions should be carried out in an
> arbitrary rather than in a defined order, then this should be
> an explicit construct within the language, so that this
> ambiguity is manifestly clear to the readers of the program.

Very nice! And to that I'd add: "Programs must be written for people to
read, and only incidentally for machines to execute" - Abelson and Sussman.


Andrei






Author: dave@boost-consulting.com (David Abrahams)
Date: Tue, 20 Dec 2005 03:02:52 GMT
Raw View
SeeWebsiteForEmail@moderncppdesign.com ("Andrei Alexandrescu (See Website For Email)") writes:

> Hyman Rosen wrote:
>> David Abrahams wrote:
>>
>>> What's the biggest problem?
>> As I have said many times, programming languages are a means
>> for driving the actions of a computer. As such, the actions
>> that a program specifies should be unambiguous. If you would
>> like for your programming language to have the ability to
>> state that a set of actions should be carried out in an
>> arbitrary rather than in a defined order, then this should be
>> an explicit construct within the language, so that this
>> ambiguity is manifestly clear to the readers of the program.
>
> Very nice! And to that I'd add: "Programs must be written for people
> to read, and only incidentally for machines to execute" - Abelson and
> Sussman.

Oh, I'm _very_ big on that one.  I don't think decorating all the
high-performance numerics code with some "allow unordered evaluation"
construct is a good way to keep them readable, though.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com






Author: dave@boost-consulting.com (David Abrahams)
Date: Tue, 20 Dec 2005 03:02:24 GMT
Raw View
SeeWebsiteForEmail@moderncppdesign.com ("Andrei Alexandrescu (See Website For Email)") writes:

> David Abrahams wrote:
>> "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail@moderncppdesign.com> writes:
>>
>>>David Abrahams wrote:
>>>
>>>
>>>>It doesn't look safe to me.  It's a complicated expression involving a
>>>>bare unmanaged resource.  Forget for a moment that you already know a
>>>>lot about auto_ptr.  The general case is something like:
>>>>  f( fancy_component<T>( create_handle<T>() ), g() );
>>>>There's a lot going on in that line.
>>>
>>>Yah, but there's a lot going on in this line, too:
>>>
f( fancier_component<T>( fancy_component<T>( create_handle<T>() ) ) );
>>>
>>>which is safer than the safest vault in the safest Swiss bank.
>> ??  You don't know that at all!  What happens in fancy_component<T>?
>> Is it designed to take ownership of a bare handle immediately, or only
>> if it doesn't throw?

Not to mention that fancy_component<T> might not take ownership of the
handle at all.

> That makes all the difference.
>
> The important part is that it has the choice.

What choice, please?

Why should doing the right thing with deallocation be up to
fancy_component and some agreement between its implementation and the
caller?  If I use

    new_<unique_ptr<T> >()

then *I'm* taking responsibility for making sure that the pointer is
always managed, regardless of where it's passed.

By the way, we can write the above (for some reasonable number of
arguments, say, 5) today.
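Such a factory can be sketched as follows (written with C++11 variadics for brevity; a 2005-era version would enumerate overloads for up to N arguments, as noted above; the name `new_` follows the post, everything else is illustrative):

```cpp
#include <memory>
#include <utility>

// The raw pointer from `new` never escapes to the caller, so it is
// owned before it can appear inside any larger full-expression.
// (Essentially the idea later standardized as std::make_unique.)
template <class T, class... Args>
std::unique_ptr<T> new_(Args&&... args) {
    return std::unique_ptr<T>(new T(std::forward<Args>(args)...));
}

struct Widget {
    int v;
    explicit Widget(int v) : v(v) {}
};
```

Usage: `auto w = new_<Widget>(7);` gives a managed pointer directly, with no window in which the allocation is unowned.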

> Compare that with your example.

Which one, please?

> And, no comment to my other examples?

They all look the same to me: they all do a bare "new" and thus expose
the user to handling a raw pointer at some point or other.  Just as
you'd like an expression to be safely usable in any context if it can
be used safely alone, I'd like all expressions in common use to be
safe regardless of the context they're placed in.  "new T" is not like
that: you'd better be really careful where and how you do it.
Mandating evaluation order will only go a small distance toward fixing
that problem.

>>>Leaks are not the biggest problem of unspecified order of
>>>evaluation. They are a pretty good showcase, though.
>>
>> What's the biggest problem?
>
> Their being a gratuitous source of bugs, incompatibilities, and
> nonportability.

What kind of bugs and incompatibilities do you get that make this a
big problem?

I'm very suspicious of any crusade to eliminate all low-level
differences between compilers.  Smells like -- no offense intended;
I'm just identifying where my reaction comes from -- Java hype to me.
Do you also want to mandate standard sizes for short, int, long, etc.?

That said, I'm not closed-minded about this; I just need to be
convinced ;-) So far it seems like something we *could* do, that would
render existing compilers nonconforming in a fundamental way, create
backward compatibility problems, break existing (nonportable) code,
and consume valuable core language drafting time.  I'm just not
convinced the benefits justify the costs yet, especially when the
biggest problems *I've* seen identified so far can be solved more
completely with a library and (IMO painless) changes to common
programming practice.







Author: dave@boost-consulting.com (David Abrahams)
Date: Tue, 20 Dec 2005 03:02:44 GMT
Raw View
howard.hinnant@gmail.com (Howard Hinnant) writes:

> In article <87bqzfp5t2.fsf@boost-consulting.com>,
>  dave@boost-consulting.com (David Abrahams) wrote:
>
>> > Smart pointer factory functions in C++0X sound great.  Let's have
>> > them (I hope to see your proposal soon).
>>
>> EWG or LWG?
>
> LWG please.  If you see core issues also involved (perhaps variadic
> templates, or rvalue reference?), I'll make sure the EWG knows they have
> more motivation for this core issue.  Workarounds (if possible) for lack
> of core issues are appreciated.

Sure, that's easy enough.  Ping me after the first week of January,
though, if you *really* want it; it's likely to fall off the radar
otherwise.

>> The general case is something like:
>>
>>   f( fancy_component<T>( create_handle<T>() ), g() );
>>
>> There's a lot going on in that line.
>
> My point is that if the above line is effectively:
>
> evaluate_in_any_order
> {
>     auto t1(fancy_component<T>( create_handle<T>() ));
>     auto t2(g());

[don't use tabs :)]

> }
> f(t1, t2);
>
> that the world would be significantly safer than it is today

I'm just not convinced of how significant it is, yet.  People will
still be commonly trafficking in unmanaged resources, and we won't have
fixed that.

> while at the same time allowing compilers to continue with
> significant optimizations (admittedly curtailing some that are
> allowable today).







Author: ark@acm.org ("Andrew Koenig")
Date: Thu, 24 Nov 2005 20:44:07 GMT
Raw View
<kuyper@wizard.net> wrote in message
news:1132759176.368850.264850@g43g2000cwa.googlegroups.com...

> As long as the specified order of evaluation under the new rules was the
> same as one of the permitted orders of evaluation under the current
> rules, code which depends upon a different order of evaluation is (even
> under the current rules) non-portable. There's only a limited degree to
> which I care about what goes wrong with such code.

Evidently you're not a compiler vendor :-)

On several occasions I have heard representatives of compiler vendors
telling the standards committee that if the standard mandated behavior that
forced them to change their implementations in particular ways, they would
ship a nonconforming implementation rather than alienate their customers.
The unfortunate fact is that when people's programs change behavior from one
version of a compiler to another, they complain--and typically they don't
care whether the change was because their program relied on behavior that
isn't guaranteed.

I'm not saying that things should be this way--in fact, I wish they weren't.
But you can't ignore reality by wishing it away.






Author: ark@acm.org ("Andrew Koenig")
Date: Thu, 24 Nov 2005 20:44:16 GMT
Raw View
"Razzer" <coolmandan@gmail.com> wrote in message
news:1132809799.629407.12230@z14g2000cwz.googlegroups.com...

> Why's that? AFAICS, defining the order of evaluation in cases where it
> is undefined in C should not have to worry about giving different
> results since there is no set result in C.

Part of the point is to be able to translate a C++ expression into the
equivalent C expression without changing its meaning.  That desire implies
that if a C expression is undefined, the corresponding C++ expression must
also be undefined.






Author: hsutter@gotw.ca (Herb Sutter)
Date: Fri, 25 Nov 2005 01:03:27 GMT
Raw View
On Thu, 24 Nov 2005 04:40:36 GMT, eldiener_no_spam_here@earthlink.net
(Edward Diener No Spam) wrote:
>Hyman Rosen wrote:
>> Greg Herlihy wrote:
>>> Essentially, the result of evaluating i = v[i++] is unspecified because
>>> i's value is accessed only once.
>>
>> No, it's undefined in the built-in case, because i is modified
>> twice in the same expression (by the increment and by the
>> assignment) without an intervening sequence point.
>>
>> Once again, I invite everyone in the newsgroup to notice how
>> no one understands the rules as they exist.
>
>Obviously not true except as hyperbole.

Hyman's not exaggerating that much. In my experience at least, the great
majority of programmers aren't aware of the (lack of) rules, and the
minority who do understand the rules still tend to forget them
occasionally. I know it bites me every so often.

>> It's asinine not
>> to have a defined order of evaluation, including side effects.
>
>I totally agree with you here. With all due respect to Mr. Stroustrup's
>printed opinion about the importance of C++ maintaining compatibility
>with the C language, I also feel that at some time in the future, and I
>hope it is the near future, C++ should stop trying to maintain
>compatibility with the C language and do the right things as far as its
>own C++ language specification is concerned. This is just one of many
>other areas, which have been mentioned in numerous other posts on these
>NGs, where compatibility with the C language is holding C++ back from
>advancing as a language of its own.

I don't think you'd get as much push-back from Bjarne as you think. :-)

Evaluation reordering is a well-known source of difficulty for programmers
and it has been raised about every other month for at least two decades.
But the reason why it isn't fixed in C++ (or C) has nothing to do with C
compatibility. The real reason is performance: When you talk about nailing
down the order of evaluation, the first howls of protest usually come from
the people who write code optimizers and who demand to know why on earth
you want to tie their hands like this and turn off optimization
opportunities they want to exploit.

BTW, there's a direct parallel (pardon the pun) between this issue and the
issue of instruction reordering, especially memory read/write reordering,
anywhere in the tool/hardware chain right down to the processor itself.
It's common wisdom in the hardware world that a sequentially consistent
memory model (to simplify a little, this means among other things that the
chip doesn't get the flexibility to reorder memory reads or writes and
must follow exactly what's in the source code) is nice, but nobody
actually ships it because it is believed to be too slow for practical use.
This is part of what I had in mind when I wrote:

  Chip designers are under so much pressure to deliver ever-faster
  CPUs that they'll risk changing the meaning of your program,
  and possibly break it, in order to make it run faster

And:

  Two noteworthy examples in this respect are write reordering and
  read reordering: Allowing a processor to reorder write operations
  has consequences that are so surprising, and break so many
  programmer expectations, that the feature generally has to be
  turned off because it's too difficult for programmers to reason
  correctly about the meaning of their programs in the presence
  of arbitrary write reordering. Reordering read operations can also
  yield surprising visible effects, but that is more commonly left
  enabled anyway because it isn't quite as hard on programmers,

Note that evaluation reordering falls into the same category as read
reorder, including this final comment:

  and the demands for performance cause designers of operating
  systems and operating environments to compromise and choose
  models that place a greater burden on programmers because that
  is viewed as a lesser evil than giving up the optimization
  opportunities.

  -- "The Free Lunch Is Over"
     http://www.gotw.ca/publications/concurrency-ddj.htm

Have a nice day,

Herb

---
Herb Sutter (www.gotw.ca)      (www.pluralsight.com/blogs/hsutter)

Convener, ISO WG21 (C++ standards committee)     (www.gotw.ca/iso)
Contributing editor, C/C++ Users Journal         (www.gotw.ca/cuj)
Architect, Developer Division, Microsoft   (www.gotw.ca/microsoft)






Author: Hyman Rosen <hyrosen@mail.com>
Date: Fri, 25 Nov 2005 12:56:40 CST
Raw View
Andrew Koenig wrote:
> Part of the point is to be able to translate a C++ expression into the
> equivalent C expression without changing its meaning.

Since when? Certainly there was a goal that C code should
carry forward into C++ with its meaning generally unchanged,
but why would anyone care about going the other way? And if
that is what you want, you have to avoid all sorts of C++
constructs anyway, so you would just avoid C-ambiguous
expressions as well.






Author: hyrosen@mail.com (Hyman Rosen)
Date: Fri, 25 Nov 2005 18:56:07 GMT
Raw View
Herb Sutter wrote:
> The real reason is performance: When you talk about nailing down
> the order of evaluation, the first howls of protest usually come
> from the people who write code optimizers

Except that this is a bogus reason. We want to specify the exact
meaning of language constructs, not their implementation. This
affects optimizers only in the rare cases of expressions which
are actually ambiguous; obviously most are not, and their evaluation
can be ordered in any way the compiler sees fit by the as-if rules.

It seems to me that such complaints amount to "I'm so smart that it
would be bad to constrain me, but I'm so stupid that I don't know
that I'm not constrained." I don't think that language semantics
should be driven by that kind of consideration.






Author: hyrosen@mail.com (Hyman Rosen)
Date: Fri, 25 Nov 2005 18:56:16 GMT
Raw View
Andrew Koenig wrote:
> But you can't ignore reality by wishing it away.

So that's why the committee decide to abandon two-phase
name lookup in templates?






Author: hyrosen@mail.com (Hyman Rosen)
Date: Fri, 25 Nov 2005 18:56:43 GMT
Raw View
Razzer wrote:
> However, I think the order of evaluation of arguments causes
> such a minor set of problems in C++ that one could leave it
> how it is

No, absolutely not. Leaving argument evaluation underspecified
is what caused the infamous 'f(auto_ptr, auto_ptr)' problem.
If we can define the order of initialization of class members
we can do the same for arguments. No half measures!
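To recap the hazard for readers joining late (sketched here with `unique_ptr` standing in for `auto_ptr`): in `f(ptr(new T), ptr(new T))` the compiler may perform both `new` expressions before either smart-pointer constructor runs, so an exception from the second `new` leaks the first object. Naming the temporaries restores full sequencing; the counter below only observes that ownership is handled:

```cpp
#include <memory>
#include <utility>

static int live = 0;  // counts live T objects, to observe leaks

struct T {
    T()  { ++live; }
    ~T() { --live; }
};

static int f(std::unique_ptr<T>, std::unique_ptr<T>) { return live; }

int safe_call() {
    // Each `new` is paired with an owner before the next argument is
    // even started, so no evaluation order can strand an allocation.
    std::unique_ptr<T> a(new T);
    std::unique_ptr<T> b(new T);
    int during = f(std::move(a), std::move(b));  // 2 objects alive inside f
    return during + live;  // live is 0 again: both freed with f's parameters
}
```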






Author: kuyper@wizard.net
Date: Fri, 25 Nov 2005 12:57:22 CST
Raw View
"Andrew Koenig" wrote:
> <kuyper@wizard.net> wrote in message
> news:1132759176.368850.264850@g43g2000cwa.googlegroups.com...
>
> > As long as the specified order of evaluation under the new rules was the
> > same as one of the permitted orders of evaluation under the current
> > rules, code which depends upon a different order of evaluation is (even
> > under the current rules) non-portable. There's only a limited degree to
> > which I care about what goes wrong with such code.
>
> Evidently you're not a compiler vendor :-)
[...]
> But you can't ignore reality by wishing it away.

I'm not wishing it away; all I said is that I don't care about such
code. Compiler vendors, as you've pointed out, do have to care about
such code if it's become a widely used idiom. In the context of this
discussion, I doubt that "i=v[i++];" is in that category. On the other
hand, there are probably some widely used idioms where the order of
evaluation is both important and (at best) unspecified.






Author: hsutter@gotw.ca (Herb Sutter)
Date: Sat, 26 Nov 2005 01:11:38 GMT
Raw View
On Fri, 25 Nov 2005 18:56:07 GMT, hyrosen@mail.com (Hyman Rosen) wrote:
>It seems to me that such complaints amount to "I'm so smart that it
>would be bad to constrain me, but I'm so stupid that I don't know
>that I'm not constrained." I don't think that language semantics
>should be driven by that kind of consideration.

(BTW, that post was an incomplete one -- I reposted a completed version.)

I tend to agree, but in my other complete post I ask for data. Data is
probably hard to get and pin down because this stuff will vary a lot by
hardware architecture and/or the optimizer you test with, but there must
be papers about this.

It certainly is taken for granted among the hardware designer community
and the optimizer writer community alike that the ability to reorder work
is very important for performance.

Herb







Author: hsutter@gotw.ca (Herb Sutter)
Date: Sat, 26 Nov 2005 01:12:33 GMT
Raw View
[I sent a partial version of this message too soon. This is the complete
version.]

There are two interrelated issues in this thread, and I'll try to address
both of them. They are:

  1. sequence points
  2. (lack of) rules on order of evaluation of function arguments

Both issues are around leaving the implementation latitude to reorder
work, and the tradeoff is between leaving the rules loose so that
optimizers can generate better code and tightening the rules so that
programmers have less of a burden to understand their programs.

I'll show below that exactly the same tradeoff comes in a third related
case of reordering not yet mentioned in this thread:

  3. instruction reordering, especially memory read/write reordering

Note that #3 is of very current interest because memory access ordering
guarantees are a key part of the C++0x memory model for concurrency now
under development.

Coming in after Hyman and Edward, who were talking first about issue #1:

Edward Diener wrote:
>Hyman Rosen wrote:
>> Greg Herlihy wrote:
>>> Essentially, the result of evaluating i = v[i++] is unspecified because
>>> i's value is accessed only once.
>>
>> No, it's undefined in the built-in case, because i is modified
>> twice in the same expression (by the increment and by the
>> assignment) without an intervening sequence point.
>>
>> Once again, I invite everyone in the newsgroup to notice how
>> no one understands the rules as they exist.
>
>Obviously not true except as hyperbole.

Hyman is not exaggerating at all. There is perhaps no greater point of
misunderstanding of C than sequence points.

Virtually all C++ experts, including C++ committee members and including
myself until the last Santa Cruz meeting in fall 2002, think they
understand sequence points. We typically view them as an unfortunate and
complicated wart we inherited from C, but one that is at least well
understood in the C committee. That is not true.

My eyes were opened when I attended the fall 2002 C meeting, and saw a
detailed presentation of one person's theory of how sequence points
probably work, how they probably should work, and what we're still not
sure about. It was a revelation to me to learn that this was only the most
recent part of an ongoing series of discussions and debates within the C
committee about what exactly sequence points are, how they actually work,
and how they should work. Even if you know all the detailed reasons why
"i = i++;" is indeterminate, nobody should assume that the way C90 or C99
sequence points are specified is either well understood or that
implementations are consistent in the details.

Now we segue over to the related issue #2 of order of evaluation of
function arguments:

>> It's asinine not
>> to have a defined order of evaluation, including side effects.
>
>I totally agree with you here.

Me too. In my experience at least, most C/C++ programmers aren't aware of
the (lack of) rules around evaluation ordering, and the minority who do
understand the rules still tend to forget them occasionally. I know it
bites me every so often.

>With all due respect to Mr. Stroustrup's
>printed opinion about the importance of C++ maintaining compatibility
>with the C language, I also feel that at some time in the future, and I
>hope it is the near future, C++ should stop trying to maintain
>compatibility with the C language and do the right things as far as its
>own C++ language specification is concerned.

Aside: I don't think you'd get as much push-back from Bjarne as you think. :-)

>This is just one of many
>other areas, which have been mentioned in numerous other posts on these
>NGs, where compatibility with the C language is holding C++ back from
>advancing as a language of its own.

Not exactly. Both sequence points and evaluation reordering are
well-known sources of difficulty for programmers, and these issues have
been raised about every other month for as long as C++ and C have existed.
But the reason for these relaxed rules, and why they aren't nailed down in
C++ (or C), actually has nothing to do with C compatibility or ineptitude
or any of the other usual suspects.

The real reason why C and C++ have this latitude is for performance. When
you talk about nailing down the order of evaluation, the first howls of
protest come, not from language designers or compiler front-end writers
(who would be only too happy to oblige, because who likes having flaky
corner cases?), but from the people who write code optimizers and who
demand to know 'why on earth you want to tie our hands like this and turn
off optimization opportunities we want to exploit for you -- do you
_really_ want your code to run slow?'

There is a direct relationship between these two issues and issue #3: the
issue of instruction reordering, especially memory read/write reordering,
anywhere in the tool/hardware chain right down to the processor itself.
It's common wisdom in the hardware world that a sequentially consistent
memory model is nice (to simplify a little, SC means that the hardware
must not reorder memory reads or writes and must follow exactly what's in
the source code), but nobody actually ships that because it is believed to
be too slow for practical use. It's quite a revelation to most people
doing lock-free concurrent programming for the first time that the memory
reads and writes they put into their code might not be respected at all,
if the processor (or the optimizer, or any other part of the chain)
decides it would rather do things in a different order than you asked for,
so sorry.

This is part of what I had in mind when I wrote (in "The Free Lunch Is
Over," http://www.gotw.ca/publications/concurrency-ddj.htm):

  Chip designers are under so much pressure to deliver ever-faster
  CPUs that they'll risk changing the meaning of your program,
  and possibly break it, in order to make it run faster

And:

  Two noteworthy examples in this respect are write reordering and
  read reordering: Allowing a processor to reorder write operations
  has consequences that are so surprising, and break so many
  programmer expectations, that the feature generally has to be
  turned off because it's too difficult for programmers to reason
  correctly about the meaning of their programs in the presence
  of arbitrary write reordering. Reordering read operations can also
  yield surprising visible effects, but that is more commonly left
  enabled anyway because it isn't quite as hard on programmers,

Note that the latitude around sequence points and evaluation reordering
falls into the same category as read reordering, including this final
comment:

  and the demands for performance cause designers of operating
  systems and operating environments to compromise and choose
  models that place a greater burden on programmers because that
  is viewed as a lesser evil than giving up the optimization
  opportunities.

That's why we have at least some read reordering in nearly all memory
models now in production, why we have argument evaluation reordering, and
why we have the latitude around sequence points in all its inglory. It is
believed to be necessary. No, I can't point to measurements, but I'm sure
someone can and I would be interested to see real data about how much
nailing each of these down would cost for standard optimizers on various
popular architectures (alas, that's fairly hard to measure).

Herb







Author: ron@spamcop.net (Ron Natalie)
Date: Sat, 26 Nov 2005 20:46:29 GMT
Raw View
Hyman Rosen wrote:
> Andrew Koenig wrote:
>> Part of the point is to be able to translate a C++ expression into the
>> equivalent C expression without changing its meaning.
>
> Since when? Certainly there was a goal that C code should
> carry forward into C++ with its meaning generally unchanged,
> but why would anyone care about going the other way? And if
> that is what you want, you have to avoid all sorts of C++
> constructs anyway, so you would just avoid C-ambiguous
> expressions as well.
>
It's not true anyhow.  C++ already has differences that
keep expressions from functioning the same.   The difference
in typing of char literals for example.






Author: David Abrahams <dave@boost-consulting.com>
Date: Sat, 26 Nov 2005 18:57:08 CST
Raw View
ron@spamcop.net (Ron Natalie) writes:

> [Example:
> i = v[i++]; // the behavior is unspecified
> i = 7, i++, i++; // i becomes 9
> i = ++i + 1; // the behavior is unspecified
> i = i + 1; // the value of i is incremented
>    end example]

FWIW, examples are non-normative.  If your argument holds up based on
the rest of the text, you're OK.  Otherwise the example is broken and
needs to be fixed.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com






Author: "Krzysztof Zelechowski" <krixel@qed.pl>
Date: Sat, 26 Nov 2005 19:26:57 CST
Raw View
User "Hyman Rosen" <hyrosen@mail.com> wrote in message
news:20051125144435.31EA211406E@mscan6.ucar.edu...
> Andrew Koenig wrote:
>> Part of the point is to be able to translate a C++ expression into the
>> equivalent C expression without changing its meaning.
>
> Since when? Certainly there was a goal that C code should
> carry forward into C++ with its meaning generally unchanged,
> but why would anyone care about going the other way? And if
> that is what you want, you have to avoid all sorts of C++
> constructs anyway, so you would just avoid C-ambiguous
> expressions as well.
>

That is how the C++ front end works: it translates C++ to C.  But there is
an easy workaround: create an automatic variable for each function argument;
assign the values to the local variables one by one creating sequence points
between them; call the function with the corresponding local variables.

Happy coding
Chris







Author: David Abrahams <dave@boost-consulting.com>
Date: Sun, 27 Nov 2005 00:35:41 CST
Raw View
Hyman, Edward, and Herb wrote:

>> It's asinine not to have a defined order of evaluation, including
>> side effects.
>
> I totally agree with you here.

Me too.

hsutter@gotw.ca (Herb Sutter) writes:

> The real reason why C and C++ have this latitude is for performance.

The other reason is backward compatibility.  Even when order of
evaluation is unpredictable, many lines of code have been written that
depend on a particular evaluation order -- the one that happened to be
chosen by the compiler.  If you force a specified evaluation order you
will break any such code that isn't already using the order
specified.  For many vendors, that kind of breakage induces
unacceptable friction with the customer base.

Between this argument and the one about performance, there seem at
least to be plausible reasons for the status quo.  Labelling it
"asinine" not to define an order of evaluation seems inappropriate.
As a practical matter, I'm unlikely to buy into an argument about
anything but the most obvious mistakes if it comes with language like
that because of what it says about the speaker's willingness to give
counterarguments their due.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com






Author: Ron Natalie <ron@spamcop.net>
Date: Sun, 27 Nov 2005 16:26:32 CST
Raw View
David Abrahams wrote:
> ron@spamcop.net (Ron Natalie) writes:
>
>> [Example:
>> i = v[i++]; // the behavior is unspecified
>> i = 7, i++, i++; // i becomes 9
>> i = ++i + 1; // the behavior is unspecified
>> i = i + 1; // the value of i is incremented
>>    end example]
>
> FWIW, examples are non-normative.  If your argument holds up based on
> the rest of the text, you're OK.  Otherwise the example is broken and
> needs to be fixed.
>
I know that, but the text that immediately precedes the example
CLEARLY supports the point I was making, and the examples are
consistent with the preceding text you omitted.






Author: Ron Natalie <ron@spamcop.net>
Date: Sun, 27 Nov 2005 16:26:40 CST
Raw View
David Abrahams wrote:

> The other reason is backward compatibility.  Even when order of
> evaluation is unpredictable, many lines of code have been written that
> depend on a particular evaluation order -- the one that happened to be
> chosen by the compiler.  If you force a specified evaluation order you
> will break any such code that isn't already using the order
> specified.  For many vendors, that kind of breakage induces
> unacceptable friction with the customer base.

That's spurious.  The code is already broken.   The compiler is free
to pick a different ordering at whim.   If you recompile the
application, perhaps with a newer version of the compiler or
a different compiler, or perhaps just a different set of compiler
flags, the results will be different.






Author: ron@spamcop.net (Ron Natalie)
Date: Sun, 27 Nov 2005 22:26:42 GMT
Raw View
Krzysztof Zelechowski wrote:

> That is how the C++ front end works: it translates C++ to C.

There's no such standard concept as a C++ front end.  The language
neither defines nor proposes any convention for converting
C++ to C.

There are already a number of constructs in C++ that are DIFFERENT
from how they are implemented in C.

The answer here is that C++ leaves the behavior undefined for
the same reason C does: the supposed advantage of prescribing
an ordered behavior doesn't outweigh the efficiency concerns.






Author: dave@boost-consulting.com (David Abrahams)
Date: Mon, 28 Nov 2005 00:13:51 GMT
Raw View
Ron Natalie <ron@spamcop.net> writes:

> David Abrahams wrote:
>
>> The other reason is backward compatibility.  Even when order of
>> evaluation is unpredictable, many lines of code have been written that
>> depend on a particular evaluation order -- the one that happened to be
>> chosen by the compiler.  If you force a specified evaluation order you
>> will break any such code that isn't already using the order
>> specified.  For many vendors, that kind of breakage induces
>> unacceptable friction with the customer base.
>
> That's spurious.  The code is already broken.

No, the code is nonportable.  It works in the context where it is
expected to.  It does no good for a vendor to respond with "fix your
code; it is broken" if the important customers say, "fine, we'll go
with someone else who can provide stability across versions."

> The compiler is free to pick a different ordering at whim.

Often it is not.  Compiler implementors constrain themselves beyond
what the standard specifies, in order to satisfy various customer
demands -- even unreasonable ones.  Often they constrain themselves in
ways that provide "guarantees" they won't document, such as stable
evaluation order.

> If you recompile the application, perhaps with a newer version of
> the compiler

Unless the compiler implementor constrains himself to avoid changing
evaluation order.  That *does* happen.

> or a different compiler, or perhaps just a different set of compiler
> flags, the results will be different.

All possibly true.  It's also possible for customers to develop
expectations of behavior that isn't guaranteed by the standard or
documented by vendors (actually if you think hard about it I bet
you'll find you have some such expectations).  Vendors sometimes will
choose to meet those expectations in order to stay in business.


--
Dave Abrahams
Boost Consulting
www.boost-consulting.com






Author: Hyman Rosen <hyrosen@mail.com>
Date: Sun, 27 Nov 2005 23:29:24 CST
Raw View
David Abrahams wrote:
> Between this argument and the one about performance, there seem at
> least to be plausible reasons for the status quo.  Labelling it
> "asinine" not to define an order of evaluation seems inappropriate.

This is where we start quoting Emerson.
"A foolish consistency is the hobgoblin of little minds."

> As a practical matter, I'm unlikely to buy into an argument about
> anything but the most obvious mistakes if it comes with language like
> that because of what it says about the speaker's willingness to give
> counterarguments their due.

Believe me, I'm giving those counterarguments *more* than
they're due.

We've got made-up customers who are so tied to a vendor
that they demand compatibility for undocumented and
unspecified features but who are still able to change
vendors if they don't get it.

We've got made-up optimizers who are so brilliant that
they mustn't be constrained for fear of slowing the code,
but who are at the same time so stupid that they can't
figure out the ordinary cases where order doesn't matter.

We've got a (probably) made-up Standard Committee who is
so responsive to compiler vendors that they won't agree to
specify formerly unspecified behavior, but did standardize
two-phase name lookup in templates, which broke every
implementation in existence, half of which still haven't
caught up years later.






Author: pjp@dinkumware.com ("P.J. Plauger")
Date: Tue, 29 Nov 2005 06:15:46 GMT
Raw View
"Hyman Rosen" <hyrosen@mail.com> wrote in message
news:1twif.776$iZ3.758@trndny03...

> We've got made-up customers who are so tied to a vendor
> that they demand compatibility for undocumented and
> unspecified features but who are still able to change
> vendors if they don't get it.
>
> We've got made-up optimizers who are so brilliant that
> they mustn't be constrained for fear of slowing the code,
> but who are at the same time so stupid that they can't
> figure out the ordinary cases where order doesn't matter.
>
> We've got a (probably) made-up Standard Committee who is
> so responsive to compiler vendors that they won't agree to
> specify formerly unspecified behavior, but did standardize
> two-phase name lookup in templates, which broke every
> implementation in existence, half of which still haven't
> caught up years later.

Welcome to the wonderful world of software. Every one of the
things you cite above is true, contradictions and all.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com







Author: dave@boost-consulting.com (David Abrahams)
Date: Tue, 29 Nov 2005 06:16:17 GMT
Raw View
Hyman Rosen <hyrosen@mail.com> writes:

> David Abrahams wrote:
>
>> Between this argument and the one about performance, there seem at
>> least to be plausible reasons for the status quo.  Labelling it
>> "asinine" not to define an order of evaluation seems inappropriate.
>
> This is where we start quoting Emerson.
> "A foolish consistency is the hobgoblin of little minds."

Consistency with what?

>> As a practical matter, I'm unlikely to buy into an argument about
>> anything but the most obvious mistakes if it comes with language
>> like that because of what it says about the speaker's willingness
>> to give counterarguments their due.
>
> Believe me, I'm giving those counterarguments *more* than
> they're due.

The tone of the following belies that statement.

> We've got made-up customers who are so tied to a vendor
> that they demand compatibility for undocumented and
> unspecified features but who are still able to change
> vendors if they don't get it.
>
> We've got made-up optimizers who are so brilliant that
> they mustn't be constrained for fear of slowing the code,
> but who are at the same time so stupid that they can't
> figure out the ordinary cases where order doesn't matter.

Is it implausible to you that detecting the cases where expressions
must not be reordered would add complexity to an already gnarly area
in any good compiler?

> We've got a (probably) made-up Standard Committee who is
> so responsive to compiler vendors that they won't agree to
> specify formerly unspecified behavior, but did standardize
> two-phase name lookup in templates, which broke every
> implementation in existence, half of which still haven't
> caught up years later.

It's normal for those who have never been to a meeting to have no
respect for the committee and its process, and to come charging in
with the idea that one's own concerns represent those of the whole C++
developer community.  Usually it doesn't take more than one meeting
for those people to wake up and realize that C++ serves the needs of a
much broader group than they thought and that the committee process is
much more thoughtful and less capricious than they assumed.  I invite
you to come to the next meeting and prove yourself the exception to
that rule.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com






Author: "kanze" <kanze@gabi-soft.fr>
Date: Tue, 29 Nov 2005 00:16:25 CST
Raw View
Herb Sutter wrote:

> It certainly is taken for granted among the hardware designer
> community and the optimizer writer community alike that the
> ability to reorder work is very important for performance.

Is it?  I don't know anyone in the hardware designer community,
so I can't speak for them, but the first people I heard
clamoring for a more defined order were experts in compiler
optimization techniques.  They're the ones who told me that it
didn't make a difference.

There was a time that it did.  Back when using Sethi-Ullman
numbers for register allocation was state of the art.  The rule
in C dates from K&R C.  Which dates from the days when
Sethi-Ullman numbers were state of the art.  But optimization
technology has come a long way since then, even in "everyday"
compilers.

--
James Kanze                                           GABI Software
Conseils en informatique orientée objet/
                   Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34







Author: "kanze" <kanze@gabi-soft.fr>
Date: Tue, 29 Nov 2005 00:16:33 CST
Raw View
Hyman Rosen wrote:
> David Abrahams wrote:
> > Between this argument and the one about performance, there
> > seem at least to be plausible reasons for the status quo.
> > Labelling it "asinine" not to define an order of evaluation
> > seems inappropriate.

> This is where we start quoting Emerson.  "A foolish
> consistency is the hobgoblin of little minds."

> > As a practical matter, I'm unlikely to buy into an argument
> > about anything but the most obvious mistakes if it comes
> > with language like that because of what it says about the
> > speaker's willingness to give counterarguments their due.

> Believe me, I'm giving those counterarguments *more* than
> they're due.

> We've got made-up customers who are so tied to a vendor that
> they demand compatibility for undocumented and unspecified
> features but who are still able to change vendors if they
> don't get it.

I think you know my position on this, but...

The customers here are not made-up.  All too often, I've seen
explications of how C/C++ works which carefully explain that
parameters are evaluated and pushed onto the stack from right to
left.  Because, of course, that's the way most early PC
compilers did it.

I still think that we need to specify.  The customers here are
not the problem of the standards committee, any more than were
the customers who wrote code depending on CFront's lifetime of
temporaries, or templates acting exactly like macros.  Such
customers are a problem for the vendors, and the vendors will
handle it exactly like they handled the other two issues.
Depending on their attitudes and positions, they will offer
compiler options to support one behavior or the other, or they
will simply tell the customer where he can get off.

> We've got made-up optimizers who are so brilliant that they
> mustn't be constrained for fear of slowing the code, but who
> are at the same time so stupid that they can't figure out the
> ordinary cases where order doesn't matter.

The real experts in optimization technology (people like David
Chase, for example) argue in favor of defined behavior.  They
seem to think that the problem is tractable.  The fact that Java
has defined behavior, and regularly beats C++ in benchmarks,
would seem to indicate that it isn't a killer problem, at any
rate.

> We've got a (probably) made-up Standard Committee who is so
> responsive to compiler vendors that they won't agree to
> specify formerly unspecified behavior, but did standardize
> two-phase name lookup in templates, which broke every
> implementation in existence, half of which still haven't caught
> up years later.

:-)

Formally, templates didn't exist until the standard committee
invented them, so there weren't any implementations to break:-).
In practice, at least from what little I've been able to gather
(thanks to people like Gaby Dos Reis and David Vandevoorde
looking up the facts for me), two phase name lookup was
introduced into the committee drafts, or at least the discussion
papers, long before most compilers had any support for
templates.  So I fear you'll have to blame this one on
irresponsible vendors, who preferred bringing out
implementations that they knew would be broken by the standard,
rather than waiting a little, or at least warning.

--
James Kanze                                           GABI Software
Conseils en informatique orientée objet/
                   Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34







Author: hyrosen@mail.com (Hyman Rosen)
Date: Tue, 29 Nov 2005 15:11:58 GMT
Raw View
David Abrahams wrote:
> Consistency with what?

The past. The argument that we are so tied to backward
compatibility that we may not even specify what was
formerly unspecified.

> Is it implausible to you that detecting the cases where expressions
> must not be reordered would add complexity to an already gnarly area
> in any good compiler?

Yes, very implausible. After all, optimizers do deal
with improving code over multiple statements, and in
that case evaluation order is defined. Any compiler
text which speaks about optimization describes various
analyses used to determine when statements can be moved
around. It's very nearly fundamental to what optimizers
do.

> no respect for the committee

This has nothing to do with respect. One argument against
specifying evaluation order is that the committee is
reluctant to break implementations (see "consistency" above).
But it's clear that the standardization process in fact broke
implementations across the board in dozens of ways, and many
of those implementations still haven't caught up. That's not
a bad thing, that's a good thing. It demonstrates that the
committee is willing to break implementations for a good
cause, and that should be true as well for evaluation order.






Author: wade@stoner.com
Date: Tue, 29 Nov 2005 09:12:51 CST
Raw View
Hyman Rosen wrote:
> We want to specify the exact
> meaning of language constructs,

Very often that is true.

But which standard have you been reading?  I'd argue that almost the
entire libraries section goes out of its way to specify as little of
the meaning as it can get away with, so that competing vendors can
actually compete.

Even the core language is full of cases where exact meanings could have
been specified, but were left implementation-defined, or undefined.  My
gut feeling is that most of these cases are there specifically to give
vendors freedom (to optimize, to build a simpler compiler, ...).

Assuming vendors are responsive to their market (sometimes this is a
bit of a stretch ;-), the market is more interested in speed than in
exact meaning.  I see compilers offering 'restrict' or 'fast but sloppy
floating point math' as options (or even defaults).  I don't see
options that enable a strict evaluation order.

We could have had a language that told us the value of 'b' - 'a'.
Instead we got a language where it is implementation-defined, and {
char v='a'; v++; } is undefined.

> not their implementation.

But with a language that lets you get this close to the metal, a great
deal of the implementation is observable (just not in the standard's
sense of observable behavior).






Author: pjp@dinkumware.com ("P.J. Plauger")
Date: Wed, 30 Nov 2005 03:35:54 GMT
Raw View
"Hyman Rosen" <hyrosen@mail.com> wrote in message
news:200511291438.jATEcauC061819@horus.isnic.is...

>> no respect for the committee
>
> This has nothing to do with respect. One argument against
> specifying evaluation order is that the committee is
> reluctant to break implementations (see "consistency" above).
> But it's clear that the standardization process in fact broke
> implementations across the board in dozens of ways, and many
> of those implementations still haven't caught up. That's not
> a bad thing, that's a good thing. It demonstrates that the
> committee is willing to break implementations for a good
> cause, and that should be true as well for evaluation order.

It has *everything* to do with respect. You're second guessing
the committee on inadequate information and you think that's
okay. You don't let in the possibility that the committee
could make a number of decisions that may appear to be
capricious and/or contradictory to you, yet they all can have
defensible reasons.

In the particular case of the committee "breaking" implementations,
C++ had a once in a lifetime opportunity to create a new standard
language. It "broke" various past dialects of C++, and there was
more than one. But none of those were standardized. OTOH, the
C++ committee had rather less latitude to "break" C, for C has
been standardized since 1989.

Please note that I don't necessarily approve of all of the
breaks with popular prior art in C++. Nor do I think the
committee always handled backward compatibility with C as
I would like. Nor do I have an opinion about the importance
of the particular issue of preserving latitude to change order
of evaluation. But in every case I saw enough of the process by
which the C++ committee deliberated, made tradeoffs, and arrived
at final decisions that I believe they're deserving of respect.

I also believe Dave Abrahams was correct to make the comments he
did.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com







Author: "Sandor Hojtsy" <sandor.hojtsy@gmail.com>
Date: Tue, 29 Nov 2005 21:30:55 CST
Raw View
You keep quoting the standard:
  i = v[i++];                     // the behavior is unspecified
  i = 7, i++, i++;                // i becomes 9
  i = ++i + 1;                    // the behavior is unspecified
  i = i + 1;                      // the value of i is incremented
Note that this contains an identified defect, it should correctly read:
i = v[i++];                     // the behavior is undefined
i = 7, i++, i++;                // i becomes 9
i = ++i + 1;                    // the behavior is undefined
i = i + 1;                      // the value of i is incremented
See http://www.open-std.org/jtc1/sc22/wg21/docs/cwg_defects.html#351

I have a question: are these expressions undefined, or well-formed?
i = i = 1;
i = ++i;
a = (i = 1) + (i = 1);
If I take the standard word by word, they do not modify the value of i
twice in an expression, because one of the assignments is not a
modification - just reassigning the existing value.






Author: ben-public-nospam@decadentplace.org.uk (Ben Hutchings)
Date: Wed, 30 Nov 2005 15:06:48 GMT
Raw View
Sandor  Hojtsy <sandor.hojtsy@gmail.com> wrote:
> You keep quoting the standard:
>   i = v[i++];                     // the behavior is unspecified
>   i = 7, i++, i++;                // i becomes 9
>   i = ++i + 1;                    // the behavior is unspecified
>   i = i + 1;                      // the value of i is incremented
> Note that this contains an identified defect, it should correctly read:
> i = v[i++];                     // the behavior is undefined
> i = 7, i++, i++;                // i becomes 9
> i = ++i + 1;                    // the behavior is undefined
> i = i + 1;                      // the value of i is incremented
> See http://www.open-std.org/jtc1/sc22/wg21/docs/cwg_defects.html#351
>
> I have a question: are these expressions undefined, or well-formed?

They're all well-formed; that's a syntactic property.

> i = i = 1;

Despite the lack of a sequence point I believe this may be defined due
to this wording in 5.17/1: "The result of the assignment operation is
the value stored in the left operand *after* the assignment has taken
place..." (my emphasis).

> i = ++i;
> a = (i = 1) + (i = 1);

In these cases, the order of the two modifications of i is undefined,
so they fall foul of 5/4.

> If I take the standard word by word, they do not modify the value of i
> twice in an expression, because one of the assignments is not
> modification - just reassigning the existing value.

All built-in assignment operators are considered to modify their left
hand side, whether its value changes or not.

--
Ben Hutchings
Horngren's Observation:
                   Among economists, the real world is often a special case.






Author: Ron Natalie <ron@spamcop.net>
Date: Wed, 30 Nov 2005 23:29:30 CST
Raw View
Ben Hutchings wrote:

> Despite the lack of a sequence point I believe this may be defined due
> to this wording in 5.17/1: "The result of the assignment operation is
> the value stored in the left operand *after* the assignment has taken
> place..." (my emphasis).

That doesn't imply there's a sequence point.  The value of the
expression is the value that will end up there, but that doesn't
imply that the store has actually happened (a sequence point is
required for that).






Author: Hyman Rosen <hyrosen@mail.com>
Date: Wed, 30 Nov 2005 23:29:21 CST
Raw View
P.J. Plauger wrote:
> It has *everything* to do with respect. You're second guessing
> the committee on inadequate information and you think that's
> okay.

No, I'm not. Has the committee ever received and rejected a
proposal to fix order of evaluation? If it has, I'm not aware
of it. The second guessing that's going on is people here on
the newsgroup (although maybe they're on the committee?) saying
that the committee would reject such a proposal because of vendor
objections to changing unspecified behavior and incompatibility
with C. I'm pointing out that the committee has seen fit to
cause such breakage in the past, so there is reason to think that
they might do so again, in a good cause.

Again, when I say that the committee went ahead and broke things
in the past, that's not disrespect. I think it's a *good* thing
that they did it, and I hope they do it again.






Author: ron@spamcop.net (Ron Natalie)
Date: Thu, 1 Dec 2005 05:28:21 GMT
Raw View
Sandor Hojtsy wrote:
> You keep quoting the standard:

I quoted the normative part of the standard, which is CORRECT.
Ignore the darned examples.

> I have a question: are these expressions undefined, or well-formed?
> i = i = 1;
Well-formed and undefined are not mutually exclusive.  The above is
well-formed, and its behavior is undefined, as are your other examples.

> i = ++i;
> a = (i = 1) + (i = 1);
> If I take the standard word by word, they do not modify the value of i
> twice in an expression, because one of the assignments is not
> modification - just reassigning the existing value.
>
Huh?   The only one for which you might make that argument is
the first one.   But as far as C++ is concerned, even storing the
same value that is already there is a modification, hence:
 const int x = 5;
 x = 5;
is ill-formed.

In the second case, you are not "reassigning an existing value",
as you don't know when the side effect is applied (it's not guaranteed
until the sequence point, which may be AFTER the assignment).

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: ark@acm.org ("Andrew Koenig")
Date: Fri, 2 Dec 2005 04:09:32 GMT
Raw View
"Hyman Rosen" <hyrosen@mail.com> wrote in message
news:20051125144435.31EA211406E@mscan6.ucar.edu...
> Andrew Koenig wrote:

>> Part of the point is to be able to translate a C++ expression into the
>> equivalent C expression without changing its meaning.

> Since when? Certainly there was a goal that C code should
> carry forward into C++ with its meaning generally unchanged,
> but why would anyone care about going the other way?

The first C++ compiler compiled into C.  Such a compiler is much easier to
implement if it can translate expressions that involve only C types into the
corresponding C expressions.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: ark@acm.org ("Andrew Koenig")
Date: Fri, 2 Dec 2005 04:12:52 GMT
Raw View
"Hyman Rosen" <hyrosen@mail.com> wrote in message
news:E1Efenr-00018u-00@chx400.switch.ch...

> Andrew Koenig wrote:
>> But you can't ignore reality by wishing it away.

> So that's why the committee decided to abandon two-phase
> name lookup in templates?

Explain, please.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: hyrosen@mail.com (Hyman Rosen)
Date: Sat, 3 Dec 2005 01:33:54 GMT
Raw View
Andrew Koenig wrote:
>>So that's why the committee decided to abandon two-phase
>>name lookup in templates?
>
> Explain, please.

Existing compilers which implemented templates
didn't use two-phase name lookup. Many vendors
continue to ship nonconforming implementations.
Many users of conforming compilers are confused
and upset when they discover that their old code
doesn't work any more.

And yet, the committee standardized that approach
anyway. This demonstrates that the committee was
willing to override those objections in what it
deemed a good cause. The changes required to fix
order of evaluation would be much less severe
than the changes required for two-phase lookup,
and no previously defined program behavior would
change. That is why I believe I can "wish away"
the reality that vendors might object to this.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: "Bob Bell" <belvis@pacbell.net>
Date: Fri, 2 Dec 2005 19:48:54 CST
Raw View
kuyper@wizard.net wrote:
> "Andrew Koenig" wrote:
> > <kuyper@wizard.net> wrote in message
> > news:1132759176.368850.264850@g43g2000cwa.googlegroups.com...
> >
> > > As long as the specified order of evaluation under  new rules was the
> > > same as one of the permitted orders of evaluation under the current
> > > rules, code which depends upon a different order of evaluation is (even
> > > under the current rules) non-portable. There's only a limited degree to
> > > which I care about what goes wrong with such code.
> >
> > Evidently you're not a compiler vendor :-)
> .
> > But you can't ignore reality by wishing it away.
>
> I'm not wishing it away; all I said is that I don't care about such
> code. Compiler vendors, as you've pointed out, do have to care about
> such code if it's become a widely used idiom. In the context of this
> discussion, I doubt that "i=v[i++];" is in that category. On the other
> hand, there are probably some widely used idioms where the order of
evaluation is both important and (at best) unspecified.

I think it would be helpful for your cause to list those widely-used
idioms. Personally, I can't think of a widely-used idiom that depends
on a particular evaluation order, and to the best of my knowledge any
such idiom should merely be considered broken. However, if you know of
any such idioms, I'd be interested to hear about them.

Bob

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: kuyper@wizard.net
Date: Sat, 3 Dec 2005 01:46:59 CST
Raw View
Bob Bell wrote:
> kuyper@wizard.net wrote:
.
> > discussion, I doubt that "i=v[i++];" is in that category. On the other
> > hand, there are probably some widely used idioms where the order of
> > evaluation is both important and (at best) unspecified.
>
> I think it would be helpful for your cause to list those widely-used
> idioms. Personally, I can't think of a widely-used idiom that depends
> on a particular evaluation order, and to the best of my knowledge any
> such idiom should merely be considered broken. However, if you know of
> any such idioms, I'd be interested to hear about them.

I wouldn't have used the phrase "probably", if I knew for certain that
any such idioms exist.

I think that such idioms probably exist, simply because, in my
experience, most C programmers are not experts in writing portable code
(too many of them aren't even experts in writing non-portable code).
When they write code that depends upon implementation-specific
behavior, they're not even aware of the fact. Order of evaluation of
sub-expressions is just one of many different implementation-specific
behaviors that people unwittingly become dependent on.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: brangdon@cix.co.uk (Dave Harris)
Date: Sun, 4 Dec 2005 12:05:31 CST
Raw View
pongba@gmail.com () wrote (abridged):
> It's just that I couldn't believe this little simple expression
> has undefined behavior, though, I think you were right anyway.

If you think of i as being stored as two words that are manipulated
separately, then it becomes easier to see. You might get the high word of
the new value and the low word of the old value. For example:

     i = 0x1000ffff;
     i = i++;

might yield i == 0x1001ffff, which is different to (and much bigger
than) any value that you would consider "correct". If i is a pointer, this
could point to memory the application doesn't own, which could lead to a
hardware fault even if *i is not accessed. And ints can have trapping
values too.

-- Dave Harris, Nottingham, UK.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: ron@spamcop.net (Ron Natalie)
Date: Sun, 4 Dec 2005 22:42:02 GMT
Raw View
Andrew Koenig wrote:
> "Hyman Rosen" <hyrosen@mail.com> wrote in message
> news:20051125144435.31EA211406E@mscan6.ucar.edu...
>> Andrew Koenig wrote:
>
>>> Part of the point is to be able to translate a C++ expression into the
>>> equivalent C expression without changing its meaning.
>
>> Since when? Certainly there was a goal that C code should
>> carry forward into C++ with its meaning generally unchanged,
>> but why would anyone care about going the other way?
>
> The first C++ compiler compiled into C.  Such a compiler is much easier to
> implement if it can translate expressions that involve only C types into the
> corresponding C expressions.
>
Well, the standards community messed that up, because it is not true now,
nor has it ever been true, that it is safe to just pass expressions
that use only C types to the C compiler.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: hyrosen@mail.com (Hyman Rosen)
Date: Tue, 6 Dec 2005 04:29:58 GMT
Raw View
kuyper@wizard.net wrote:
>>I think it would be helpful for your cause to list those widely-used
>>idioms. Personally, I can't think of a widely-used idiom that depends
>>on a particular evaluation order, and to the best of my knowledge any
>>such idiom should merely be considered broken. However, if you know of
>>any such idioms, I'd be interested to hear about them.
>
>
> I wouldn't have used the phrase "probably", if I knew for certain that
> any such idioms exist.

I don't know for sure, but one guess would be code like this,
     cout << "xxx" << f() << "yyy" << g() << "zzz" << h() << "\n";
where the authors don't realize that the function calls can
happen in any order. The compiler which they use happens to give
them an order which works, but there may be some particular order
of evaluation which would be bad, and it silently lurks in the
code until one day it shows up because of some change to compiler
version.

I would say that the apparent semantics of this code (first print
this, then that, then that) are so strong that it takes unusual
mental effort to realize that the calls are not (necessarily) made
left-to-right.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: bop@gmb.dk ("Bo Persson")
Date: Wed, 7 Dec 2005 03:33:39 GMT
Raw View
"Hyman Rosen" <hyrosen@mail.com> skrev i meddelandet
news:E1EjGBD-0005T8-00@chx400.switch.ch...
> kuyper@wizard.net wrote:
>>>I think it would be helpful for your cause to list those
>>>widely-used
>>>idioms. Personally, I can't think of a widely-used idiom that
>>>depends
>>>on a particular evaluation order, and to the best of my knowledge
>>>any
>>>such idiom should merely be considered broken. However, if you know
>>>of
>>>any such idioms, I'd be interested to hear about them.
>>
>>
>> I wouldn't have used the phrase "probably", if I knew for certain
>> that
>> any such idioms exist.
>
> I don't know for sure, but one guess would be code like this,
>     cout << "xxx" << f() << "yyy" << g() << "zzz" << h() << "\n";
> where the authors don't realize that the function calls can
> happen in any order.

If the functions really affect each other, this is *really* bad code.
Why don't we require a diagnostic

Warning: Terrible code - please rewrite!

instead of having the compilers make it work anyway?


Bo Persson


---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: "Andrew Koenig" <ark@acm.org>
Date: Tue, 6 Dec 2005 21:35:48 CST
Raw View
"Hyman Rosen" <hyrosen@mail.com> wrote in message
news:200512021422.jB2EMEuC046212@horus.isnic.is...
> Andrew Koenig wrote:
>>>So that's why the committee decided to abandon two-phase
>>>name lookup in templates?

>> Explain, please.

> Existing compilers which implemented templates
> didn't use two-phase name lookup. Many vendors
> continue to ship nonconforming implementations.
> Many users of conforming compilers are confused
> and upset when they discover that their old code
> doesn't work any more.

> And yet, the committee standardized that approach
> anyway. This demonstrates that the committee was
> willing to override those objections in what it
> deemed a good cause. The changes required to fix
> order of evaluation would be much less severe
> than the changes required for two-phase lookup,
> and no previously defined program behavior would
> change. That is why I believe I can "wish away"
> the reality that vendors might object to this.

This isn't an explanation.

You said that the committee decided "to abandon two-phase name lookup in
templates," and when I asked you for an explanation, you said that they
didn't abandon it.

So I guess you were being sarcastic, which is not a good way to get people
to take you seriously in a technical discussion.

Here are some facts that I think go a long way toward explaining the current
state of affairs.

The C++ standards committee had its organizational meeting literally the day
after the C89 standard was ratified.  It was chartered to standardize C++
using two documents as its basis:

    1) The newly ratified C standard;
    2) The Annotated Reference Manual.

The C standard was quite explicit about the behavior of built-in operators
on values of built-in types.  Moreover, the ARM was reasonably consistent
about deferring to C the definition of how such operators behave.

In contrast, the C standard was, of course, utterly silent about templates
and exceptions, and the ARM marked both those features as "experimental."

Accordingly, I find it completely unsurprising that the committee was much
more deferential to past usage in the case of order of evaluation than it
was in the case of templates, and equally unsurprising that vendors went
along with changes in template behavior where they would not have tolerated
changes in behavior of built-in operators on operands of built-in types.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: ark@acm.org ("Andrew Koenig")
Date: Wed, 7 Dec 2005 03:35:44 GMT
Raw View
"Hyman Rosen" <hyrosen@mail.com> wrote in message
news:E1EjGBD-0005T8-00@chx400.switch.ch...
> kuyper@wizard.net wrote:

>> I wouldn't have used the phrase "probably", if I knew for certain that
>> any such idioms exist.

> I don't know for sure, but one guess would be code like this,
>     cout << "xxx" << f() << "yyy" << g() << "zzz" << h() << "\n";
> where the authors don't realize that the function calls can
> happen in any order. The compiler which they use happens to give
> them an order which works, but there may be some particular order
> of evaluation which would be bad, and it silently lurks in the
> code until one day it shows up because of some change to compiler
> version.
>
> I would say that the apparent semantics of this code (first print
> this, then that, then that) are so strong that it takes unusual
> mental effort to realize that the calls are not (necessarily) made
> left-to-right.

Under ordinary circumstances, the order in which the calls are made doesn't
matter.  It matters only when the functions have side effects that interfere
with each other in some way.

You don't need order-of-evaluation guarantees to ensure that the various
components will be printed in the right sequence, even if they are evaluated
out of sequence.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: "Bob Bell" <belvis@pacbell.net>
Date: Tue, 6 Dec 2005 21:37:31 CST
Raw View
Hyman Rosen wrote:
> kuyper@wizard.net wrote:
> >>I think it would be helpful for your cause to list those widely-used
> >>idioms. Personally, I can't think of a widely-used idiom that depends
> >>on a particular evaluation order, and to the best of my knowledge any
> >>such idiom should merely be considered broken. However, if you know of
> >>any such idioms, I'd be interested to hear about them.
> >
> >
> > I wouldn't have used the phrase "probably", if I knew for certain that
> > any such idioms exist.
>
> I don't know for sure, but one guess would be code like this,
>      cout << "xxx" << f() << "yyy" << g() << "zzz" << h() << "\n";
> where the authors don't realize that the function calls can
> happen in any order. The compiler which they use happens to give
> them an order which works, but there may be some particular order
> of evaluation which would be bad, and it silently lurks in the
> code until one day it shows up because of some change to compiler
> version.

Or perhaps the author simply modifies the expression to push it beyond
some kind of complexity boundary, triggering the compiler to reorder
the expression(s). It doesn't even take a new version of the compiler.
It's hard to regard such code as anything but broken.

I was hoping for an example along the lines of "Compiler vendor ABC
states in their documentation that evaluation order of function
arguments is right to left, as if there are sequence points at the
commas, and here's an example of an idiom that exploits the evaluation
order that's common among users of ABC's compiler."

> I would say that the apparent semantics of this code (first print
> this, then that, then that) are so strong that it takes unusual
> mental effort to realize that the calls are not (necessarily) made
> left-to-right.

On the contrary, this code doesn't make me think the calls are made
left to right, only that the output will be serialized that way. But
then again, I understood (and got used to) the way C++ evaluates
expressions a long time ago.

Bob

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: "ThosRTanner" <ttanner2@bloomberg.net>
Date: Tue, 6 Dec 2005 21:34:22 CST
Raw View
Hyman Rosen wrote:
> kuyper@wizard.net wrote:

> > I wouldn't have used the phrase "probably", if I knew for certain that
> > any such idioms exist.
>
> I don't know for sure, but one guess would be code like this,
>      cout << "xxx" << f() << "yyy" << g() << "zzz" << h() << "\n";
> where the authors don't realize that the function calls can
> happen in any order. The compiler which they use happens to give
> them an order which works, but there may be some particular order
> of evaluation which would be bad, and it silently lurks in the
> code until one day it shows up because of some change to compiler
> version.
>
> I would say that the apparent semantics of this code (first print
> this, then that, then that) are so strong that it takes unusual
> mental effort to realize that the calls are not (necessarily) made
> left-to-right.
Isn't that one situation where it will work as expected, because it
"translates to":

operator<<(operator<<(operator<<(operator<<(cout, "xxx"), f()), "yyy"),
g()) etc. etc.

and there's a sequence point after every function call?

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: kuyper@wizard.net
Date: Tue, 6 Dec 2005 23:28:11 CST
Raw View
"Bo Persson" wrote:
> "Hyman Rosen" <hyrosen@mail.com> skrev i meddelandet
> news:E1EjGBD-0005T8-00@chx400.switch.ch...
.
> > I don't know for sure, but one guess would be code like this,
> >     cout << "xxx" << f() << "yyy" << g() << "zzz" << h() << "\n";
> > where the authors don't realize that the function calls can
> > happen in any order.
>
> If the functions really affect each other, this is *really* bad code.
> Why don't we require a diagnostic
>
> Warning: Terrible code - please rewrite!
>
> instad of having the compilers make it work anyway?

Of course it's bad code. The point was that people who write such code
would have a motive for pressuring implementors to continue making it
work as they expect it to. That's a real type of pressure that
implementors face.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: kuyper@wizard.net
Date: Tue, 6 Dec 2005 23:28:22 CST
Raw View
ThosRTanner wrote:
> Hyman Rosen wrote:
> > kuyper@wizard.net wrote:
>
> > > I wouldn't have used the phrase "probably", if I knew for certain that
> > > any such idioms exist.
> >
> > I don't know for sure, but one guess would be code like this,
> >      cout << "xxx" << f() << "yyy" << g() << "zzz" << h() << "\n";
> > where the authors don't realize that the function calls can
> > happen in any order. The compiler which they use happens to give
> > them an order which works, but there may be some particular order
> > of evaluation which would be bad, and it silently lurks in the
> > code until one day it shows up because of some change to compiler
> > version.
> >
> > I would say that the apparent semantics of this code (first print
> > this, then that, then that) are so strong that it takes unusual
> > mental effort to realize that the calls are not (necessarily) made
> > left-to-right.
> Isn't that one situation where it will work as expected, because it
> "translates to":
>
> operator<<(operator<<(operator<<(operator<<(cout, "xxx"), f()), "yyy"),
> g()) etc. etc.
>
> and there's a sequence point after every function call?

Yes, the function calls introduce sequence points into that expression.
However, sequence points only enforce a connection between the
evaluation of an expression and its side effects. They don't impose a
required ordering on the expressions.
Let me use a simplified version of the example given:

cout << f() << g();

This is equivalent to:

cout.operator<<(f()).operator<<(g());

This expression involves four function calls:
A: f()
B: cout.operator<<()
C: g()
D: cout.operator<<(f()).operator<<()

Let t(X) be the time at which function X is evaluated. Since function
arguments must be evaluated before the function itself can be
evaluated, we have the following constraints:

t(A) < t(B)
t(C) < t(D)

Since the return value from a function can't be used until after the
function has been evaluated, we have one additional constraint:

t(B) < t(D)

There are exactly three orderings consistent with all of those
constraints:

t(A) < t(B) < t(C) < t(D)

t(A) < t(C) < t(B) < t(D)

t(C) < t(A) < t(B) < t(D)

and no other requirement in the standard is violated by any of
those orderings. Therefore, if it matters whether f() is
called before or after the call to g(), you've got problems.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: hyrosen@mail.com (Hyman Rosen)
Date: Wed, 7 Dec 2005 15:44:43 GMT
Raw View
ThosRTanner wrote:
> Isn't that one situation where it will work as expected, because it
> "translates to":
> operator<<(operator<<(operator<<(operator<<(cout, "xxx"), f()), "yyy"),
> g()) etc. etc.
> and there's a sequence point after every function call?

Your translation is correct, and there is a sequence point after
every call, but it still doesn't work as "expected". The compiler
is free to call f(), g(), and h() in any order before calling any
of the operator<<() methods if it wants to.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: hyrosen@mail.com (Hyman Rosen)
Date: Wed, 7 Dec 2005 15:45:01 GMT
Raw View
kuyper@wizard.net wrote:
> Of course it's bad code.

It's only bad code because it's written in a language
with bad semantics for it.

> The point was that people who write such code would have a
> motive for pressuring implementors to continue making it work
> as they expect it to. That's a real type of pressure that
> implementors face.

No, the point was that people who write such code don't realize
when an order dependency has crept in because their compiler
happens to pick an order that works for them. If that order were
defined, then their code would not be subject to accidental
breakage when the compiler decided to pick a different order.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: hyrosen@mail.com (Hyman Rosen)
Date: Wed, 7 Dec 2005 15:52:09 GMT
Raw View
Bob Bell wrote:
> It's hard to regard such code as anything but broken.

That's only because of the widespread reluctance to see
the behavior of the language as the problem rather than
the behavior of the code. Why do you perceive
     call( f(), g(), h() );
as broken, but not
     f(); g(); h();
It's only because long experience has ingrained into you
that in the first expression the calls are necessarily
unordered, while in the second they are ordered. But that's
just an artifact of the language, and it can be changed,
just as the Java creators decided to do.

> On the contrary, this code doesn't make me think the calls are made
> left to right, only that the output will be serialized that way. But
> then again, I understood (and got used to) the way C++ evaluates
> expressions a long time ago.

Exactly. Now go read further responses, especially the one from
ThosRTanner, and notice that he apparently does not have your
level of understanding. Which is my point. In the mathematical
sense, ignorance about sequence points and order of evaluation
is "almost everywhere". Rather than feeling proud about our level
of understanding, we should be sorry that such a level is necessary,
and endeavor to do away with it. Then we could explain the now simple
behavior to everyone, and they would all understand.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: hyrosen@mail.com (Hyman Rosen)
Date: Wed, 7 Dec 2005 15:52:20 GMT
Raw View
Bo Persson wrote:
> If the functions really affect each other, this is *really* bad code.

No, the code is fine. It's the language that is bad.

> Why don't we require a diagnostic
> Warning: Terrible code - please rewrite!

Because the functions may be separately compiled,
so the compiler cannot know whether they interact.

> instead of having the compilers make it work anyway?

Because programming languages are expressions of what
we wish the computer to do, and unspecified constructs
in a language are useless toward that end.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: hyrosen@mail.com (Hyman Rosen)
Date: Wed, 7 Dec 2005 15:52:32 GMT
Raw View
kuyper@wizard.net wrote:
> Yes, the function calls introduce sequence points into that expression.
> However
[... 40 more lines of explanation ...]
> you've got problems.

Good exposition. I hope this contributes to everyone's
understanding of why the current semantics are a mess and
order of evaluation should be properly defined! Wouldn't
it be nice to say that the calls are evaluated from left
to right, arguments before calls? So much simpler!

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: pongba@gmail.com
Date: Mon, 21 Nov 2005 23:20:16 CST
Raw View
C++03 5/4:
[Example:
i = v[i++]; // the behavior is unspecified
.
]

I want to ask: if v is a std::vector, which overloads operator[], is
this still unspecified?

What I understand is that there's a sequence point at the entry and
exit of a function call, and an operator function certainly is a
function. So if we change that 'v' to an object of std::vector, v[i++]
becomes a function call, whose side effect takes place before
the assignment operation; therefore 'i' gets a determinable value,
namely v[*old value of i*], which of course is not unspecified.

That said, I wonder if this analysis is right; did I miss or
misunderstand anything?
If it is right, is it evidence that some inconsistency, in some
extremely non-obvious way, exists between built-in operators and
operator functions (informally known as 'overloaded operators')?

Another question is:

What is the side effect of 'i++', actually? Two options: the first
is "fetch i from memory, add 1 to it, write the new value back"; the
second is "write the new value, stored previously somewhere, into the
storage of 'i'". Is the answer 'both' or 'either' or something else?
Also, the words below (also excerpted from [C++03; 5/4]) are really
puzzling to me; can anyone explain them, please? Do they have anything
to do with the two questions I asked?

[C++03;5/4]"Between the previous and next sequence point a scalar
object shall have its stored value modified at most once by the
evaluation of an expression. Furthermore, the prior value shall be
accessed only to determine the value to be stored. The requirements of
this paragraph shall be met for each allowable ordering of the
subexpressions of a full expression; otherwise the behavior is
undefined."

Any help is appreciated;-)






Author: kuyper@wizard.net
Date: Tue, 22 Nov 2005 09:02:42 CST
Raw View
pongba@gmail.com wrote:
> C++03 5/4:
> [Example:
> i = v[i++]; // the behavior is unspecified
> .
> ]
>
> I want to ask: if v is a std::vector which overloads operator[], is
> this still unspecified?
>
> What I understand is that there's a sequence point at the entry and
> exit of a function call, and an operator function certainly is a
> function. So if we change that 'v' to an object of std::vector, v[i++]
> becomes a function call, whose side effect takes place before
> the assignment operation; therefore 'i' gets a determinable value,
> which is v[*old value of i*], which of course is not unspecified.
>
> That said, I wonder if this analysis is right, did I miss or
> misunderstand anything?

Yes, that's correct. When operators are overloaded, it's actually just
shorthand for function calls, with all of the corresponding sequence
points.

> If that is true, is this evidence that some inconsistency, in some
> extremely non-obvious way, exists between the built-in operator and the
> operator function (informally known as the 'overloaded operator')?

Yes, this is one of the several inconsistencies between them. Backwards
compatibility with C was an important objective during the development
of C++, which prevented complete consistency between built-in and
user-defined operators. Note: the phrase "overloaded operator" refers
to the operator that is being overloaded. The standard uses the term
"overloaded operator function" for the function that overloads the
overloaded operator, so there's nothing particularly informal about
that terminology.

> Another question is:
>
> What is the side effect of 'i++', actually? There are two options: the
> first is "fetch i from memory, add 1 to it, write the new value back";
> the second is "write the new value, stored previously somewhere, into
> the storage of 'i'". Is the answer 'both' or 'either' or something else?

Accessing the value of 'i' is not a side effect unless 'i' was declared
volatile. Otherwise, the only side effect of i++ is the writing of the
new value. The read of the previous value (if not volatile), and the
calculation of the new value, are the "main" effect. See 1.9p7 for the
exact definition of "side effect".

> Plus, the words below (also excerpted from [C++03;5/4]) are really
> puzzling to me; can anyone explain them, please? Do they have
> anything to do with the two questions I asked?
>
> [C++03;5/4]"Between the previous and next sequence point a scalar
> object shall have its stored value modified at most once by the
> evaluation of an expression.

This part is pretty straightforward. Several different operators can
modify the value of an object: all of the assignment operators, and ++
and --. If two such operators modify the value of the same object
without an intervening sequence point, it's a violation of 5p4.

As a practical matter, this is because the absence of a sequence point
allows the implementor to rearrange the generated machine code, so that
there's no telling which of the two modifications will occur first, and
it may even be that the two modifications interfere with each other to
produce a result that's different from what would have happened if
either modification had been the only one.

> Furthermore, the prior value shall be
> accessed only to determine the value to be stored.The requirements of
> this paragraph shall be met for each allowable ordering of the
> subexpressions of a full expression; otherwise the behavior is
> undefined."

This one is much more difficult. If you read the current value of an
object, and then write to that same object, without an intervening
sequence point, then you can't read the value for any purpose other
than determining the value that is to be written.

Again, as a practical matter, that's because the absence of a sequence
point allows the read and the write to occur in any order, and even
allows them to interfere with each other. The construct is allowed only
if it's inherently impossible to know what value to write until you've
finished reading the value; that guarantees that the read and the write
have to be in the right order, and can't interfere with each other,
even in the absence of a sequence point.

However, there's a lot of confusion and argument about what constitutes
a use of the value for that purpose. The safest thing to do is to avoid
any construct that might be construed as using the previous value for
any other purpose.






Author: hyrosen@mail.com (Hyman Rosen)
Date: Wed, 23 Nov 2005 01:54:04 GMT
Raw View
kuyper@wizard.net wrote:
> Yes, this is one of the several inconsistencies between them. Backwards
> compatibility with C was an important objective during the development
> of C++, which prevented complete consistency between built-in and
> user-defined operators.

Specifying order of evaluation completely in C++ would
be completely backwards-compatible with C, though, so
that's no excuse here. 'i = v[i++];' can and should be
the same regardless of whether v is vector or built-in,
and that would not be inconsistent with C.

> However, there's a lot of confusion and argument about what constitutes
> a use of the value for that purpose. The safest thing to do is to avoid
> any construct that might be construed as using the previous value for
> any other purpose.

The right thing to do is to get rid of this stupidity from the
language once and for all and define the order of evaluation
completely, including when side effects happen, as strictly
left-to-right and operands before operation.






Author: ark@acm.org ("Andrew Koenig")
Date: Wed, 23 Nov 2005 05:52:04 GMT
Raw View
"Hyman Rosen" <hyrosen@mail.com> wrote in message
news:E1EeZw5-0004wI-00@chx400.switch.ch...
> kuyper@wizard.net wrote:

> Specifying order of evaluation completely in C++ would
> be completely backwards-compatible with C, though, so
> that's no excuse here.

The trouble is that if order of evaluation were completely specified in C++,
there might be C++ compilers that would be required to give different
results from existing C implementations for the same program (assuming, of
course, that the program was in the intersection of C and C++).






Author: "Greg Herlihy" <greghe@pacbell.net>
Date: Tue, 22 Nov 2005 23:53:09 CST
Raw View
pongba@gmail.com wrote:
> C++03 5/4:
> [Example:
> i = v[i++]; // the behavior is unspecified
> .
> ]
>
> I want to ask: if v is a std::vector which overloads operator[], is
> this still unspecified?

> What I understand is that there's a sequence point at the entry and
> exit of a function call, and an operator function certainly is a
> function. So if we change that 'v' to an object of std::vector, v[i++]
> becomes a function call, whose side effect takes place before
> the assignment operation; therefore 'i' gets a determinable value,
> which is v[*old value of i*], which of course is not unspecified.
>
> That said, I wonder if this analysis is right, did I miss or
> misunderstand anything?

Your analysis is correct. With an overloaded operator[], the evaluation
of the expression i=v[i++] is no longer unspecified, but is, in fact,
defined.

> If that is true, is this evidence that some inconsistency, in some
> extremely non-obvious way, exists between the built-in operator and the
> operator function (informally known as the 'overloaded operator')?

Absolutely. There is no requirement that an overloaded operator behave
at all like the built-in operator. For example, a program could
overload the assignment operator (=) to test for equality, and overload
the equality operator (==) to perform an assignment, for any
user-declared type. It would be perfectly legal for a program to do so,
though probably not a particularly good idea. At least not if one
values consistency. But rather than have the Standard restrict
overloaded operators in some arbitrary manner, it simply leaves their
implementation up to the programmer's own good judgement.

> Another question is:
>
> What is the side effect of 'i++', actually? There are two options: the
> first is "fetch i from memory, add 1 to it, write the new value back";
> the second is "write the new value, stored previously somewhere, into
> the storage of 'i'". Is the answer 'both' or 'either' or something
> else? Plus, the words below (also excerpted from [C++03;5/4]) are
> really puzzling to me; can anyone explain them, please? Do they have
> anything to do with the two questions I asked?

It depends on the context in which i++ appears. As a function
argument, i++ must be evaluated before the function call is made.
Since the function must be passed the value of i before it is
incremented, the compiler must first copy i, then increment i, and then
pass the saved copy to the function being called.

> [C++03;5/4]"Between the previous and next sequence point a scalar
> object shall have its stored value modified at most once by the
> evaluation of an expression. Furthermore, the prior value shall be
> accessed only to determine the value to be stored.The requirements of
> this paragraph shall be met for each allowable ordering of the
> subexpressions of a full expression; otherwise the behavior is
> undefined."

I believe the Standard has some examples to illustrate when the result
of evaluating an expression is unspecified and when it is undefined.
Essentially, the result of evaluating i = v[i++] is unspecified because
i's value is accessed only once. The evaluation of i++ = v[i++] would
be undefined, since i's value is accessed more than once between
sequence points.

Greg






Author: Hyman Rosen <hyrosen@mail.com>
Date: Wed, 23 Nov 2005 08:45:36 CST
Raw View
Greg Herlihy wrote:
> Essentially, the result of evaluating i = v[i++] is unspecified because
> i's value is accessed only once.

No, it's undefined in the built-in case, because i is modified
twice in the same expression (by the increment and by the
assignment) without an intervening sequence point.

Once again, I invite everyone in the newsgroup to notice how
no one understands the rules as they exist. It's asinine not
to have a defined order of evaluation, including side effects.






Author: eldiener_no_spam_here@earthlink.net (Edward Diener No Spam)
Date: Thu, 24 Nov 2005 04:40:36 GMT
Raw View
Hyman Rosen wrote:
> Greg Herlihy wrote:
>
>> Essentially, the result of evaluating i = v[i++] is unspecified because
>> i's value is accessed only once.
>
>
> No, it's undefined in the built-in case, because i is modified
> twice in the same expression (by the increment and by the
> assignment) without an intervening sequence point.
>
> Once again, I invite everyone in the newsgroup to notice how
> no one understands the rules as they exist.

Obviously not true except as hyperbole.

> It's asinine not
> to have a defined order of evaluation, including side effects.

I totally agree with you here. With all due respect to Mr. Stroustrup's
printed opinion about the importance of C++ maintaining compatibility
with the C language, I feel that at some point in the future, and I
hope the near future, C++ should stop trying to maintain compatibility
with the C language and do the right thing as far as its own language
specification is concerned. This is just one of many areas, mentioned
in numerous other posts on these newsgroups, where compatibility with
the C language is holding C++ back from advancing as a language of its
own.






Author: pongba@gmail.com
Date: Wed, 23 Nov 2005 22:45:31 CST
Raw View
Hyman Rosen wrote:
> No, it's undefined in the built-in case, because i is modified
> twice in the same expression (by the increment and by the
> assignment) without an intervening sequence point.

Yeah, that's exactly where my doubt lies. The standard says that "i =
v[i++]" has unspecified behavior, but according to what you have said,
it should just be undefined behavior.
Is there anything wrong with the corresponding standard wording?

BTW, if what you said is true, then similarly the expressions "i =
i++" and "i = ++i" would have undefined behavior, too. I could hardly
believe that such a simple little expression has undefined behavior,
but I think you are right anyway.

Plus, to me it seems very confusing to separate the concept of
"evaluation" from that of "side effect"; in particular, the evaluation
of an expression doesn't necessarily mean that its side effect takes
place at the same time. Sometimes this behavior is quite puzzling; it's
just counterintuitive.






Author: kuyper@wizard.net
Date: Wed, 23 Nov 2005 22:40:23 CST
Raw View
"Andrew Koenig" wrote:
> "Hyman Rosen" <hyrosen@mail.com> wrote in message
> news:E1EeZw5-0004wI-00@chx400.switch.ch...
.
> > Specifying order of evaluation completely in C++ would
> > be completely backwards-compatible with C, though, so
> > that's no excuse here.
>
> The trouble is that if order of evaluation were completely specified in C++,
> there might be C++ compilers that would be required to give different
> results from existing C implementations for the same program (assuming, of
> course, that the program was in the intersection of C and C++).

As long as the specified order of evaluation under the new rules was the
same as one of the permitted orders of evaluation under the current
rules, code which depends upon a different order of evaluation is (even
under the current rules) non-portable. There's only a limited degree to
which I care about what goes wrong with such code.






Author: kuyper@wizard.net
Date: Wed, 23 Nov 2005 22:40:26 CST
Raw View
Hyman Rosen wrote:
> kuyper@wizard.net wrote:
> > Yes, this is one of the several inconsistencies between them. Backwards
> > compatibility with C was an important objective during the development
> > of C++, which prevented complete consistency between built-in and
> > user-defined operators.
>
> Specifying order of evaluation completely in C++ would
> be completely backwards-compatible with C, though, so
> that's no excuse here.

A less extreme possibility would have been to give built-in operators
the same exact sequence points they would have had if there were
actually user-defined operator overloads. That would produce almost the
same effect.

I wasn't involved, so I don't know what the actual reasons for this
decision were, but I suspect that it was considered desirable that
built-in operators would retain all of the opportunities for
optimization allowed by the C sequence point rules. However, you
couldn't easily fit operator overloads into such a scheme, without
giving them the same sequence points as the corresponding function
calls.






Author: "Razzer" <coolmandan@gmail.com>
Date: Thu, 24 Nov 2005 00:30:47 CST
Raw View
"Andrew Koenig" wrote:
> "Hyman Rosen" <hyrosen@mail.com> wrote in message
> news:E1EeZw5-0004wI-00@chx400.switch.ch...
> > kuyper@wizard.net wrote:
>
> > Specifying order of evaluation completely in C++ would
> > be completely backwards-compatible with C, though, so
> > that's no excuse here.
>
> The trouble is that if order of evaluation were completely specified in C++,
> there might be C++ compilers that would be required to give different
> results from existing C implementations for the same program (assuming, of
> course, that the program was in the intersection of C and C++).

Why's that? AFAICS, defining the order of evaluation in cases where it
is undefined in C need not worry about giving different results, since
there is no set result in C. The only place where, AFAIK, you could not
fix a definite order of evaluation without potentially getting
different results between C and C++ is the evaluation of function
arguments. However, I think the order of evaluation of arguments causes
such a minor set of problems in C++ that one could leave it as it is
while defining other areas of evaluation order, and still get a
substantial benefit.






Author: ron@spamcop.net (Ron Natalie)
Date: Thu, 24 Nov 2005 19:03:49 GMT
Raw View
pongba@gmail.com wrote:
> Hyman Rosen wrote:

> Yeah, that's exactly where my doubt lies. The standard says that "i =
> v[i++]" has unspecified behavior, but according to what you have said,
> it should just be undefined behavior.
> Is there anything wrong with the corresponding standard wording?
>
The order of evaluation is unspecified, but changing the value twice
between sequence points is undefined behavior. With built-in
operators there are no sequence points other than the end of the
full expression.


Fourth paragraph of Chapter 5 (Expressions) of the standard:

Except where noted, the order of evaluation of operands of individual
operators and subexpressions of individual expressions, and the order
in which side effects take place, is unspecified.53) Between the
previous and next sequence point a scalar object shall have its stored
value modified at most once by the evaluation of an expression.
Furthermore, the prior value shall be accessed only to determine the
value to be stored. The requirements of this paragraph shall be met for
each allowable ordering of the subexpressions of a full expression;
otherwise the behavior is undefined. [Example:
i = v[i++]; // the behavior is unspecified
i = 7, i++, i++; // i becomes 9
i = ++i + 1; // the behavior is unspecified
i = i + 1; // the value of i is incremented
- end example]






Author: hyrosen@mail.com (Hyman Rosen)
Date: Wed, 7 Dec 2005 15:53:03 GMT
Raw View
Andrew Koenig wrote:
>     1) The newly ratified C standard;

The newly ratified C standard itself had novelties and changes
to long-existing C behavior. For example, it changed the rules
governing the widening of smaller unsigned types to integral
types, and added stringification and token pasting to the
preprocessor. The even newer C99 standard adds hefty new syntax
to the language for initializations.

> Accordingly, I find it completely unsurprising that the committee was much
> more deferential to past usage in the case of order of evaluation than it
> was in the case of templates, and equally unsurprising that vendors went
> along with changes in template behavior where they would not have tolerated
> changes in behavior of built-in operators on operands of built-in types.

Since I wasn't there, I have no idea whether anyone actually
proposed changing evaluation order during the original process.
I don't even care. I am advocating that it should be changed
now. When I say this, others object that the committee will
not consider changes that impact vendors so much, so I point
out that the committee has impacted vendors before, and that
this would be a change that leaves formerly specified behavior
alone.

I apologize in advance, but I get a very strong sense that the
objections stem more from a reluctance to change the way C and
C++ have always behaved than from any other reason, even though
the old behavior is bad.






Author: hyrosen@mail.com (Hyman Rosen)
Date: Wed, 7 Dec 2005 15:53:27 GMT
Raw View
Andrew Koenig wrote:
> Under ordinary circumstances, the order in which the calls are made doesn't
> matter.  It matters only when the functions have side effects that interfere
> with each other in some way.

Well, that's what I said. But often compilers actually have a
fixed order of evaluation, it's just that they don't tell anyone
what it is, and it's subject to change between versions or vendors.
So order dependencies might creep in and accidentally work, until
something changes.

> You don't need order-of-evaluation guarantees to ensure that the various
> components will be printed in the right sequence, even if they are evaluated
> out of sequence.

True but irrelevant. The OP asked for code where the order
of evaluation could be important. This is such code. It's
hard to come up with much more that isn't undefined behavior.






Author: ark@acm.org ("Andrew Koenig")
Date: Thu, 8 Dec 2005 07:00:42 GMT
Raw View
"Hyman Rosen" <hyrosen@mail.com> wrote in message
news:200512071313.jB7DDUuC077731@horus.isnic.is...

> Well, that's what I said. But often compilers actually have a
> fixed order of evaluation, it's just that they don't tell anyone
> what it is, and it's subject to change between versions or vendors.
> So order dependencies might creep in and accidentally work, until
> something changes.

I haven't done a survey of compilers, but I have certainly encountered
compilers that do not have a fixed order of evaluation in the sense in which
I think you mean it.  It is certainly true that most compilers will evaluate
a given expression in the same order every time they encounter it (assuming
that the types are the same), but that doesn't imply a fixed order of
evaluation.

Here's why.  One common code-generation technique is to try to minimize the
total number of registers or temporary locations needed to evaluate an
expression.  One well-known algorithm for doing that is that whenever there
is an operator with operands that can be evaluated in either order, the
compiler should try to evaluate first the operand that requires the most
registers.

A compiler that uses such an algorithm might well compile f()+g(x,y) by
evaluating g(x,y) first, then f(), then computing the sum, but nevertheless
might compile h()+f()+g(x,y) by evaluating h(), then f(), and finally
g(x,y).  This would happen if the subexpression h()+f() required more
registers to evaluate than the subexpression g(x,y).

I would expect this kind of behavior to be common, especially among
optimizing compilers.






Author: hsutter@gotw.ca (Herb Sutter)
Date: Thu, 8 Dec 2005 07:02:14 GMT
Raw View
hyrosen@mail.com (Hyman Rosen) wrote:
>Bob Bell wrote:
>> It's hard to regard such code as anything but broken.
>
>That's only because of the widespread reluctance to see
>the behavior of the language as the problem rather than
>the behavior of the code. Why do you perceive as broken
>     call( f(), g(), h() );
>but not
>     f(); g(); h();
>It's only because long experience has ingrained into you
>that in the first expression the calls are necessarily
>unordered, while in the second they are ordered. But that's
>just an artifact of the language, and it can be changed,
>just as the Java creators decided to do.

I really think Hyman is making good points here, though he seems to be
somewhat of a lone voice crying out in the wilderness right now in this
thread.

Here's an example I posted a few days ago to a parallel discussion that's
going on within the C++ standards committee reflectors:

---
Here's a specific version of this pitfall (adapted from a similar example
in http://www.gotw.ca/gotw/012.htm):

  // Case 1
  cout << "x = " << itoa(42,buf,10) << ", y = " << itoa(43,buf,10);

Some compilers I just tried print 42 and 42, some 42 and 43, and others
that I don't have could print 43 and 42.

In fact, one of the compilers I tried gave me different results between
the above code and the following minor rearrangement (and it is or should
be easy to go from the above to the below during maintenance!):

  // Case 2
  cout << string("x = ") + itoa(42,buf,10) + ", y = " + itoa(43,buf,10);

YMMV, but to me it seems bad that there should be a pitfall like this with
straightforward maintenance of code, moving from Case 1 to Case 2 on the
same compiler and getting different results. But that's our status quo and
it still happened in my re-test this morning on a popular compiler.

I had actually forgotten about this GotW #12 example (it's been 8 years
since I first wrote about this particular one), and it's just another case
where the same old issue comes up.
---

The few responses on the committee reflector seemed to be inclined to view
this as stupid code, and 'don't do that.'

The reason I don't buy that answer is that if the language makes an idiom
natural, it should either make it work predictably or else provide guard
rails to help people avoid the problem. We don't do either, and that's
bad. Unfortunately, in an expert-friendly group of people, most people are
so used to avoiding the problem that they don't realize how serious this
category of problems is and why it's one of the major reasons people leave
C++ for other languages. There are few things more frustrating to
programmers than having naturally written code that compiles silently but
has unpredictable behavior.

Perhaps this category of pitfall is in the top-three list of reasons for
hair loss among C++ developers. :-)


>> On the contrary, this code doesn't make me think the calls are made
>> left to right, only that the output will be serialized that way. But
>> then again, I understood (and got used to) the way C++ evaluates
>> expressions a long time ago.
>
>Exactly. Now go read further responses, especially the one from
>ThosRTanner, and notice that he apparently does not have your
>level of understanding. Which is my point. In the mathematical
>sense, ignorance about sequence points and order of evaluation
>is "almost everywhere". Rather than feeling proud about our level
>of understanding, we should be sorry that such a level is necessary,

Hear, hear.

To steal a quote from a respected colleague of mine, which I included in
the committee reflector discussion (he was speaking about relaxed memory
models, but as I've pointed out earlier in this thread there are
similarities between that and relaxed evaluation ordering):

  "Meta point: A programming model is a model for programming.
  Semantics should enable efficient implementations, not expose
  them."

>and endeavor to do away with it. Then we could explain the now simple
>behavior to everyone, and they would all understand.

Of course, for balance I should reiterate that before proposing any such
change we also need to quantify the costs of enforcing an execution
ordering (presumably left-to-right) w.r.t.:

  a) how much optimization loss there actually is on popular
implementations and platforms (note this is likely to vary greatly by
application)

  b) how much existing code may rely on a left-to-right ordering (if
there's lots, there may be pressure to preserve its meaning even if it's
currently relying on unspecified and nonportable behavior)

So far, I've been met with more or less deafening silence every time I ask
people for quantified data about how great these costs really are. The
most concrete information I know of, but which still needs measuring, is
that a) is likely to cost several _times_ the throughput on some standard
Spec microbenchmarks where we can get an order-of-magnitude perf gain by
doing things like choosing to stride the other way across arrays (i.e.,
the program's loop strides rows then columns, and we know we'll get better
cache behavior by reordering all or just chunks of the loop by striding
columns then rows). But we don't know the cost on typical app code (e.g.,
some smart people I know expect <10%, maybe <5%, but we need to measure).

I'm willing to do my part: I've started asking around internally here for
people to do measurements of a) and b) on a certain popular compiler and
the resulting performance difference for certain large C/C++ code bases.
Maybe in a few months I'll be ready to share some results. In the
meantime, I would strongly encourage other C++ vendors to do the same.

If we (the industry) can measure the impact and the results show that the
cost of a) and b) is not prohibitive, then the standards committee could
usefully have a discussion about whether to require left-to-right. But we
do need to measure first so that we can accurately understand the costs
vs. benefits and have a discussion based on data.

Herb

---
Herb Sutter (www.gotw.ca)      (www.pluralsight.com/blogs/hsutter)

Convener, ISO WG21 (C++ standards committee)     (www.gotw.ca/iso)
Contributing editor, C/C++ Users Journal         (www.gotw.ca/cuj)
Architect, Developer Division, Microsoft   (www.gotw.ca/microsoft)

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: "4zumanga@gmail.com" <4zumanga@gmail.com>
Date: Thu, 8 Dec 2005 01:03:59 CST
Raw View
While this thread is about the order of function evaluation, I've had a
number of people surprised that:

cout << f() << f() << f() << endl;

Doesn't define the order in which the fs are executed.
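A small sketch of the surprise (the helper `three_calls` is made up for illustration): each call to `f` below has a side effect, and the standard does not say in which order the three calls happen, so any permutation of the digits may be printed.

```cpp
#include <algorithm>
#include <cassert>
#include <sstream>
#include <string>

static int counter = 0;
int f() { return ++counter; }      // side effect: bumps a shared counter

// Returns whatever "os << f() << f() << f()" prints. The relative order
// of the three calls to f() is unspecified, so any permutation of the
// digits 1, 2, 3 is a conforming result.
std::string three_calls() {
    std::ostringstream os;
    os << f() << f() << f();
    return os.str();
}
```

Only the multiset of digits is guaranteed, not their order: a conforming implementation may print "123", "321", or any other permutation.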

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: ron@spamcop.net (Ron Natalie)
Date: Thu, 8 Dec 2005 15:35:55 GMT
Raw View
kuyper@wizard.net wrote:
>
> cout << f() << g();
>

> This expression involves four function calls:
> A: f()
> B: cout.operator<<()
> C: g()
> D: cout.operator<<(f()).operator<<()
>
> Let t(X) be the time at which function X is evaluated. Since function

>
> There are exactly two orderings consistent with all of those
> constraints:
>
> t(A) < t(B) < t(C) < t(D)
>
> t(C) < t(A) < t(B) < t(D)
>
> and there's no other requirement in the standard that is violated by
> either of those orderings. Therefore, if it matters whether f() is
> called before or after the call to g(), you've got problems.
>

Nope.  There are more.   There is nothing that requires A and
C to have any relative ordering.

A, C, B, D

is another allowable ordering.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: ron@spamcop.net (Ron Natalie)
Date: Thu, 8 Dec 2005 15:36:03 GMT
Raw View
Andrew Koenig wrote:
> "Hyman Rosen" <hyrosen@mail.com> wrote in message
> news:200512071313.jB7DDUuC077731@horus.isnic.is...
>
>> Well, that's what I said. But often compilers actually have a
>> fixed order of evaluation, it's just that they don't tell anyone
>> what it is, and it's subject to change between versions or vendors.
>> So order dependencies might creep in and accidentally work, until
>> something changes.
>
> I haven't done a survey of compilers, but I have certainly encountered
> compilers that do not have a fixed order of evaluation in the sense in which
> I think you mean it.  It is certainly true that most compilers will evaluate
> a given expression in the same order every time they encounter it (assuming
> that the types are the same), but that doesn't imply a fixed order of
> evaluation

I don't know what you mean by fixed.   While I've never seen a compiler
that evaluates the functions differently over different invocations of
the compiler, I've certainly seen ones that do things differently for
different optimization settings.

Further, forcing left-to-right or right-to-left evaluation order
doesn't work unless you further require sequence points.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: ron@spamcop.net (Ron Natalie)
Date: Thu, 8 Dec 2005 15:36:45 GMT
Raw View
Bo Persson wrote:
> ... in any order.
>
> If the functions really affect each other, this is *really* bad code.
> Why don't we require a diagnostic
>
> Warning: Terrible code - please rewrite!

I suppose the above is sarcasm, but it's generally not possible to
tell that the functions have untoward effect.

All it takes is a write to the same ultimate output channel (which
may be unrelated C++ streams) to cause variability in observed behavior.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: Hyman Rosen <hyrosen@mail.com>
Date: Thu, 8 Dec 2005 10:08:40 CST
Raw View
Ron Natalie wrote:
> Further, forcing left-to-right or right-to-left evaluation order
> doesn't work unless you further require sequence points.

The term "sequence point" will be banished to heck.
Expressions will be evaluated left-to-right, operands
before operation. Evaluating an expression with side
effects will cause those side effects to happen. Very
simple, very clear.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: hsutter@gotw.ca (Herb Sutter)
Date: Fri, 9 Dec 2005 04:52:37 GMT
Raw View
After sending this, I thought I should clarify something that on rereading
wasn't clear the way I first wrote it:

hsutter@gotw.ca (Herb Sutter) wrote:
>  a) how much optimization loss there actually is on popular
>implementations and platforms (note this is likely to vary greatly by
>application)
[...]
>The
>most concrete information I know of, but which still needs measuring, is
>that a) is likely to cost several _times_ the throughput on some standard
>Spec microbenchmarks where we can get an order-of-magnitude perf gain by
>doing things like choosing to stride the other way across arrays (i.e.,
>the program's loop strides rows then columns, and we know we'll get better
>cache behavior by reordering all or just chunks of the loop by striding
>columns then rows). But we don't know the cost on typical app code (e.g.,
>some smart people I know expect <10%, maybe <5%, but we need to measure).

Specifically, the stride issue was about the performance gain from
memory-model latitude for reordering reads/writes, not latitude for
reordering expression evaluation.

Herb

---
Herb Sutter (www.gotw.ca)      (www.pluralsight.com/blogs/hsutter)

Convener, ISO WG21 (C++ standards committee)     (www.gotw.ca/iso)
Contributing editor, C/C++ Users Journal         (www.gotw.ca/cuj)
Architect, Developer Division, Microsoft   (www.gotw.ca/microsoft)

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: bop@gmb.dk ("Bo Persson")
Date: Fri, 9 Dec 2005 04:54:02 GMT
Raw View
"Ron Natalie" <ron@spamcop.net> skrev i meddelandet
news:43982d20$0$28446$9a6e19ea@news.newshosting.com...
> Bo Persson wrote:
> ... in any order.
>>
>> If the functions really affect each other, this is *really* bad
>> code. Why don't we require a diagnostic
>>
>> Warning: Terrible code - please rewrite!
>
> I suppose the above is sarcasm,

It sure is.

> but it's generally not possible to
> tell that the functions have untoward effect.
>
> All it takes is a write to the same ultimate output channel (which
> may be unrelated C++ streams) to cause variability in observed
> behavior.
>

Yes, but should we encourage that kind of coding, by defining its
meaning? I have never felt that I need to write code like

i = f(i++, i++);

so I don't think it is very productive to spend time specifying
exactly what it means.

In both C and C++ we already have ways of specifying a particular
evaluation order when needed. We just write the expressions in the
particular order, and put a semicolon between each. That's it!

I don't believe there is any advantage in supporting function calls
containing large expressions with multiple interdependent side
effects. In my opinion this would encourage writing hard-to-understand
code, rather than good and readable code.


I would rather see compiler writers spend their time on implementing
what is already in the standard. That would be more useful to me.


Bo Persson


---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: SeeWebsiteForEmail@moderncppdesign.com ("Andrei Alexandrescu (See Website For Email)")
Date: Fri, 9 Dec 2005 04:56:20 GMT
Raw View
Herb Sutter wrote:
> Of course, for balance I should reiterate that before proposing any such
> change we also need to quantify the costs of enforcing an execution
> ordering (presumably left-to-right) w.r.t.:
>
>   a) how much optimization loss there actually is on popular
> implementations and platforms (note this is likely to vary greatly by
> application)
>
>   b) how much existing code may rely on a left-to-right ordering (if
> there's lots, there may be pressure to preserve its meaning even if it's
> currently relying on unspecified and nonportable behavior)
>
> So far, I've been met with more or less deafening silence every time I ask
> people for quantified data about how great these costs really are. The
> most concrete information I know of, but which still needs measuring, is
> that a) is likely to cost several _times_ the throughput on some standard
> Spec microbenchmarks where we can get an order-of-magnitude perf gain by
> doing things like choosing to stride the other way across arrays (i.e.,
> the program's loop strides rows then columns, and we know we'll get better
> cache behavior by reordering all or just chunks of the loop by striding
> columns then rows). But we don't know the cost on typical app code (e.g.,
> some smart people I know expect <10%, maybe <5%, but we need to measure).

I think point (a) needs a tad of refining. The real question is (deep
breath, complicated sentence follows):

a) how much opportunity for optimization from a combination "programmer
willing to optimize" + "optimizing compiler" is lost?

I am emphasizing these two details because:

(1) A "programmer willing to optimize" who knows that right-to-left
evaluation is algorithmically better AND knows that left-to-right
evaluation is guaranteed will introduce a named temporary to force
right-to-left evaluation. That is the best solution of all, better than
the programmer just leaving optimality at the whim of the compiler. It's
guaranteed. In contrast, the compiler may or may not detect the
opportunity by itself, so we can't know whether the optimization will be
done at all.

(2) An "optimizing compiler" can break the left-to-right evaluation rule
if it detects a micro-optimal reordering that doesn't change the
semantics of the code.

For these reasons I believe that in reality there's not a lot of lost
optimality, much less than one might think at first sight. Advocates of
the status quo should (at best for their case) showcase code that the
compiler optimizes and that's too hard, or too subtle, to optimize at
source level.
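Point (1) can be sketched concretely; the function names below are hypothetical stand-ins. Under a guaranteed left-to-right rule, reversing the evaluation order takes nothing more than two named temporaries:

```cpp
#include <cassert>
#include <vector>

std::vector<int> trace;                         // records call order
int cheap()     { trace.push_back(1); return 1; }
int expensive() { trace.push_back(2); return 2; }
int combine(int a, int b) { return a * 10 + b; }

// Same value as combine(cheap(), expensive()), but the named temporaries
// guarantee that expensive() is evaluated before cheap().
int forced_right_to_left() {
    int v2 = expensive();   // runs first, by construction
    int v1 = cheap();       // runs second
    return combine(v1, v2);
}
```

Unlike a compiler heuristic, this ordering is portable and guaranteed by the language's statement sequencing alone.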


Andrei

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: kuyper@wizard.net
Date: Thu, 8 Dec 2005 22:52:46 CST
Raw View
Ron Natalie wrote:
> kuyper@wizard.net wrote:
> >
> > cout << f() << g();
> >
>
> > This expression involves four function calls:
> > A: f()
> > B: cout.operator<<()
> > C: g()
> > D: cout.operator<<(f()).operator<<()
> >
> > Let t(X) be the time at which function X is evaluated. Since function
>
> >
> > There are exactly two orderings consistent with all of those
> > constraints:
> >
> > t(A) < t(B) < t(C) < t(D)
> >
> > t(C) < t(A) < t(B) < t(D)
> >
> > and there's no other requirement in the standard that is violated by
> > either of those orderings. Therefore, if it matters whether f() is
> > called before or after the call to g(), you've got problems.
> >
>
> Nope.  There are more.   There is nothing that requires A and
> C to have any relative ordering.
>
> A, C, B, D

That case differs from my first case only in the relative order of B
and C, not in the relative order of A and C. The order of B and C could
matter, most plausibly by g() using cout, directly or indirectly. I
missed that case because I was concentrating on the possibility of
the relative order of A and C being important.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: francis@robinton.demon.co.uk (Francis Glassborow)
Date: Fri, 9 Dec 2005 16:06:08 GMT
Raw View
In article <Ir7L4s.xp1@beaver.cs.washington.edu>, "Andrei Alexandrescu
(See Website For Email)" <SeeWebsiteForEmail@moderncppdesign.com> writes
>(1) A "programmer willing to optimize" who knows that right-to-left
>evaluation is algorithmically better AND knows that left-to-right
>evaluation is guaranteed will introduce named temporary to force
>right-to-left evaluation. That is the best solution of all, better than
>the programmer just leaving optimality at the whim of the compiler.
>It's guaranteed. In contrast, the compiler may or may not detect the
>opportunity by itself so we can't know whether the optimization will be
>done at all.

I think this is a very good point (that I had not considered
previously). If the programmer knows the default evaluation order s/he
can write code to force a different order. If you do not know the
evaluation order and you believe it matters you have to write code to
enforce your preferred ordering.

However there is more than just order of evaluation, there is the issue
of order of side-effects. Should we go the whole way and force an
ordering on those?


--
Francis Glassborow      ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: John Potter <jpotter@lhup.edu>
Date: Fri, 9 Dec 2005 22:59:23 CST
Raw View
On Fri,  9 Dec 2005 16:06:08 GMT, francis@robinton.demon.co.uk (Francis
Glassborow) wrote:

> However there is more than just order of evaluation, there is the issue
> of order of side-effects. Should we go the whole way and force an
> ordering on those?

It is all or nothing.  Here is some amusement.

#include <iostream>
int& inc (int& x) { return ++ x; }
int add (int x, int y) { return x + y; }
int main () {
// Undefined behavior.
    int x(3);
    std::cout << ++x + ++x + ++x << std::endl;
    x = 3;
    std::cout << ++x + (++x + ++x) << std::endl;
// Just unspecified now.
    x = 3;
    std::cout << inc(x) + inc(x) + inc(x) << std::endl;
    x = 3;
    std::cout << inc(x) + (inc(x) + inc(x)) << std::endl;
// Sequence points all over the place but still unspecified.
// When does the lvalue to rvalue conversion happen?  This
// problem is unique to references.
    x = 3;
    std::cout << add(add(inc(x), inc(x)), inc(x)) << std::endl;
    x = 3;
    std::cout << add(inc(x), add(inc(x), inc(x))) << std::endl;
    }

On one implementation, the output was 16 18 16 18 18 16.

Since optimization is part of the subject: at -O9, which may totally
break the code, the last two cases produced 15 15.  Amusing.

John

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail@moderncppdesign.com>
Date: Fri, 9 Dec 2005 22:58:44 CST
Raw View
Francis Glassborow wrote:
> In article <Ir7L4s.xp1@beaver.cs.washington.edu>, "Andrei Alexandrescu
> (See Website For Email)" <SeeWebsiteForEmail@moderncppdesign.com> writes
>
>> (1) A "programmer willing to optimize" who knows that right-to-left
>> evaluation is algorithmically better AND knows that left-to-right
>> evaluation is guaranteed will introduce named temporary to force
>> right-to-left evaluation. That is the best solution of all, better
>> than the programmer just leaving optimality at the whim of the
>> compiler. It's guaranteed. In contrast, the compiler may or may not
>> detect the opportunity by itself so we can't know whether the
>> optimization will be done at all.
>
>
> I think this is a very good point (that I had not considered
> previously). If the programmer knows the default evaluation order s/he
> can write code to force a different order. If you do not know the
> evaluation order and you believe it matters you have to write code to
> enforce your preferred ordering.

There remain the "not-so-obvious" opportunities for optimization, such
as those that reuse registers etc. By my assertion number (2)
("optimizing compiler") I am clarifying that an optimizing compiler can
still evaluate things in the order it pleases as long as the
left-to-right semantics are unaffected. Because of (1) and (2), I am
speculating that an overwhelming majority of cases are covered, and we
needn't worry about the remaining exceedingly few cases in which there
would be exceedingly few cycles to be saved.


Andrei

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: usenet-nospam@nmhq.net (Niklas Matthies)
Date: Sat, 10 Dec 2005 04:57:16 GMT
Raw View
On 2005-12-09 16:06, Francis Glassborow wrote:
> In article <Ir7L4s.xp1@beaver.cs.washington.edu>, "Andrei Alexandrescu
> (See Website For Email)" <SeeWebsiteForEmail@moderncppdesign.com> writes
>>(1) A "programmer willing to optimize" who knows that right-to-left
>>evaluation is algorithmically better AND knows that left-to-right
>>evaluation is guaranteed will introduce named temporary to force
>>right-to-left evaluation. That is the best solution of all, better than
>>the programmer just leaving optimality at the whim of the compiler.
>>It's guaranteed. In contrast, the compiler may or may not detect the
>>opportunity by itself so we can't know whether the optimization will be
>>done at all.
>
> I think this is a very good point (that I had not considered
> previously). If the programmer knows the default evaluation order s/he
> can write code to force a different order. If you do not know the
> evaluation order and you believe it matters you have to write code to
> enforce your preferred ordering.

It might be difficult for the programmer to know which order is more
efficient, in particular when the code targets different implementations,
or for example with inlined functions, where a change in the function's
implementation can change the optimal evaluation order of the
arguments at a particular call site. Usually the compiler knows much
better than the programmer.

But the point is right in that it's the order-independent code that
should require special handling by the programmer if necessary, not
the order-dependent code. If the order is going to be defined, it
would be nice if some language constructs would be provided to mark a
group of statements to be executable "in parallel", i.e. having no
order dependencies.

> However there is more than just order of evaluation, there is the
> issue of order of side-effects. Should we go the whole way and force
> an ordering on those?

Yes, because these cause the actual UB most of the time.
Also it would be confusing that (for example) whether

   os << x++ << x;

has predictable behavior depends on whether the type of 'x' is a
built-in type or not (i.e. whether '++' is a function call or not).

-- Niklas Matthies

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail@moderncppdesign.com>
Date: Fri, 9 Dec 2005 22:58:23 CST
Raw View
Bo Persson wrote:
> "Ron Natalie" <ron@spamcop.net> skrev i meddelandet
>>but it's generally not possible to
>>tell that the functions have untoward effect.
>>
>>All it takes is a write to the same ultimate output channel (which
>>may be unrelated C++ streams) to cause variability in observed
>>behavior.
>>
>
>
> Yes, but should we encourage that kind of coding, by defining its
> meaning? I have never felt that I need to write code like
>
> i = f(i++, i++);
>
> so I don't think it is very productive to spend time specifying
> exactly what it means.

Maybe; however, at some point someone might write:

f(a++, b++);

where a and b are references that could alias. In that case, it is very
useful to define the behavior of the code. It would be naive to believe
that code tripping on unspecified order of execution can only be
"obviously dumb" by eye inspection.
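A minimal sketch of that hazard (the names f and g are hypothetical):

```cpp
#include <cassert>

int f(int x, int y) { return x + y; }

// The caller decides whether a and b refer to the same int. When they
// alias, the result of f(a++, b++) depends on the relative ordering of
// the two increments, which the standard leaves unspecified (and, for
// the same scalar under the current rules, potentially undefined).
int g(int& a, int& b) { return f(a++, b++); }
```

With distinct objects the behavior is fully determined: given `int i = 1, j = 2;`, the call `g(i, j)` returns 3 and leaves `i == 2`, `j == 3`. It is the aliasing call `g(i, i)` whose meaning a defined order would pin down.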

> In both C and C++ we already have ways of specifying a particular
> evaluation order when needed. We just write the expressions in the
> particular order, and put a semicolon between each. That's it!

Given that C and C++ do offer terse ways to express complicated
computations, and that people will use and abuse those terse ways, it is
mightily important to define behavior.

> I don't believe there is any advantage in supporting function calls
> containing large expressions with multiple interdependent side
> effects. In my opinion this would encourage writing hard-to-understand
> code, rather than good and readable code.
>
>
> I would rather see compiler writers spend their time on implementing
> what is already in the standard. That would be more useful to me.

I believe that advocating a standard that leaves unnecessarily much to
the whim of the compiler is an entirely fallacious viewpoint that goes
exactly against what a standard is supposed to do, and that favors
vendor lock-in. I'm also sure that most people on the standardization
committee believe the same. The thing worth discussing, therefore, is
whether defining behavior can impact optimality of code generation or not.


Andrei

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: SeeWebsiteForEmail@moderncppdesign.com ("Andrei Alexandrescu (See Website For Email)")
Date: Sat, 10 Dec 2005 04:58:00 GMT
Raw View
Herb Sutter wrote:
> After sending this, I thought I should clarify something that on rereading
> wasn't clear the way I first wrote it:
>
> hsutter@gotw.ca (Herb Sutter) wrote:
>
>> a) how much optimization loss there actually is on popular
>>implementations and platforms (note this is likely to vary greatly by
>>application)
>
> [...]
>
>>The
>>most concrete information I know of, but which still needs measuring, is
>>that a) is likely to cost several _times_ the throughput on some standard
>>Spec microbenchmarks where we can get an order-of-magnitude perf gain by
>>doing things like choosing to stride the other way across arrays (i.e.,
>>the program's loop strides rows then columns, and we know we'll get better
>>cache behavior by reordering all or just chunks of the loop by striding
>>columns then rows). But we don't know the cost on typical app code (e.g.,
>>some smart people I know expect <10%, maybe <5%, but we need to measure).
>
>
> Specifically, the stride issue was about the performance gain from
> memory-model latitude for reordering reads/writes, not latitude for
> reordering expression evaluation.

Ah, now that makes sense. (I was a bit confused.)

So the jury is still out on finding cases (that are not source-level
optimizable in an obvious way) in which a specified order of argument
evaluation forces the compiler to generate pessimized code.


Andrei

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: bop@gmb.dk ("Bo Persson")
Date: Sat, 10 Dec 2005 22:22:37 GMT
Raw View
"Andrei Alexandrescu (See Website For Email)"
<SeeWebsiteForEmail@moderncppdesign.com> skrev i meddelandet
news:Ir8zAp.s0y@beaver.cs.washington.edu...
> Bo Persson wrote:
>> "Ron Natalie" <ron@spamcop.net> skrev i meddelandet
>>>but it's generally not possible to
>>>tell that the functions have untoward effect.
>>>
>>>All it takes is a write to the same ultimate output channel (which
>>>may be unrelated C++ streams) to cause variability in observed
>>>behavior.
>>>
>>
>>
>> Yes, but should we encourage that kind of coding, by defining its
>> meaning? I have never felt that I need to write code like
>>
>> i = f(i++, i++);
>>
>> so I don't think it is very productive to spend time specifying
>> exactly what it means.
>
> Maybe, however, at some point someon might write:
>
> f(a++, b++);
>
> where a and b are references that could alias. In that case, it is
> very useful to define the behavior of the code. It would be naive to
> believe that code tripping on unspecified order of execution can
> only be "obviously dumb" by eye inspection.
>

I would still argue that this is what I call "bad code", and that
doing the aliasing without noticing it is even worse.  :-)

However, in a previous post you wrote:

>(1) A "programmer willing to optimize" who knows that right-to-left
>evaluation is algorithmically better AND knows that left-to-right
>evaluation is guaranteed will introduce named temporary to force
>right-to-left evaluation. That is the best solution of all, better
>than the programmer just leaving optimality at the whim of the
>compiler.

And here I agree, totally!

Right now, I write my code under the assumption that the compiler is
smart enough to select the proper order - one that is good enough. I
have argued that in my code there is no advantage for a left-to-right
order, because I tend not to write code where it matters.

You just made me realize that the "smart enough" compiler, that I
trust to select a good order, must of course be smart enough to see
this as well. So, if my code is written without the nasty side
effects, and without order dependencies, the compiler can use the
as-if rule and continue to produce the same code for *my* programs.

So, instead of me telling the OP that he can arrange his code better,
we can both get what we want. He can have his left-to-right order of
evaluation, and I can still write my code so that it generally doesn't
matter. In the few cases where it really does matter for me, I can
rearrange *my* code to evaluate the arguments before the function
call, in any order I want. Fine with me!

So, specifying the order of evaluation in the Standard might be a good
idea after all.


Thanks Andrei!


Apologies to Hyman Rosen, for telling you how to write your code.


Bo Persson


---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: "Momchil Velikov" <momchil.velikov@gmail.com>
Date: Sat, 10 Dec 2005 16:24:20 CST
Raw View
"Andrei Alexandrescu (See Website For Email)" wrote:
> So the jury is still out on finding cases (that are not source-level
> optimizable in an obvious way) in which a specified order of argument
> evaluation forces the compiler to generate pessimized code.

  How about finding cases in which the order of evaluation is not
enforceable at the source level in an obvious way?

~velco

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]





Author: SeeWebsiteForEmail@moderncppdesign.com ("Andrei Alexandrescu (See Website For Email)")
Date: Sun, 11 Dec 2005 02:11:05 GMT
Raw View
Momchil Velikov wrote:
> "Andrei Alexandrescu (See Website For Email)" wrote:
>
>>So the jury is still out on finding cases (that are not source-level
>>optimizable in an obvious way) in which a specified order of argument
>>evaluation forces the compiler to generate pessimized code.
>
>
>   How about finding cases in which the order of evaluation is not
> enforceable at the source level in an obvious way ?

Not sure I understand. For the call (expr0)(arg1, arg2, ..., argn) the
evaluation algorithm should be as if the following happens:

1. Evaluate expr0 resulting in a function f
2. For each i in 1..n in this order, evaluate argi resulting in a value vi
3. Invoke f(v1, v2, ..., vn)

It's a pity that the intended semantics can't be easily expressed as a
source-to-source transformation. (The problem is that rvalue and lvalue
expressions would lead to different types of temporaries.)
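For by-value parameters the three steps can be written out by hand (all names here are hypothetical); the parenthetical caveat is exactly why this stops working once a parameter takes its argument by reference:

```cpp
#include <cassert>
#include <vector>

std::vector<int> steps;                       // records evaluation order
int arg1() { steps.push_back(1); return 10; }
int arg2() { steps.push_back(2); return 20; }
int h(int a, int b) { return a + b; }

// Hand lowering of h(arg1(), arg2()) per steps 1-3: each argument is
// evaluated left to right into a named value, then the call is made.
// A reference parameter would need the temporary to preserve
// lvalue-ness, which a plain named copy does not.
int lowered_call() {
    int v1 = arg1();     // step 2, i = 1
    int v2 = arg2();     // step 2, i = 2
    return h(v1, v2);    // step 3
}
```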


Andrei

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]