Topic: Async C++ Exceptions [was: Is STL MT-Safe?]


Author: Dave Butenhof <butenhof@zko.dec.com>
Date: 1996/05/21
Tom Payne (thp@cs.ucr.edu) wrote
>
> Discussion has focused on implementation overhead and techniques
> necessary for valid use.  E.g., "What if an asynchronous exception
> interrupts a member function of an automatic object and a subsequent
> invocation of the destructor finds the object in an incoherent state?"
>
> My claim is that multithreading is the proper model through which to
> view and solve these problems:  The main program is a thread and each
> signal occurrence activates a concurrent pseudo-thread that blocks the
> main thread and (commonly) the other pseudo-threads.  Destructors and
> exception handlers (catchers) invoked by an asynchronous exception are
> activities of the pseudo-thread that threw the exception.

I agree that "multithreading is the proper model" -- I disagree that your
vision has anything to do with multithreading. What you're proposing is
nothing other than UNIX signals, just as they've always been. Perhaps a
little more structured -- say, more like VMS ASTs, or the full "event"
model originally proposed for POSIX.4 (before it got simplified into
signals plus struct sigevent/siginfo).

In either case, a "thread" that asynchronously preempts the identity of a
thread cannot be dealt with using any multithread programming methods. You
can't use mutexes, you can't use condition variables, scheduling controls
become meaningless, and so forth. They're simply not threads. Sure, you can
call them "pseudo-threads", but they're not remotely like "threads" in any
useable feature. And if they "lock out all other threads", you've totally
hosed any useful multiprocessor (or even multiple "kernel entity")
implementation.

> In MT programming, certain updates to certain objects (or collections
> of objects) that are shared with other threads must be atomic relative
> to the other threads.  Since the pseudo-threads commonly lock out all
> other threads, this is not a problem for them and the destructors and
> catchers they invoke.  The main thread, however, must lock out the
> pseudo-threads, by blocking signals whose handlers throw exceptions,
> whenever it performs such an update to an object that might be
> accessed by the destructor of an automatic object.  Typically, this
> would include all member functions of all automatic objects, plus
> operations on the managers of resources that those destructors
> release, plus both local and global objects accessed by the catchers.
> (Furthermore, those shared objects need to be volatile and need to
> impose atomicity through signal blocking even on objects as simple as
> an integer, since on some architectures assignments to integers are not
> atomic relative to signals.)

If the "lockout" is one instruction, then maybe you could manage this. But it'd
have to be ALL threads, not just the main thread. And you can't do it in one
instruction. Blocking signals is expensive, and there's no reason this would be
any different. Even if you use PC range tables, you're going to have to do it so
often that you'll have a horrendous amount of data to search. And if your async
exceptions are used much, it'll have to be searched a lot. Everything will need
to be volatile. The result will be slow and cumbersome. And to what end? Why
would anyone want to use asynchronous signals within a thread as a programming
methodology? Why don't you simply use threads -- real threads -- to perform
operations asynchronously?

> The significance of this MT view of asynchronous exceptions is that:
>
>  1) It implies that, as in MT programming, the responsibility for
>     imposing such atomicity lies with the program, not
>     the compiler.  Placing that responsibility on the programmer
>     simplifies implementation and removes the need for this portion
>     of the overhead from programs that don't use asynchronous
>     exceptions.

Except there's no portable way to synchronize. You can't use mutexes, because
you've asynchronously preempted the real thread. Masking signals while a mutex
is locked, or during arbitrary sections of code (like constructors and
destructors) isn't practical. It's certainly not efficient. Do you mean that
instead of using "x = y", everyone should code machine-specific asm() sequences
(or calls to assembler?) to get atomic copies?

>  2) It gives the programmer a known framework from which to view the
>     task of coordination between a program and asynchronously invokable

No, it doesn't -- unless you're talking about the "known framework" of UNIX
signals. Sure, it's known. It's also extremely limited and dangerous.

> The MT perspective does not, however, deal with the problem of stack
> coherence, which arises from the fact that an asynchronous exception
> might be thrown during the prolog and/or the epilog of the interrupted
> function invocation.

Threads are asynchronous because their state is independent. Make it dependent,
as you're proposing, and they're not threads anymore. Certainly not in the same
sense as POSIX.1c, or SysV4.2MP, or any other common threading model. People
love "unified field" theories -- everyone wants to find least common
denominators. That doesn't mean they exist, or that they're useful once you've
found them. Especially in software engineering, where they need to do something
other than increase one's abstract understanding of the universe to be of use.

+-------------------------------------------------------------------+
| Dave Butenhof                       Digital Equipment Corporation |
| butenhof@zko.dec.com                110 Spit Brook Rd, ZKO2-3/Q18 |
| PH 603.881.2218, FAX 603.881.0120   Nashua, NH 03062-2711         |
+----------+     "Better Living Through Concurrency"     +----------+
           +---------------------------------------------+
---
[ comp.std.c++ is moderated.  To submit articles: Try just posting with your
                newsreader.  If that fails, use mailto:std-c++@ncar.ucar.edu
  comp.std.c++ FAQ: http://reality.sgi.com/austern/std-c++/faq.html
  Moderation policy: http://reality.sgi.com/austern/std-c++/policy.html
  Comments? mailto:std-c++-request@ncar.ucar.edu
]





Author: thp@cs.ucr.edu (Tom Payne)
Date: 1996/05/23
Dave Butenhof (butenhof@zko.dec.com) wrote:
: Tom Payne (thp@cs.ucr.edu) wrote
[...]
: > My claim is that multithreading is the proper model through which to
: > view and solve these problems:  The main program is a thread and each
: > signal occurrence activates a concurrent pseudo-thread that blocks the
: > main thread and (commonly) the other pseudo-threads.  Destructors and
: > exception handlers (catchers) invoked by an asynchronous exception are
: > activities of the pseudo-thread that threw the exception.
:
: I agree that "multithreading is the proper model" -- I disagree that your
: vision has anything to do with multithreading. What you're proposing is
: nothing other than UNIX signals, just as they've always been.

The proposal is to (eventually) extend the C++ standard by allowing signal
handlers to throw exceptions, i.e., unwind the stack to a certain point
invoking the destructors of the automatic objects in the deleted invocation
records.  The main current issues are:
 1)  how to maintain coherence of data structures, including the stack
     of invocation records, in the presence of such a feature,
 2)  whether it is possible to implement such a feature without undue
     overhead to programs that don't use it.

A Unix signal handler can longjmp back to a previous frame, but doing so will
not invoke the destructors of local objects in the unwound portion of the stack.

[...]
: In either case, a "thread" that asynchronously preempts the identity of a
                                                 ^^^^^^^^^^^^^^^^^^^^^^^^
Right.  It's like the interrupted host thread becomes temporarily possessed by
a body-snatcher pseudo-thread and made to do its bidding.

: thread cannot be dealt with using any multithread programming methods. You
: can't use mutexes, you can't use condition variables, scheduling controls
: become meaningless, and so forth. They're simply not threads. Sure, you can
: call them "pseudo-threads", but they're not remotely like "threads" in any
: useable feature.
[...]
: Threads are asynchronous because their state is independent. Make it
: dependent, as you're proposing, and they're not threads anymore.
: Certainly not in the same sense as POSIX.1c, or SysV4.2MP, or any other
: common threading model. People love "unified field" theories --
: everyone wants to find least common denominators. That doesn't mean
: they exist, or that they're useful once you've found them. Especially
: in software engineering, where they need to do something other than
: increase one's abstract understanding of the universe to be of use.

Like a thread, such a "pseudo-thread" is an asynchronous stream of
activity that consumes CPU cycles and accesses objects shared with
other such streams of activity in ways that need coordination.  This
model provides useful guidance in the implementation and use of the
proposed language feature, asynchronous exceptions.  Specifically, the
multithreading model (analogy) suggests that responsibility for the
coherence of data structures other than the stack of invocation records
should rest with the program, rather than the compiler,
and that the program can and must make certain (collections of) objects
volatile and access to them atomic relative to other threads.

: > In MT programming, certain updates to certain objects (or collections
: > of objects) that are shared with other threads must be atomic relative
: > to the other threads.  Since the pseudo-threads commonly lock out all
: > other threads, this is not a problem for them and the destructors and
: > catchers they invoke.  The main thread, however, must lock out the
: > pseudo-threads, by blocking signals whose handlers throw exceptions,
: > whenever it performs such an update to an object that might be
: > accessed by the destructor of an automatic object.  Typically, this
: > would include all member functions of all automatic objects, plus
: > operations on the managers of resources that those destructors
: > release, plus both local and global objects accessed by the catchers.
: > (Furthermore, those shared objects need to be volatile and need to
: > impose atomicity through signal blocking even on objects as simple as
: > an integer, since on some architectures assignments to integers are not
: > atomic relative to signals.)
:
: If the "lockout" is one instruction, then maybe you could manage this.
: But it'd have to be ALL threads, not just the main thread. And you
: can't do it in one instruction. Blocking signals is expensive, and
: there's no reason this would be any different. Even if you use PC range
: tables, you're going to have to do it so often that you'll have a
: horrendous amount of data to search. And if your async exceptions are
: used much, it'll have to be searched a lot. Everything will need to be
: volatile. The result will be slow and cumbersome. And to what end? Why
: would anyone want to use asynchronous signals within a thread as a
: programming methodology? Why don't you simply use threads -- real
: threads -- to perform operations asynchronously?

Coherence in the presence of concurrency is not free!  Everything you say holds
equally for standard multithreading environments, where, for a program to be
portable:
  -- shared objects must be volatile (Otherwise, they can be kept in registers
     and never written to the shared memory location.)
  -- updates to shared objects must be made atomic (Otherwise, one thread's
     read might occur in the middle of another thread's write.)

: > The significance of this MT view of asynchronous exceptions is that:
: >
: >  1) It implies that, as in MT programming, the responsibility for
: >     imposing such atomicity lies with the program, not
: >     the compiler.  Placing that responsibility on the programmer
: >     simplifies implementation and removes the need for this portion
: >     of the overhead from programs that don't use asynchronous
: >     exceptions.
:
: Except there's no portable way to synchronize. You can't use mutexes,
: because you've asynchronously preempted the real thread.

If a signal-sustained pseudo-thread takes periodic sensor readings and
places them in a queue for subsequent processing by another thread, that
shared queue must be monitored.  There's no reason you can't use a mutex for
this.

: Masking signals while a mutex
: is locked, or during arbitrary sections of code (like constructors and
: destructors) isn't practical. It's certainly not efficient.

Blocking a signal needn't involve more than setting a bit, and unblocking a
signal needn't involve more than resetting it and testing another.  To prevent
self-deadlock, however, the holder of a mutex must block any signal whose
handler might attempt to lock that mutex (even indirectly through a function
that it calls).

: Do you mean that instead of using "x = y", everyone should code
: machine-specific asm() sequences (or calls to assembler?) to get atomic
: copies?

This is not a suggestion; it's an observation.  To be portable to an
environment where integers are not atomic, a program that shares an integer
among threads must make any assignments to that variable atomic relative to
access by other threads.  Typically, one would protect it along with related
data with a mutex, e.g., inside a monitor.

: >  2) It gives the programmer a known framework from which to view the
: >     task of coordination between a program and asynchronously invokable
:
: No, it doesn't -- unless you're talking about the "known framework" of UNIX
: signals. Sure, it's known. It's also extremely limited and dangerous.

Techniques for maintaining the coherence of data structures in the presence of
such concurrency have been studied in detail for the last thirty years.  The
general framework for such studies has been "cooperating processes."  The term
"process," however, has acquired multiple meanings, and the term "thread" has
been attached to the meaning that was used in those studies.


For some reason, I get the impression that you are trying to say that this
proposal is:
  1)  unworkable and
  2)  already standard practice.

C++ programs generally handle resource acquisition via constructors and
resource release via destructors.  In a multithreaded C++ program, how would
you externally kill a thread and get it to release its resources (e.g.,
mutexes and memory) in the process?


Tom Payne (thp@cs.ucr.edu)





Author: dak <pierreba@poster.cae.ca>
Date: 1996/05/25
Tom Payne (thp@cs.ucr.edu) wrote:
> Dave Butenhof (butenhof@zko.dec.com) wrote:
> : Tom Payne (thp@cs.ucr.edu) wrote
> [...]
> : > My claim is that multithreading is the proper model through which to
> : > view and solve these problems:  The main program is a thread and each

[snip]

> :
> : I agree that "multithreading is the proper model" -- I disagree that your
> : vision has anything to do with multithreading. What you're proposing is
> : nothing other than UNIX signals, just as they've always been.

[snip]

> A Unix signal can longjmp back to a previous frame, but doing so will not
> invoke the destructors of local objects in the unwound portion of the stack.

[snip]

> Like a thread, such a "pseudo-thread" is an asynchronous stream of
> activity that consumes CPU cycles and accesses objects shared with
> other such streams of activity in ways that need coordination.  This
> model provides useful guidance in the implementation and use of the
> proposed language feature, asynchronous exceptions.  Specifically, the
> multithreading model (analogy) suggests that responsibility for the
> coherence of data structures other than the stack of invocation records
> should be the responsibility of the program, rather than the compiler,
> and that the program can and must make certain (collections of) objects
> volatile and access to them atomic relative to other threads.

[snip]

> Coherence in the presence of concurrency is not free!  Everything you
> say holds equally for standard multithreading environments, where, for
> a program to be portable:
>   -- shared objects must be volatile (Otherwise, they can be kept in
>      registers and never written to the shared memory location.)
>
>   -- updates to shared objects must be made atomic (Otherwise, one thread's
>      read might occur in the middle of another thread's write.)

Your whole line of argument seems like reverse engineering of the desired
goal: by using a threading model, you evade compiler complexity by dumping
the burden of coherency on the user. There is one major problem with that.

Unlike threads (and thread-safe libraries), using threads as a model for
signals throwing exceptions makes ALL data structures subject to coherency
problems. When you are using threads, you _choose_ what is shared and
select thread-safe libraries or handle your coherency by using mutexes.

An exception-throwing signal handler makes any data type that can ever be
found on the stack subject to coherency problems. You can't possibly expect
everyone to have dealt with that possibility beforehand. It must be a
compiler burden.

If it is a compiler burden, then unless all compilation units generate range
tables for all object files, and mutex enable/disable code for all
possible critical sections, you can't use off-the-shelf libraries.
Expecting that a linker could detect all such code and generate it afterward
seems impossible in conjunction with optimizations.

Of course, we could have exception-throwing-signal-safe libraries (just as
there are thread-safe libraries). But like the latter, which are not standard
under C++ and are provided by third parties in conjunction with specific
thread packages (say, POSIX), the former could be provided as an extension
by a compiler vendor for a particular platform.





Author: thp@cs.ucr.edu (Tom Payne)
Date: 1996/05/28
dak (pierreba@poster.cae.ca) wrote:
[...]
:
: Your whole line of argument seems like reverse engineering of the desired
: goal: by using a threading model, you evade compiler complexity by dumping
: the burden of coherency on the user. There is one major problem with that.
:
: Unlike threads (and thread-safe libraries), using threads as a model for
: signals throwing exceptions makes ALL data structures subject to coherency

Right.  Specifically, all data objects that are accessed by
destructors of local (automatic) objects or the handlers of
asynchronous exceptions are "subject to coherency problems" (i.e.,
require access control).  Typically, these include the destructed
objects themselves and their associated resource allocators.

: problems. When you are using threads, you _choose_ what is shared and
: select thread-safe libraries or handle your coherency by using mutexes.
:
: An exception-throwing signal handler makes any data type that can ever be
: found on the stack subject to coherency problems. You can't possibly expect
: everyone to have dealt with that possibility beforehand. It must be a
: compiler burden.
:
: If it is a compiler burden, then unless all compilation units generate range
: tables for all object files, and mutex enable/disable code for all
: possible critical sections, you can't use off-the-shelf libraries.

Keep in mind that asynchronous exceptions are proposed to take care of
those cases where a program needs to be able to revert to a previous
phase in response to an external signal.  Currently, the only way to
do that is via polling.  Since off-the-shelf libraries do not do such
polling, they already pose difficulties.

: Expecting that a linker could detect all such code and generate it afterward
: seems impossible in conjunction iwth optimizations.

Agreed.

: Of course, we could have exception-throwing-signal-safe libraries (just as
: there are thread-safe libraries). But like the latter, which are not standard

Both for multi-threading and for asynchronous exceptions, it is
possible to develop safe libraries and produce safe or non-safe (i.e.,
zero-overhead) object code from a single set of source files via
standard preprocessor tricks.  Adding asynchronous-exception safety to
the sources for an MT-safe library is likely to be quite tedious.

: under C++ and is provided by third parties in conjunction with a specific
: thread packages (say POSIX), the former could be provided as an extension
: from a compiler vendor for a particular platform.

I do not consider the current status of multi-threaded programming
under C++ to be a desirable model for language development and/or
standardization.  It would be an unreasonable performance and
development burden for the standard to require MT-safety in the
standard library.  It would, however, be helpful to have a
standardized syntax and model for multi-threading.

Tom Payne (thp@cs.ucr.edu)





Author: bill@amber.ssd.hcsc.com (Bill Leonard)
Date: 1996/05/15
In article <4n6hmf$qqd@galaxy.ucr.edu>, thp@cs.ucr.edu (Tom Payne) writes:
> : Also, I have seen architectures in which there were some instructions that
> : cannot be emulated; for example, instructions that do atomic "test and set"
> : kinds of operations cannot always be emulated.  Also problematic are
> : special instructions for interfacing to special hardware, which can depend
> : on saving the program counter where the instruction was executed.
>
> So the questions would be
>   *  How prevalent are such instructions?
>   *  How vital are they to efficient prologues and epilogues?

I don't really see why this is important.  If your runtime is relying on
being able to emulate "critical sections", and if your compiler is capable
of optimizing prologues and epilogues, including perhaps intermingling them
with user code, then your runtime must be prepared to emulate any
instruction it might find.  What do you propose to do if you find one of
these instructions while trying to emulate a critical section?  Just give
up?

> So, ALL MEMBER FUNCTIONS OF
> ALL AUTOMATIC OBJECTS ARE CRITICAL.

Thank you.  That's what I thought.

> One could argue that, since C is Turing complete, we don't "need" any
> of the extensions of C++.  They are an "asset," the utility of which will be
> debated for years to come.

Agreed.  Features were added to C++ (ostensibly) because they were of
general utility and either (1) their cost was zero if you don't use them,
or (2) the cost was considered minimal if you don't use them *and* the
majority of programmers would use them anyway.  At least, I think that's a
reasonable criteria and I think it has been pretty consistently applied to
C++ (even if, perhaps, it wasn't the precise rationale used).

> There is a well-known and widespread need to be able to asynchronously
> abort a phase of a computation in a way that releases the resources
> acquired during that phase.

This claim has been neither established nor refuted. :-) Especially the
widespread part.  If it's so widespread, how come so few programs seem to
do it?

> :   * Implementing this feature requires considerable work for the compiler
> :     and/or runtime (probably both).  It also appears to impose overhead
> :     for all C++ users, whether they use the feature or not.
>
> Neither of these claims has been established (or refuted).  They are
> exactly what this discussion is about.

I disagree.  All of the proposals I've seen thus far involve generating
tables that can become large, or disabling some optimizations.  The
compiler *must* be capable of generating the information, which means extra
cost in constructing the compiler.  That code isn't going to write itself!

> :   * Writing code that is safe in the presence of asynchronous exceptions
> :     appears to be impossible, especially for third-party library writers.
>
> This claim as well has been neither established nor refuted.  What is
> clear, however, is that programming in the presence of asynchronous
> exceptions is a form of MT programming, which is a tedious, but not
> impossible, business.

But you've said yourself that you weren't even sure how to write those
"asynchronous destructors".  I think it's much harder than MT programming,
because thread A cannot normally cause a local object in thread B to be
destroyed while B is still using it!  Think of it this way: can thread A
cause thread B to take a branch asynchronously?  Not normally, but allowing
asynchronous exceptions amounts to the same thing.

--
Bill Leonard
Harris Computer Systems Corporation
2101 W. Cypress Creek Road
Fort Lauderdale, FL  33309
Bill.Leonard@mail.hcsc.com

These opinions and statements are my own and do not necessarily reflect the
opinions or positions of Harris Computer Systems Corporation.

------------------------------------------------------------------------------
There's something wrong with an industry in which amazement is a common
reaction to things going right.

"Hard work never scared me.  Oh, sure, it has startled me from a distance."
                                                       -- Professor Fishhawk
------------------------------------------------------------------------------







Author: schwea@aur.alcatel.com (E. Schweitz)
Date: 1996/05/17
In article <4n6hmf$qqd@galaxy.ucr.edu>, Tom Payne <thp@cs.ucr.edu> wrote:
>:   * Writing code that is safe in the presence of asynchronous exceptions
>:     appears to be impossible, especially for third-party library writers.
>This claim as well has been neither established nor refuted.  What is
>clear, however, is that programming in the presence of asynchronous
>exceptions is a form of MT programming, which is a tedious, but not
>impossible, business.

I may have missed this in the previous discussion...

Could you clarify why "asynchronous exceptions" are a "form of MT
programming"?  If I'm following this discourse correctly, it seems an
"asynchronous exception" has been loosely defined much as a signal -- an
exception which comes from a source other than the executing process.  A
C++ exception however can only be generated through the actions of the
currently executing process and transfers control from one point within
that process to another point.  Now applying the C++ exception to a
thread (replacing process with thread in the last sentence) does not
violate this internal/external condition -- the thread would only raise
an exception for itself.  Thus, it is not clear, unless you mean to use
exceptions as a message-passing system between distinct threads,
why MT implies "asynchronous exceptions" or vice versa.

Eric





Author: thp@cs.ucr.edu (Tom Payne)
Date: 1996/05/18
{ Followups set to comp.std.c++. -mod (clc++m) }

E. Schweitz (schwea@aur.alcatel.com) wrote:
: In article <4n6hmf$qqd@galaxy.ucr.edu>, Tom Payne <thp@cs.ucr.edu> wrote:
[...]
: >What is
: >clear, however, is that programming in the presence of asynchronous
: >exceptions is a form of MT programming, which is a tedious, but not
: >impossible, business.
[...]
: Could you clarify why "asynchronous exceptions" are a "form of MT
: programming"?  If I'm following this discourse correctly, it seems an
: "asynchronous exception" has been loosely defined much as a signal -- an
[...]

I've suggested that the standard should eventually allow signal
handlers to:
 1) read variables, subject to certain disclaimers about the
    indeterminacy of the values obtained
 2) throw exceptions, again subject to certain disclaimers.
Such exceptions, especially those thrown by the handlers of
asynchronous signals, are being called "asynchronous exceptions."

Discussion has focused on implementation overhead and techniques
necessary for valid use.  E.g., "What if an asynchronous exception
interrupts a member function of an automatic object and a subsequent
invocation of the destructor finds the object in an incoherent state?"

My claim is that multithreading is the proper model through which to
view and solve these problems:  The main program is a thread and each
signal occurrence activates a concurrent pseudo-thread that blocks the
main thread and (commonly) the other pseudo-threads.  Destructors and
exception handlers (catchers) invoked by an asynchronous exception are
activities of the pseudo-thread that threw the exception.

In MT programming, certain updates to certain objects (or collections
of objects) that are shared with other threads must be atomic relative
to the other threads.  Since the pseudo-threads commonly lock out all
other threads, this is not a problem for them and the destructors and
catchers they invoke.  The main thread, however, must lock out the
pseudo-threads, by blocking signals whose handlers throw exceptions,
whenever it performs such an update to an object that might be
accessed by the destructor of an automatic object.  Typically, this
would include all member functions of all automatic objects, plus
operations on the managers of resources that those destructors
release, plus both local and global objects accessed by the catchers.
(Furthermore, those shared objects need to be volatile and need to
impose atomicity through signal blocking even on objects as simple as
an integer, since on some architectures assignments to integers are not
atomic relative to signals.)

The significance of this MT view of asynchronous exceptions is that:

 1) It implies that, as in MT programming, the responsibility for
    imposing such atomicity lies with the program, not
    the compiler.  Placing that responsibility on the programmer
    simplifies implementation and removes the need for this portion
    of the overhead from programs that don't use asynchronous
    exceptions.

 2) It gives the programmer a known framework from which to view the
    task of coordination between a program and asynchronously invokable
    destructors and catchers.

The MT perspective does not, however, deal with the problem of stack
coherence, which arises from the fact that an asynchronous exception
might be thrown during the prolog and/or the epilog of the interrupted
function invocation.  I had suggested "virtual postponement" of the
occurrence of such signals via interpretive execution past these
"critical" sections, but Bill Leonard pointed out that, with
sufficiently aggressive optimization, there may be no identifiable
prolog or epilog, or they may be interleaved with other computation
(perhaps even each other).  Alternatively, at the occurrence of such
an exception, the signal handler (or some implementation-provided
wrapper) can search forward for a return instruction and then check
what computation remains to be done on the relevant values for a
return (oldSP, oldPC, etc.).  These can be used, if necessary, in
propagating the exception from the interrupted function to its caller.
It would involve overhead only at the occurrence of an asynchronous
exception.


Also, Bill Leonard (bill@amber.ssd.hcsc.com) wrote:
[...]
: But you've said yourself that you weren't even sure how to write those
: "asynchronous destructors".  I think it's much harder than MT programming,
: because thread A cannot normally cause a local object in thread B to be
: destroyed while B is still using it!  Think of it this way: can thread A
: cause thread B to take a branch asynchronously?  Not normally, but allowing
: asynchronous exceptions amounts to the same thing.

I think I understand this better now, and, as noted above, writing the
destructors and catchers is not the problem.  Rather, the tedium is in
establishing the atomicity of other code that updates the objects that
the destructors and catchers access.  Standard MT programming
techniques seem to apply and to be sufficient.

[...]
: Agreed.  Features were added to C++ (ostensibly) because they were of
: general utility and either (1) their cost was zero if you don't use them,
: or (2) the cost was considered minimal if you don't use them *and* the
: majority of programmers would use them anyway.  At least, I think that's a
: reasonable criteria and I think it has been pretty consistently applied to
: C++ (even if, perhaps, it wasn't the precise rationale used).

It appears that one can get an implementation with zero-overhead
for programs that don't throw asynchronous exceptions by:
  1)  placing the burden of signal blocking on the program and
  2)  using the exception-time path scanning to complete the prolog.
A programmer can write asynch-exception-safe libraries (having the
appropriate destructor coordination) and use a preprocessor switch to
produce either safe or non-safe (zero-overhead) object code.
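One way to picture that preprocessor switch (ASYNC_EXC_SAFE is a hypothetical name; the guard object and deposit() are illustrative only):

```cpp
#include <cassert>
#include <signal.h>

// The same library source compiles to "safe" object code (signals
// blocked across critical updates) or to zero-overhead object code,
// depending on the switch.
#ifdef ASYNC_EXC_SAFE
struct AsyncGuard {
    sigset_t old_;
    AsyncGuard() {
        sigset_t all;
        sigfillset(&all);
        sigprocmask(SIG_BLOCK, &all, &old_);   // pseudo-threads locked out
    }
    ~AsyncGuard() { sigprocmask(SIG_SETMASK, &old_, 0); }
};
#else
struct AsyncGuard { };                         // compiles away entirely
#endif

int balance = 0;

void deposit(int amount)
{
    AsyncGuard guard;            // no-op in the zero-overhead build
    balance += amount;           // stands in for a multi-step shared update
}
```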

[...]
: > There is a well-known and widespread need to be able to asynchronously
: > abort a phase of a computation in a way that releases the resources
: > acquired during that phase.
:
: This claim has neither been established or refuted. :-) Especially the
: widespread part.  If it's so widespread, how come so few programs seem to
: do it?

Because the feature isn't available.  Until five years ago, however,
few C++ programs threw exceptions.

[...]
: The
: compiler *must* be capable of generating the information, which means extra
: cost in constructing the compiler.  That code isn't going to write itself!

Agreed -- in any reasonable cost/benefit analysis, implementation cost
is an appropriate and important component of the numerator.


Tom Payne (thp@cs.ucr.edu)
---
      [ Send an empty e-mail to c++-help@netlab.cs.rpi.edu for info ]
      [ about comp.lang.c++.moderated. First time posters: do this! ]
---
[ comp.std.c++ is moderated.  To submit articles: try just posting with      ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu         ]
[ FAQ:      http://reality.sgi.com/employees/austern_mti/std-c++/faq.html    ]
[ Policy:   http://reality.sgi.com/employees/austern_mti/std-c++/policy.html ]
[ Comments? mailto:std-c++-request@ncar.ucar.edu                             ]





Author: davea@quasar.engr.sgi.com (David B.Anderson)
Date: 1996/05/18
In article <fjh-960519-014551@cs.mu.oz.au>, Tom Payne <thp@cs.ucr.edu> wrote:
>{ Followups set to comp.std.c++. -mod (clc++m) }
>
>E. Schweitz (schwea@aur.alcatel.com) wrote:
>: In article <4n6hmf$qqd@galaxy.ucr.edu>, Tom Payne <thp@cs.ucr.edu> wrote:
>[...]
>: >What is
>: >clear, however, is that programming in the presence of asynchronous
>: >exceptions is a form of MT programming, which is a tedious, but not
>: >impossible, business.
>[...]
>: Could you clarify why "asynchronous exceptions" are a "form of MT
>: programming"?  If I'm following this discourse correctly, it seems an
>: "asynchronous exception" has been loosely defined much as a signal -- an
>[...]
>
>I've suggested that the standard should eventually allow signal
>handlers to:
> 1) read variables, subject to certain disclaimers about the
>    indeterminacy of the values obtained
> 2) throw exceptions, again subject to certain disclaimers.
>Such exceptions, especially those thrown by the handlers of
>asynchronous signals, are being called "asynchronous exceptions."
>
>Discussion has focused on implemetation overhead and techniques
>necessary for valid use.  E.g., "What if an asynchronous exception
>interrupts a function member of an automatic object and subsequent
>invocation of the destructor find the object in an incoherent state?"
[...]

>It appears that one can get an implementation with zero-overhead
>for programs that don't throw asynchronous exceptions by:
>  1)  placing the burden of signal blocking on the program and
>  2)  using the exception-time path scanning to complete the prolog.

While I am not taking a position on asynchronous
exceptions I must finally comment on the issue of stack
unwinding.

With the proper unwind information one can unwind from
asynchronous exceptions without difficulty (you knew that).

The .debug_frame frame description information (as defined by
the DWARF Version 2 debugging information format) allows simple
unwinding from any instruction without much overhead or
instruction disassembly/interpretation/stepping.

This is a compact format.  There is no significant space
penalty and for practical purposes no significant time
penalty.

SGI has been using this method for 64-bit programs since August
1994.  We now use it for both 32- and 64-bit programs.

The notion that unwinding at an arbitrary instruction requires
lots of space (large tables) or time or requires instruction
disassembly is incorrect.

[ David B. Anderson              (415)933-4263            davea@sgi.com      ]
[Dwarf Version 2 was designed by a committee with representatives from
 various companies.  While there are other sources of the documentation,
 the specification is available in the directory
 sgigate.sgi.com:~ftp/pub/dwarf
 for public ftp.  The file dwarfMay96.tar.Z
 contains much beyond the basic Dwarf Version 2
 Document, but the document is there in both .mm and postscript forms.
 See the README and COPYING documents in ~ftp/pub/dwarf first.               ]





Author: David Brownell <brownell@ix.netcom.com>
Date: 1996/05/07
Tom Payne wrote:
> Bill Leonard (bill@amber.ssd.hcsc.com) wrote:
> : In article <4mbm0o$4qc@galaxy.ucr.edu>, thp@cs.ucr.edu (Tom Payne) writes:
> :
> : > As things now stand, a function could see any exception thrown by any
> : > function it calls, however indirectly (including those passed as
> : > parameters).

Actually, most code will catch exceptions and recover from the faults they
report, then continue.  Sometimes it retries the failed operation, other times
it throws a different exception (reporting the fault according to the rules in
that operation's "throw" specification), sometimes it even just rethrows.
(The trivial case of "rethrow" is not even having a "try/catch" clause, when
you know the exceptions thrown by code you call are in your throw spec.)

Several points I don't understand about this notion of signal handlers that
throw C++ exceptions into arbitrary (unsuspecting :-) threads:

 - If I use exceptions in my signal handler, are they really going
   to be sent directly to some thread, or can I use them normally
   inside signal handling code without breaking other threads?

 - How is the integrity of throw specifications to be maintained?
   Keep in mind that a signal handler has no authoritative way to
   know a throw spec of the function at the top of any thread's
   stack.  (To you Visual C++ users out there:  ask Microsoft to
   implement the full ANSI-C++ DWP, not just "convenient" parts!)

Maybe I'd feel better about this mechanism if I saw a complete proposal,
or knew of some systems that had this feature.  What parts of the current
C++ language are you talking about changing?


> : >   As things would stand, a function could see any of those
> : > exceptions plus any thrown by the handler of an asynchronous signal
> : > that is enabled at that point in the code.  So, why is the second set
> : > less checkable than the first?
> :
> : The first allows me to write a function that need know only the exceptions
> : thrown by procedures it calls.  It need know nothing about the "external
> : environment" set up by functions that call it.  In particular, a function
> : that calls no procedures and throws no exceptions need not worry about
> : getting any exceptions at all.  This would not be true if signals propagate
> : exceptions.  Even in cases where a function calls another function, it need
> : only know the exceptions propagated by that function.
>
> It seems to me that your function would be fine just as you originally
> wrote it.  If it doesn't know how to handle a particular asynchronous
> exception, it let's the exception fall through (unwind) to a
> previously called function that can handle it.

But this violates the throw specifications, as has been pointed out several
times before.  Tom, what is your answer to the issue of how to maintain the
integrity of the throw specifications?  Remember, Bill's example was that
the function THROWS NO EXCEPTIONS whatever:

 int fn () throw () { ... }  // throw no exceptions
  - not -
 int fn () { ... }   // throw ANY exception

The C++ style guide I've been working on disallows C++ functions that have
no "throw" specification, as a general rule.  Throw specs are common
(except in Visual C++, which still doesn't support them).



> As a point of interest, asynchronous exceptions have similar utility and
> problems to asynchronously killing a thread:

Well, that's sort of begging the question of whether async exceptions really
are that useful!  If they're similar, why is a new mechanism desirable?

Question:  when you say "kill" do you mean that in the way that POSIX means
it, "send a signal to" (allowing handling/continuation)?  Or in the "kill -9"
sense, more like making the thread execute pthread_exit()?  Cancellation?
Or something else?  Seems no single standard definition fits your comments.

I can't quite agree (or disagree) without knowing what you mean to say.
That applies both to defining "asynch exceptions" and "kill".

- Dave







Author: Doug Harrison <dHarrison@worldnet.att.net>
Date: 1996/05/07
VC++ 2.xx-4.xx already do something like this. catch(...) will catch Win32
structured exceptions, such as divide-by-zero, illegal memory reference,
etc. In certain contexts, one of which (having to do with property sheets)
is described in the VC++ 4 ReadMe, it can interfere with normal operation
of the OS, in addition to hiding or disguising various bugs in your own
code. I'm told that this is to be changed in a future version so that
catch(...) will only catch C++ exceptions.

On Tuesday, May 07, 1996, David Brownell wrote...
>
> Several points I don't understand about this notion of signal handlers that
> throw C++ exceptions into arbitrary (unsuspecting :-) threads:
>
>  - If I use exceptions in my signal handler, are they really going
>    to be sent directly to some thread, or can I use them normally
>    inside signal handling code without breaking other threads?
>
>  - How is the integrity of throw specifications to be maintained?
>    Keep in mind that a signal handler has no authoritative way to
>    know a throw spec of the function at the top of any thread's
>    stack.  (To you Visual C++ users out there:  ask Microsoft to
>    implement the full ANSI-C++ DWP, not just "convenient" parts!)
>
> Maybe I'd feel better about this mechanism if I saw a complete proposal,
> or knew of some systems that had this feature.  What parts of the current
> C++ language are you talking about changing?





Author: thp@cs.ucr.edu (Tom Payne)
Date: 1996/05/08
David Brownell (brownell@ix.netcom.com) wrote:
: Tom Payne wrote:
: > : > As things now stand, a function could see any exception thrown by any
: > : > function it calls, however indirectly (including those passed as
: > : > parameters).
:
: Actually, most code will catch exceptions and recover from the faults they
: report, then continue.  Sometimes it retries the failed operation, other times
: it throws a different exception (reporting the fault according to the rules in
: that operation's "throw" specification), sometimes it even just rethrows.
: (The trivial case of "rethrow" is not even having a "try/catch" clause, when
: you know the exceptions thrown by code you call are in your throw spec.)

I disagree:

   If every function had an explicit try and catch for all
   functions it calls, two of the benefits of C++ exceptions (reduced
   coding and testing costs) would be lost.  Errors are commonly handled
   several (often many) stack frames above where they are detected.
   Intermediate stack frames normally ignore exceptions they can't
   handle.  [FAQ 251, C++ FAQs by Cline and Lomow]

In addition, an asynchronous exception need not be an error; it could
be a timeout, or it could indicate an external event that terminates a
particular phase of a computation, like "target destroyed."

: Several points I don't understand about this notion of signal handlers that
: throw C++ exceptions into arbitrary (unsuspecting :-) threads:
:
:  - If I use exceptions in my signal handler, are they really going
:    to be sent directly to some thread, or can I use them normally
:    inside signal handling code without breaking other threads?
:
:  - How is the integrity of throw specifications to be maintained?
:    Keep in mind that a signal handler has no authoritative way to
:    know a throw spec of the function at the top of any thread's
:    stack.  (To you Visual C++ users out there:  ask Microsoft to
:    implement the full ANSI-C++ DWP, not just "convenient" parts!)
:
: Maybe I'd feel better about this mechanism if I saw a complete proposal,
: or knew of some systems that had this feature.  What parts of the current
: C++ language are you talking about changing?

It's rude to drop an asynchronous exception, like a bolt from the
blue, onto whatever thread happens to be running on the interrupted
CPU.  Rather, the handler for the interrupt-connected signal must
forward it by signalling the appropriate thread, and the handler for
the secondary signal throws the exception, if warranted.
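A sketch of that forwarding step (the names are hypothetical; in Tom Payne's scheme the secondary handler would throw the exception, but here it only sets a flag so the sketch stays self-contained):

```cpp
#include <cassert>
#include <pthread.h>
#include <signal.h>
#include <unistd.h>

// The interrupt-level handler does not throw into whatever thread
// happens to be running; it directs a secondary signal at the
// appropriate thread with pthread_kill().
volatile sig_atomic_t secondary_seen = 0;

extern "C" void secondary_handler(int) { secondary_seen = 1; }

// The thread that is supposed to "see" the asynchronous exception.
void* target_thread(void*)
{
    while (!secondary_seen)
        usleep(1000);            // wait for the forwarded signal
    return 0;
}
```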

: > As a point of interest, asynchronous exceptions have similar utility and
: > problems to asynchronously killing a thread:
:
: Well, that's sort of begging the question of whether async exceptions really
: are that useful!  If they're similar, why is a new mechanism desirable?
:
: Question:  when you say "kill" do you mean that in the way that POSIX means
: it, "send a signal to" (allowing handling/continuation)?  Or in the "kill -9"
: sense, more like making the thread execute pthread_exit()?  Cancellation?
: Or something else?  Seems no single standard definition fits your comments.
:
: I can't quite agree (or disagree) without knowing what you mean to say.
: That applies both to defining "asynch exceptions" and "kill".

I prefer the "kill -9" semantics as the primitive form, since one can
easily implement the other forms from it.  This semantics can be
obtained by having the handler, which is asynchronously invoked, throw
an exception that is caught only by the stack's base frame, which then
calls thread_exit().  The point is that the exception invokes the
destructors for thread-local objects, thereby releasing their
resources.
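The base-frame catch can be sketched as follows (thread_kill, Resource, and the other names are hypothetical; the throw stands in for the handler's asynchronous one):

```cpp
#include <cassert>

// "kill -9" semantics built from an exception: thread_kill is caught
// only at the stack's base frame, so every automatic object's
// destructor runs on the way out.
struct thread_kill { };

int cleanups = 0;

struct Resource {
    ~Resource() { ++cleanups; }  // releases a thread-local resource
};

void doomed_body()
{
    Resource r1, r2;
    throw thread_kill();         // stands in for the handler's throw
}

// The thread's base frame: everything the thread does runs under this
// try block.
void thread_base(void (*body)())
{
    try {
        body();
    } catch (thread_kill&) {
        // all destructors have run; a real thread would now call
        // thread_exit() here
    }
}
```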

[...]
: > It seems to me that your function would be fine just as you originally
: > wrote it.  If it doesn't know how to handle a particular asynchronous
: > exception, it let's the exception fall through (unwind) to a
: > previously called function that can handle it.
:
: But this violates the throw specifications, as has been pointed out several
: times before.  Tom, what is your answer to the issue of how to maintain the
: integrity of the throw specifications?  Remember, Bill's example was that
: the function THROWS NO EXCEPTIONS whatever:
:
:  int fn () throw () { ... }  // throw no exceptions
:   - not -
:  int fn () { ... }   // throw ANY exception

It is my (possibly incorrect) impression that one can throw an
exception past a specification that omits it.  Otherwise, the function
needs to run with all signals blocked whose handlers throw unspecified
exceptions.

Tom Payne (thp@cs.ucr.edu)







Author: tony@online.tmx.com.au (Tony Cook)
Date: 1996/05/08
David Brownell (brownell@ix.netcom.com) wrote:
: Maybe I'd feel better about this mechanism if I saw a complete proposal,
: or knew of some systems that had this feature.  What parts of the current
: C++ language are you talking about changing?

Another item on this issue which I haven't seen mentioned yet...
what happens if an asynchronous exception was thrown during stack
unwinding, ie. while destructors were running etc.

--
        Tony Cook - tony@online.tmx.com.au
                    100237.3425@compuserve.com







Author: David Brownell <brownell@ix.netcom.com>
Date: 1996/05/08
Doug Harrison wrote:
>
> VC++ 2.xx-4.xx already do something like this.

Well, it's "odd exception stuff" (the SEH facility), but I don't see how it
matches "exception thrown by one thread to another".

>  catch(...) will catch Win32
> structured exceptions, such as divide-by-zero, illegal memory reference,

Hmmm ... I've always found VC++ confusing with respect to exceptions.  They
have two distinct notions, "structured" ones (SEH), and "normal" C++ exceptions.

The "structured" exceptions are unlike real ones in several ways:  they're
resumable, don't show up in "throw" specs, and are dispatched by (arbitrary)
predicate rather than type.  Very confusing even to call them "exceptions";
they're not too similar to C++ exceptions, and I don't know what "structured"
is intended to imply.  (It's not like C++ exceptions are unstructured!)

I usually think of SEH and normal exceptions as being quite distinct ... but
you're right, this bug is listed in the VC++ README.  Normally you'd need
to use magic keywords ("__try/__except") and primitives to kick in SEH.

- Dave





Author: thp@cs.ucr.edu (Tom Payne)
Date: 1996/05/08
Tony Cook (tony@online.tmx.com.au) wrote:
:
: Another item on this issue which I haven't seen mentioned yet...
: what happens if an asynchronous exception was thrown during stack
: unwinding, ie. while destructors were running etc.

Disaster!  So, it must be prevented.

Since exceptions are supposed to be exceptional, one would think that
the overhead of blocking certain signals during the processing of an
exception should not be a problem.  But, like most overheads in
real-time programming, the delay can easily become a serious problem,
especially in the case of hard deadlines.

For instance, if the handler for a timer interrupt that is used to
maintain a clock throws a time-out exception, we dare not block it for
more than a clock period, or else the clock will drift.  So, in this
case, rather than masking out the timer signal, the programmer(s) must
set up mechanisms that rule out back-to-back time-out exceptions.

Probably the best approach would be an exception_in_progress flag that
a signal handler could consult to determine whether it would be
acceptable to throw an exception and, if not, take alternative
actions.
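A sketch of that flag (all names are hypothetical; a real runtime would set the flag while an exception propagates and clear it when handling completes, which the sketch omits so the second call still sees it set):

```cpp
#include <cassert>
#include <signal.h>

// The flag the unwinding machinery would maintain.
volatile sig_atomic_t exception_in_progress = 0;

int deferred_timeouts = 0;       // the "alternative action" counter

struct timeout { };

// What a timer-signal handler might do under this policy.
void on_timer_signal()
{
    if (exception_in_progress) {
        ++deferred_timeouts;     // must not throw back-to-back: defer
        return;
    }
    exception_in_progress = 1;
    throw timeout();
}
```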

Tom Payne (thp@cs.ucr.edu)








Author: ed@odi.com (Ed Schwalenberg)
Date: 1996/05/09
In article <319028E7.19C@ix.netcom.com> David Brownell <brownell@ix.netcom.com> writes:

  Hmmm ... I've always found VC++ confusing with respect to exceptions.  They
  have two distinct notions, "structured" ones (SEH), and "normal" C++ exceptions.

  The "structured" exceptions are unlike real ones in several ways:  they're
  resumable, don't show up in "throw" specs, and are dispatched by (arbitrary)
  predicate rather than type.  Very confusing even to call them "exceptions";
  they're not too similar to C++ exceptions, and I don't know what "structured"
  is intended to imply.  (It's not like C++ exceptions are unstructured!)

  I usually think of SEH and normal exceptions as being quite distinct ... but
  you're right, this bug is listed in the VC++ README.  Normally you'd need
  to use magic keywords ("__try/__except") and primitives to kick in SEH.

Here's some stuff I learned years ago at a Microsoft conference for language
developers before NT was released.  It may help reduce your confusion.
  . SEH is an operating system feature, not a language feature.  It's
    supposed to be supported by all Win32 languages, and in particular
    it's supported by C as well as C++.
  . SEH antedates C++ exceptions; you can't accuse them of building a
    nonstandard exception facility when the standard had not yet been
    written.

Other SEH trivia:
  . You can translate an SEH exception into a C++ one via
    _set_se_translator().
  . You can't use both in a single function.

Another difference which you didn't mention is that SEH supports
__try/__finally, which implements what Lisp hackers know as unwind-protect
and what C++ programmers do with destructors of stack-based objects: the
ability to make sure that cleanup actions are performed even if this frame
is exited by a throw.
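The destructor-based C++ equivalent of that __try/__finally idiom can be sketched in a few lines (Finally and the counters are illustrative names only):

```cpp
#include <cassert>
#include <functional>

// A portable C++ analogue of SEH's __try/__finally (Lisp's
// unwind-protect): a stack-based object whose destructor performs the
// cleanup whether the frame exits normally or by a throw.
class Finally {
    std::function<void()> action_;
public:
    explicit Finally(std::function<void()> a) : action_(a) { }
    ~Finally() { action_(); }
};

int cleanups_run = 0;

void work(bool fail)
{
    Finally guard([] { ++cleanups_run; });   // always runs on exit
    if (fail)
        throw 42;
}
```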

Our product (ObjectStore) is an object-oriented database that is in one sense
a user-mode virtual memory system; we map unused address space to a database,
and when the address space is touched by user code we field the protection fault
exception via SEH, create the page and read the data from the server, and then
use CONTINUE_EXECUTION to make the user's faulting instruction go on as if the
data had been there all along.

On the other hand, we use C++ exceptions in our API, so we have to play nicely
with both.  It has been and continues to be "interesting".







Author: bill@amber.ssd.hcsc.com (Bill Leonard)
Date: 1996/05/09
In article <4mnpt2$f2o@galaxy.ucr.edu>, thp@cs.ucr.edu (Tom Payne) writes:
> The technique suggested by Chase was conceptually quite simple:
> virtually postpone any signal whose handler might throw an exception
> by continuing execution interpretively (interleaved code included) to
> a point outside any critical sections.  This would require, say, two
> words to delimit each critical section, a modest expansion that need
> not exist, unless it is possibly to be used.

How does the compiler know whether it will be used or not?  Seems to me
that, if the language allowed asynchronous exceptions, the compiler must
*always* assume one could occur at an inopportune time.

Interpretive execution could be very slow, thus destroying determinism in
real-time programs.  However, I concede that this only occurs if you have
an exception thrown during such a critical section, but still, I think
there is a soft upper bound on how slow handling an exception can be and
still be commercially viable.

> One might also continue
> past out-of-order assignments, e.g., to the (effective location of
> the) next conditional branch, where the ordering of instructions could
> be expected to be coherent.

Bad assumption.  Speculative execution, code sinking (which moves some of
the prologue and epilogue down into other basic blocks), and other
sophisticated optimizations make this assumption unwarranted.

We once had a version of the gdb debugger that tried to use instruction
analysis to do backtraces, rather than having the compiler tell it how.  It
was kind of a bust, because it was slow, difficult to get right, and broke
every time the compiler generated slightly different code.

> The prolog and epilog need not be critical.  The worst that can happen
> is that the stack pointer gets adjusted, but the signal occurs at a
> point where the values to be restored to certain registers have not
> been stored in their expected locations.  So, if and when the
> exception occurs, you search for a reachable return instruction and
> scan the intervening path to trace the completion of the compuations
> of those values.

Huh?  You're saying the prologue and epilogue are not critical, but I have
to do something special, which implies they *are* critical.  Sorry, I guess
I don't get it.

If you don't do *something* about the prologue and epilogue, then the
runtime's normal exception handling code is going to try to unwind the
stack.  Stack unwinding is often implemented using a range-table sort of
technique, but the table contains information about the stack frame that
allows the runtime to unwind it.  At the least, you need a different table
entry for the prologue and epilogue.  At worst, you need an entry for every
instruction in the prologue and epilogue.

Anyway, I think you're assuming a particular kind of architecture here.
What if my architecture requires more than just a stack-pointer adjustment?
Remember, if this is going to be required by the standard, then it has to
be implementable on any (reasonable) hardware platform.  (I say reasonable
because there are certainly some specialized processors that could not
support C++ today.)

--
Bill Leonard
Harris Computer Systems Corporation
2101 W. Cypress Creek Road
Fort Lauderdale, FL  33309
Bill.Leonard@mail.hcsc.com

These opinions and statements are my own and do not necessarily reflect the
opinions or positions of Harris Computer Systems Corporation.

------------------------------------------------------------------------------
There's something wrong with an industry in which amazement is a common
reaction to things going right.

"Hard work never scared me.  Oh, sure, it has startled me from a distance."
                                                       -- Professor Fishhawk
------------------------------------------------------------------------------







Author: vandevod@cs.rpi.edu (David Vandevoorde)
Date: 1996/05/09
>>>>> "T" == Tom Payne <thp@cs.ucr.edu> writes:
[...]
T> In addition, an asynchronous exception need not be an error; it could
T> be a timeout, or it could indicate an external event that terminates a
T> particular phase of a computation, like "target destroyed."
[...]

Wouldn't that make exceptions ``not exceptional''?

 Daveed





Author: thp@cs.ucr.edu (Tom Payne)
Date: 1996/05/09
Bill Leonard (bill@amber.ssd.hcsc.com) wrote:
: In article <4mnpt2$f2o@galaxy.ucr.edu>, thp@cs.ucr.edu (Tom Payne) writes:
: > The technique suggested by Chase was conceptually quite simple:
: > virtually postpone any signal whose handler might throw an exception
: > by continuing execution interpretively (interleaved code included) to
: > a point outside any critical sections.  This would require, say, two
: > words to delimit each critical section, a modest expansion that need
: > not exist, unless it is possibly to be used.
:
: How does the compiler know whether it will be used or not?  Seems to me
: that, if the language allowed asychronous exceptions, the compiler must
: *always* assume one could occur at an inopportune time.

Good question.  When I suggested syntactic decoration of such signals
or the handlers that throw them it provoked too much "ah ha!"
Something more subtle is needed, say, something like a standard
library "wrapper" function for signal-throwing handlers.  When
installed as the handler of a given signal, the wrapper:
  *  does the interpretive execution as described above,
  *  calls a user-designated handler,
  *  catches whatever exception the user's handler throws,
  *  fixes the stack,
  *  rethrows the exception.

: Interpretive execution could be very slow, thus destroying determinism in
: real-time programs.  However, I concede that this only occurs if you have
: an exception thrown during such a critical section, but still, I think
: there is a soft upper bound on how slow handling an exception can be and
: still be commercially viable.

Your concession isn't fully warranted.  As I described things, it is
the occurrence of the signal, not the exception, that is "virtually"
postponed via interpretive execution.  The overhead goes with all
occurrences of signals that *might* throw an exception.

I agree that this interpretive execution might objectionably increase
latencies for the handling of such signals and has to be limited
somehow -- at the absurd limit one could avoid all coherence problems
by "virtually postponing" such signals to the end of the current
function, even to the end of the program.  The issue here is how long
the critical sections are likely to be, which as you've noted is
problematical, given code reorganization.

In principle, one could replace interpretive execution with native
execution up to a breakpoint, but that brings its own set of
problems, like what to do with shared code, etc.

: > The prolog and epilog need not be critical.  The worst that can happen
: > is that the stack pointer gets adjusted, but the signal occurs at a
: > point where the values to be restored to certain registers have not
: > been stored in their expected locations.  So, if and when the
: > exception occurs, you search for a reachable return instruction and
: > scan the intervening path to trace the completion of the computations
: > of those values.
:
: Huh?  You're saying the prologue and epilogue are not critical, but I have
: to do something special, which implies they *are* critical.  Sorry, I guess
: I don't get it.
:
: If you don't do *something* about the prologue and epilogue, then the
: runtime's normal exception handling code is going to try to unwind the
: stack.

I meant not "critical" in the sense that it requires the "virtual"
exclusion of exception throwing signal handlers.  The prolog and
epilog are certainly "critical" in the sense that before throwing a
signal, the implementation must complete their work and put things in
order before returning to the interrupted function's caller.

: Stack unwinding is often implemented using a range-table sort of
: technique, but the table contains information about the stack frame that
: allows the runtime to unwind it.  At the least, you need a different table
: entry for the prologue and epilogue.  At worst, you need an entry for every
: instruction in the prologue and epilogue.
:
: Anyway, I think you're assuming a particular kind of architecture here.
: What if my architecture requires more than just a stack-pointer adjustment?

Regardless of the architecture, along any path from the (virtual) point
of interruption to the function return, appropriate steps will be taken to
put the stack into a coherent state for the return.  Those steps can be
read off that path by traversing it in reverse order to see how the final
value of all critical registers and locations were computed.

: Remember, if this is going to be required by the standard, then it has to
: be implementable on any (reasonable) hardware platform.  (I say reasonable
: because there are certainly some specialized processors that could not
: support C++ today.)

Agreed.

: > One might also continue [virtual postponement via interpretation]
: > past out-of-order assignments, e.g., to the (effective location of
: > the) next conditional branch, where the ordering of instructions could
: > be expected to be coherent.
:
: Bad assumption.  Speculative execution, code sinking (which moves some of
: the prologue and epilogue down into other basic blocks), and other
: sophisticated optimizations make this assumption unwarranted.

Good point!  So, then postpone such signals to the next return. ;-)

Rightly or wrongly, it's not the prolog and epilog that I worry about;
I think that the techniques outlined above give us a way to put the
stack together.  If the exception arrives between out of order updates
of globals that one of the destructors is sensitive to, then it might,
say, release the wrong resource.  I've said previously that, because
such destructors are (de facto) asynchronous code, they would be
subject to certain disclaimers and caveats about the values of
globals, even those that are volatile and/or atomic, and that I don't
know exactly what those disclaimers should be.  The question, of
course, is whether there exists a sufficiently safe set of disclaimers
that leaves these destructors enough guarantees to reliably get
their jobs done.

Let me simply say that I agree that precluding such modern
optimization techniques would be unacceptable, and that I don't have
a way around the problem that doesn't involve equally unacceptable
increases in latency for such signals -- after all, we could easily
have a loop that needs to be externally aborted and that stays
entirely within a given function and in between two out of order
updates of critical global variables.

Tom Payne (thp@cs.ucr.edu)





Author: Doug Harrison <dHarrison@worldnet.att.net>
Date: 1996/05/09
Raw View
On Wednesday, May 08, 1996, David Brownell wrote...
> Doug Harrison wrote:
> >
> > VC++ 2.xx-4.xx already do something like this.
>
> Well, it's "odd exception stuff" (the SEH facility), but I don't see
> how it matches "exception thrown by one thread to another".

I was remarking on this statement:

> On Tuesday, May 07, 1996, David Brownell wrote...
>
> Several points I don't understand about this notion of signal handlers
> that throw C++ exceptions into arbitrary (unsuspecting :-) threads:

The fact that VC++ catch(...) will catch divide-by-zero, null pointer
dereference, etc. is very much like having a signal handler which throws
C++ exceptions. I'd be interested to know if any other platform includes
these events in normal EH; it seems in conflict with the DWP and what
Stroustrup discusses in "The Design and Evolution of C++", section 16.7.





Author: thp@cs.ucr.edu (Tom Payne)
Date: 1996/05/10
Raw View
David Vandevoorde (vandevod@cs.rpi.edu) wrote:
: >>>>> "T" == Tom Payne <thp@cs.ucr.edu> writes:
: [...]
: T> In addition, an asynchronous exception need not be an error; it could
: T> be a timeout, or it could indicate an external event that terminates a
: T> particular phase of a computation, like "target destroyed."
: [...]
:
: Wouldn't that make exceptions ``not exceptional''?

Good question!  I don't have an answer, but

FAQ 241:  "When should a function throw an exception?"
Answer:   "When it can't fulfill its promises."

The function that promised to track the target can't or doesn't need
to track it after it has been destroyed.  Circumstances have changed
that have made its promise irrelevant.  Did it promise only to track
the target until it was destroyed?  How can I tell?

Tom Payne (thp@cs.ucr.edu)





Author: Rob Stewart <stew@datalytics.com>
Date: 1996/05/10
Raw View
David Brownell wrote:
>
> Doug Harrison wrote:
> >
> >        catch(...) will catch Win32
> > structured exceptions, such as divide-by-zero, illegal memory reference,
>
> Hmmm ... I've always found VC++ confusing with respect to exceptions.  They
> have two distinct notions, "structured" ones (SEH), and "normal" C++ exceptions.
>
> The "structured" exceptions are unlike real ones in several ways:  they're
> resumable, don't show up in "throw" specs, and are dispatched by (arbitrary)
> predicate rather than type.  Very confusing even to call them "exceptions";
> they're not too similar to C++ exceptions, and I don't know what "structured"
> is intended to imply.  (It's not like C++ exceptions are unstructured!)
>
> I usually think of SEH and normal exceptions as being quite distinct ... but
> you're right, this bug is listed in the VC++ README.  Normally you'd need
> to use magic keywords ("__try/__except") and primitives to kick in SEH.
>

SEH is an OS-level feature, not a language-level (C++) feature.
You can use SEH in C and other language programs.  You cannot
use C++ exceptions in other languages, of course.  The reason
for the anomalous mixing of SEH with C++ exceptions is that C++
exceptions are implemented using SEH.  Obviously, Microsoft
didn't filter non-C++ SEH exceptions from the C++ ones in "catch
(...)."

--
Robert Stewart  | My opinions are usually my own.
Datalytics, Inc. | stew@datalytics.com





Author: bill@amber.ssd.hcsc.com (Bill Leonard)
Date: 1996/05/11
Raw View
In article <4mu2ik$ebm@galaxy.ucr.edu>, thp@cs.ucr.edu (Tom Payne) writes:
> Bill Leonard (bill@amber.ssd.hcsc.com) wrote:
> : This would require, say, two
> : > words to delimit each critical section, a modest expansion that need
> : > not exist, unless it is possibly to be used.
> :
> : How does the compiler know whether it will be used or not?  Seems to me
> : that, if the language allowed asynchronous exceptions, the compiler must
> : *always* assume one could occur at an inopportune time.
>
> Something more subtle is needed, say, something like a standard
> library "wrapper" function for signal throwing handlers.

This still doesn't explain your comment (see quote above) about "need not
exist, unless it is possibly to be used".  The two words per critical
section have to exist for every section of code that the compiler thinks
*might* be critical, unless there is a way for it to tell more exactly what
code *is* critical.  Unless the compiler knows which functions might be
called in the context of "allowing asynchronous exceptions", it must assume
all prologues and epilogues are critical.

> I meant not "critical" in the sense that it requires the "virtual"
> exclusion of exception throwing signal handlers.  The prolog and
> epilog are certainly "critical" in the sense that before throwing a
> signal, the implementation must complete their work and put things in
> order before returning to the interrupted function's caller.

I keep getting the feeling that we're having multiple conversations here,
and I never know which one we're having at any given moment. :-)

Let's agree on some terminology, shall we?  A "critical section" is any
piece of code with the following property: If a signal that might throw an
asynchronous exception can occur while executing the code, the "normal"
runtime mechanism for unwinding the stack for said exception will not work
correctly.  That is (at least) what *I* think we're talking about here.

The runtime must be told somehow about every critical section.  A suggested
mechanism is a range-table technique that delineates the boundaries of each
critical section.  Possibly the range-table may even give more information,
such as how to unwind the stack while in this region.

With those definitions, all prologues and epilogues are critical, and thus
need at least one range-table entry, unless the compiler has some way of
knowing that an asynchronous exception will *never* occur at that point.  So
far, I've heard no proposal for any way to communicate this information to
a compiler.

> Regardless of the architecture, along any path from the (virtual) point
> of interruption to the function return, appropriate steps will be taken to
> put the stack into a coherent state for the return.  Those steps can be
> read off that path by traversing it in reverse order to see how the final
> value of all critical registers and locations were computed.

That's not always possible.  You cannot, in general, reverse execute code.
Those computations may depend on previous register values that can only be
determined by forward execution.  Did I misunderstand what you said?

Also, I have seen architectures in which there were some instructions that
cannot be emulated; for example, instructions that do atomic "test and set"
kinds of operations cannot always be emulated.  Also problematic are
special instructions for interfacing to special hardware, which can depend
on saving the program counter where the instruction was executed.

> Good point!  So, then postpone such signals to the next return. ;-)

This DEFINITELY would be unacceptable in even a soft real-time system.
This could require interpretive execution for an unbounded length of time.

> Rightly or wrongly, it's not the prolog and epilog that I worry about;
> I think that the techniques outlined above give us a way to put the
> stack together.

But I disagree, in that the techniques are (a) inadequate, or (b) require
excessive overhead.

However, I agree that destructors are definitely a problem.

> I've said previously that, because
> such destructors are (de facto) asynchronous code, they would be
> subject to certain disclaimers and caveats about the values of
> globals, even those that are volatile and/or atomic, and that I don't
> know exactly what those disclaimers should be.  The question, of
> course, is whether there exists a sufficiently safe set of disclaimers
> exists that leaves these destructors enough guarantees to reliably get
> their jobs done.

My question is, how do you identify these de facto asynchronous
destructors?  Doesn't *every* destructor become such, as far as the
programmer is concerned, unless he can *prove* an asynchronous exception
will never be propagated to a block where this destructor will be executed?

Remember, I write the destructor for a *class*, not for a particular
function.  How do I know whether someone will want an object of that class
in a stack frame that might be unwound because of an asynchronous
exception?

Let's see where we stand on this discussion:

  * You are proposing a feature that, as yet, is unproven to meet a
    widespread need.

  * Implementing this feature requires considerable work for the compiler
    and/or runtime (probably both).  It also appears to impose overhead
    for all C++ users, whether they use the feature or not.

  * Writing code that is safe in the presence of asynchronous exceptions
    appears to be impossible, especially for third-party library writers.

Have I missed something or mis-represented the facts?  This appears to me
to be the current state of affairs, but please correct me if I'm wrong.

--
Bill Leonard
Harris Computer Systems Corporation
2101 W. Cypress Creek Road
Fort Lauderdale, FL  33309
Bill.Leonard@mail.hcsc.com

These opinions and statements are my own and do not necessarily reflect the
opinions or positions of Harris Computer Systems Corporation.

------------------------------------------------------------------------------
There's something wrong with an industry in which amazement is a common
reaction to things going right.

"Hard work never scared me.  Oh, sure, it has startled me from a distance."
                                                       -- Professor Fishhawk
------------------------------------------------------------------------------





Author: thp@cs.ucr.edu (Tom Payne)
Date: 1996/05/13
Raw View
Bill Leonard (bill@amber.ssd.hcsc.com) wrote:
: In article <4mu2ik$ebm@galaxy.ucr.edu>, thp@cs.ucr.edu (Tom Payne) writes:
: > Bill Leonard (bill@amber.ssd.hcsc.com) wrote:
: > : This would require, say, two
: > : > words to delimit each critical section, a modest expansion that need
: > : > not exist, unless it is possibly to be used.
: > :
: > : How does the compiler know whether it will be used or not?  Seems to me
: > : that, if the language allowed asynchronous exceptions, the compiler must
: > : *always* assume one could occur at an inopportune time.
: >
: > Something more subtle is needed, say, something like a standard
: > library "wrapper" function for signal throwing handlers.
:
: This still doesn't explain your comment (see quote above) about "need not
: exist, unless it is possibly to be used".  The two words per critical
: section have to exist for every section of code that the compiler thinks
: *might* be critical, unless there is a way for it to tell more exactly what
: code *is* critical.  Unless the compiler knows which functions might be
: called in the context of "allowing asynchronous exceptions", it must assume
: all prologues and epilogues are critical.

The compiler could place the table of delimiters in a separate output
file that would not need to be linked in unless, say, the wrapper function
were linked in.

: > I meant not "critical" in the sense that it requires the "virtual"
: > exclusion of exception throwing signal handlers.  The prolog and
: > epilog are certainly "critical" in the sense that before throwing a
: > signal, the implementation must complete their work and put things in
: > order before returning to the interrupted function's caller.
:
: I keep getting the feeling that we're having multiple conversations here,
: and I never know which one we're having at any given moment. :-)

Certainly, the first of my sentences quoted above is nearly
incomprehensible.  My apologies.

I was attempting to distinguish "critical" in the narrower sense of
requiring mutual exclusion (implemented, say, via virtual postponement)
from critical in the broader sense defined in your next paragraph.

: Let's agree on some terminology, shall we?  A "critical section" is any
: piece of code with the following property: If a signal that might throw an
: asynchronous exception can occur while executing the code, the "normal"
: runtime mechanism for unwinding the stack for said exception will not work
: correctly.  That is (at least) what *I* think we're talking about here.

Fine.

: > Regardless of the architecture, along any path from the (virtual) point
: > of interruption to the function return, appropriate steps will be taken to
: > put the stack into a coherent state for the return.  Those steps can be
: > read off that path by traversing it in reverse order to see how the final
: > value of all critical registers and locations were computed.
:
: That's not always possible.  You cannot, in general, reverse execute code.
: Those computations may depend on previous register values that can only be
: determined by forward execution.  Did I misunderstand what you said?

During the reverse traversal, one would build a calculation tree
(expression) for each value that is important to a coherent return,
and then carry out those calculations and store the values wherever
appropriate for a coherent return.

: Also, I have seen architectures in which there were some instructions that
: cannot be emulated; for example, instructions that do atomic "test and set"
: kinds of operations cannot always be emulated.  Also problematic are
: special instructions for interfacing to special hardware, which can depend
: on saving the program counter where the instruction was executed.

So the questions would be:
  *  How prevalent are such instructions?
  *  How vital are they to efficient prologues and epilogues?

: > Good point!  So, then postpone such signals to the next return. ;-)
:
: This DEFINITELY would be unacceptable in even a soft real-time system.
: This could require interpretive execution for an unbounded length of time.

I completely agree, hence the winking smiley.

: > Rightly or wrongly, it's not the prolog and epilog that I worry about;
: > I think that the techniques outlined above give us a way to put the
: > stack together.
:
: But I disagree, in that the techniques are (a) inadequate, or (b) require
: excessive overhead.
:
: However, I agree that destructors are definitely a problem.
:
: > I've said previously that, because
: > such destructors are (de facto) asynchronous code, they would be
: > subject to certain disclaimers and caveats about the values of
: > globals, even those that are volatile and/or atomic, and that I don't
: > know exactly what those disclaimers should be.  The question, of
: > course, is whether there exists a sufficiently safe set of disclaimers
: > exists that leaves these destructors enough guarantees to reliably get
: > their jobs done.
:
: My question is, how do you identify these de facto asynchronous
: destructors?  Doesn't *every* destructor become such, as far as the
: programmer is concerned, unless he can *prove* an asynchronous exception
: will never be propagated to a block where this destructor will be executed?

An asynchronous signal is essentially another thread preempting and
blocking the interrupted thread.  A destructor invocation by (an
exception thrown by) the handler of such a signal is an activity on
behalf of that other thread.  Moreover, the interrupted procedure
might well be a member function of an automatic object, whose
destructor will be invoked by the signal.  So, ALL MEMBER FUNCTIONS OF
ALL AUTOMATIC OBJECTS ARE CRITICAL.

: Let's see where we stand on this discussion:
:
:   * You are proposing a feature that, as yet, is unproven to meet a
:     widespread need.

One could argue that, since C is Turing complete, we don't "need" any
of the extensions of C++.  They are an "asset," the utility of which
will be debated for years to come.  (The Java folks even claim that we don't
need pointers.)

There is a well-known and widespread need to be able to asynchronously
abort a phase of a computation in a way that releases the resources
acquired during that phase.  A control-c, with the closing of files
during the subsequent abort, is the extreme case where the aborted
phase is the entire program.  We need a convenient, less extreme
response, where the program goes back to a known mode and resumes from
there.  The standard solution of programmed polling involves
significant:
  --  latency
  --  computational overhead
  --  programming overhead
The programming overhead is particularly troublesome, because every
line of program maintenance has to be evaluated in terms of its
possibly subtle impact on latencies and computational overhead.

:   * Implementing this feature requires considerable work for the compiler
:     and/or runtime (probably both).  It also appears to impose overhead
:     for all C++ users, whether they use the feature or not.

Neither of these claims has been established (or refuted).  They are
exactly what this discussion is about.

:   * Writing code that is safe in the presence of asynchronous exceptions
:     appears to be impossible, especially for third-party library writers.

This claim as well has been neither established nor refuted.  What is
clear, however, is that programming in the presence of asynchronous
exceptions is a form of MT programming, which is a tedious, but not
impossible, business.

: Have I missed something or mis-represented the facts?

You've probably missed some, but these will do for now.

Tom Payne  (thp@cs.ucr.edu)