Topic: Is STL MT-Safe?
Author: thp@cs.ucr.edu (Tom Payne)
Date: 1996/05/07
Bill Leonard (bill@amber.ssd.hcsc.com) wrote:
: In article <4mbm0o$4qc@galaxy.ucr.edu>, thp@cs.ucr.edu (Tom Payne) writes:
: > The issue remains one of quantifying these qualitative terms like
: > "significantly", "unacceptably high", and so on. How big is a range
: > table that delimits each prolog and epilog?
[...]
:
: You might need an entry in this table for *every instruction* of the
: prologue and epilogue in order for the compiler to correctly unwind the
: stack at any point. How small can you make the range table entries? On
: many 32-bit RISC machines, you could end up with 3 words per word of
: instruction for the prologue and epilogue. So we're talking a 300 percent
: expansion.
:
: For non-prologue, non-epilogue code, the expansion probably won't be as
: bad, but it can result in a 100 percent expansion. But now consider that
: some compilers now try to scatter the prologue and epilogue throughout the
: rest of the code, so we're back to having lots of table entries.
The technique suggested by Chase was conceptually quite simple:
virtually postpone any signal whose handler might throw an exception
by continuing execution interpretively (interleaved code included) to
a point outside any critical section. This would require, say, two
words to delimit each critical section, a modest expansion that need
not exist unless the feature might actually be used. One might also
continue past out-of-order assignments, e.g., to the (effective
location of the) next conditional branch, where the ordering of
instructions can be expected to be coherent.
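In outline, the dispatcher's side might look like this (a sketch only;
the table layout and all names are hypothetical):

    // Sketch of the dispatcher's range check.  The compiler emits two
    // words (begin, end) per critical section, sorted by address; the
    // table is consulted only when a signal whose handler might throw
    // actually arrives.
    #include <cstddef>

    typedef unsigned long Addr;           // width of a code address

    struct Range { Addr begin, end; };    // two words per section

    extern const Range critical[];        // compiler-emitted, sorted
    extern const std::size_t n_critical;

    // Returns the end of the critical section enclosing pc, or 0 if
    // pc is at a safe point and the exception may be thrown at once.
    Addr defer_until(Addr pc) {
        std::size_t lo = 0, hi = n_critical;
        while (lo < hi) {                 // ordinary binary search
            std::size_t mid = (lo + hi) / 2;
            if (pc >= critical[mid].end)        lo = mid + 1;
            else if (pc < critical[mid].begin)  hi = mid;
            else return critical[mid].end;      // interpret up to here
        }
        return 0;                         // outside every section
    }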
The prolog and epilog need not be critical. The worst that can happen
is that the stack pointer gets adjusted, but the signal occurs at a
point where the values to be restored to certain registers have not
been stored in their expected locations. So, if and when the
exception occurs, you search for a reachable return instruction and
scan the intervening path to trace the completion of the computations
of those values.
Tom Payne (thp@cs.ucr.edu)
---
[ comp.std.c++ is moderated. To submit articles: try just posting with ]
[ your news-reader. If that fails, use mailto:std-c++@ncar.ucar.edu ]
[ FAQ: http://reality.sgi.com/employees/austern_mti/std-c++/faq.html ]
[ Policy: http://reality.sgi.com/employees/austern_mti/std-c++/policy.html ]
[ Comments? mailto:std-c++-request@ncar.ucar.edu ]
Author: bill@amber.ssd.hcsc.com (Bill Leonard)
Date: 1996/05/07
In article <4mdvaa$lih@galaxy.ucr.edu>, thp@cs.ucr.edu (Tom Payne) writes:
> Right! At certain points ("critical regions"), that state is
> indeterminate and we must prevent asynchronous exceptions via, say,
> range-table techniques.
How do you "prevent" asynchronous exceptions via range-tables? I can see
how you could use range-tables to tell the runtime how to unwind the stack,
but sounds like you are implying that the range-tables just tell the
runtime to defer the exception. What I'm missing is how the runtime knows
when to come back and regenerate the exception, unless some code somewhere
is executed to say "I'm out of the critical section now".
So, just to clarify, I think a range-table technique that enables the
runtime to unwind the stack *anytime* is very high overhead and interferes
a lot with optimization. A range-table technique that just defers the
exception suffers the penalty of having to execute some code upon leaving
each critical section, or else there must be some sort of polling going on
(which is what you were trying to avoid).
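To make that penalty concrete, the flag-based variant might look like
this (a sketch, with hypothetical names); the test after the critical
section is exactly the code that must execute on every exit:

    // Flag-based deferral (hypothetical names).  Every critical
    // section pays a store on entry and a test-and-branch on exit.
    #include <signal.h>

    static volatile sig_atomic_t in_critical = 0;  // set by mainline
    static volatile sig_atomic_t pending     = 0;  // set by handler

    extern "C" void on_signal(int) {
        // A real implementation would throw at once when in_critical
        // is 0; a portable handler can only set a flag, so this
        // sketch always defers to the next exit test.
        pending = 1;
    }

    struct AlarmException { };

    void some_function() {
        in_critical = 1;             // entry: unwinding unsafe from here
        // ... work the exception must not interrupt ...
        in_critical = 0;             // exit: safe again
        if (pending) {               // the per-exit code in question
            pending = 0;
            throw AlarmException();  // regenerate the deferred exception
        }
    }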
> Bill Leonard (bill@amber.ssd.hcsc.com) wrote:
> : Unless you can show there is *no* overhead if you don't use the feature,
> : or you can show that the feature is *required* by a large majority of
> : users, I think you've already lost.
>
> No shades of gray? What if the overhead were a 10% increase in code
> size, and the feature were "very useful" in, say, 40% of all real-time
> programs?
I think if 40% of all real-time programs needed such a feature, some language
would have had it years ago. :-)
You didn't mention a performance penalty. What goal would you set for
that?
I think if there is *any* significant performance penalty imposed on all
programs for this feature, then it has to be very useful for a large
majority (much greater than 50%) of *all* programs (not just real-time
programs). You can't expect users to pay a significant penalty for
something they won't use unless they are clearly in the minority. They'll
just use something else.
> It is my understanding that the standard trajectory for a proposed feature is:
> * informal discussion (e.g., this thread),
> * followed by experimental implementation (e.g., by a
> research lab rather than a commercial compiler vendor),
> * followed by preliminary adoption by standards committee(s),
> * followed by commercial implementation,
> * and incorporation into a draft of the standard,
> with each step being contingent on the outcome of the previous steps.
Fair enough. The question is, when do you progress from the informal
discussion to experimental implementation, or when do you determine that
there will be no progression (i.e., the issue is dead)?
If you can convince enough users that the penalty will be zero or very
small, that the benefits to a significant segment of users is large, and
that the cost of implementation is reasonable, and you can convince some
compiler vendor (which could be someone working on, say, g++, for free) to
do the work, then clearly you can progress to the experimental
implementation. The problem with many issues like this is that they
neither die nor progress. :-)
> : My point is that a function might catch an exception that it thinks it
> : *does* know how to handle because it thinks the only source of that
> : exception is a function it called. Along comes a signal that throws that
> : exception from somewhere else, and the signal just happened to occur while
> : this function is active. Bingo, exception handled in the wrong place.
> It would be up to the programmer(s) to make sure that this did not happen
> in cases where it is an error. The programmer(s) could, for instance, make
> the classes of asynchronous exceptions distinct from the others.
How does a library programmer do this if he doesn't have control over the
code that calls it?
> : The first allows me to write a function that need know only the exceptions
> : thrown by procedures it calls. It need know nothing about the "external
> : environment" set up by functions that call it. In particular, a function
> : that calls no procedures and throws no exceptions need not worry about
> : getting any exceptions at all. This would not be true if signals
> : propagate exceptions. Even in cases where a function calls another
> : function, it need only know the exceptions propagated by that function.
> It seems to me that your function would be fine just as you originally
> wrote it. If it doesn't know how to handle a particular asynchronous
> exception, it lets the exception fall through (unwind) to a
> previously called function that can handle it.
You're still missing the point -- how do I, the programmer, or the compiler
*know* whether an asynchronous exception can occur within a given function?
To know that, I have to know something about the environment at the time of
the call.
> : If a called function contains a throw specification, then the compiler
> : (and the programmer) can tell the set of exceptions that might be thrown
> : and depend on that (with the addition of the "unexpected" exception) being
> : the entire set. This would no longer be true in your world.
> The set would be larger, but still bounded and well defined. The
> programmer and compiler would merely add in the set of exceptions
> thrown from handlers of asynchronous signals (which, alas, would need
> some syntactic demarcation, since the compiler would need to give them
> special treatment).
Ah ha! So now you're going to add some magic new feature that will tell
the compiler about these asynchronous exceptions, eh? Pray elaborate.
> : Another problem is that a function could behave radically differently if a
> : new call is added to it from within a segment where a signal is enabled
> : that the function did not expect to be enabled. This violates many sound
> : principles of software engineering.
> When a function is called at a point where its preconditions are not met,
> one should expect "radically different" behavior.
I'm not violating the preconditions, I'm violating "intermediate
conditions", in that while the function is executing an exception can occur
for which it is unprepared. I don't think every C++ programmer in the
whole world is going to be willing (or prepared) to consider whether the
functions they write behave correctly in the face of asynchronous
exceptions. Just making functions normally exception-safe is hard enough.
> If this feature takes more than, say, a ten percent increase in code
> or a one percent increase in running time, it's probably not worth it.
Any increase in running time is probably unacceptable to those users who
don't want the feature. One of the prerequisites to adding exceptions was
to show that it could be implemented with no runtime overhead.
The increase in code space is more problematic. How much is too much?
Difficult to say.
--
Bill Leonard
Harris Computer Systems Corporation
2101 W. Cypress Creek Road
Fort Lauderdale, FL 33309
Bill.Leonard@mail.hcsc.com
These opinions and statements are my own and do not necessarily reflect the
opinions or positions of Harris Computer Systems Corporation.
------------------------------------------------------------------------------
There's something wrong with an industry in which amazement is a common
reaction to things going right.
"Hard work never scared me. Oh, sure, it has startled me from a distance."
-- Professor Fishhawk
------------------------------------------------------------------------------
Author: thp@cs.ucr.edu (Tom Payne)
Date: 1996/04/23
David Chase (chase@centerline.com) wrote:
: In article <4l8pud$3t8@galaxy.ucr.edu>, Tom Payne <thp@cs.ucr.edu> wrote:
: >Fixing the stantard so that signal handlers can read global data (with
: >appropriate qualifications on the significance of the result) would be
: >a significant help. The complexity here is only in the wording of the
: >standard.
:
: Nope, there's appalling interactions with constructors and destructors.
The purpose of signal blocking is to control such interactions on
non-atomic shared data. Currently, however, if a signal handler reads
global data, "undefined behavior" ensues. If the specification were
corrected, one could portably implement signal blocking and, with the
aid of signal blocking, correctly share non-atomic data among signal
handlers and the underlying program.
All the specification needs to say is that, if a function reads a
volatile object that is not updated between the previous sequence
point and the succeeding one, the result will be the object's value as
of that previous sequence point. Also, if the object is volatile and
atomic and is updated in that interim, then the result could also be
the value of one of those updates.
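Under that wording, for example, the familiar flag idiom becomes
portable. A sketch (sig_atomic_t is the one type the standard already
guarantees can be accessed atomically; the names are hypothetical):

    // Program-level signal blocking, portable under the proposed
    // wording: 'blocked' is volatile and atomic, so the handler's
    // read is guaranteed to see the mainline's last completed store.
    #include <signal.h>

    static volatile sig_atomic_t blocked  = 0;
    static volatile sig_atomic_t deferred = 0;

    extern "C" void handler(int) {
        if (blocked) {       // a well-defined read under the amendment
            deferred = 1;    // note the signal and return
            return;
        }
        // ... handle the signal immediately ...
    }

    void update_shared_state() {
        blocked = 1;         // "block": the handler will only set a flag
        // ... update non-atomic data shared with the handler ...
        blocked = 0;         // "unblock"
        if (deferred) { deferred = 0; /* ... act on the signal ... */ }
    }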
(Though too late for the C++ standard, such modification might be an
appropriate consideration for the upcoming revision of the C
standard.)
: >The second major need is to allow a signal to force an exception
: >without the program polling for it. Complexity is never "necessarily
: >a good thing," unless the benefits outweigh the costs. The benefits
: >would be
: >
: > * lower latency exception responses to signals
: >
: > * elimination of the CPU overhead of polling
: >
: > * much simpler programming for such situations.
: >
: >The cost would be in the increased complexity of the implementation
: >and CPU overhead caused by the implementation. In a thread on this
: >topic a couple months ago, David Chase gave an implementation strategy
: >based on range tables that seemed to involve no CPU overhead and whose
: >complexity seemed reasonable.
:
: The DISPATCH has reasonable complexity. Unfortunately, unless both
: the compiler and the language are engineered to provide abort/commit
: semantics for every single operation, you end up with unsafe portions
: of code -- if an exception is thrown in one of those portions of code,
: you're potentially hosed (as in, how inconsistent would you like your
: data structures to get?) The destructor/constructor/automatic-object
: semantics already in place make this non-trivial.
One need only "shrink wrap" certain "critical" regions against
exceptions thrown from signal handlers, regions like function calls
and returns (and possibly constructors and destructors). The most
obvious approach involves setting and checking flags at the boundaries
of the critical regions. This approach costs some in terms of run
time, but I've become less impressed by this cost as I've listened to
colleagues complain about all the unused instruction slots in modern
architectures. Your range-table techniques remove this overhead but
require some complexity in rolling back or postponing the handler (or
its exception) out of the critical region, e.g., interpretive
execution of the interrupted code through the balance of the critical
region. (Clever stuff!)
[...]
: And no, I'm not sure that any even slightly popular language does this
: right. I haven't studied Ada's approach to this problem in detail, so
: they might have gotten it right. I think their approach is to allow
: programmers to delimit critical sections during which interrupts are
: suspended (think about implementing that efficiently on your favorite
: machine for a minute).
[...]
I doubt that the Ada programmer is expected to shrink wrap by hand
each function call and return.
Tom Payne (thp@cs.ucr.edu)
Author: chase@centerline.com (David Chase)
Date: 1996/04/25
In article 2lt@galaxy.ucr.edu, thp@cs.ucr.edu (Tom Payne) writes:
> David Chase (chase@centerline.com) wrote:
> : Nope, there's appalling interactions with constructors and destructors.
>
> The purpose of signal blocking is to control such interactions on
> non-atomic shared data.
> One need only "shrink wrap" certain "critical" regions against
> exceptions thrown from signal handlers, regions like function calls
> and returns (and possibly constructors and destructors). The most
> obvious approach involves setting and checking flags at the boundaries
> of the critical regions. This approach costs some in terms of run
> time, but I've become less impressed by this cost as I've listened to
> colleagues complain about all the unused instruction slots in modern
> architectures. Your range-table techniques remove this overhead but
> require some complexity in rolling back or postponing the handler (or
> its exception) out of the critical region, e.g., interpretive
> execution of the interrupted code through the balance of the critical
> region. (Clever stuff!)
Thanks for the credit, but it isn't deserved. I learned most of what I
know about range tables from talking to Mick Jordan, who used them in
Acorn's Modula-2+ implementation, and I think he learned them from
someone else before that. I learned the rest, trying to figure out how
to deal with register allocation and exception handling, arguing with
people over the Sparc V9 ABI definition, and actually implementing them
in a compiler back-end. I've heard that CLU used both range tables and
hash tables in its implementation (hash tables are sufficient if
exceptions are synchronous, and a good deal faster, but I'd have to
think for a bit to come up with a good position-independent encoding of
them).
Anyhow, I think range tables are not appropriate for delimiting
critical sections unless signals are extremely rare (whoops, see
below). The Modula-3 rule of thumb (and I dearly wish that standards
committees would commit to such a rule of thumb) was 10,000 to 1 --
that is, if it costs 10,000 cycles in the exceptional case to save 1
cycle in the normal case, that's break-even. In the case of a thrown
exception, the inner loop of a binary search is something like 15
instructions (I recall, for a sensibly encoded table) and 20 iterations
will cover a million entries (seems like a sensible upper bound) so
it's 300 cycles per frame to unwind, assuming that you are executing
code and discarding frames along the way. An exception thrown through
20 frames is decently deep (do note that there are various
optimizations which may be applied within a subroutine body, and
inlining increases the opportunity to apply these optimizations), so
6000 cycles is a
pretty-bad-case overhead to use for PC-range-based searches. 6000
cycles is less than 10,000, so it beats flag testing, which itself
probably takes about a cycle (as you note, if the flag were in a
register, you could plausibly argue that it takes only a fraction of
a cycle on a modern machine).
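Spelling out that arithmetic:

    15 instructions/iteration x 20 iterations (2^20 ~ a million entries)
        = 300 cycles to search the table for one frame
    300 cycles/frame x 20 frames unwound = 6000 cycles per thrown exception
    6000 < 10,000, so by the 10,000-to-1 rule the tables beat a flag
        test that costs about 1 cycle in the normal case.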
Now, consider a signal -- well, dang, I just did, and realized that for
the special case that you are talking about (atomic/pseudo-atomic
assignment), it might just be worthwhile to lookup the range (300
cycles), and if it is found, do something sensible. I define something
sensible to be -- for small assignments (known, encoded in the table)
interpret the instructions through to the end, then take the signal.
For large assignments, the interface with the compiler says that a flag
is checked after the assignment is complete, and the signal reactivated
as necessary (you won't mind paying the overhead of a flag check for a
sufficiently large assignment). This works because this is not a general-purpose
critical section that can make additional calls -- this is code at the
youngest end of the call chain, and you don't have to look all the way
up the call chain.
This is still imperfect, on account of timer interrupts needing to be
serviced in a hurry (for instance, before the next one arrives) and the
possibility of losing a time-slice in a shared machine (but what does
that mean, anyway? if you lose the slice in 10 microseconds, who's to
say that the interrupt didn't just arrive 10 microseconds later?) Then
again, if you're going to have "atomic" assignments to volatile
variables and you're also going to have hard real-time deadlines,
something's got to give. If you're really time-constrained, there
are other entertaining games you can play -- out-line all volatile
operations into a bunch of code in a particular address range, and record
the exact location of the volatile stuff in that range with a bit-map
(it'll be dense). Thus, if you aren't actually performing a volatile
operation, the signal delivery will be delayed by a couple of range
checks, instead of 300 cycles.
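Roughly (a sketch; the section symbols and the bitmap would come from
the compiler and linker, and every name here is hypothetical):

    // All volatile operations are out-lined into one contiguous code
    // range; a dense bitmap marks the instruction words that actually
    // touch a volatile object.
    typedef unsigned long Addr;

    extern "C" char volatile_ops_start[], volatile_ops_end[];  // linker
    extern const unsigned char volatile_pc_bits[];   // 1 bit per word

    bool deliver_immediately(Addr pc) {
        Addr lo = (Addr)volatile_ops_start;
        Addr hi = (Addr)volatile_ops_end;
        if (pc < lo || pc >= hi)
            return true;                 // two cheap range checks
        Addr w = (pc - lo) / 4;          // assumes 4-byte instructions
        bool busy = (volatile_pc_bits[w / 8] >> (w % 8)) & 1;
        return !busy;                    // delay only during the op itself
    }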
(Note, by-the-way, that everything I'm talking about is across the
implementation line, necessarily non-portable, and not itself
constrained by any silly-ass standards. It is, of course, supposed
to help one efficiently implement some of these standards, or what
might be in one of these standards.)
This also demands some really entertaining code in a signal handler,
but I've written that sort of entertaining code, and it didn't kill me.
However, I recall that I also considered virtualizing the entire signal
delivery system (as in, playing these games, and doing it all in user
mode, so that calls to block and unblock signals would go much faster
than seems usually to be the case -- that's a handy optimization for at
least one benchmark, if I remember correctly) and decided that it was
far too much gunk to tackle as a "fun project" in my spare time.
speaking for myself,
David Chase
Author: thp@cs.ucr.edu (Tom Payne)
Date: 1996/04/26
David Chase (chase@centerline.com) wrote:
: In article 2lt@galaxy.ucr.edu, thp@cs.ucr.edu (Tom Payne) writes:
[...]
: > Your range-table techniques remove this overhead but
: > require some complexity in rolling back or postponing the handler (or
: > its exception) out of the critical region, e.g., interpretive
: > execution of the interrupted code through the balance of the critical
: > region. (Clever stuff!)
:
: Thanks for the credit, but it isn't deserved.
[...]
: Anyhow, I think range tables are not appropriate for delimiting
: critical sections unless signals are extremely rare (whoops, see
: below).
More specifically, not unless signals whose handlers (might) throw
an exception are extremely rare.
: Now, consider a signal -- well, dang, I just did, and realized that for
: the special case that you are talking about (atomic/pseudo-atomic
: assignment), it might just be worthwhile to lookup the range (300
: cycles), and if it is found, do something sensible. I define something
: sensible to be -- for small assignments (known, encoded in the table)
: interpret the instructions through to the end, then take the signal.
: For large assignments, the interface with the compiler says that a flag
: is checked after the assignment is complete, and the signal reactivated
: as necessary (you won't mind paying the overhead of a flag check for a
: sufficiently large assignment). This works because this is not a general-purpose
: critical section that can make additional calls -- this is code at the
: youngest end of the call chain, and you don't have to look all the way
: up the call chain.
My bit about allowing signal handlers to read requires nothing of the
implementation except that it not use volatile variables as temps.
I'm merely suggesting that the standard be amended to say that when
you read a volatile atomic object from a signal handler you get the
last value written to it (possibly by hardware or another piece of
code), and that the same hold for a non-atomic object provided that nothing
has been written to it since the last sequence point. This amendment
is to allow coordination to be implemented at the program level (and
does not require coordination at the implementation level).
[...]
: However, I recall that I also considered virtualizing the entire signal
: delivery system (as in, playing these games, and doing it all in user
: mode, so that calls to block and unblock signals would go much faster
: than seems usually to be the case -- that's a handy optimization for at
: least one benchmark, if I remember correctly) and decided that it was
: far too much gunk to tackle as a "fun project" in my spare time.
That's exactly what I have in mind: the standard should allow the writing
of a portable user-mode library to implement locks and signal blocking
(via volatile atomic variables).
Allowing a signal handler to throw an exception and, thereby, force
the unwinding of the stack is yet another matter. Certain critical
operations must be shrink wrapped by the implementation against such
spontaneous stack unwinding. Such critical operations might include:
  * invoking or returning from a function and the processing of exceptions,
  * constructors and destructors
The point is that the signal-thrown exception might find the stack or
object in an inconsistent state.
It would be interesting to know what techniques are used by Ada and
Modula implementations, and what overheads are observed.
Tom Payne (thp@cs.ucr.edu)
Author: kcline@sun132.spd.dsccc.com (Kevin Cline)
Date: 1996/04/26
[Moderator's note: this thread is crossposted. Be careful with follow-ups.]
>The second major need is to allow a signal to force an exception
>without the program polling for it. Complexity is never "necessarily
>a good thing," unless the benefits outweigh the costs. The benefits
>would be
>
> * lower latency exception responses to signals
>
> * elimination of the CPU overhead of polling
>
> * much simpler programming for such situations.
>
The conversion of signals to exceptions is useful only for synchronous
signals like SIGFPE and SIGSEGV. The real problem is that the
essentially asynchronous signal facility has been perverted to handle
synchronous events. This is a UNIX problem, not a C++ problem.
Asynchronous signals like SIGTERM cannot be converted to exceptions in
any useful way; every routine would have to check for them or suffer
abnormal termination.
--
Kevin Cline
Author: bill@amber.ssd.hcsc.com (Bill Leonard)
Date: 1996/05/02
In article <4m5idl$qde@galaxy.ucr.edu>, thp@cs.ucr.edu (Tom Payne) writes:
> Bill Leonard (bill@amber.ssd.hcsc.com) wrote:
> : How can I write a function that behaves reasonably
> : in the face of an exception that can happen at any time, for any reason?
>
> By having it catch an object that tells where, why, and by whom the
> exception was thrown.
Unless every exception handler in every function is going to do some
extensive decoding of the exception, I don't see that as a solution.
Anyway, just because I can tell where the exception came from doesn't mean
I can do anything about it. The whole idea behind the C++ exception model
(and other similar models) is that exceptions are handled by functions that
have enough information to do something about them. If you can't predict
where the exception will occur, how can you be sure you will handle it at
the appropriate level?
> Programmers can keep the set of exceptions thrown by handlers of
> asynchronous signals as small as they choose. Moreover, any signal
> can be blocked at points where its occurrence would jeopardize the
> integrity of the program.
My point is that the *compiler* could not do any checking of exception
specifications, because it has no knowledge of this user-imposed limit on
the set of exceptions thrown by signal handlers. Neither could a reader of
the code who didn't know about that set; it isn't expressed in any way by
the language.
> Certainly, exceptions should not be the normal way to handle
> asynchronous events. However, the need to occasionally bail out of a
> particular phase of a program as a result of an external event seems
> quite natural, e.g., a situation where, if any of several alarms go
> off, the current computation becomes useless and needs to be aborted.
That sounds like a very special case. I don't think the exception
mechanism should be perverted to support such a special case that, for most
programs, would never occur. This is especially true if the overhead of
supporting such a mechanism is high, which it would be.
> Of course, there are thresholds above which time/space overheads are
> unacceptable, as well as thresholds below which they are acceptable.
> The issue is how small the overheads can be kept, and what's
> acceptable. (Perhaps, some relevant data is available for other
> languages.)
I know of no other language that has an exception mechanism that supports
this.
> : This is not really true -- at the least it is misleading. Function
> : prologue code, for instance, could typically only generate a SIGSEGV if
> : you've exceeded your stack limits, and you can check for that first.
>
> In cases where even synchronous signals can be ruled out during
> critical sections, all difficulties (and overhead) disappear.
You missed my point. The compiler *can* rule out synchronous signals, but
it can't rule out asynchronous ones. Therefore, it must protect all such
critical sections from signals. Since procedure prologue and epilogue are
such critical sections, this significantly increases the overhead of EVERY
function call.
Besides, currently the language does not even allow synchronous signal
handlers to throw exceptions, so the compiler has lots of freedom for
optimization that it would not have otherwise.
> Why disable optimization of computations that might abort?
Again, you missed the point. The compiler and runtime system must be able
to unwind the stack and find the correct exception handler whenever an
exception is thrown. If asynchronous signal handlers can throw exceptions,
then the system must be prepared to unwind AT ANY TIME. Without this
burden, the system need be prepared to do this only at certain points:
namely, at function calls or throw statements. Such freedom allows the
optimizer to mix parts of the stack frame construction and destruction with
user code.
Another point: Currently, an optimizer could rearrange assignments if it
can determine that you can't tell the difference. That's easy if there
are no intervening function calls and no possible aliasing of the objects being
assigned. However, once you allow signal handlers to throw exceptions, the
optimizer could no longer do this because you *would* be able to tell that
it had done so.
If the optimizer rearranged the assignments anyway, then destructors for
objects would have a tough time figuring out what state the object was in
and how to destroy it. You could never write a safe destructor. That
would be unacceptable, so nobody would ever agree to allow that much
freedom to the optimizer. This could slow down many machines, especially
all the modern RISC processors, by a huge amount.
--
Bill Leonard
Harris Computer Systems Corporation
Bill.Leonard@mail.hcsc.com
Author: thp@cs.ucr.edu (Tom Payne)
Date: 1996/05/03
Bill Leonard (bill@amber.ssd.hcsc.com) wrote:
:
: Unless every exception handler in every function is going to do some
: extensive decoding of the exception, I don't see that as a solution.
Most functions don't catch most exceptions, especially those they don't
know how to handle.
: Anyway, just because I can tell where the exception came from doesn't mean
: I can do anything about it. The whole idea behind the C++ exception model
: (and other similar models) is that exceptions are handled by functions that
: have enough information to do something about them. If you can't predict
: where the exception will occur, how can you be sure you will handle it at
: the appropriate level?
The fact that a signal is asynchronous implies that you can't predict
exactly where it will occur, but doesn't mean that the programmer has
no control over its occurrences and/or its ability to throw
exceptions.
: My point is that the *compiler* could not do any checking of exception
: specifications, because it has no knowledge of this user-imposed limit on
: the set of exceptions thrown by signal handlers. Neither could a reader of
: the code who didn't know about that set; it isn't expressed in any way by
: the language.
As things now stand, a function could see any exception thrown by any
function it calls, however indirectly (including those passed as
parameters). As things would stand, a function could see any of those
exceptions plus any thrown by the handler of an asynchronous signal
that is enabled at that point in the code. So, why is the second set
less checkable than the first?
: > Certainly, exceptions should not be the normal way to handle
: > asynchronous events. However, the need to occasionally bail out of a
: > particular phase of a program as a result of an external event seems
: > quite natural, e.g., a situation where, if any of several alarms go
: > off, the current computation becomes useless and needs to be aborted.
:
: That sounds like a very special case. I don't think the exception
: mechanism should be perverted to support such a special case that, for most
: programs, would never occur. This is especially true if the overhead of
: supporting such a mechanism is high, which it would be.
If the overhead is too high, of course, the mechanism isn't worth it.
The issue is how high the overhead would be, and how high is "too
high." Remember, however, that the alternative is to poll for the
occurrence of those external events, which involves overhead as well.
: I know of no other language that has an exception mechanism that supports
: this.
I've been told, perhaps incorrectly, that in Ada a "protected
interrupt procedure" (which is supposed to correspond to a signal
handler) can cause a stack unwinding exception.
: You missed my point. The compiler *can* rule out synchronous signals, but
: it can't rule out asynchronous ones. Therefore, it must protect all such
: critical sections from signals. Since procedure prologue and epilogue are
: such critical sections, this significantly increases the overhead of EVERY
: function call.
The issue remains one of quantifying these qualitative terms like
"significantly", "unacceptably high", and so on. How big is a range
table that delimits each prolog and epilog? Are we talking about a
one, a ten, or a hundred percent expansion in code size?
: > Why disable optimization of computations that might abort?
[...]
:
: Another point: Currently, an optimizer could rearrange assignments if it
: can determine that you can't tell the difference. That's easy if there's
: no intervening function calls and no possible aliasing of the objects being
: assigned. However, once you allow signal handlers to throw exceptions, the
: optimizer could no longer do this because you *would* be able to tell that
: it had done so.
:
: If the optimizer rearranged the assignments anyway, then destructors for
: objects would have a tough time figuring out what state the object was in
: and how to destroy it. You could never write a safe destructor. That
: would be unacceptable, so nobody would ever agree to allow that much
: freedom to the optimizer. This could slow down many machines, especially
: all the modern RISC processors, by a huge amount.
Excellent point. I agree that ruling out such optimization would be
unacceptable. Note also that, even without optimization, the
destructors and catchers invoked by such an asynchronous exception
would face the same caveats and disclaimers regarding the
indeterminacy of values of static objects as the signal handler
itself, e.g., the signal could occur and throw the exception in the
middle of a
non-atomic write to a static object. The key issue is that the
destructors have to be able to release resources and to coherently
update the global objects describing the state of those resources. It
should suffice to make those objects volatile and to protect their
updates with signal blocking.
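For example (a sketch; the POSIX sigprocmask calls merely stand in
for whatever blocking primitive the implementation provides, and
every name is hypothetical):

    // A destructor that releases a slot in a global resource table.
    // The table is volatile and the update is wrapped in signal
    // blocking, so no handler ever sees it half-written.
    #include <signal.h>

    extern volatile int resource_in_use[];   // hypothetical global state

    class Resource {
        int slot_;
    public:
        explicit Resource(int slot) : slot_(slot) { }
        ~Resource() {
            sigset_t all, old;
            sigfillset(&all);
            sigprocmask(SIG_BLOCK, &all, &old);  // no handler runs here
            resource_in_use[slot_] = 0;          // coherent update
            sigprocmask(SIG_SETMASK, &old, 0);   // restore the old mask
        }
    };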
Tom Payne (thp@cs.ucr.edu)
Author: clamage@Eng.Sun.COM (Steve Clamage)
Date: 1996/05/03
In article 4qc@galaxy.ucr.edu, thp@cs.ucr.edu (Tom Payne) writes:
>
>The fact that a signal is asynchronous implies that you can't predict
>exactly where it will occur, but doesn't mean that the programmer has
>no control over its occurrences and/or its ability to throw
>exceptions.
But the problem is that given a stretch of code, the compiler cannot
know (by definition) where or whether an asynchronous signal can
occur. If asynchronous signals are allowed to throw exceptions,
the compiler must make the most pessimistic of assumptions at every
decision point, and guard every critical region of code where it
cannot prove that a signal cannot occur.
>If the overhead is too high, of course, the mechanism isn't worth it.
>The issue is how high the overhead would be, and how high is "too
>high." Remember, however, that the alternative is to poll for the
>occurrence of those external events, which involves overhead as well.
Let me point out yet again that if it makes sense to do so on a platform,
the implementor is free to provide defined semantics for exceptions
thrown from signal handlers, and to add additional extensions that
may be required to make good use of them. If you work in an environment
where that would be a useful extension, ask your vendor to provide it.
The issue is whether the standard should require that all implementations
always provide that capability, and thus require all programs and all
programmers to accept the overhead.
The current situation is that programs that want to deal with signals
have some overhead (whether via polling or via exceptions thrown from
signal handlers). Programs that don't need to deal with signals do not
have any of that overhead.
Making semantics "undefined" is the standard's way of providing a hook
for implementors to add extensions. If it turns out that some extension
is broadly useful and inexpensive, it can be added to the next version of
the standard. In the mean time, other implementations will begin to supply
it anyway.
I do not believe that throwing an exception from a signal handler will fall
into that category, but perhaps it will.
---
Steve Clamage, stephen.clamage@eng.sun.com
Author: thp@cs.ucr.edu (Tom Payne)
Date: 1996/05/03
Steve Clamage (clamage@Eng.Sun.COM) wrote:
:
: But the problem is that given a stretch of code, the compiler cannot
: know (by definition) where or whether an asynchronous signal can
: occur. If asynchronous signals are allowed to throw exceptions,
: the compiler must make the most pessimistic of assumptions at every
: decision point, and guard every critical region of code where it
: cannot prove that a signal cannot occur.
It appears that critical regions can be guarded with no running-time
overhead via range-table techniques (though the claim has been made
that the resulting tables would be "unacceptably large.") To avoid
imposing an undue burden of pessimism on the compiler, the standard
would have to include appropriate disclaimers about the indeterminacy
of objects shared among asynchronous activities. I think I know what
those disclaimers should be for the signal handlers themselves, but
I'm less certain about the destructors and the catchers that would
be activated by an asynchronous exception.
: Let me point out yet again that if it makes sense to do so on a platform,
: the implementor is free to provide defined semantics for exceptions
: thrown from signal handlers, and to add additional extensions that
: may be required to make good use of them. If you work in an environment
: where that would be a useful extension, ask your vendor to provide it.
Of course! But, a major objective of the standardization process is
to constrain featurism and product differentiation that inhibits
program portability.
: The issue is whether the standard should require that all implementations
: always provide that capability, and thus require all programs and all
: programmers to accept the overhead.
Agreed! So, the issue becomes the utility of the feature and the
amount of overhead it would require, which is what this subthread
is about.
: ... If it turns out that some extension
: is broadly useful and inexpensive, it can be added to the next version of
: the standard.
Without question, it is too late for such an extension to the first
version of the standard. So, comp.std.c++ is the appropriate forum to
discuss the desirability of this possible extension and the mechanisms
and cost of its implementation.
Tom Payne
Author: bill@amber.ssd.hcsc.com (Bill Leonard)
Date: 1996/05/03
[Note: I deleted the cross-post to comp.programming.threads, since this
discussion no longer has much to do with threads.]
In article <4mdbf6$fc5@galaxy.ucr.edu>, thp@cs.ucr.edu (Tom Payne) writes:
> It appears that critical regions can be guarded with no running-time
> overhead via range-table techniques (though the claim has been made
> that the resulting tables would be "unacceptably large.") To avoid
> imposing an undue burden of pessimism on the compiler, the standard
> would have to include appropriate disclaimers about the indeterminacy
> of objects shared among asynchronous activities. I think I know what
> those disclaimers should be for the signal handlers themselves, but
> I'm less certain about the destructors and the catchers that would
> be activated by an asynchronous exception.
First, it is not just indeterminacy of objects we are talking about, it is
also the indeterminacy of the machine state that the compiler/runtime
system must keep consistent in order to be able to properly unwind the
stack. If that state becomes indeterminate, then the compiler/runtime
cannot proceed to execute any part of the program with any guarantee of
success. A core dump is just moments away. In other words, your program
has entered the realm of undefined behavior, period, which is what the
standard now says.
> Of course! But, a major objective of the standardization process is
> to constrain featurism and product differentiation that inhibits
> program portability.
True. But another major objective is to make the language as widely
available as possible, and that entails avoiding features that are
difficult to implement on common architectures. Up to now, the standards
committee has done a remarkably good job at making sure that you don't pay
overhead for a feature you don't use; I think that is an excellent
philosophy. Exceptions to this rule (pun intended) are made for features
that are deemed useful to a large majority of C++ users. This would be a
feature that everyone would pay for whether they use it or not, and very
few would want or need it.
> Agreed! So, the issue becomes the utility of the feature and the
> amount of overhead it would require, which is what this subthread
> is about.
Unless you can show there is *no* overhead if you don't use the feature, or
you can show that the feature is *required* by a large majority of users, I
think you've already lost.
> Without question, it is too late for such an extension to the first
> version of the standard. So, comp.std.c++ is the appropriate forum to
> discuss the desirability of this possible extension and the mechanisms
> and cost of its implementation.
I don't believe in adding major new features to a language if they haven't
been tried at all. No amount of "discussion" would convince me to approve
of this new feature. Show us a language, or an implementation of C++, that
has this feature so we can see what it costs. If they don't exist now,
then get a C++ vendor willing to produce it as an extension. If the
extension becomes popular, and/or is low cost, it's a legitimate candidate
for standardization. (IMHO, a good measure of the popularity of a feature
is whether a compiler vendor is willing to spend real money to implement it
as an extension.)
--
Bill Leonard
Harris Computer Systems Corporation
Bill.Leonard@mail.hcsc.com
Author: bill@amber.ssd.hcsc.com (Bill Leonard)
Date: 1996/05/04
In article <4mbm0o$4qc@galaxy.ucr.edu>, thp@cs.ucr.edu (Tom Payne) writes:
> Most functions don't catch most exceptions, especially those they don't
> know how to handle.
My point is that a function might catch an exception that it thinks it
*does* know how to handle because it thinks the only source of that
exception is a function it called. Along comes a signal that throws that
exception from somewhere else, and the signal just happened to occur while
this function is active. Bingo, exception handled in the wrong place.
> The fact that a signal is asynchronous implies that you can't predict
> exactly where it will occur, but doesn't mean that the programmer has
> no control on its occurrences and/or its ability to throw of
> exceptions.
Steve Clamage did an excellent job of responding to this, so I won't repeat
it.
> As things now stand, a function could see any exception thrown by any
> function it calls, however indirectly (including those passed as
> parameters). As things would stand, a function could see any of those
> exceptions plus any thrown by the handler of an asynchronous signal
> that is enabled at that point in the code. So, why is the second set
> less checkable than the first?
The first allows me to write a function that need know only the exceptions
thrown by procedures it calls. It need know nothing about the "external
environment" set up by functions that call it. In particular, a function
that calls no procedures and throws no exceptions need not worry about
getting any exceptions at all. This would not be true if signals propagate
exceptions. Even in cases where a function calls another function, it need
only know the exceptions propagated by that function.
If a called function contains a throw specification, then the compiler (and
the programmer) can tell the set of exceptions that might be thrown and
depend on that (with the addition of the "unexpected" exception) being the
entire set. This would no longer be true in your world.
Another problem is that a function could behave radically differently if a
new call is added to it from within a segment where a signal is enabled
that the function did not expect to be enabled. This violates many sound
principles of software engineering.
> If the overhead is too high, of course, the mechanism isn't worth it.
> The issue is how high the overhead would be, and how high is "too
> high." Remember, however, that the alternative is to poll for the
> occurrence of those external events, which involves overhead as well.
But only for those programs that need it -- a very small set, I expect.
> The issue remains one of quantifying these qualitative terms like
> "significantly", "unacceptably high", and so on. How big is a range
> table that delimits each prolog and epilog? Are we talking about a
> one, a ten, or a hundred percent expansion in code size?
You might need an entry in this table for *every instruction* of the
prologue and epilogue in order for the compiler to correctly unwind the
stack at any point. How small can you make the range table entries? On
many 32-bit RISC machines, you could end up with 3 words per word of
instruction for the prologue and epilogue. So we're talking a 300 percent
expansion.
For non-prologue, non-epilogue code, the expansion probably won't be as
bad, but it can result in a 100 percent expansion. But now consider that
some compilers now try to scatter the prologue and epilogue throughout the
rest of the code, so we're back to having lots of table entries.
I'll counter by asking how big is too big? Obviously, that depends on
your customer base.
We use a range table technique for our compilers both for exception
handling and to give debuggers enough information to unwind stacks. The
size increment varies, of course, but it is already near the limit of
acceptability for us. Making it larger is not really an option.
--
Bill Leonard
Harris Computer Systems Corporation
Bill.Leonard@mail.hcsc.com
Author: thp@cs.ucr.edu (Tom Payne)
Date: 1996/05/06
Bill Leonard (bill@amber.ssd.hcsc.com) wrote:
:
: First, it is not just indeterminacy of objects we are talking about, it is
: also the indeterminacy of the machine state that the compiler/runtime
: system must keep consistent in order to be able to properly unwind the
: stack.
Right! At certain points ("critical regions"), that state is
indeterminate and we must prevent asynchronous exceptions via, say,
range-table techniques.
: Unless you can show there is *no* overhead if you don't use the feature, or
: you can show that the feature is *required* by a large majority of users, I
: think you've already lost.
No shades of gray? What if the overhead were a 10% increase in code
size, and the feature were "very useful" in, say, 40% of all real-time
programs?
: I don't believe in adding major new features to a language if they haven't
: been tried at all. No amount of "discussion" would convince me to approve
: of this new feature. Show us a language, or an implementation of C++, that
: has this feature so we can see what it costs. If they don't exist now,
: then get a C++ vendor willing to produce it as an extension. If the
: extension becomes popular, and/or is low cost, it's a legitimate candidate
: for standardization. (IMHO, a good measure of the popularity of a feature
: is whether a compiler vendor is willing to spend real money to implement it
: as an extension.)
It is my understanding that the standard trajectory for a proposed feature is:
* informal discussion (e.g., this thread),
* followed by experimental implementation (e.g., by a
research lab rather than a commercial compiler vendor),
* followed by preliminary adoption by standards committee(s),
* followed by commercial implementation,
* and incorporation into a draft of the standard,
with each step being contingent on the outcome of the previous steps.
Tom Payne
Author: thp@cs.ucr.edu (Tom Payne)
Date: 1996/05/06
Bill Leonard (bill@amber.ssd.hcsc.com) wrote:
: In article <4mbm0o$4qc@galaxy.ucr.edu>, thp@cs.ucr.edu (Tom Payne) writes:
:
: My point is that a function might catch an exception that it thinks it
: *does* know how to handle because it thinks the only source of that
: exception is a function it called. Along comes a signal that throws that
: exception from somewhere else, and the signal just happened to occur while
: this function is active. Bingo, exception handled in the wrong place.
It would be up to the programmer(s) to make sure that this did not happen
in cases where it is an error. The programmer(s) could, for instance, make
the classes of asynchronous exceptions distinct from the others.
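For example (hypothetical classes):

    // Asynchronous exceptions get a hierarchy of their own, so a
    // handler written for synchronous errors never catches one by
    // accident.
    class AsyncException { };                   // thrown only from
    class Timeout : public AsyncException { };  // signal handlers

    class ParseError { };                       // an ordinary exception

    void caller() {
        try {
            // ... work that might be interrupted ...
        } catch (ParseError&) {
            // handles only the error it expects; a Timeout thrown by
            // a signal handler unwinds straight past this handler
        }
    }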
: > As things now stand, a function could see any exception thrown by any
: > function it calls, however indirectly (including those passed as
: > parameters). As things would stand, a function could see any of those
: > exceptions plus any thrown by the handler of an asynchronous signal
: > that is enabled at that point in the code. So, why is the second set
: > less checkable than the first?
:
: The first allows me to write a function that need know only the exceptions
: thrown by procedures it calls. It need know nothing about the "external
: environment" set up by functions that call it. In particular, a function
: that calls no procedures and throws no exceptions need not worry about
: getting any exceptions at all. This would not be true if signals propagate
: exceptions. Even in cases where a function calls another function, it need
: only know the exceptions propagated by that function.
It seems to me that your function would be fine just as you originally
wrote it. If it doesn't know how to handle a particular asynchronous
exception, it lets the exception fall through (unwind) to a
previously called function that can handle it.
: If a called function contains a throw specification, then the compiler (and
: the programmer) can tell the set of exceptions that might be thrown and
: depend on that (with the addition of the "unexpected" exception) being the
: entire set. This would no longer be true in your world.
The set would be larger, but still bounded and well defined. The
programmer and compiler would merely add in the set of exceptions
thrown from handlers of asynchronous signals (which, alas, would need
some syntactic demarcation, since the compiler would need to give them
special treatment).
: Another problem is that a function could behave radically differently if a
: new call is added to it from within a segment where a signal is enabled
: that the function did not expect to be enabled. This violates many sound
: principles of software engineering.
When a function is called at a point where its preconditions are not met,
one should expect "radically different" behavior.
: You might need an entry in this table for *every instruction* of the
: prologue and epilogue in order for the compiler to correctly unwind the
: stack at any point. How small can you make the range table entries? On
: many 32-bit RISC machines, you could end up with 3 words per word of
: instruction for the prologue and epilogue. So we're talking a 300 percent
: expansion.
If this feature takes more than, say, a ten percent increase in code
or a one percent increase in running time, it's probably not worth it.
: For non-prologue, non-epilogue code, the expansion probably won't be as
: bad, but it can result in a 100 percent expansion. But now consider that
: some compilers now try to scatter the prologue and epilogue throughout the
: rest of the code, so we're back to having lots of table entries.
Frankly, it is the non-prolog, non-epilog part that troubles me the
most. A signal could occur inside an inline constructor for an object
in a block nested several levels deep inside the function. One must
complete the construction and then destroy that object, along with the
other objects created so far in this and the surrounding blocks, before
getting to the epilog. (It should be possible, however, to do this by
finding a path to a return instruction and analyzing it.)
As a point of interest, asynchronous exceptions have utility and
problems similar to those of asynchronously killing a thread:
* Both are ways to bail out of a portion of a computation as a result
of an externally detected event.
* Both can occur at a time when the stack is in an inconsistent state.
* Both require invoking destructors for objects local to the deleted
(portion of the) stack to release any global resources they hold.
Tom Payne (thp@cs.ucr.edu)
Author: thp@cs.ucr.edu (Tom Payne)
Date: 1996/04/29
Kevin Cline (kcline@sun132.spd.dsccc.com) wrote:
[...]
: The conversion of signals to exceptions is useful only for synchronous
: signals like SIGFPE and SIGSEGV.
I consider it useful to be able to throw an exception in response to a
time-out, generated by the handler of an asynchronous signal. On the
other hand, the location of a SIGFPE can become somewhat obscure on a
highly parallel processor, which makes it somewhat like an
asynchronous signal.
: Asynchronous signals like SIGTERM cannot be converted to exceptions in
: any useful way; every routine would have to check for them or suffer
: abnormal termination.
The need for checking (polling) is what I'm proposing to avoid.
Currently, the signal handler can only set a global flag, which the
program polls and throws the exception on finding it set. We could
eliminate that overhead, latency, and programming tedium by letting
the handler throw the exception for itself.
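To be concrete, the idiom I want to eliminate looks roughly like this
(a sketch; Timeout and compute_step are illustrative):

    #include <signal.h>

    volatile sig_atomic_t alarm_seen = 0;   // all the handler may do

    extern "C" void on_alarm(int) { alarm_seen = 1; }

    class Timeout {};                       // illustrative exception

    void compute_step();                    // illustrative unit of work

    void compute()
    {
        signal(SIGALRM, on_alarm);
        for (;;) {
            if (alarm_seen)                 // the poll we want to avoid
                throw Timeout();
            compute_step();
        }
    }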
It appears to be about as difficult to implement exception throwing
from handlers of synchronous signals as it is to implement it from
handlers of all signals. The key is that certain critical sections,
such as
* function invocations and returns and
* constructors and destructors
must be protected from such exceptions, and it seems possible to
generate a synchronous signal (e.g., SIGSEGV) during any of these.
Tom Payne (thp@cs.ucr.edu)
Author: bill@amber.ssd.hcsc.com (Bill Leonard)
Date: 1996/04/30
In article <4m0vbj$il7@galaxy.ucr.edu>, thp@cs.ucr.edu (Tom Payne) writes:
> Kevin Cline (kcline@sun132.spd.dsccc.com) wrote:
> : Asynchronous signals like SIGTERM cannot be converted to exceptions in
> : any useful way; every routine would have to check for them or suffer
> : abnormal termination.
>
> The need for checking (polling) is what I'm proposing to avoid.
What Kevin is saying is that all you would succeed in doing is moving the
polling somewhere else. How can I write a function that behaves reasonably
in the face of an exception that can happen at any time, for any reason?
Exception specifications, if they were used, would become totally
meaningless, because a function that otherwise thinks it can receive no
exceptions (or a small, well-defined set of them) can get any old exception
from a signal handler completely unknown to that function.
So you end up with every function having to be prepared to handle any
possible exception at every single instruction.
Also, I wonder what is the point of throwing an exception out of a signal
handler when you have no idea if it will be handled at all and, if so,
whether it will be handled by the proper handler?
Using a call-tree or stack based scheme (exception handling) doesn't seem
like the right mechanism for handling an asynchronous event.
> Currently, the signal handler can only set a global flag, which the
> program polls and throws the exception on finding it set. We could
> eliminate that overhead, latency, and programming tedium by letting
> the handler throw the exception for itself.
In exchange, you would impose considerable overhead on ALL programs and ALL
compilers and ALL function invocations. Throwing exceptions from signal
handlers is not so widespread a need as to justify this overhead.
By the way, the range tables technique mentioned by David Chase does not
remove the imposed overhead, it merely makes the implementation possible.
The tables would either have to be prohibitively large, or else many very
desirable optimizations would have to be prohibited in order to make the
tables manageable.
> It appears to be about as difficult to implement exception throwing
> from handlers of synchronous signals as it is to implement it from
> handlers of all signals.
Where did you get that idea?
> The key is that certain critical sections,
> such as
> * function invocations and returns and
> * constructors and destructors
> must be protected from such exceptions, and it seems possible to
> generate a synchronous signal (e.g., SIGSEGV) during any of these.
This is not really true -- at the least it is misleading. Function
prologue code, for instance, could typically only generate a SIGSEGV if
you've exceeded your stack limits, and you can check for that first. (Our
Ada implementation does that.) That's a lot cheaper than protecting the
prologue from *all* signals in case they propagated an exception.
As far as other code goes, the compiler generally knows when it generates
code that can cause a synchronous signal. In most cases, the compiler only
needs to bound the range of program-counter locations where the signal may
occur, and it can do so by generating synchronization instructions if
necessary. If asynchronous signals are allowed to propagate an exception,
then the compiler must be prepared for any instruction (from its point of
view) to cause a signal, which would basically completely disable
optimization.
--
Bill Leonard
Harris Computer Systems Corporation
2101 W. Cypress Creek Road
Fort Lauderdale, FL 33309
Bill.Leonard@mail.hcsc.com
These opinions and statements are my own and do not necessarily reflect the
opinions or positions of Harris Computer Systems Corporation.
------------------------------------------------------------------------------
There's something wrong with an industry in which amazement is a common
reaction to things going right.
"Hard work never scared me. Oh, sure, it has startled me from a distance."
-- Professor Fishhawk
------------------------------------------------------------------------------
Author: thp@cs.ucr.edu (Tom Payne)
Date: 1996/04/30
Bill Leonard (bill@amber.ssd.hcsc.com) wrote:
: How can I write a function that behaves reasonably
: in the face of an exception that can happen at any time, for any reason?
By having it catch an object that tells where, why, and by whom the
exception was thrown.
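Such an object might look like this (a sketch; the names are
illustrative):

    // Carries where, why, and by whom the exception was thrown.
    class AsyncSignalException {
    public:
        AsyncSignalException(int sig, const char* site, const char* why)
            : sig_(sig), site_(site), why_(why) {}
        int         signal_number() const { return sig_; }   // by whom
        const char* site()          const { return site_; }  // where
        const char* reason()        const { return why_; }   // why
    private:
        int         sig_;
        const char* site_;
        const char* why_;
    };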
: Exception specifications, if they were used, would become totally
: meaningless, because a function that otherwise thinks it can receive no
: exceptions (or a small, well-defined set of them) can get any old exception
: from a signal handler completely unknown to that function.
: So you end up with every function having to be prepared to handle any
: possible exception at every single instruction.
Programmers can keep the set of exceptions thrown by handlers of
asynchronous signals as small as they choose. Moreover, any signal
can be blocked at points where its occurrence would jeopardize the
integrity of the program.
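Using POSIX facilities, for example, the blocking itself is routine
(a sketch; in a multi-threaded program pthread_sigmask() would play
the same role):

    #include <signal.h>

    // While SIGALRM is blocked, its handler -- and hence any
    // exception the handler would throw -- is deferred, not lost.
    void critical_update()
    {
        sigset_t block_set, old_set;
        sigemptyset(&block_set);
        sigaddset(&block_set, SIGALRM);
        sigprocmask(SIG_BLOCK, &block_set, &old_set);

        /* ... update shared state; no asynchronous throw here ... */

        sigprocmask(SIG_SETMASK, &old_set, 0);  // a pending signal is
                                                // delivered here
    }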
: Also, I wonder what is the point of throwing an exception out of a signal
: handler when you have no idea if it will be handled at all and, if so,
: whether it will be handled by the proper handler?
: Using a call-tree or stack based scheme (exception handling) doesn't seem
: like the right mechanism for handling an asynchronous event.
Certainly, exceptions should not be the normal way to handle
asynchronous events. However, the need to occasionally bail out of a
particular phase of a program as a result of an external event seems
quite natural, e.g., a situation where, if any of several alarms goes
off, the current computation becomes useless and needs to be aborted.
: > Currently, the signal handler can only set a global flag, which the
: > program polls and throws the exception on finding it set. We could
: > eliminate that overhead, latency, and programming tedium by letting
: > the handler throw the exception for itself.
:
: In exchange, you would impose considerable overhead on ALL programs and ALL
^^^^^^^^^^^^^^^^^^^^^
: compilers and ALL function invocations. Throwing exceptions from signal
: handlers is not that widespread a need that it would justify this overhead.
:
: By the way, the range tables technique mentioned by David Chase does not
: remove the imposed overhead, it merely makes the implementation possible.
: The tables would either have to be prohibitively large, or else many very
^^^^^^^^^^^^^^^^^^^
: desirable optimizations would have to be prohibited in order to make the
: tables manageable.
Of course, there are thresholds above which time/space overheads are
unacceptable, as well as thresholds below which they are acceptable.
The issue is how small the overheads can be kept, and what's
acceptable. (Perhaps some relevant data is available from other
languages.)
: > It appears to be about as difficult to implement exception throwing
: > from handlers of synchronous signals as it is to implement it from
: > handlers of all signals.
[...]
: > The key is that certain critical sections,
: > such as
: > * function invocations and returns and
: > * constructors and destructors
: > must be protected from such exceptions, and it seems possible to
: > generate a synchronous signal (e.g., SIGSEGV) during any of these.
:
: This is not really true -- at the least it is misleading. Function
: prologue code, for instance, could typically only generate a SIGSEGV if
: you've exceeded your stack limits, and you can check for that first.
In cases where even synchronous signals can be ruled out during
critical sections, all difficulties (and overhead) disappear.
: If asychronous signals are allowed to propagate an exception,
: then the compiler must be prepared for any instruction (from its point of
: view) to cause a signal, which would basically completely disable
: optimization.
Why disable optimization of computations that might abort?
Tom Payne (thp@cs.ucr.edu)
Author: "Tribble, Louis E" <letribble@msmail4.hac.com>
Date: 1996/04/30
Two comments:
1) With respect to converting asynchronous signals to exceptions: Scary
thought! I'll never be able to reason about the semantics of any statement...
In Ada, the common way to avoid polling when using signals is to designate an
entry point of a task to be activated when the signal occurs. Except for the
stack space used by the signal handler (minimal), the active task at the time
of the signal is unaffected (except that if its priority is lower than
that of the "handler" task, the run-time system will immediately suspend it).
2) With respect to MT-safe containers: the "MT-safe" components (such as many
of the Ada Booch components) have been of limited use in writing
multi-threaded Ada software ("real-time" simulators, network protocol stacks,
and such). I almost always wind up using the sequential (i.e. MT-unaware)
versions.
This happens because a single component rarely captures the complete state of
an abstraction (and if it does, it may be because some of the state is implied
by incidental state of the component, which has its own problems). So, an
atomic operation on an abstraction requires that a single critical region
encompass the coordinated update of _every_ component which supports the
abstraction. Otherwise, the implementation is open to the nastiest sort of
bugs.
Something like an MT-aware queue has been useful for transferring data between
tasks (threads). However, by the time it had combined some of the mutators and
accessors (to avoid race conditions) and added the ability to control blocking
on empty and full conditions, it was no longer an abstract queue. In
retrospect, it would probably have been better named a "pipe" or something
like that.
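In C++ terms, what we converged on looked roughly like the following
(a sketch, not our actual code; it uses POSIX threads directly):

    #include <pthread.h>
    #include <deque>

    template <class T>
    class Pipe {                    // a blocking, bounded queue
    public:
        Pipe(unsigned capacity) : cap_(capacity) {
            pthread_mutex_init(&m_, 0);
            pthread_cond_init(&not_empty_, 0);
            pthread_cond_init(&not_full_, 0);
        }
        ~Pipe() {
            pthread_mutex_destroy(&m_);
            pthread_cond_destroy(&not_empty_);
            pthread_cond_destroy(&not_full_);
        }
        void put(const T& x) {
            pthread_mutex_lock(&m_);
            while (q_.size() == cap_)        // block on "full"
                pthread_cond_wait(&not_full_, &m_);
            q_.push_back(x);
            pthread_cond_signal(&not_empty_);
            pthread_mutex_unlock(&m_);
        }
        T get() {
            pthread_mutex_lock(&m_);
            while (q_.empty())               // block on "empty"
                pthread_cond_wait(&not_empty_, &m_);
            T x = q_.front();                // front() and pop_front()
            q_.pop_front();                  // combined in one critical
            pthread_cond_signal(&not_full_); // region -- no race
            pthread_mutex_unlock(&m_);
            return x;
        }
    private:
        std::deque<T>   q_;
        unsigned        cap_;
        pthread_mutex_t m_;
        pthread_cond_t  not_empty_, not_full_;
    };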
Louis Tribble
Hughes Aircraft Company
letribble@msmail4.hac.com
Author: rsalz@osf.org (Rich Salz)
Date: 1996/04/20
In <4l12rf$q11@galaxy.ucr.edu> thp@cs.ucr.edu (Tom Payne) writes:
>The standards process is the appropriate forum for such vendor
>agreement.
Standards bodies seem to work best when they have a good base document
to work from that is pragmatic, practical, and deployed.
/r$
Author: David Brownell <brownell@ix.netcom.com>
Date: 1996/04/21
Tom Payne wrote:
>
> The second major need is to allow a signal to force an exception
> without the program polling for it.
I guess we'll just disagree on this ... I see signals as basically
asynchronous, and exceptions as wholly synchronous. Plus, exactly
which exceptions are legal will vary depending on the state of the
thread's execution (i.e. the "throw" specification of the function
being executed). So it doesn't make sense to me to mix the two.
If you want signals to cause exceptions, it's simple enough to code
the handoff between signal handling thread and the exception-raising
one ... either sigwait() and pthread_cond_signal(), or else use the
older asynchronous signal handlers and a semaphore. The thread that
blocked on the synchronization variable (sema_t or pthread_cond_t)
raises whatever exception it wants.
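In outline, the sigwait() variant looks like this (a sketch; the
names are mine, and it assumes SIGTERM is blocked in every thread so
that only sigwait() consumes it):

    #include <pthread.h>
    #include <signal.h>

    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    int             interrupted = 0;

    class Interrupted {};               // illustrative exception type

    void* signal_thread(void*)          // dedicated signal-handling thread
    {
        sigset_t set;
        int      sig;
        sigemptyset(&set);
        sigaddset(&set, SIGTERM);
        sigwait(&set, &sig);            // receive the signal synchronously
        pthread_mutex_lock(&lock);
        interrupted = 1;
        pthread_cond_signal(&cond);     // hand off to the thrower
        pthread_mutex_unlock(&lock);
        return 0;
    }

    void exception_thread()             // blocks, then throws
    {
        pthread_mutex_lock(&lock);
        while (!interrupted)
            pthread_cond_wait(&cond, &lock);
        pthread_mutex_unlock(&lock);
        throw Interrupted();            // a wholly synchronous throw
    }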
> In a thread on this
> topic a couple months ago, David Chase gave an implementation strategy
> based on range tables that seemed to involve no CPU overhead and whose
> complexity seemed reasonable.
But if the exceptions aren't synchronous, the complexity of generating
those tables can become extremely high ... particularly in conjunction
with the code motion performed as part of most optimizations. As was
pointed out to me by David himself.
--
David Brownell
http://www.netcom.com/~brownell
Author: chase@centerline.com (David Chase)
Date: 1996/04/21
In article <4l8pud$3t8@galaxy.ucr.edu>, Tom Payne <thp@cs.ucr.edu> wrote:
>Fixing the standard so that signal handlers can read global data (with
>appropriate qualifications on the significance of the result) would be
>a significant help. The complexity here is only in the wording of the
>standard.
Nope, there's appalling interactions with constructors and destructors.
>The second major need is to allow a signal to force an exception
>without the program polling for it. Complexity is never "necessarily
>a good thing," unless the benefits outweigh the costs. The benefits
>would be
>
> * lower latency exception responses to signals
>
> * elimination of the CPU overhead of polling
>
> * much simpler programming for such situations.
>
>The cost would be in the increased complexity of the implementation
>and CPU overhead caused by the implementation. In a thread on this
>topic a couple months ago, David Chase gave an implementation strategy
>based on range tables that seemed to involve no CPU overhead and whose
>complexity seemed reasonable.
The DISPATCH has reasonable complexity. Unfortunately, unless both
the compiler and the language are engineered to provide abort/commit
semantics for every single operation, you end up with unsafe portions
of code -- if an exception is thrown in one of those portions of code,
you're potentially hosed (as in, how inconsistent would you like your
data structures to get?) The destructor/constructor/automatic-object
semantics already in place make this non-trivial.
Personally, I think this might be a cool thing, but it's a little late
for that now (when I write code that I want to work, which is most of
the time, I work in the abort-commit style -- either the operation
succeeds, or it virtually/semantically does "nothing", leaving the
data structures in a valid state. Carried down to the load/store
level, this can be a little much). It would almost certainly require
the addition of garbage collection to the language, else people would
certainly lose their minds (and you could *never* be sure that your
code was really leak-tight -- run-time leak detectors will merely tell
you about a single run of a program, and these leaks will be
timing-dependent).
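Concretely, at the data-structure level the abort/commit style often
reduces to "mutate a copy, commit with an operation that cannot throw"
(a sketch, assuming the copy is affordable):

    #include <list>

    template <class T>
    void append_three(std::list<T>& target,
                      const T& a, const T& b, const T& c)
    {
        std::list<T> copy(target);  // all work on a private copy;
        copy.push_back(a);          // any of these may throw, leaving
        copy.push_back(b);          //   target untouched...
        copy.push_back(c);
        target.swap(copy);          // ...but swap() cannot throw: commit
    }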
And no, I'm not sure that any even slightly popular language does this
right. I haven't studied Ada's approach to this problem in detail, so
they might have gotten it right. I think their approach is to allow
programmers to delimit critical sections during which interrupts are
suspended (think about implementing that efficiently on your favorite
machine for a minute). Another approach is to allow the execution of
roll-back code if an operation is interrupted, but then you've got to
write the roll-back code. I suspect that this is the most efficient
route to take, but it is really non-trivial at the programmer level
(compilers can do this with stack frames, but compilers are very
patient animals).
speaking for myself,
David Chase
Author: chase@centerline.com (David Chase)
Date: 1996/04/23
In article 2B89@ix.netcom.com, David Brownell <brownell@ix.netcom.com> writes:
> But if the exceptions aren't synchronous, the complexity of generating
> those tables can become extremely high ... particularly in conjunction
> with the code motion performed as part of most optimizations. As was
> pointed out to me by David himself.
It occurred to me, while sitting here testing, that though this is
true, it is not completely true. The tables ought to be generated in
such a way that they are completely position-independent, and thus
dont't count against your working-set size in the usual case (Sun does
this, for instance) Conceivably, they could never ever be mapped, and
the exception dispatcher could work out of a file. This is not very
useful for code in a ROM, but then code in a ROM is probably compiled
differently anyway.
This is probably irrelevant to whether exception-handling should or
could be changed in C++, but I thought it would be nice to get the
pros and cons as nearly correct as possible.
speaking for myself,
David Chase
Author: jcoffin@rmii.com (Jerry Coffin)
Date: 1996/04/17
In article <4ku297$ou6@usc.edu>, Hauck@mizar.usc.edu says...
> Hi there,
>
> I wonder if the C++ Standard Template Library (STL) is multi-thread
> safe? Does anybody have experience with STL in a multi-threaded
> environment?
Hmm...that's a bit like asking whether "cars are fast". Some are, some
aren't, and of course "fast" is a bit subjective.
Likewise with STL: the classes are defined in such a way that it should be
fairly easy to make them reentrant, and they don't require the use of a lot of
global data that would have to be made thread-local.
However, that doesn't imply that every implementation of STL will be safe for
multithreaded use. The standard says nothing about threads, so it says
equally little about the classes being safe for use with threads.
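In practice that tends to mean the locking is the caller's job, not
the library's. For example (a sketch; the mutex and wrapper are mine):

    #include <pthread.h>
    #include <vector>

    pthread_mutex_t  vec_lock = PTHREAD_MUTEX_INITIALIZER;
    std::vector<int> shared_vec;    // reentrant, but MT-unaware

    void append(int value)
    {
        pthread_mutex_lock(&vec_lock);      // caller-supplied locking
        shared_vec.push_back(value);
        pthread_mutex_unlock(&vec_lock);
    }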
Later,
Jerry.
Author: David Brownell <brownell@ix.netcom.com>
Date: 1996/04/17
Tom Payne wrote:
>
> David Brownell (brownell@ix.netcom.com) wrote:
> :
> : Someone else mentioned Rogue Wave. Does anyone have URLS or something
> : through which the different "MT-enhanced" APIs could be compared? I've
> : been organizing a collection of MT/C++ issues; one of the gaps is where
> : the standard C++ library interfaces need to be "MT-safed" according to
> : some useful and consistent policy.
>
> Is there agreement on what is meant by the term "thread-safe"? (I have
> seen definitions that seemed quite inadequate.)
IMHO "Thread-safe" is a misleading goal. You actually want an API that's
natural to use in a threaded environment ... in some cases, that means an
API that's OK to use from concurrent threads (e.g. add to containers),
but in other cases it's reasonable to have objects that are only usable
from a single thread (e.g. iterators).
Consider for example POSIX.1c "putc_unlocked()" ... not thread-safe by itself,
but very usable for high performance character I/O since it's always used in
conjunction with "flockfile()" (in non-erroneous programs).
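For instance (a sketch of that idiom):

    #include <stdio.h>

    // putc_unlocked() is not thread-safe by itself; bracketed by
    // flockfile()/funlockfile() it costs one lock per line rather
    // than one lock per character.
    void write_line(FILE* f, const char* s)
    {
        flockfile(f);
        while (*s)
            putc_unlocked(*s++, f);
        putc_unlocked('\n', f);
        funlockfile(f);
    }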
(See my writeup at http://www.netcom.com/~brownell/pthreads++.html for some
discussion of the "policy" that a useful MT-safe version of the standard
C++ library ought to follow. It also covers a bunch of other C++/MT issues
that come up in the POSIX.1c/C++ environment. Comments, please!)
> : Eventually, the vendors ought to agree on one way to do threaded C++,
> : but we're not there yet! I don't know how much commonality there is
> : between different MT/C++ environments but I suspect it's not lots.
>
> The standards process is the appropriate forum for such vendor
> agreement. Unfortunately, the standards bodies have failed to provide
> the necessary leadership in the areas involving concurrency and
> asynchrony. Perhaps, these bodies simply have other fish to fry --
> getting the current standard completed is a monumental task.
Well, POSIX.1c happened; it's the ANSI/ISO C++ team that's said such
issues are "out of scope" for now. I don't think it's realistic to
expect POSIX to produce a C++ binding at this time, given some of the
issues raised in my writeup above. Nor do I think vendors should be
gratuitously diverging, even lacking a formal standards framework.
> I
> detect, however, some reluctance to address matters of concurrency and
> asynchrony for fear of:
Hmmm, "concurrency" and "asynchrony" are broader issues than threading.
Many people use threading to _avoid_ asynchrony, for example.
> ...
>
> Although each of these fears has some arguments for it, the portable
> implementations of Modula and Ada offer evidence that the difficulties
> are surmountable. Meanwhile, a C++ program (with defined behavior)
> cannot deal efficiently with some of the simplest asynchrony, e.g., a
> hardware-detected exception cannot generate a program exception,
> unless the program explicitly polls for it, thus, requiring
> unacceptable overhead both in running time and programming effort.
Some of us don't think that it's necessarily a good thing to complicate
C++ exception processing further -- it's a synchronous mechanism now.
I'd hope that the current mechanism for asynchrony (signals) would just
be fixed to address your issues!
--
David Brownell
http://www.netcom.com/~brownell
Author: thp@cs.ucr.edu (Tom Payne)
Date: 1996/04/19
David Brownell (brownell@ix.netcom.com) wrote:
: Tom Payne wrote:
[...]
: >
: > The standards process is the appropriate forum for such vendor
: > agreement. Unfortunately, the standards bodies have failed to provide
: > the necessary leadership in the areas involving concurrency and
: > asynchrony. Perhaps, these bodies simply have other fish to fry --
: > getting the current standard completed is a monumental task.
:
: Well, POSIX.1c happened; it's the ANSI/ISO C++ team that's said such
: issues are "out of scope" for now. I don't think it's realistic to
: expect POSIX to produce a C++ binding at this time, given some of the
: issues raised in my writeup above. Nor do I think vendors should be
: gratuitously diverging, even lacking a formal standards framework.
Unfortunately, competition begets product differentiation, and
vendors must compete. Standardization is the responsibility of
standards bodies, who, I realize, must separate and prioritize their
concerns. Nevertheless ...
: > ... a C++ program (with defined behavior)
: > cannot deal efficiently with some of the simplest asynchrony, e.g., a
: > hardware-detected exception cannot generate a program exception,
: > unless the program explicitly polls for it, thus, requiring
: > unacceptable overhead both in running time and programming effort.
:
: Some of us don't think that it's necessarily a good thing to complicate
: C++ exception processing further -- it's a synchronous mechanism now.
: I'd hope that the current mechanism for asynchrony (signals) would just
: be fixed to address your issues!
Fixing the standard so that signal handlers can read global data (with
appropriate qualifications on the significance of the result) would be
a significant help. The complexity here is only in the wording of the
standard.
The second major need is to allow a signal to force an exception
without the program polling for it. Complexity is never "necessarily
a good thing," unless the benefits outweigh the costs. The benefits
would be
* lower latency exception responses to signals
* elimination of the CPU overhead of polling
 * much simpler programming for such situations.
The cost would be in the increased complexity of the implementation
and CPU overhead caused by the implementation. In a thread on this
topic a couple months ago, David Chase gave an implementation strategy
based on range tables that seemed to involve no CPU overhead and whose
complexity seemed reasonable.
Tom Payne (thp@cs.ucr.edu)
Author: Hauck@mizar.usc.edu (Thomas Hauck)
Date: 1996/04/15
Hi there,
I wonder if the C++ Standard Template Library (STL) is multi-thread
safe? Does anybody have experience with STL in a multi-threaded
environment?
Thanks
Thomas
Author: David Brownell <brownell@ix.netcom.com>
Date: 1996/04/16
> I believe Modena or one of the other commercial STL library vendors has
> added a certain degree of thread-awareness to their implementation. I
> might be wrong, but I have a feeling that it is still not entirely
> thread-safe.
Someone else mentioned Rogue Wave. Does anyone have URLS or something
through which the different "MT-enhanced" APIs could be compared? I've
been organizing a collection of MT/C++ issues; one of the gaps is where
the standard C++ library interfaces need to be "MT-safed" according to
some useful and consistent policy.
Eventually, the vendors ought to agree on one way to do threaded C++,
but we're not there yet! I don't know how much commonality there is
between different MT/C++ environments but I suspect it's not lots.
--
David Brownell
http://www.netcom.com/~brownell
Author: thp@cs.ucr.edu (Tom Payne)
Date: 1996/04/17
David Brownell (brownell@ix.netcom.com) wrote:
: > I believe Modena or one of the other commercial STL library vendors has
: > added a certain degree of thread-awareness to their implementation. I
: > might be wrong, but I have a feeling that it is still not entirely
: > thread-safe.
:
: Someone else mentioned Rogue Wave. Does anyone have URLS or something
: through which the different "MT-enhanced" APIs could be compared? I've
: been organizing a collection of MT/C++ issues; one of the gaps is where
: the standard C++ library interfaces need to be "MT-safed" according to
: some useful and consistent policy.
Is there agreement on what is meant by the term "thread-safe"? (I have
seen definitions that seemed quite inadequate.)
: Eventually, the vendors ought to agree on one way to do threaded C++,
: but we're not there yet! I don't know how much commonality there is
: between different MT/C++ environments but I suspect it's not lots.
The standards process is the appropriate forum for such vendor
agreement. Unfortunately, the standards bodies have failed to provide
the necessary leadership in the areas involving concurrency and
asynchrony. Perhaps, these bodies simply have other fish to fry --
getting the current standard completed is a monumental task. I
detect, however, some reluctance to address matters of concurrency and
asynchrony for fear of:
 * limiting the range of architectures on which the language
   is efficiently implementable,
 * introducing incompatibilities with vendor-established
extensions,
* commitment to one or another particular model for
concurrent programming.
Although each of these fears has some arguments for it, the portable
implementations of Modula and Ada offer evidence that the difficulties
are surmountable. Meanwhile, a C++ program (with defined behavior)
cannot deal efficiently with some of the simplest asynchrony, e.g., a
hardware-detected exception cannot generate a program exception,
unless the program explicitly polls for it, thus, requiring
unacceptable overhead both in running time and programming effort.
There have been suggestions that such matters should be addressed by
the C Standards Committees and possibly incorporated into the C++
Standard by reference. While it is important to maintain as
much commonality as possible, there are considerations that are unique
to C++, e.g., interaction with exceptions.
Tom Payne (thp@cs.ucr.edu)