Topic: Uncaught exceptions
Author: chase@centerline.com (David Chase)
Date: 1995/08/01
> > someone wrote :
> > > > I cannot see any logical argument why one of
> > > > a) unwinding the whole stack or
> > > > b) just terminating without unwinding it
> > > > should be preferred by this principle.
> >Unless the draft standard specifies that a call to
> >"terminate" may ONLY be used for debugging purposes, it is
> >both legal and likely for it to do something else, such as
> >printing a message before exiting.
Etay Bogner <ebogner@ird.scitex.com> writes:
> I really can't see the connection between your message and example and the issue here.
> The original question ( I believe ) was why the standard leaves it unspecified whether the stack
> is unwound when terminate is called.
> I gave a simple answer to this question.
You gave a simple reply. It was not an answer, at least not
to me. Any ambiguity in the standard must be well-justified,
and "you can dump core on Unix, but not on PC's" is not an
answer.
> You can write whatever terminate/unexpected function you like.
No, you cannot. The draft specifies (last time I looked) that
it is not allowed to "return" in the normal fashion. However,
it is possible to write a terminate (I can imagine doing such)
that would depend on whether or not the stack had unwound
(whether destructors had been run).
> It's a simple fact that if the standard committee enforced "a", some
> people wouldn't be happy since they couldn't debug their programs as
> they used to.
Then perhaps the standard committee should enforce "b". They
should enforce *something*. If there are reasons to call a
terminate-like function both before and after the stack has
unwound, then perhaps they should specify one for before, and
another for after. I can think of one excellent reason to call
terminate BEFORE the stack has unwound, namely that we've
already got the ability to detect uncaught exceptions AFTER the
stack has been mostly unwound, by writing:
int main() { try { /* ... */ } catch (...) { my_terminate(); } }
Note that this does not catch problems in static constructors.
> The sample code you wrote just indicates ( detects ) whether or not the
> destructors are called, which is NOT that useful ...
It's an example. You would prefer, perhaps, that I post the
source to a debugging interpreter, or a window manager? The
point is that this procedure CAN detect this situation. So can
other, more complicated, and more useful replacements for
terminate. The example I posted is also very useful in that it
could be incorporated into a language verification suite, IF a
choice were made.
> And all this doesn't say that terminate "may ONLY be used for debugging
> purposes". A programmer can choose to exit that way from his program,
> if he likes that.
Exactly my point. If a programmer chooses to exit from his
program in this way, does this have portable behavior?
(currently, no) If not, why not? Ambiguity in standards is bad,
and should not exist unless there is some other benefit for the
programmer derived from leaving the ambiguity in. What benefit
does the programmer derive from this ambiguity?
speaking for myself,
David Chase
Author: Etay Bogner <ebogner@ird.scitex.com>
Date: 1995/08/01
> someone wrote :
> > > I cannot see any logical argument why one of
> > > a) unwinding the whole stack or
> > > b) just terminating without unwinding it
> > > should be preferred by this principle.
>Unless the draft standard specifies that a call to
>"terminate" may ONLY be used for debugging purposes, it is
>both legal and likely for it to do something else, such as
>printing a message before exiting.
>Unless the standard is worded in such a way as to disallow
>the function passed to "set_terminate" below, I believe I
>have here an example of a program that could be used to
>detect such a difference in implementation. Hence, one
>choice or another could be enforced (I have encoded my
>preferences into this test program's choice of PASS and
>FAIL).
I really can't see the connection between your message and example and the issue here.
The original question ( I believe ) was why the standard leaves it unspecified whether the stack
is unwound when terminate is called.
I gave a simple answer to this question.
You can write whatever terminate/unexpected function you like. It really doesn't answer the question.
It's a simple fact that if the standard committee enforced "a", some people wouldn't be happy since
they couldn't debug their programs as they used to.
>I believe I have here an example of a program that could be used to
>detect such a difference in implementation
The sample code you wrote just indicates ( detects ) whether or not the destructors are called, which
is NOT that useful ...
This question should be addressed by compiler vendors, when supplying a compiler + debugger. The committee
gave them the right to choose, just for implementing a debugger.
And all this doesn't say that terminate "may ONLY be used for debugging purposes". A programmer
can choose to exit that way from his program, if he likes that.
-- Etay Bogner,
-- ebogner@ird.scitex.com ( OLD ),
-- Etay_Bogner@mail.stil.scitex.com,
-- Scitex Corp.
-- Israel.
Author: chase@centerline.com (David Chase)
Date: 1995/07/31
Etay Bogner <ebogner@ird.scitex.com> writes:
> I'm getting into this debate in the middle :-)
> someone wrote :
> > > I cannot see any logical argument why one of
> > > a) unwinding the whole stack or
> > > b) just terminating without unwinding it
> > > should be preferred by this principle.
> The C++ committee left the issue open because they wanted to
> continue to support the "dump core" debugging method.
> On the other hand, some operating systems do NOT support this
> debugging feature ( Mac's and PC's for example ) so the
> implementation can choose.
Unless the draft standard specifies that a call to
"terminate" may ONLY be used for debugging purposes, it is
both legal and likely for it to do something else, such as
printing a message before exiting.
> So, they couldn't enforce any method ( a or b ).
Unless the standard is worded in such a way as to disallow
the function passed to "set_terminate" below, I believe I
have here an example of a program that could be used to
detect such a difference in implementation. Hence, one
choice or another could be enforced (I have encoded my
preferences into this test program's choice of PASS and
FAIL).
---------------------------------------------------------------
#include <stdio.h>
#include <stdlib.h>     /* for exit() */
typedef void (*PFV)();
extern PFV set_terminate(PFV);
int flag = 0;
struct C { C() {} ~C() { flag = 1; } };
void f() {
    C c;
    throw 1;
}
void check_conformance() {
    /* If flag is set, the destructor for c (in f, above) has already run. */
    printf(flag ? "FAIL\n" : "PASS\n");
    exit(1); /* Draft says terminate() "cannot return", last I checked. */
}
int main() {
    (void) set_terminate(check_conformance);
    f();
    return 0;
}
---------------------------------------------------------------
P.S. Fergus Henderson pointed out a typo (fortunately, an
obvious one) in the division code in my previous contribution
to this thread. The line
az = udiv(x,y)
is wrong, and should read instead
az = udiv(ax,ay)
Note also, that the corresponding code for remainder takes
the sign (in the conversion from absolute values) from the
dividend alone. Note further that simple shifts and masks
do not suffice for constant division and remainder
operations, but compilers deal with this (efficiently) every
day. Gcc is one widely available example of a compiler that
should get this right, and Sun's compilers are another.
David
Author: chase@centerline.com (David Chase)
Date: 1995/07/28
In article <3v2vj6$kha@metro.ucc.su.OZ.AU>, maxtal@Physics.usyd.edu.au (John Max Skaller) writes:
> That may be so (IMHO) in some cases but I don't think this
> in one of them: throwing an exception for which there is no
> handler may be considered a programmer error. I see no strong
> reason to require a particular behaviour here: the rule
> "an exception unwinds the stack up to the closest handler"
> doesn't say what should happen if there is NO handler.
But there are other rules specifying that "terminate" is called.
There should also be rules specifying WHEN terminate is called,
since I gather that nobody has bothered to measure the benefits
obtained by leaving this unspecified. In fact, since I've studied
this problem, and designed exception dispatch interfaces, I know
that the costs should be low (it is possible to design a dispatch
interface where they would not be low, but that does not mean it
is intrinsic to the problem, merely to a particular dispatcher).
> I cannot see any logical argument why one of
> a) unwinding the whole stack or
> b) just terminating without unwinding it
> should be preferred by this principle.
Even if I didn't care, I think a choice should still be made. The
worst choice here is to not choose. By the way, has the standards
committee figured out what "-3/2" is equal to? (OTHER languages
manage to define this -- why can't C++? Has nobody else thought
about this enough to figure out how to simulate one sort of
division with another? (*) Since division is well-known to be a
rare operation, there are no performance justifications for this
ambiguity -- it is completely gratuitous. After all, it's up to
the implementors to make life easier for the users.)
> >they are unnecessary, why make life harder for writers of portable
> >code?
> It doesn't. Portable code shouldn't terminate abnormally :-)
Good that you put the smiley there, because as you should well
know, portable code should (by definition) merely comply with the
relevant standards on the platforms across which it is intended to
be ported. IF the C++ standard defined unambiguous behavior for
the call to terminate, THEN portable code could call set_terminate
to change its behavior (within limits) when no handler is found.
(*) Here's how to simulate round-to-zero signed division
with unsigned division. This way, nobody can claim that
they don't know how, or that it is too hard.
Given unsigned division of numbers in two's complement
representation, you can get round-to-zero division with
the following:
#define WSM1 31 /* Bits per word, minus one */
int sdiv(int x, int y) {
int z; /* result */
int sx, sy, sz; /* signs, expressed as a smeared sign bit */
unsigned int ax, ay, az; /* abs values of x,y,z */
sx = x >> WSM1;
sy = y >> WSM1;
ax = (x ^ sx) - sx;
ay = (y ^ sy) - sy;
az = udiv(x,y);
sz = sx ^ sy;
z = (az ^ sz) - sz;
return z;
}
That's nine instructions added to an unsigned division, on machines
with which I am familiar. Furthermore, there's a decent amount of
available parallelism here, so it should only take six additional
cycles over the cost of division on a superscalar machine.
Division itself often takes more than that -- for example, on the
PPC 601 division has a 36-cycle latency, and I think the latency is
8 cycles on a SuperSparc, and something like 16 (32?) on a
MicroSparc. The overhead is thus comparable to the cost of the
division itself, which is obviously (based on the cycle counts for
two chips mentioned above) not judged to be critical by
chip-builders.
In the absence of overflow, this returns the correct
(round-to-zero) result. The only overflows possible on a two's
complement machine are non-zero/zero and minimum-signed/-1. Any
zero-division overflow signaled by udiv is preserved, but the other
sort of overflow is not signaled (assuming it is desired). The
answer that it provides would be correct if it were interpreted as
an unsigned number, so this seems at least plausible. However, we
are talking about an actual undefined condition here, so it is ok
if an implementation does not signal the overflow.
In the event that someone really wants to signal the minsigned/-1
error, the following code needs to be inserted:
if ( (int) (sy & az) < 0) {
/* abs(result) and divisor sign are both negative,
which means the calculation must have been
minsigned/-1, which is an overflow condition. */
}
Note that this calculation can be scheduled in parallel with the
final calculation of z, so on a superscalar machine this should
incur little or no additional cost.
Since we know that there is no portable code (C or C++) that
depends on the behavior of signed division, and now I've provided a
not-too-costly way to simulate round-to-zero division given
unsigned division, I'd be interested in hearing why the behavior of
this operation should still be left unspecified (in this case, to
be round-to-zero, which is what I think most implementations do
anyway. It's also what Fortran does, if I remember correctly.)
And, of course, whatever "reason" emerges, I'll do my best to club
it to death, too :-).
And, yes, I realize that I'm being incredibly obnoxious about this,
but I think I am completely justified in this obnoxiousness. Well-
defined languages are easier for both users and implementors. I've
never seen a language ambiguity where the difference in
implementation difficulty was larger than the difficulty of
figuring out the better choice.
speaking for myself,
David Chase
Author: Etay Bogner <ebogner@ird.scitex.com>
Date: 1995/07/29
I'm getting into this debate in the middle :-)
someone wrote :
> > I cannot see any logical argument why one of
> > a) unwinding the whole stack or
> > b) just terminating without unwinding it
> > should be preferred by this principle.
The C++ committee left the issue open because they wanted to
continue to support the "dump core" debugging method.
As most of us know, on Unix machines, a "core" is dumped when
a program quits abnormally. Then the programmer can
start debugging using that dumped "core" to see where things
went wrong.
So on Unix machines, the stack shouldn't be unwound (
otherwise, no stack info could be saved :-) )
On the other hand, some operating systems do NOT support this
debugging feature ( Mac's and PC's for example ) so the
implementation can choose.
So, they couldn't enforce any method ( a or b ).
HTH.
-- Etay Bogner,
-- ebogner@ird.scitex.com ( OLD ),
-- Etay_Bogner@mail.stil.scitex.com,
-- Scitex Corp.
-- Israel.
Author: fjh@munta.cs.mu.OZ.AU (Fergus Henderson)
Date: 1995/07/24
chase@centerline.com (David Chase) writes:
>smeyers@netcom.com (Scott Meyers) writes:
>> I see in section 15.3.7 of the DWP the following about exception handling:
>
>> If no matching handler is found in a program, the function terminate()
>> (_except.terminate_) is called. Whether or not the stack is unwound
>> before calling terminate() is implementation-defined.
>>
>> [...] Does anybody
>> know why there is no guarantee of stack unwinding when an uncaught
>> exception is thrown?
>
>I believe that this was intended to allow for both the needs of debuggers and
>the sloppy/spare habits of optimized code.
Yes, that's basically correct.
>From the point of view of someone
>debugging a program, it makes lots of sense to set a breakpoint in terminate,
>and hope that it will get called while there is still some context to analyze.
>Of course, I hope that there is another reason for this, because the
>reason I just described is brain-dead. Optimized and debuggable code
>should have the same semantics to the greatest degree possible, and
>what I just described does NOT have the same semantics -- one has run
>destructors before calling terminate, the other has not.
Programming language design unfortunately involves a lot of difficult
trade-offs.
When I first discovered this rule, my reaction was the same as yours.
However, after discussing it with Bjarne Stroustrup and other committee
members at the Valley Forge meeting in November 94, I was convinced
that this decision was reasonable -- not the same trade-off
I would have made, but certainly not brain-dead.
>It is also
>unnecessary to make explicit provision for this in the standard,
>because this is a quality-of-debugger-implementation issue -- any
>debugger worth using will use a run-time library which checks for
>uncaught exception first, and then trap to the debugger in the same way
>that debuggers (under Unix, for example) intercept "bus error" or
>"segmentation violation".
Many existing implementations allow programmers to debug optimized
code, and in particular many existing implementations allow post-mortem
debugging of programs from a core file. In this situation, it is not
possible for the debugger to get control until after the program has
died, and so for useful debugging it is essential that the program
not unwind the stack before dying.
It was felt that the ability to get useful core dumps and the ability
to get efficient code were both useful, and that neither implementation
should be precluded by the standard. Instead the compiler should
be required to document which of the two possible semantics is supported.
>Or, it could be for another brain-dead reason, namely one vendor does it
>one way, the other does it another way, and neither intends to budge.
That was not the reason.
--
Fergus Henderson
fjh@cs.mu.oz.au
http://www.cs.mu.oz.au/~fjh
PGP key fingerprint: 00 D7 A2 27 65 09 B6 AC 8B 3E 0F 01 E7 5D C4 3F
Author: chase@centerline.com (David Chase)
Date: 1995/07/25
fjh@munta.cs.mu.OZ.AU (Fergus Henderson) writes:
> >smeyers@netcom.com (Scott Meyers) writes:
> >> I see in section 15.3.7 of the DWP the following about exception handling:
> >> If no matching handler is found in a program, the function terminate()
> >> (_except.terminate_) is called. Whether or not the stack is unwound
> >> before calling terminate() is implementation-defined.
> Many existing implementations allow programmers to debug optimized
> code, and in particular many existing implementations allow post-mortem
> debugging of programs from a core file. In this situation, it is not
> possible for the debugger to get control until after the program has
> died, and so for useful debugging it is essential that the program
> not unwind the stack before dying.
> It was felt that the ability to get useful core dumps and the ability
> to get efficient code were both useful, and that neither implementation
> should be precluded by the standard. Instead the compiler should
> be required to document which of the two possible semantics is supported.
Has it been demonstrated that useful core dumps preclude efficient code,
or is this merely a case of relaxing the rules for lazy implementors? If
doing this precludes optimization (in fact, there's one that it makes a
lot harder, though not impossible), how much speed improvement is
usually lost? I *know* that anyone doing something as important as
finishing the design of an already-widely-used language would certainly
know better than to push premature optimization all the way up to the
language level, so I'm sure there are measurements, right? People have
numbers to back this decision up, don't they?
I'm god-awful tired of writing defensive code to deal with places where
the language was left implementation-defined (and the implementations
also vary, of course).
speaking for myself,
David Chase
Author: maxtal@Physics.usyd.edu.au (John Max Skaller)
Date: 1995/07/25
In article <3ua4bq$im4@wcap.centerline.com>,
David Chase <chase@centerline.com> wrote:
>In article <3u7e1h$s5@engnews2.Eng.Sun.COM>, ball@cygany.Eng.Sun.COM (Mike Ball) writes:
>
>> Basically, different environments have different requirements.
>
>I admire your diplomacy, but it sounds to me like the standard was
>weakened to cater to lazy language vendors.
That may be so (IMHO) in some cases but I don't think this
in one of them: throwing an exception for which there is no
handler may be considered a programmer error. I see no strong
reason to require a particular behaviour here: the rule
"an exception unwinds the stack up to the closest handler"
doesn't say what should happen if there is NO handler.
I cannot see any logical argument why one of
a) unwinding the whole stack or
b) just terminating without unwinding it
should be preferred by this principle.
>they are unnecessary, why make life harder for writers of portable
>code?
It doesn't. Portable code shouldn't terminate abnormally :-)
--
JOHN (MAX) SKALLER, INTERNET:maxtal@suphys.physics.su.oz.au
Maxtal Pty Ltd,
81A Glebe Point Rd, GLEBE Mem: SA IT/9/22,SC22/WG21
NSW 2037, AUSTRALIA Phone: 61-2-566-2189
Author: ball@cygany.Eng.Sun.COM (Mike Ball)
Date: 1995/07/15
In article <3u0o82$gg4@wcap.centerline.com> chase@centerline.com (David Chase) writes:
>
>However, I am certain that I am completely wrong in all these suppositions,
>and some wise and diplomatic member of the C++ committee will explain why
>this is really so.
>
The reasons are simple.
1. C programmers who are used to getting a core dump from an abort are
unlikely to be happy with an empty stack just because there is an
uncaught exception in a piece of C++ code in some library. Some
vendors, like us, consider this important.
2. Some vendors felt that it would be too inefficient to have to search for
an exception before handling it. This isn't true for our implementation,
but might be true for other implementations.
3. If you want to ensure that the stack is walked back you can do that by
including "catch(...)" in main(), while there is no way to ensure the
opposite situation unless the vendor is allowed to provide it.
Basically, different environments have different requirements.
-Mike Ball-
SunSoft Developer Products
Author: chase@centerline.com (David Chase)
Date: 1995/07/16
In article <3u7e1h$s5@engnews2.Eng.Sun.COM>, ball@cygany.Eng.Sun.COM (Mike Ball) writes:
> 1. C programmers who are used to getting a core dump from an abort are
> unlikely to be happy with an empty stack just because there is an
> uncaught exception in a piece of C++ code in some library. Some
> vendors, like us, consider this important.
I agree. Sounds to me like the call should be made before unwinding
the stack.
> 2. Some vendors felt that it would be too inefficient to have to search for
> an exception before handling it. This isn't true for our implementation,
> but might be true for other implementations.
I was pretty well aware of this, but I think it is a shame that the
bozos get to weaken the standard. How-to-do-it-right is all public,
and published information (I helped design the exception dispatcher
to which Mike Ball refers, and subsequently wrote two articles
discussing the topic for the JCLT. This is not rocket science,
it's not secret, and it's not patented.)
> 3. If you want to ensure that the stack is walked back you can do that by
> including "catch(...)" in main(), while there is no way to ensure the
> opposite situation unless the vendor is allowed to provide it.
> Basically, different environments have different requirements.
I admire your diplomacy, but it sounds to me like the standard was
weakened to cater to lazy language vendors. I bet they didn't get
around to coming up with a single definition for division of/by
negative numbers, either (as if the efficiency of signed division
really mattered, especially since portable programs CAN'T USE IT
without parameterizing for machine behavior or avoiding negative
numbers.)
Perhaps I had too much formal training, but these things seem like
gratuitous ambiguities to me. I'm a language implementor (at
times), and *I* can get these things right (and have done so (*)),
so in my judgement, these ambiguities are not necessary. Since
they are unnecessary, why make life harder for writers of portable
code? There's a lot more of them than there are of me, so a little
extra effort on my part (or on the part of someone like me) is
repaid many times over.
(*) and from various conversations with colleagues in industry,
I know that DEC and IBM (at minimum) both employ people who know
how to do these things "right", and that these techniques are
widespread in the Ada implementation community. Whether any of
these people work on C++ compilers, I do not know, but that's
simply a matter of resource allocation.
flaming for myself,
David Chase
Author: chase@centerline.com (David Chase)
Date: 1995/07/12
In article <smeyersDBLDnH.1z7@netcom.com>, smeyers@netcom.com (Scott Meyers) writes:
> I see in section 15.3.7 of the DWP the following about exception handling:
> If no matching handler is found in a program, the function terminate()
> (_except.terminate_) is called. Whether or not the stack is unwound
> before calling terminate() is implementation-defined.
> I was under the impression that an uncaught exception would unwind the
> stack before calling terminate. Clearly, I was mistaken.
> Implementations are free to treat a throw not contained within any try
> block as a call to abort (modulo calls to set_terminate). Does anybody
> know why there is no guarantee of stack unwinding when an uncaught
> exception is thrown?
I believe that this was intended to allow for both the needs of debuggers and
the sloppy/spare habits of optimized code. From the point of view of someone
debugging a program, it makes lots of sense to set a breakpoint in terminate,
and hope that it will get called while there is still some context to analyze.
Author: smeyers@netcom.com (Scott Meyers)
Date: 1995/07/12
I see in section 15.3.7 of the DWP the following about exception handling:
If no matching handler is found in a program, the function terminate()
(_except.terminate_) is called. Whether or not the stack is unwound
before calling terminate() is implementation-defined.
I was under the impression that an uncaught exception would unwind the
stack before calling terminate. Clearly, I was mistaken.
Implementations are free to treat a throw not contained within any try
block as a call to abort (modulo calls to set_terminate). Does anybody
know why there is no guarantee of stack unwinding when an uncaught
exception is thrown?
Thanks,
Scott