Topic: Throwing Exceptions in library routines is BAD.


Author: rfg@netcom.com (Ronald F. Guilmette)
Date: Sun, 20 Feb 1994 19:06:19 GMT
In article <CJyDHK.Euv@megatest.com> djones@megatest.com (Dave Jones) writes:
>
>I've been away from comp.lang.c++ for a while...

Me too.

>I don't want library routines that throw exceptions, and I certainly don't want
>language features that throw exceptions...

Well I do.

In particular, it would be very helpful if the evaluation of an expression
such as (INT_MAX+1) would trigger some sort of integer overflow exception.

--

-- Ron Guilmette, Sunnyvale, CA ---------- RG Consulting -------------------
---- domain addr: rfg@netcom.com ----------- Purveyors of Compiler Test ----
---- uucp addr: ...!uunet!netcom!rfg ------- Suites and Bullet-Proof Shoes -




Author: maxtal@physics.su.OZ.AU (John Max Skaller)
Date: Sun, 20 Feb 1994 22:37:07 GMT
In article <rfgCLJEEJ.83@netcom.com> rfg@netcom.com (Ronald F. Guilmette) writes:
>In article <CJyDHK.Euv@megatest.com> djones@megatest.com (Dave Jones) writes:
>>
>>I've been away from comp.lang.c++ for a while...
>
>Me too.
>
>>I don't want library routines that throw exceptions, and I certainly don't want
>>language features that throw exceptions...
>
>Well I do.
>
>In particular, it would be very helpful if the evaluation of an expression
>such as (INT_MAX+1) would trigger some sort of integer overflow exception.

 You can always roll your own class INTEGER. :-)
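
 A minimal sketch of the idea, showing only operator+ (the overflow test
 and the exception type here are only illustrative; a real class would
 cover the rest of the operators):

 #include <limits.h>

 // Sketch only: a checked integer that throws on overflow.
 struct IntegerOverflow { };

 class INTEGER {
 public:
     INTEGER(int i = 0) : v(i) { }
     operator int() const { return v; }

     friend INTEGER operator+(INTEGER a, INTEGER b)
     {
         // Test against INT_MAX / INT_MIN before adding, so the
         // overflow never actually takes place.
         if ((b.v > 0 && a.v > INT_MAX - b.v) ||
             (b.v < 0 && a.v < INT_MIN - b.v))
             throw IntegerOverflow();
         return INTEGER(a.v + b.v);
     }
 private:
     int v;
 };

 // INTEGER n = INT_MAX;
 // n = n + 1;           // throws IntegerOverflow instead of wrapping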

--
        JOHN (MAX) SKALLER,         INTERNET:maxtal@suphys.physics.su.oz.au
 Maxtal Pty Ltd,      CSERVE:10236.1703
        6 MacKay St ASHFIELD,     Mem: SA IT/9/22,SC22/WG21
        NSW 2131, AUSTRALIA




Author: kanze@us-es.sel.de (James Kanze)
Date: 08 Feb 1994 16:53:07 GMT
In article <JSS.94Feb1103937@summit.lucid.com> jss@summit.lucid.com
(Jerry Schwarz) writes:

|> For the record, my tendency is to reserve throwing exceptions for
|> situations in which there is a genuine unanticipated problem from
|> which the only other plausible recovery is to abort.  Examples are
|> when internal library data structures have been trashed, or
|> constructors can't allocate needed memory.  This isn't always a black
and white criterion.  Running out of disk, for example, tends to be a
|> gray area.  End-of-file, on the other hand, may be "rare" but it
|> should never be unanticipated.

I would agree with the basic criteria.  However, I would prefer the
other alternative (abort) as the default behavior, with a mechanism
for the user to state that he wants such and such an exception.
(Call-back function, probably, with the responsibility of throwing the
desired exception entirely in the hands of the user.)

I have two reasons for this preference:

1.  The choice of what to throw is very application dependent.

2.  An abort, with a reasonable error message, and *no* stack
walk-back before the core dump, will give the user who forgot about error
handling more information with which to figure out what went wrong.

I can very well live with an exception, though, *if* there is some
possibility for the application to control *what* is thrown.
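
To make the idea concrete, the hook might look something like this (a
sketch only; every name in it is invented, and the default is the abort
described above):

 #include <stdio.h>
 #include <stdlib.h>

 // Sketch of a call-back hook; all names are invented.
 typedef void (*lib_error_handler)(const char* what);

 static void default_handler(const char* what)
 {
     // Default behavior: message plus abort, so the core dump still
     // shows the stack at the point of the error.
     fprintf(stderr, "library error: %s\n", what);
     abort();
 }

 static lib_error_handler current_handler = default_handler;

 lib_error_handler set_lib_error_handler(lib_error_handler h)
 {
     lib_error_handler old = current_handler;
     current_handler = h;
     return old;
 }

 // The library calls current_handler("...") wherever it would
 // otherwise throw; an application that wants exceptions installs a
 // handler that throws its own application-specific type.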
--
James Kanze                             email: kanze@us-es.sel.de
GABI Software, Sarl., 8 rue du Faisan, F-67000 Strasbourg, France
Conseils en informatique industrielle --
                   -- Beratung in industrieller Datenverarbeitung




Author: pete@borland.com (Pete Becker)
Date: Tue, 8 Feb 1994 17:33:55 GMT
In article <KANZE.94Feb8175307@slsvhdt.us-es.sel.de>,
James Kanze <kanze@us-es.sel.de> wrote:
>
>I would agree with the basic criteria.  However, I would prefer the
>other alternative (abort) as the default behavior, with a mechanism
>for the user to state that he wants such and such an exception.
>(Call-back function, probably, with the responsibility of throwing the
>desired exception entirely in the hands of the user.)
>
>I have two reasons for this preference:
>
>1.  The choice of what to throw is very application dependent.

 Well, maybe. But if I'm providing a library that uses another
commercial library, how do I write my code to handle exceptions if the
application can pull them out from under me? This approach, that the
application designer knows best, makes it very hard to write code that can be
used in multiple applications. I suppose it would be possible for every library
function that expects to handle exceptions to plug in its own exception
definitions, and unplug them when it returns, but this seems rather intrusive.
Seems to me that it's better to make the change at the point where it matters.
If it's important for the application to only see certain exceptions then the
place to make the changes is when exceptions are thrown across the interface
to the library:

 void ApplicationSpecific()
 {
 try {
     LibraryFunction();
     }
 catch( LibraryException1 ex )
     {
     throw AppException1(ex);
     }
 catch( LibraryException2 ex )
     {
     throw AppException2(ex);
     }
 catch(...)
     {
     throw AppGeneralException();
     }
 }

 This permits the library writer to program with a well-defined
interface rather than having to deal with the exception-of-the-moment as
imposed by the application.
 -- Pete





Author: pjnagel@dos-lan.cs.up.ac.za (Pieter Nagel)
Date: Mon, 31 Jan 1994 06:31:04 GMT
>Are you saying that I should not have the right to ignore the error?

You have the right.

>One of my personal idioms is to execute the following loop immediately
>after a fork (Unix, of course):

>        for ( int i = 3 ; i < MAX_OPENFILES ; i ++ )
>            close( i ) ;

Use the following:

   for (int i = 3; i < MAX_OPENFILES; i++)
   {
      try {
         close(i);
      }
      catch(...) {
         continue;
      }
   }

This makes it explicit that you are ignoring the error, if that is what you
really want to do.

It seems that most posters on this thread have forgotten that one can
actually *catch* exceptions.

Which do you prefer: ignoring errors unless you explicitly respond to them,
or responding to errors unless you explicitly ignore them?



    ,_
    /_)               /|  /
  /    i e t e r    /  |/  a g e l       pjnagel@rkw-risc.cs.up.ac.za






Author: pascual@peggy.tid.es (Pascual Juan)
Date: Tue, 1 Feb 1994 10:47:58 GMT
In article <CKCEzA.ss@tempel.research.att.com>, ark@tempel.research.att.com (Andrew Koenig) writes:
|>
|> Or you would have written
|>
|>  void softclose(int i) { try { close(i); } catch(...) { } }
|>
|> and then used softclose instead of close in your loop.
|>
|> Meanwhile, people who accidentally closed the wrong file in their programs
|> would be finding out about it promptly instead of wondering why they
|> eventually ran out of file descriptors.
|> --
|>     --Andrew Koenig
|>       ark@research.att.com

I don't know if it is possible, but here it goes:

Will ANSI libraries offer PRE_ANSI compatibility? I don't want to touch my
hundreds of thousands of lines of code just to keep the current behaviour.
Could the ANSI headers of the current libraries inline your method in a
standard namespace?

It could be something like this (excuse the syntax errors):

namespace PRE_ANSI
{
  inline int close(int i)
  {
    try
    {
      return ANSI::close(i);
    }
    catch(...)
    {
      return -1;
    }
  }
}

So, in my code I only have to use the appropriate namespace to get the
desired behaviour.

It could be even easier with our old #ifdef :

#ifdef PRE_ANSI
  inline int close(int i)
  {
    try
    {
      return _close(i);
    }
    catch(...)
    {
      return -1;
    }
  }
#else // of PRE_ANSI
  inline int close(int i)
  {
    return _close(i);
  }
#endif // of PRE_ANSI

Where _close is the internal library function that throws the exception.
The last solution is the cheaper one because it doesn't mess up the namespace,
and the old code only has to be recompiled with -DPRE_ANSI to work the way it
was conceived.

--
                              _V_
                            _(,|,)`   _
                           | ___ ')  _))
                           |_|`__/___//
                              /    --'  _,
                             /------) q(_)
----------------------------( /--( /----^--------------------------------------
    __     _            |    / ) / )          Pascual Juan
   /  )/| ( ) /, | /    |   (_/ (_/   E-mail: pascual@gonzo.tid.es
  /__//_|  \ /<  |/     |
 /   /  |(_//  ) /      |                Phone: +34-1-337-47-04
                        |                fax:   +34-1-337-42-22
-------------------------------------------------------------------------------




Author: mjg@ktbush.ogo.dec.com (Michael J. Grier)
Date: 1 Feb 1994 17:06:45 GMT
In article <CKCEzA.ss@tempel.research.att.com>, ark@tempel.research.att.com (Andrew Koenig) writes:
  [example of bad technique using a close() call which throws exceptions]
|>
|>Or you would have written
|>
|> void softclose(int i) { try { close(i); } catch(...) { } }
|>
|>and then used softclose instead of close in your loop.
|>
|>Meanwhile, people who accidentally closed the wrong file in their programs
|>would be finding out about it promptly instead of wondering why they
|>eventually ran out of file descriptors.
|>--
|>    --Andrew Koenig
|>      ark@research.att.com
|>

   I believe there are two fundamental problems/issues going on here:

 - fork() is non-modular; exceptions are modular.  If I'm writing some
   nice embedded module which thinks it knows what's going on in the
   world, especially with respect to possibly hidden assumed external
   state, fork() just mungs me behind my back.  (i.e. what happens
   the next time your code calls a function in my library and I suddenly
   find my channel to the logging server closed, for example.)  The
   immediate issue at hand is a hack which uses a large
   sledge-hammer to try to close these open channels.  This practice
   is unsafe, period.  It's always a challenge to fit together
   modular techniques which allow for abstraction hiding and non-modular
   techniques.

 - What if close() really had something bad to say about itself?
   I find this idea of a close which ignores exceptions as offensive
   as status codes.  What's needed is a way to check if the fd was
   open in the first place, then you could safely write:

    for (int i=3; i<NFILES; i++)
       if (isopen(i))
          close (i);

   without losing any sleep over exceptions being raised that should not
   be raised, or exceptions that should have been raised being ignored.

   This also touches on the issue of exceptions vs. "alternate success
   statuses".  Basically, exceptions work best if you assume that
   properly working software will never catch them.  I.e. don't use
   exceptions to detect when you've read past the end of a file if
   instead you can have a test for whether you are currently at the
   end of the file.  If you follow this guideline you'll find that
   your success code reads clearly, and 99% of your error paths will be
   concerned only with cleaning up any state changes which were in
   transition when the exception occurred.  The 1% of your code which
   does catch the error is probably a user interface, or the top-loop
   of a persistent server.
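
   A small sketch of that guideline (the function and its error policy
   are illustrative only, not part of any proposal):

    #include <fstream>
    #include <stdexcept>
    #include <string>

    // The success path *tests* for end of file; the exception is
    // reserved for a genuine I/O failure that working code never
    // expects to catch.
    int count_lines(const char* name)
    {
        std::ifstream in(name);
        if (!in)
            throw std::runtime_error("cannot open file");

        int n = 0;
        std::string line;
        while (std::getline(in, line))   // the loop test is the EOF check
            ++n;

        if (in.bad())                    // hard error, not just end of file
            throw std::runtime_error("read error");
        return n;
    }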

------------------------------------------------------------------------
I'm saying this, not Digital.  Don't hold them responsible for it!

Michael J. Grier                           Digital Equipment Corporation
(508) 496-8417                             mjg@ktbush.ogo.dec.com
Stow, Mass, USA                            Mailstop OGO1-1/E16




Author: jss@summit.lucid.com (Jerry Schwarz)
Date: 01 Feb 1994 18:39:37 GMT
>   Will ANSI libraries offer PRE_ANSI compatibility? I don't want to touch my
>   hundreds of thousands of lines of code just to keep the current behaviour.
>   Could the ANSI headers of the current libraries inline your method in a
>   standard namespace?


People have been using "close" as an example of a function that might
throw an exception.  There is nothing wrong with that. It is a
perfectly plausible example.

But let me assure you that the C++ committee has agreed that the
default behavior of all iostream functions will be to report errors as
they do now, and not throw exceptions.  There is provision to cause
iostreams to throw exceptions under certain conditions, but the
program will have to request it.
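
The request will look something along these lines (a sketch only; the
exact names are not settled):

 #include <iostream>

 int main()
 {
     // By default cout just sets its state bits on error, as always.
     // A program that wants exceptions has to ask for them:
     std::cout.exceptions(std::ios::badbit | std::ios::failbit);

     std::cout << "hello world\n";   // a write failure now throws
 }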

Further the committee has agreed not to change the behavior of the
functions in the C library (including stdio) at all, so they will also
continue to report errors as before.

We are currently discussing whether there should be a general policy
that allows throwing "out of memory" from any library function.  I
favor such a policy, but it isn't settled.

For the record, my tendency is to reserve throwing exceptions for
situations in which there is a genuine unanticipated problem from
which the only other plausible recovery is to abort.  Examples are
when internal library data structures have been trashed, or
constructors can't allocate needed memory.  This isn't always a black
and white criterion.  Running out of disk, for example, tends to be a
gray area.  End-of-file, on the other hand, may be "rare" but it
should never be unanticipated.

  -- Jerry Schwarz(jss@lucid.com)






Author: swf@tdat.ElSegundoCA.NCR.COM (Stan Friesen)
Date: Tue, 1 Feb 94 11:31:43 PST
In article <1994Feb1.104758.18208@tid.es>, pascual@peggy.tid.es (Pascual Juan) writes:
|>
|> I don't know if it could be possible, but there it goes:
|>
|> Will ANSI libraries offer PRE_ANSI compatibility? I don't want to touch my
|> hundreds of thousands of lines of code just to keep the current behaviour. ...

Not necessary.

You seem to have missed out on one point:
 *Nobody* is proposing changing 'close()', or any other libc
function, to throw exceptions.  What is proposed is that *iostream*s
throw exceptions on certain classes of errors.


Furthermore, it will certainly be defined to allow the use of a
'set_handler' style call to replace the exception throwing with
user-defined behavior.  This would only need to be called *once*
per application, and would thus require very little source change.

--
swf@elsegundoca.ncr.com  sarima@netcom.com

The peace of God be with you.




Author: jbn@lulea.trab.se (Johan Bengtsson)
Date: 3 Feb 94 12:31:55 GMT
Jerry Schwarz (jss@summit.lucid.com) wrote:

: >   Will ANSI libraries offer PRE_ANSI compatibility?

: But let me assure you that the C++ committee has agreed that the
: default behavior of all iostream functions will be to report errors as
: they do now, and not throw exceptions.  There is provision to cause
: iostreams to throw exceptions under certain conditions, but the
: program will have to request it.

Will "iostream in error state at destruction time" be one of them?
IMHO iostreams should have preemptive behaviour (terminate() or exception)
when unacknowledged errors are present during iostream destruction.
That guarantees that iostream errors are not silently ignored.
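
Until something like that exists, it can be approximated by hand with a
wrapper (a sketch; the class name and the abort-rather-than-throw policy
are only illustrative):

 #include <cstdio>
 #include <cstdlib>
 #include <fstream>

 // Sketch: a wrapper whose destructor refuses to let an unacknowledged
 // error pass silently.  It aborts rather than throws, since throwing
 // from a destructor is its own can of worms.
 class checked_ofstream : public std::ofstream {
 public:
     explicit checked_ofstream(const char* name) : std::ofstream(name) { }

     ~checked_ofstream()
     {
         flush();
         if (fail()) {
             std::fprintf(stderr, "checked_ofstream: unreported I/O error\n");
             std::abort();
         }
     }
 };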

--
-------------------------------------------------------------------------
| Johan Bengtsson, Telia Research AB, Aurorum 6, S-977 75 Lulea, Sweden |
| Johan.Bengtsson@lulea.trab.se; Voice:(+46)92075471; Fax:(+46)92075490 |
-------------------------------------------------------------------------




Author: ross@utopia.druid.com (Ross Ridge)
Date: Wed, 26 Jan 1994 13:57:44 GMT
ross@utopia.druid.com (Ross Ridge) writes:
>Speaking for myself, I don't bother checking returns of library
>functions when there isn't much that can be done if an error occurs.

swf@tdat.ElSegundoCA.NCR.COM writes:
>How about printing an error message so the user knows something
>went wrong?  The only 'errors' not worth checking for are those that
>still allow a program to fulfill its real purpose.  (For instance,
>failure to print an *informative* message to the user does not
>impact the real function of the program, and can be reasonably ignored.)

This is an example of the case I'm talking about.

>The problem is that many, even most, programmers do not check writes
>even of *critical* output (like the actual archive file in an archiving
>program - or the binary in compiler).

My problem is that it's been suggested that I/O exceptions are good
because all I/O is critical, and I disagree with that.

>[Actually, this is not just true of fclose - on UNIX the write()
>system call can return before the data is actually on disk, so
>only close() can tell you for sure that the data was successfully
>written].

Not on the Unix systems I use.  The only value errno can be if
close() fails is EBADF (bad file descriptor).

      Ross Ridge





Author: philip@algorithmics.com (Philip Koop)
Date: Wed, 26 Jan 1994 16:35:43 GMT
ark@research.att.com (Andrew Koenig) writes:
>I submit as evidence the following well-known C program:
>  main() { printf("hello world\n"); }
>This is an example of a program that uses a library routine without checking
>for an error indication.  If, for example, we run this program with its
>standard output connected to a file on a disk that is completely full,
>the program will fail and no diagnostic message will ever appear.

>The point of exceptions is to ensure that when the library detects error
>conditions and the user doesn't do anything about them, they are not
>quietly ignored.

ross@utopia.druid.com (Ross Ridge) replies:
>In the case of the your example this would suck rocks.  An uncaught
>"disk full" exception would likely result in the programme trying print
>a cryptic diagnostic to the full disk, and then call abort which would
>try create a core file on the same full disk.

Then in what way will a program with exceptions behave differently from
one without? None, except for the crucial difference that when the
assumptions under which the program has been written are violated, the
program will terminate. This is sometimes the wrong thing to do, but far
more often it is the safest course. Also, it has the advantage of being
defined behavior.

I don't think that sucks rocks at all.

Besides, in real life the cryptic diagnostic will be written to stderr,
not stdout, so it will quite likely be displayed, whereas the original
program will terminate silently.
--
Phil Koop     philip@algorithmics.com,  pkk@io.org
         I do not represent Algorithmics.




Author: bobkf@news.delphi.com (BOBKF@DELPHI.COM)
Date: 27 Jan 1994 02:57:33 -0500
kanze@us-es.sel.de (James Kanze) writes:

>More generally, the hierarchy of exceptions is typically application
>specific.  From a practical point of view, I will have to 'wrap' every
>single library function to catch the library exception, and map it
>into my application exception.

This is the practical point of view? I'd expect you might want to
rethink your application-specific exception hierarchy in light of
the standard. The exception proposal has been floating for a number
of years, now. I and many others have been using macro-based versions
of it for some time. I certainly don't foresee heartbreak when the
standard libraries throw exceptions - just more robust applications.

Perhaps you have an example you would like to share that illustrates
the pain you anticipate?

Bob Foster
objfactory@aol.com




Author: ark@tempel.research.att.com (Andrew Koenig)
Date: Fri, 28 Jan 1994 14:01:57 GMT
In article <KANZE.94Jan27173037@slsvhdt.us-es.sel.de> kanze@us-es.sel.de (James Kanze) writes:

> |> The point of exceptions is to ensure that when the library detects error
> |> conditions and the user doesn't do anything about them, they are not
> |> quietly ignored.

> Are you saying that I should not have the right to ignore the error?
> One of my personal idioms is to execute the following loop immediately
> after a fork (Unix, of course):

>  for ( int i = 3 ; i < MAX_OPENFILES ; i ++ )
>      close( i ) ;

> Call it defensive programming; I don't want the child inheriting
> things it cannot handle.  If 'close' throws an exception, this breaks.

Any time the behavior of a library routine changes without also changing
the name, code breaks.  This general observation is a strong argument
against changing the behavior of existing library routines, exceptions or not.

But if close had been designed from the beginning to throw an exception
on failure, then you would always have written

 for ( int i = 3; i < MAX_OPENFILES; i ++ )
     try { close( i ) ; } catch (...) {  }

and probably would not have found anything worth complaining about.

Or you would have written

 void softclose(int i) { try { close(i); } catch(...) { } }

and then used softclose instead of close in your loop.

Meanwhile, people who accidentally closed the wrong file in their programs
would be finding out about it promptly instead of wondering why they
eventually ran out of file descriptors.
--
    --Andrew Koenig
      ark@research.att.com




Author: kanze@us-es.sel.de (James Kanze)
Date: 27 Jan 1994 16:30:37 GMT
In article <CJzJ4z.C5z@tempel.research.att.com>
ark@tempel.research.att.com (Andrew Koenig) writes:

|> As a more general, and more important example: how many people actually check
|> for an error code from close or fclose?

|> The point of exceptions is to ensure that when the library detects error
|> conditions and the user doesn't do anything about them, they are not
|> quietly ignored.

Are you saying that I should not have the right to ignore the error?
One of my personal idioms is to execute the following loop immediately
after a fork (Unix, of course):

 for ( int i = 3 ; i < MAX_OPENFILES ; i ++ )
     close( i ) ;

Call it defensive programming; I don't want the child inheriting
things it cannot handle.  If 'close' throws an exception, this breaks.

|> > I like the idea of the user being able to specify an exception-handler routine,
|> > the way operator new lets you specify a "new_handler" routine.
|> > If the user wants it to set an error bit somewhere and return, fine.
|> > If he wants it to throw an exception, fine. He can simply write it that
|> > way, to throw whichever exception he wants it to. But when library
|> > routines throw exceptions, they impose flow-of-control design
|> > decisions unnecessarily on the programmer. If, (heaven forbid), an exception
|> > throw is *added* to an existing library routine, it imposes design
|> > constraints *after the fact*, quite possibly breaking perfectly good code.

|> Indeed.  So does adding any kind of new behavior to a library routine.

|> Some libraries I've seen offer ways of making exceptions optional.
|> The idea is that you can tell the library: `Whenever an error of type X
|> occurs, return an error code instead of throwing an exception.'
|> I am willing to bet that the number of people who claim such a facility
|> is useful is much larger than the number who would actually use it.

I'd be happy just to be able to tell the library which exception to
throw.  I want the exceptions to integrate into my application design.
--
James Kanze                             email: kanze@us-es.sel.de
GABI Software, Sarl., 8 rue du Faisan, F-67000 Strasbourg, France
Conseils en informatique industrielle --
                   -- Beratung in industrieller Datenverarbeitung




Author: pete@borland.com (Pete Becker)
Date: Thu, 27 Jan 1994 17:23:08 GMT
In article <KANZE.94Jan27173037@slsvhdt.us-es.sel.de>,
James Kanze <kanze@us-es.sel.de> wrote:
>In article <CJzJ4z.C5z@tempel.research.att.com>
>ark@tempel.research.att.com (Andrew Koenig) writes:
>
>|> As a more general, and more important example: how many people actually check
>|> for an error code from close or fclose?
>
>|> The point of exceptions is to ensure that when the library detects error
>|> conditions and the user doesn't do anything about them, they are not
>|> quietly ignored.
>
>Are you saying that I should not have the right to ignore the error?

 Not at all. The suggestion is that this shouldn't happen by accident.
It's easy to write code to catch an exception and continue executing. To use
the example from the original posting:

 for( int i = 3; i < MAX_OPENFILES; i++ )
  {
  try {
      close(i);
      }
  catch(...) {}
  }

IMPORTANT NOTE: there is no current proposal that close() throw an exception.
Please don't post any flames about how inappropriate you think this is. It is
only an example.
 -- Pete




Author: sjc@netcom.com (Steven Correll)
Date: Thu, 27 Jan 1994 17:40:19 GMT
In article <879424.76580.12412@kcbbs.gen.nz> craiga@kcbbs.gen.nz (Craig Anderson) writes:
>...on almost any networked file system a write() error is often
>not reported until close()...
>Note that since fclose() must be explicitly called, use of exception
>throwing libraries isn't enough to keep people from writing bad
>programs.  One has to know that fclose(stdout) or its exception-wrapped
>equivalent must be called before exit()ing.

SVR4 promises that exit() has the effect of calling fclose() on all open
stdio file descriptors. I'm away from my copy of Standard C, but I believe
that it concurs. Exit cannot return an error to the caller; the Unix
implementation I'm using appears to tamper with the program's error
status if an error occurs inside exit(), so even in the absence of the
ability to throw an exception, the error does not go unreported.

Nevertheless, it is generally true that exceptions have the fortunate property
that the programmer must take trouble to conceal errors, whereas with function
returns the programmer must take trouble to report them.
--
Steven Correll == PO Box 66625, Scotts Valley, CA 95067 == sjc@netcom.com




Author: rmartin@rcmcon.com (Robert Martin)
Date: Sun, 23 Jan 1994 17:11:46 GMT
djones@megatest.com (Dave Jones) writes:

>From article <MATT.94Jan8133115@physics2.berkeley.edu>, by matt@physics2.berkeley.edu (Matt Austern):
>>[...]
>> (Particularly since the language is going to start throwing exceptions
>> on its own.)

>I've been away from comp.lang.c++ for a while. This is news to me. It
>certainly seems like an "exceptionally" BAD IDEA. I'll bet it has something
>to do with the add-on RTTI business. Kluges beget kluges. Am I right?

Yes and no.  RTTI does not force the inclusion of exceptions.  However
some of the mechanisms that support RTTI are necessary for exceptions.
The inclusion of exceptions in the language makes RTTI simpler to
implement.

In either case, it is not a matter of kludge propagation, but of
kludge reduction.  These two new tools will "allow a wider range of
concepts to be conveniently expressed."  Places where code is
currently kludged, due to the absence of RTTI or exceptions, can now
be repaired.    B-)

>I think a routine should only throw an exception if the calling
>routine has "asked for it" somehow.

Good, because this is how exceptions work.  You ask to be informed of
an exception by using a try/catch block.  If an exception occurs that you
did not ask for, it will propagate upwards through the program until
it finds a try/catch block that DOES ask for it, or until it leaves the
program with an abort.

>I can not emphasize that enough.
>The writer of the calling routine presumably knows the design of the
>flow of control of the program. The writer of a library routine does
>not.

This is precisely why exceptions are a good thing.  Libraries can be
designed to throw exceptions when things go wrong, thus decoupling the
library from the flow of control of the client program.  The client
can catch the exceptions he thinks he can handle.  If the library
undergoes changes, and new exceptions are added, or old ones are
deleted, the client code does not need to change unless it wants
access to the new exceptions.
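
For instance (a sketch, with all names invented):

 #include <iostream>

 // The library defines its own exception types; the client catches only
 // what it knows how to handle and lets everything else propagate.
 struct LibError { };                     // base of the library's hierarchy
 struct LibRecoverable : LibError { };    // the client knows what to do
 struct LibInternal    : LibError { };    // the client does not

 void library_function(int n)
 {
     if (n == 1) throw LibRecoverable();
     if (n == 2) throw LibInternal();
 }

 void client(int n)
 {
     try {
         library_function(n);
     }
     catch (const LibRecoverable&) {
         std::cerr << "recovered; carrying on\n";
     }
     // LibInternal -- and any exception the library adds later -- is not
     // mentioned here: it propagates until some caller catches it, or
     // the program terminates.
 }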

--
Robert Martin       | Design Consulting   | Training courses offered:
Object Mentor Assoc.| rmartin@rcmcon.com  |   Object Oriented Analysis
2080 Cranbrook Rd.  | Tel: (708) 918-1004 |   Object Oriented Design
Green Oaks IL 60048 | Fax: (708) 918-1023 |   C++




Author: swf@tdat.ElSegundoCA.NCR.COM (Stan Friesen)
Date: Fri, 28 Jan 94 15:26:34 PST
In article <KANZE.94Jan27173037@slsvhdt.us-es.sel.de>, kanze@us-es.sel.de (James Kanze) writes:
|>
|> Are you saying that I should not have the right to ignore the error?

No.  He is saying you should *think* about each instance where you
wish to ignore an error.  In short, the *default* behavior should be
to exit with an error status.  If you want something else, you should
have to arrange for that explicitly.

And you *can* ignore errors with exceptions, as follows:

 try {
  cout << "notice this may not work\n";
     }
 catch (ostr_err) {}  // assumes ostreams throw ostr_err

In short, you must be *explicit* about it.

This avoids the all too frequent situation of a program failing to
report an error when writing its *real* output.

[More programs seem to ignore write errors than check them, even
among programs that *produce* something - like compilers and
archivers].

Our C coding standard here says error returns must be checked
except in some very specific situations for this very reason.

|> One of my personal idioms is to execute the following loop immediately
|> after a fork (Unix, of course):
|>
|>  for ( int i = 3 ; i < MAX_OPENFILES ; i ++ )
|>      close( i ) ;
|>
|> Call it defensive programming; I don't want the child inheriting
|> things it cannot handle.  If 'close' throws an exception, this breaks.

Not if you wrap the close in a try block that catches and ignores the
exception.

[Actually, I doubt that low-level stuff like close() will throw exceptions,
since that involves changing the behavior of the portion of C++ inherited
from C - which is a no-no; close() is defined in the *C* standard to
return an error status, not to throw an exception].

--
swf@elsegundoca.ncr.com  sarima@netcom.com

The peace of God be with you.




Author: ross@utopia.druid.com (Ross Ridge)
Date: Mon, 24 Jan 1994 15:06:34 GMT
ark@tempel.research.att.com (Andrew Koenig) writes:
>Many years of experience with C have evolved the following paradigm,
>which is what most C programmers actually do -- regardless of what they
>might say is ideal:
>
> 1. The library routines carefully check for error conditions
>    and return them to the caller if they find them.
>
> 2. The people who use the library routines never check for the
>    error conditions, assuming the library routine will work.

Speaking for myself, I don't bother checking returns of library
functions when there isn't much that can be done if an error occurs.

> 3. When an error actually occurs, the program misbehaves--
>    sometimes quietly, sometimes noisily.

>I submit as evidence the following well-known C program:

> main() { printf("hello world\n"); }

>This is an example of a program that uses a library routine without checking
>for an error indication.  If, for example, we run this program with its
>standard output connected to a file on a disk that is completely full,
>the program will fail and no diagnostic message will ever appear.

Which is the best course of action for this programme.  This programme
is designed to print a welcoming message to the user.  Trying to tell
the user that an error occurred while trying to tell him something is very
likely to be futile and could just compound the problem.

>As a more general, and more important example: how many people actually check
>for an error code from close or fclose?

Checking the return value of close isn't very useful and checking the
return value of fclose is only useful when you're writing data.

>The point of exceptions is to ensure that when the library detects error
>conditions and the user doesn't do anything about them, they are not
>quietly ignored.

In the case of your example this would suck rocks.  An uncaught
"disk full" exception would likely result in the programme trying to print
a cryptic diagnostic to the full disk, and then calling abort, which would
try to create a core file on the same full disk.

       Ross Ridge





Author: bill@amber.csd.harris.com (Bill Leonard)
Date: 24 Jan 1994 19:16:13 GMT
In article <CJyDHK.Euv@megatest.com>, djones@megatest.com (Dave Jones) writes:
> From article <MATT.94Jan8133115@physics2.berkeley.edu>, by matt@physics2.berkeley.edu (Matt Austern):
>
> I don't want library routines that throw exceptions, and I certainly
> don't want language features that throw exceptions. I think a routine
> should only throw an exception if the calling routine has "asked for it"
> somehow.  I can not emphasize that enough. The writer of the calling
> routine presumably knows the design of the flow of control of the
> program. The writer of a library routine does not.

The "presumably" exposes the flaw in this argument.  At least in my
applications, direct callers of library routines are usually fairly
low-level routines and seldom know the flow of control of the application.
I don't *want* them to know.  And with object-oriented design, the goal is
to make objects that respond to requests no matter who makes them or why,
so I wouldn't want to build knowledge of the entire application into those
member functions.

> I like the idea of the user being able to specify an exception-handler routine,
> the way operator new lets you specify a "new_handler" routine.

The problem I find with such mechanisms is their global nature.  Having one
global "handler" for each type of error makes it difficult to encapsulate
calls to the library routines, because each separate piece of the application
has to know what "state" the global handler is in when it gets called.

Besides, the problem I have that I think exceptions will solve is that I
seldom know what to do with library errors at the point of call.  As I said,
calls to library routines are usually buried down in fairly low-level parts
of my application, which doesn't have enough information to know what to do
about the error.  But the high-level parts, which *do* know what to do,
don't know what library routines the low-level bits are going to call, so
they couldn't know how to set up the global handlers.

Exceptions are a nice solution to this problem: the low-level bits can
intercept the library exceptions if they want to, or pass them on to higher
levels.  (One reason for intercepting them is to turn them into exceptions
that the higher levels are expecting, which should be documented in the
interfaces to the low-level parts.  Thus, the high-level routines don't
need to know that a library routine was invoked, nor which one(s).)

As if that weren't enough, surely we don't need 14 bazillion different
global error-handler routines?  All different, all unique to a particular
library or library routine!

> If the user wants it to set an error bit somewhere and return, fine.
> If he wants it to throw an exception, fine. He can simply write it that
> way, to throw whichever exception he wants it to. But when library
> routines throw exceptions, they impose flow-of-control design
> decisions unnecessarily on the programmer. If, (heaven forbid), an exception
> throw is *added* to an existing library routine, it imposes design
> constraints *after the fact*, quite possibly breaking perfectly good code.

But other error-handling mechanisms are even worse at this.  One problem
that I am continually bumping into is this: Routine A returns errors as an
enumeration value.  A calls B, who returns a different enumeration (because
it is a different library or subsystem), so A has to translate B's errors
into its own statuses.  B calls C, who also has a different enumeration for
errors.  Now someone adds a status to C's enumeration, necessitating
changes to A and B solely so they can propagate the error up the call
chain.  That's a waste of time.

--
Bill Leonard
Harris Computer Systems Division
2101 W. Cypress Creek Road
Fort Lauderdale, FL  33309
bill@ssd.csd.harris.com

These opinions and statements are my own and do not necessarily reflect the
opinions or positions of Harris Corporation.

------------------------------------------------------------------------------
"If brains was lard, he wouldn't grease a very big pan."
                                             - Jed Clampett
------------------------------------------------------------------------------




Author: matt@physics16.berkeley.edu (Matt Austern)
Date: 24 Jan 1994 20:03:40 GMT
In article <2i16pt$hkg@travis.csd.harris.com> bill@amber.csd.harris.com (Bill Leonard) writes:

> The problem I find with such mechanisms is their global nature.  Having one
> global "handler" for each type of error makes it difficult to encapsulate
> calls to the library routines, because each separate piece of the application
> has to know what "state" the global handler is in when it gets called.

Agreed.  I don't like global state; this is one reason I almost never
use set_new_handler().  (This is also a problem with formatting flags
in cin and cout.)

Solutions that require someone to keep track of global state really
ought to be avoided.  Exceptions are a much nicer solution.
--
Matthew Austern                       Never express yourself more clearly
matt@physics.berkeley.edu             than you think.    ---N. Bohr




Author: kanze@us-es.sel.de (James Kanze)
Date: 25 Jan 1994 18:54:40 GMT
In article <CJyDHK.Euv@megatest.com> djones@megatest.com (Dave Jones)
writes:

|> From article <MATT.94Jan8133115@physics2.berkeley.edu>, by matt@physics2.berkeley.edu (Matt Austern):

|> > The obvious corollary [to careful exception design (dj)] is that the C++
|> > Standard Class Library should include a hierarchy of exception classes.

|> > (Particularly since the language is going to start throwing exceptions
|> > on its own.)

|> I've been away from comp.lang.c++ for a while. This is news to me. It
|> certainly seems like an "exceptionally" BAD IDEA. I'll bet it has something
|> to do with the add-on RTTI business. Kluges beget kluges. Am I right?

Not completely.  Just about everything in the library can throw an
exception for some reason or another.  Even when RTTI is not involved.
It seems largely to be a question of "here is a new feature, let's
use it."

|> I don't want library routines that throw exceptions, and I certainly don't want
|> language features that throw exceptions. I think a routine should only
|> throw an exception if the calling routine has "asked for it" somehow.
|> I can not emphasize that enough. The writer of the calling routine presumably
|> knows the design of the flow of control of the program. The writer of
|> a library routine does not.

More generally, the hierarchy of exceptions is typically application
specific.  From a practical point of view, I will have to 'wrap' every
single library function to catch the library exception, and map it
into my application exception.

|> I like the idea of the user being able to specify an exception-handler routine,
|> the way operator new lets you specify a "new_handler" routine.
|> If the user wants it to set an error bit somewhere and return, fine.
|> If he wants it to throw an exception, fine. He can simply write it that
|> way, to throw whichever exception he wants it to. But when library
|> routines throw exceptions, they impose flow-of-control design
|> decisions unnecessarily on the programmer. If, (heaven forbid), an exception
|> throw is *added* to an existing library routine, it imposes design
|> constraints *after the fact*, quite possibly breaking perfectly good code.

This is the solution I would recommend, too.  An error calls a
call-back function.  I think in most cases, the default call-back
function could just abort with an error.  In most cases, however, the
user will replace it with one which throws an application specific
exception (or sets a global flag, or does some clean-up and exits,
or...).
--
James Kanze                             email: kanze@us-es.sel.de
GABI Software, Sarl., 8 rue du Faisan, F-67000 Strasbourg, France
Conseils en informatique industrielle --
                   -- Beratung in industrieller Datenverarbeitung




Author: swf@tdat.ElSegundoCA.NCR.COM (Stan Friesen)
Date: Tue, 25 Jan 94 10:44:20 PST
In article <CJzJ4z.C5z@tempel.research.att.com>, ark@tempel.research.att.com (Andrew Koenig) writes:
|>
|> I submit as evidence the following well-known C program:
|>
|>  #include <stdio.h>
|>
|>  main()
|>  {
|>   printf("hello world\n");
|>  }
|>
|> This is an example of a program that uses a library routine without checking
|> for an error indication.  If, for example, we run this program with its
|> standard output connected to a file on a disk that is completely full,
|> the program will fail and no diagnostic message will ever appear.

And this is not a rare problem at all.  We have run into it in several
commercially available products here (including some standard UNIX
utilities, like tar - or is that ar).

We have even been forced to write wrappers for some of these products
to make the check - we find running out of disk space is not always
that rare.

|> As a more general, and more important example: how many people actually check
|> for an error code from close or fclose?

This really amounts to the same thing, since a write may not report the
error until the close (due to buffering).

|> The point of exceptions is to ensure that when the library detects error
|> conditions and the user doesn't do anything about them, they are not
|> quietly ignored.

Exactly.  Fewer programs that quietly do nothing on a full disk.

--
swf@elsegundoca.ncr.com  sarima@netcom.com

The peace of God be with you.




Author: swf@tdat.ElSegundoCA.NCR.COM (Stan Friesen)
Date: Tue, 25 Jan 94 11:00:24 PST
In article <1994Jan24.150634.28093@utopia.druid.com>, ross@utopia.druid.com (Ross Ridge) writes:
|>
|> Speaking for myself, I don't bother checking returns of library
|> functions when there isn't much that can be done if an error occurs.

How about printing an error message so the user knows something
went wrong?  The only 'errors' not worth checking for are those that
still allow a program to fulfill its real purpose.  (For instance,
failure to print an *informative* message to the user does not
impact the real function of the program, and can be reasonably ignored.)

We have run into a disturbingly large number of commercial apps that
simply *ignore* disk-full errors in constructing their main output file!

For instance, part of our internal mechanism for releasing software from
development to quality control involves making archive files.  For
a while we were occasionally *quietly* (NO error indication) getting
empty archives.  This was unacceptable, so we had to write a wrapper
to the archive program to check that the archive had actually been
written.

If the archive program had been correctly written to begin with,
we would not have needed to do this.  (I believe it was fixed in
a later release of the OS, but it was still disturbing).

|> >I submit as evidence the following well-known C program:
|>
|> > main() { printf("hello world\n"); }
|>
|> >... standard output connected to a file on a disk that is completely full,
|> >the program will fail and no diagnostic message will ever appear.
|>
|> Which is the best course of action for this programme.  This programme
|> is designed to print a welcoming message to the user.  Trying to tell
|> the user that an error occurred while trying to tell him something is very
|> likely to be futile and could just compound the problem.

But that is about the *only* situation where this is so.

The problem is that many, even most, programmers do not check writes
even of *critical* output (like the actual archive file in an archiving
program - or the binary in compiler).

How would *you* like it if your compiler reported 'success' and you found
an empty .o file?   We have had that problem - from a commercial compiler.
|>
|> Checking the return value of close isn't very useful and checking the
|> return value of fclose is only useful when you're writing data.

In fact it is *critical* then, since writes can be buffered.
[Actually, this is not just true of fclose - on UNIX the write()
system call can return before the data is actually on disk, so
only close() can tell you for sure that the data was successfully
written].

|> In the case of your example this would suck rocks.  An uncaught
|> "disk full" exception would likely result in the programme trying to print
|> a cryptic diagnostic to the full disk, and then calling abort, which would
|> try to create a core file on the same full disk.

Only if the user redirected cerr to the *same* file as cout.
If cerr is left attached to the terminal, as it often is,
you will get the error message.

And even in the worst case scenario you mention above, *at least*
the program exits with a non-zero exit status, and you know it 'failed'
even if you do not know why.   When `make' says a make has successfully
completed, it would be nice to be able to believe it.  If a program
called by make quietly exits with a zero status, this is not possible.
A non-zero exit status here is critical to proper reliability.

[Again, we have had to put wrappers around commercial compilers to
check that the object file was actually created].

--
swf@elsegundoca.ncr.com  sarima@netcom.com

The peace of God be with you.




Author: craiga@kcbbs.gen.nz (Craig Anderson)
Date: 25 Jan 94 21:16:20 GMT
: Checking the return value of close isn't very useful and checking the
: return value of fclose is only useful when you're writing data.

Sorry, but on almost any networked file system a write() error is often
not reported until close().  The reasons for this are obvious: having
every write() wait for a reply back from the server would be soooooo
slow.  Testing close() is important - and this means going as far as
doing an "if (fclose(stdout) == EOF) ..." before exit()ing.

: In the case of your example this would suck rocks.  An uncaught
: "disk full" exception would likely result in the programme trying to print
: a cryptic diagnostic to the full disk, and then calling abort, which would
: try to create a core file on the same full disk.

Yes, if both stdout and stderr were redirected to the same filesystem as
the current working directory, or all filesystems were full (but that's often
not the case).

The important part here is the program's return code.  A classic mistake
would be a simple program that prints out, say, a list of filesystems.
The programmer didn't test the return code from printf() or fclose().
So who cares?  Now say I use this program (call it fsl) in a shell script
used to back up all filesystems:
 #!/bin/sh
 fsl > /tmp/bback.fsl.$$
 if [ $? -ne 0 ]; then
  echo "bback: can't get a list of filesystems" >&2
  exit 2
 fi
 awk '{ print $1 }' < /tmp/bback.fsl.$$ | cpio -ocvB > /dev/rmt0
 exit $?

Now a program not testing the return code from printf() or fclose()
is likely to do only a partial backup, or no backup, or even back up the
wrong filesystems if the /tmp filesystem is full.  No errors, no
warnings, nothing, just a _bad_ backup.  A program using an
exception-throwing library and not testing return codes will be ok and
the script will fail with an appropriate message and return code.

Note that since fclose() must be explicitly called, use of exception
throwing libraries isn't enough to keep people from writing bad
programs.  One has to know that fclose(stdout) or its exception-wrapped
equivalent must be called before exit()ing.

-Craig Anderson
craiga@kcbbs.gen.nz




Author: djones@megatest.com (Dave Jones)
Date: Thu, 20 Jan 1994 23:56:22 GMT


Author: donne@imec.be (Werner Donne)
Date: Fri, 21 Jan 1994 10:05:45 GMT
Dave Jones (djones@megatest.com) wrote:
: From article <MATT.94Jan8133115@physics2.berkeley.edu>, by matt@physics2.berkeley.edu (Matt Austern):

: >
: > The obvious corollary [to careful exception design (dj)] is that the C++
: > Standard Class Library should include a hierarchy of exception classes.
: >
: > (Particularly since the language is going to start throwing exceptions
: > on its own.)

: I've been away from comp.lang.c++ for a while. This is news to me. It
: certainly seems like an "exceptionally" BAD IDEA. I'll bet it has something
: to do with the add-on RTTI business. Kluges beget kluges. Am I right?

: I don't want library routines that throw exceptions, and I certainly don't want
: language features that throw exceptions. I think a routine should only
: throw an exception if the calling routine has "asked for it" somehow.

This only makes sense for existing libraries that have to be compatible with
the previous version. I don't see why a new library should provide more than
one way of reporting a problem.

: I can not emphasize that enough. The writer of the calling routine presumably
: knows the design of the flow of control of the program. The writer of
: a library routine does not.

Throwing exceptions in a library doesn't impose a flow of control on the
program. You can have exactly the same flow of control as when using tests on
return values. There could be a performance penalty when using a rather stupid
compiler, but that's not the point here.
At least you have the choice of not doing it at each call level.

: I like the idea of the user being able to specify an exception-handler routine,
: the way operator new lets you specify a "new_handler" routine.
: If the user wants it to set an error bit somewhere and return, fine.
: If he wants it to throw an exception, fine. He can simply write it that
: way, to throw whichever exception he wants it to. But when library
: routines throw exceptions, they impose flow-of-control design
: decisions unnecessarily on the programmer. If, (heaven forbid), an exception
: throw is *added* to an existing library routine, it imposes design
: constraints *after the fact*, quite possibly breaking perfectly good code.
--
Werner Donne'




Author: ark@tempel.research.att.com (Andrew Koenig)
Date: Fri, 21 Jan 1994 15:02:59 GMT
In article <CJyDHK.Euv@megatest.com> djones@megatest.com (Dave Jones) writes:

> I've been away from comp.lang.c++ for a while. This is news to me. It
> certainly seems like an "exceptionally" BAD IDEA. I'll bet it has something
> to do with the add-on RTTI business. Kluges beget kluges. Am I right?

No, you're wrong.

> I don't want library routines that throw exceptions, and I certainly don't want
> language features that throw exceptions. I think a routine should only
> throw an exception if the calling routine has "asked for it" somehow.
> I can not emphasize that enough. The writer of the calling routine presumably
> knows the design of the flow of control of the program. The writer of
> a library routine does not.

Many years of experience with C have evolved the following paradigm,
which is what most C programmers actually do -- regardless of what they
might say is ideal:

 1. The library routines carefully check for error conditions
    and return them to the caller if they find them.

 2. The people who use the library routines never check for the
    error conditions, assuming the library routine will work.

 3. When an error actually occurs, the program misbehaves--
    sometimes quietly, sometimes noisily.

I submit as evidence the following well-known C program:

 #include <stdio.h>

 main()
 {
  printf("hello world\n");
 }

This is an example of a program that uses a library routine without checking
for an error indication.  If, for example, we run this program with its
standard output connected to a file on a disk that is completely full,
the program will fail and no diagnostic message will ever appear.

As a more general, and more important example: how many people actually check
for an error code from close or fclose?

The point of exceptions is to ensure that when the library detects error
conditions and the user doesn't do anything about them, they are not
quietly ignored.

> I like the idea of the user being able to specify an exception-handler routine,
> the way operator new lets you specify a "new_handler" routine.
> If the user wants it to set an error bit somewhere and return, fine.
> If he wants it to throw an exception, fine. He can simply write it that
> way, to throw whichever exception he wants it to. But when library
> routines throw exceptions, they impose flow-of-control design
> decisions unnecessarily on the programmer. If, (heaven forbid), an exception
> throw is *added* to an existing library routine, it imposes design
> constraints *after the fact*, quite possibly breaking perfectly good code.

Indeed.  So does adding any kind of new behavior to a library routine.

Some libraries I've seen offer ways of making exceptions optional.
The idea is that you can tell the library: `Whenever an error of type X
occurs, return an error code instead of throwing an exception.'
I am willing to bet that the number of people who claim such a facility
is useful is much larger than the number who would actually use it.
--
    --Andrew Koenig
      ark@research.att.com