Topic: The new new


Author: Christopher Eltschka <celtschk@physik.tu-muenchen.de>
Date: 1999/06/28
James Kuyper wrote:
>
> Christopher Eltschka wrote:
> ...
> > To make the point clear: On a system with memory overcommitment,
> > the following function may wait indefinitely, kill random
> > processes or end the program depending on the OS and the
> > compiler implementation, no matter if you are using new or malloc,
> > or if you are using C++ or C, if called on uncommitted memory:
>                              ^
> > void foo(int& i)
>               ^
> > {
> >   i=0;
> > }
>
> Not quite: if you're using C, it's illegal. I'll assume that you
> actually meant to use a pointer rather than a reference in the C
> version.

You're right, of course. Changing it to use a pointer
makes it legal C as well. Indeed, I didn't think about
this point when I wrote the example function; I just
wanted the function to look as innocent as possible.

However, it _can_ be legal C, if the following macro
definitions precede that code :-)

#undef void   /* I want to be sure ;-) */
#undef int
#undef p
#define foo(x) foo(int* p)
#define i (*p)





Author: "John Hickin" <hickin@nortelnetworks.com>
Date: 1999/06/25
Christopher Eltschka wrote:
>

> And if you encounter low memory on the stack, I don't think
> _any_ implementation will do something useful (though C++
> implementations at least have a mechanism to do so: they
> could define an stack_overflow exception which is thrown
> in that case - in C you're out of luck entirely).
>

Win32 does have a mechanism to detect and deal with this situation but,
AFAIK, it isn't 100% foolproof. In essence, your thread is handed an
access violation (this isn't a C++ exception; it is a _structured_
exception) which causes a walkback of the stack to the nearest
(structured exception) handler. The handler may choose to restart or
abort the faulted instruction (if the latter, the stack is unwound to
the point of the active handler and C++ objects are properly destroyed),
or it may pass the buck.

Because this isn't a C++ exception (in Microsoft's implementation a C++
exception is a special type of structured exception) it will unwind the
stack of a C++ function which has a throw() specification (presumably)
without causing unexpected() to be called. The implications of this are
difficult to sort out and one just might conclude that nothing _useful_
is being done in this case. Certainly whatever actions one may take are
not at all portable.
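
For concreteness, here is a minimal sketch of that mechanism using
MSVC's structured exception handling. It is only an illustration of
the non-portable machinery described above: it takes the "abort" path
and makes no attempt to restore the stack guard page afterwards.

#include <windows.h>
#include <stdio.h>

/* Filter: handle only stack overflow, pass everything else on. */
static int filter(DWORD code)
{
    return code == EXCEPTION_STACK_OVERFLOW
        ? EXCEPTION_EXECUTE_HANDLER    /* abort and unwind to the handler */
        : EXCEPTION_CONTINUE_SEARCH;   /* "pass the buck" */
}

static void recurse(int depth)
{
    char buffer[4096];
    buffer[0] = (char)depth;
    recurse(depth + 1);                /* eventually overflows the stack */
}

int main()
{
    __try {
        recurse(0);
    }
    __except (filter(GetExceptionCode())) {
        printf("caught the stack overflow\n");
    }
    return 0;
}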

Regards, John.





Author: Christopher Eltschka <celtschk@physik.tu-muenchen.de>
Date: 1999/06/25
Roland Booker wrote:
>
> > I disagree. I expect the new new to throw bad_alloc _everywhere_
> > where a traditional allocation function had returned NULL.
>
> Yours is the third post that seems to have missed my meaning in the
> earlier posts; I apologize for not being clearer.
>
> Actually I understand and agree most of this (Harvey Taylor's) thread.
> Our mutual understanding of the function in this area around new is the same,
> as the remainder of your post indicates (to me at least). Where we differ is
> in our viewpoints. I believe you are most interested in the success of the
> approach. I am looking at it where I expect it to fail, (based on my past
> work in testing systems and processes at memory boundaries as a QA test
> engineer).
>
> My original post was a query whether memory limit tests were viable for
> C++ programs.  Steve Clamage confirmed that they were.  As you point out,
> the old kinds of problems at the memory boundary are not going away.
> What I am trying to get across now is that, as I understand from what
> other posters have written, it looks like there are at least three more
> ways to encounter difficulty if new functions as documented.
>
> 1. Trying to make other dynamically allocated memory available creates a
>    system level race as to who gets it.  In the best possible result, it
>    *should* affect only the process owning the new.  There is a real chance
>    that the effect will escape if (for example) at some level new/delete is
>    shared among processes, or if the delete actually frees memory to the
>    whole system rather than the current practice of keeping it for the local
>    process, or as pointed out earlier in the thread for some other
>    implementation-specific reason.

The same is true for malloc, if you decide to design it that
way. The only difference is that with malloc you have to
check for NULL after malloc returns, while with new you can
do it through a new handler. That is, the way to implement it
is a little bit different, but the results are the same. In
particular, in both cases it's your decision - if it affects
the system negatively, it's your fault.
The default behaviour (no new handler installed) is simply to
throw bad_alloc.

>
> 2. Depending on new to return in a determinate time period that meets Real
>    Time criteria after looking  for freshly available allocatable memory
>    is problematic.  This seems to be implementation-specific.

The same is true for malloc. Indeed, an implementation with
memory overcommitment may cause a considerable delay for just
dereferencing a pointer, if memory gets low (that is, if
the page was not committed). So nothing new with new.

To make the point clear: On a system with memory overcommitment,
the following function may wait indefinitely, kill random
processes or end the program depending on the OS and the
compiler implementation, no matter if you are using new or malloc,
or if you are using C++ or C, if called on uncommitted memory:

void foo(int& i)
{
  i=0;
}

BTW, if you need committed memory, you might overload
operator new with

class committed_t {} committed;

void* operator new(size_t size, committed_t)
{
  // get committed memory somehow and return it
}

Of course this assumes that your implementation offers this
possibility (I guess calloc will generally commit memory,
since it must clear it).
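
For instance, a minimal sketch of such an overload (the declaration
from above is repeated so the fragment stands alone, and calloc is
used only on the guess that it really commits the memory on the
platform at hand):

#include <cstdlib>
#include <new>

class committed_t {} committed;

// Placement allocation function: size_t comes first, extras after.
void* operator new(std::size_t size, committed_t)
{
  void* p = std::calloc(1, size);   // zeroed memory, which (one hopes)
  if (!p)                           // means the pages are committed
    throw std::bad_alloc();
  return p;
}

// Matching placement delete, called if a constructor throws.
void operator delete(void* p, committed_t)
{
  std::free(p);
}

// Usage:
//   int* p = new (committed) int(42);
//   ...
//   std::free(p);   // this memory came from calloc, not operator new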

>
> 3. Upgrading or enhancing legacy code to this new and the libraries depending
>    on it means either attempting to get away with nothrow or having to
>    redesign the memory allocation algorithms.  (Other ideas, like keeping
>    older compilers, have to become untenable.)  Also implementation-specific?

OK, that's a completely different, valid point. You have
to modify your algorithms because of the new error reporting
mechanism. However, that doesn't mean that the new semantics
of new is actually worse; it's an incompatibility between
old code and new compilers. Newly written code will not suffer
from the new new. Old code can be converted by simply replacing
new with new(nothrow), and then gradually updating to the new
behaviour, if desired (that may be a good idea, since most of
the work will be making the code exception safe; this will pay
off as soon as any other exception is thrown as well).
Note that you'd have had to change your code as well if new had
changed from the current to the old behaviour (though
admittedly the changes would have been smaller, since exception
safe code doesn't hurt in the absence of exceptions).
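
As a concrete illustration of that mechanical conversion (Widget and
make_widget are hypothetical stand-ins for legacy code that relied on
a NULL return):

#include <new>

struct Widget { int value; };            // hypothetical legacy class

Widget* make_widget()
{
  // Old code:  Widget* w = new Widget;  // expected NULL on failure
  Widget* w = new (std::nothrow) Widget; // keeps the NULL contract
  if (w == 0) {
    // handle the failure exactly as the old code did
    return 0;
  }
  w->value = 42;
  return w;
}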

And no, this is not implementation specific. Every
implementation must support the nothrow version, and
every implementation must throw bad_alloc for normal new.

>
> The term "implemention-specific" means to me that I will probably be able
> to find a bug when someone is trying to write in standard C++.

By this definition, everything is implementation specific,
since it is well known that every program of considerable
size has at least one bug ;-)

>  OK, I would
> look anyway, but with just a few changes, I would be less likely to
> find one.
>
> 1. The "new" mechanism would be a lot simpler and safer to use if all it did
>    was allocate memory.  Let the delete mechanism handle the garbage
>    collection.

The new mechanism indeed does just allocate memory (and call
constructors; but I doubt that this is your concern).
It allows you to modify this behaviour if desired, but after
all, with C you'd be able to do

  #include <stdlib.h>

  void try_to_free_memory(void);   /* releases cached memory, if any */

  void* myalloc(size_t size)
  {
    void* p = malloc(size);
    if (p) return p;
    try_to_free_memory();
    return malloc(size);
  }

and use myalloc instead of malloc.

The effect would be the same as

  set_new_handler(try_to_free_memory);

with new.
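
For completeness, a minimal sketch of that set_new_handler route; as
pointed out elsewhere in this thread, the handler has to throw
bad_alloc once it has nothing left to free, or the retry loop inside
operator new would never end:

#include <new>

namespace {
  bool cache_dropped = false;     // hypothetical droppable cache

  void try_to_free_memory()
  {
    if (!cache_dropped) {
      cache_dropped = true;       // ...actually release the cached data here
    } else {
      throw std::bad_alloc();     // nothing left to give back
    }
  }
}

int main()
{
  std::set_new_handler(try_to_free_memory);
  // ...the rest of the program allocates with plain new...
  return 0;
}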

>
> 2. Nothrow should be viable without the necessity to mix bad_alloc and nothrow
>    function because of library limitations.  This seems to me to be just as
>    bad as mixing new and malloc usage.

I don't understand what you want to say here.

>
> However, since I was reading this newsgroup last year right through the
> closing date for the spec, I don't advocate even this simple change.
> A deprecation period would be more possible, but maybe not necessary as
> real C++ compilers will not likely appear for a while yet.
>
> Still, right now it looks to me like a compiler could be compliant with the
> C++ Standard as new is documented there, and these weaknesses would remain.

The only weakness I see is temporary (upgrading to the new interface).
As soon as all the code in use is adapted (if only by adding (nothrow)
to each new), there's no problem left to resolve.

Maybe it would have been a good idea to require empty parentheses
after every "normal" new, so a compiler could easily spot the
points where problems might occur.








Author: James Kuyper <kuyper@wizard.net>
Date: 1999/06/26
Christopher Eltschka wrote:
...
> To make the point clear: On a system with memory overcommitment,
> the following function may wait indefinitely, kill random
> processes or end the program depending on the OS and the
> compiler implementation, no matter if you are using new or malloc,
> or if you are using C++ or C, if called on uncommitted memory:
                             ^
> void foo(int& i)
              ^
> {
>   i=0;
> }

Not quite: if you're using C, it's illegal. I'll assume that you
actually meant to use a pointer rather than a reference in the C
version.





Author: Christopher Eltschka <celtschk@physik.tu-muenchen.de>
Date: 1999/06/23
Roland Booker wrote:
>
> > Roland Booker wrote:
> >>
> >> >If your application relies on detecting failed allocations and
> >> >dealing with them, you are out of luck.
> >>
> >> If a standard full memory test were run during stress testing on a
> >> system all of whose processes had been coded in standard C++, what
> >> would be the expected result?
>
> James Kuyper wrote:
>
> > Something. Your question seems under-specified. It depends upon the
> > implementation and the program and where the lack of memory is detected.
> > With luck, a program might run out of memory during dynamic allocation,
> > rather than static or automatic allocation. If so, and a new_handler has
> > been defined or std::bad_alloc is caught, it might be able to do
> > something program-defined to cope with the situation gracefully.
> > Otherwise, something implementation-specific happens.
>
> Suppose the debugger tracks to a new that doesn't return; then another
> process, possibly even a system process crashes.  Suppose this has been
> happening for more than a month?  Suppose the product gets shipped
> anyway because it only happens once a day and the machine boots back up
> in about 4 minutes?  Or suppose the process just hangs on the new,
> like processes already do if the network is not responding?
>
> Murphy's Law just seems anecdotal.
>
> The delay while new shuffles for memory is not a robustness measure,
> it is one more place where interaction between processes can cause
> system-wide difficulties; and it is introduced into the environment by
> this attribute of the C++ standard.
>
> Real time designers now have to be more careful with the use of new.
>
> The challenge of dealing with(out) the return of new is one more place
> that will make it difficult to move into the dialect of the C++ standard
> from its predecessors, eg, almost all the *alloc routines return NULL
> on allocation failure.
>
> At some point in the future, I may have the opportunity to write
> test programs that demonstrate these problems; but not yet, as there
> isn't a conforming compiler.
>
> This is a serious change in a sensitive area of system function.
> A period of deprecation of the older practice would seem in order,
> rather than just saying new doesn't have to return null and
> incorporating that design into the libraries.

I disagree. I expect the new new to throw bad_alloc _everywhere_
where a traditional allocation function had returned NULL.
Of course you can change that deliberately by using
set_new_handler, which may free up some memory in your
own program (f.ex. cache memory which is used for performance
reasons, but not really necessary for your program).
However, even with traditional malloc, some real systems start
killing random processes on low memory - and not using C++
doesn't save you from this; even using the system allocation
routines from assembler can result in such behaviour.
What happens in the case of full memory is mostly a decision
of the operating system; only if the operating system tells
you about the problem do you have a chance to pass that info
on to your program.

Of course a C++ implementation can decide to pre-allocate a fixed
heap, and only allocate from that. Then you'll never affect
other processes after your program has started - at the cost of
not being able to use more memory than preallocated at the
beginning. For certain applications, this might be a good
solution.
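
The programmer can do the same thing per class; a minimal sketch of
that pre-allocated heap idea, where the pool size, the bump-pointer
allocation, and the lack of alignment handling and reuse are all
simplifying assumptions:

#include <cstddef>
#include <new>

class Message {
public:
  static void* operator new(std::size_t size)
  {
    if (used + size > sizeof(pool))
      throw std::bad_alloc();     // the private heap is exhausted
    void* p = pool + used;
    used += size;                 // bump-pointer allocation, never reused
    return p;
  }
  static void operator delete(void*) {}   // nothing is ever given back

private:
  static char pool[64 * 1024];    // reserved once, at program start
  static std::size_t used;
  int payload[16];
};

char Message::pool[64 * 1024];
std::size_t Message::used = 0;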

However, C++ has the same opportunities as every other language
with dynamic memory allocation. The only difference is *how*
the error is reported (if at all).

And if you encounter low memory on the stack, I don't think
_any_ implementation will do something useful (though C++
implementations at least have a mechanism to do so: they
could define a stack_overflow exception which is thrown
in that case - in C you're out of luck entirely).

I think your mistake is that you interpret "may not return NULL"
as "may not fail". This is wrong. new _may_ fail - just that
it reports the failure by throwing an exception instead of
by returning NULL.








Author: James.Kanze@dresdner-bank.com
Date: 1999/06/24
In article <37708F18.3D524499@physik.tu-muenchen.de>,
  Christopher Eltschka <celtschk@physik.tu-muenchen.de> wrote:

> And if you encounter low memory on the stack, I don't think
> _any_ implementation will do something useful (though C++
> implementations at least have a mechanism to do so: they
> could define an stack_overflow exception which is thrown
> in that case - in C you're out of luck entirely).

In C++ as well.  Your suggestion will run into problems in functions
declared with a throw() -- including delete!

--
James Kanze                         mailto:James.Kanze@dresdner-bank.com
Conseils en informatique orientée objet/
                        Beratung in objektorientierter Datenverarbeitung
Ziegelhüttenweg 17a, 60598 Frankfurt, Germany  Tel. +49 (069) 63 19 86 27








Author: Roland Booker <rbooker@baynetworks.com>
Date: 1999/06/24

> I disagree. I expect the new new to throw bad_alloc _everywhere_
> where a traditional allocation function had returned NULL.

Yours is the third post that seems to have missed my meaning in the
earlier posts; I apologize for not being clearer.

Actually I understand and agree with most of this (Harvey Taylor's) thread.
Our mutual understanding of the function in this area around new is the same,
as the remainder of your post indicates (to me at least). Where we differ is
in our viewpoints. I believe you are most interested in the success of the
approach. I am looking at it where I expect it to fail, (based on my past
work in testing systems and processes at memory boundaries as a QA test
engineer).

My original post was a query whether memory limit tests were viable for
C++ programs.  Steve Clamage confirmed that they were.  As you point out,
the old kinds of problems at the memory boundary are not going away.
What I am trying to get across now is that, as I understand from what
other posters have written, it looks like there are at least three more
ways to encounter difficulty if new functions as documented.

1. Trying to make other dynamically allocated memory available creates a
   system level race as to who gets it.  In the best possible result, it
   *should* affect only the process owning the new.  There is a real chance
   that the effect will escape if (for example) at some level new/delete is
   shared among processes, or if the delete actually frees memory to the
   whole system rather than the current practice of keeping it for the local
   process, or as pointed out earlier in the thread for some other
   implementation-specific reason.

2. Depending on new to return in a determinate time period that meets Real
   Time criteria after looking  for freshly available allocatable memory
   is problematic.  This seems to be implementation-specific.

3. Upgrading or enhancing legacy code to this new and the libraries depending
   on it means either attempting to get away with nothrow or having to
   redesign the memory allocation algorithms.  (Other ideas, like keeping
   older compilers, have to become untenable.)  Also implementation-specific?

The term "implementation-specific" means to me that I will probably be able
to find a bug when someone is trying to write in standard C++.  OK, I would
look anyway, but with just a few changes, I would be less likely to
find one.

1. The "new" mechanism would be a lot simpler and safer to use if all it did
   was allocate memory.  Let the delete mechanism handle the garbage
   collection.

2. Nothrow should be viable without the necessity to mix bad_alloc and nothrow
   function because of library limitations.  This seems to me to be just as
   bad as mixing new and malloc usage.

However, since I was reading this newsgroup last year right through the
closing date for the spec, I don't advocate even this simple change.
A deprecation period would be more possible, but maybe not necessary as
real C++ compilers will not likely appear for a while yet.

Still, right now it looks to me like a compiler could be compliant with the
C++ Standard as new is documented there, and these weaknesses would remain.

The opinions are mine.  The good ideas belong to others.










Author: Pete Becker <petebecker@acm.org>
Date: 1999/06/22
Howard Hinnant wrote:
>
> <not intended to be a flame>

<not taken as one>.

Besides, having had to cope with the impact on the standard library of
MFC's #define of new to essentially new(__FILE__, __LINE__), it's not a
benign thing to do. So I withdraw my weak approval.

--
Pete Becker
Dinkumware, Ltd.
http://www.dinkumware.com





Author: James Kuyper <kuyper@wizard.net>
Date: 1999/06/22
Roland Booker wrote:
>
> >If your application relies on detecting failed allocations and
> >dealing with them, you are out of luck.
>
> If a standard full memory test were run during stress testing on a
> system all of whose processes had been coded in standard C++, what
> would be the expected result?

Something. Your question seems under-specified. It depends upon the
implementation and the program and where the lack of memory is detected.
With luck, a program might run out of memory during dynamic allocation,
rather than static or automatic allocation. If so, and a new_handler has
been defined or std::bad_alloc is caught, it might be able to do
something program-defined to cope with the situation gracefully.
Otherwise, something implementation-specific happens.





Author: James.Kanze@dresdner-bank.com
Date: 1999/06/22
In article <37696DF7.6542@despam.pangea.ca>,
  Harvey Taylor <het@despam.pangea.ca> wrote:
>
>  I see with some perturbation in this month's C++ Report the article
>  by Michael Ball & Steve Clamage on dealing with the transition to
>  the ISO standard new & delete, specifically the exception(s) raised
>  when memory allocation fails.
>
>  I had (foolishly) believed this transition would be painless
>  because a long, long time ago it was possible under MSVC to define
>  a new_handler which returned NULL and thus get the effect of
>  std::nothrow without changing N thousands, (millions?) of lines
>  of code.
>
>  Alas it seems this is not the case, but I would like to verify
>  the situation.

It is not the case.

>  Under an olde MSVC the new handler was:
>   typedef int (__cdecl * _PNH)( size_t );
>
>  Under the CD2 the new handler was:
>   typedef void (*new_handler)();
>
>  The new handler seems to be the same under the ISO standard.
>  ie. typedef void (*new_handler)();
>
>  If you are in an organization with a ton of legacy code which
>  expects to see a NULL returned on memory allocation failure,
>  is it true these are the only options?
>   1) change N lines of code.

This is the *only* option.  I repeat, the only one.

>   2) ignore memory allocation failures.

Which results in an incorrect program.  If you can accept an incorrect
program, "int main() { return 0 ; }" avoids the problem completely (and
is doubtlessly significantly faster than your current implementation).
Its semantics may not meet your requirements specifications, but neither
do programs which terminate unexpectedly.

>   3) replace the global new/delete functions.

Fine, but the replacements must fulfill the contract specified in the
standard.  And that contract says: never return a null pointer.

This is a damned if you do, damned if you don't situation.  The rest of
the libraries which came with the new compiler (iostream, etc.) have
been written under the supposition that new never returns null, and
don't test for null.  If new returns null, they will crash.  And your
existing code has been written under the assumption that new never
throws an exception, and is doubtless not exception safe.  Depending
on how you write code, making it exception safe may mean a major
design effort.

As another poster pointed out, your best bet is to stick with the old
compiler at least until the next major rewrite.

>  BTW, if the new_handler takes void, it doesn't know how much
>  memory to free up, assuming it is doing garbage collection,
>  which means it could go round the "fail allocation, call
>  new_handler, free a bit of memory, return" loop N times, which
>  should make for some _interesting_ performance issues.

The general solution is for the new_handler to free up everything it
can.  And if it can't free up anything, to throw the exception.

--
James Kanze                         mailto:James.Kanze@dresdner-bank.com
Conseils en informatique orientée objet/
                        Beratung in objektorientierter Datenverarbeitung
Ziegelhüttenweg 17a, 60598 Frankfurt, Germany  Tel. +49 (069) 63 19 86 27







Author: James.Kanze@dresdner-bank.com
Date: 1999/06/22
In article <3769B7D7.DC4AE63A@acm.org>,
  Pete Becker <petebecker@acm.org> wrote:

> Greg Colvin wrote:
> >

> > That's one way.  If you are really desparate you can

> >    #define new new(nothrow)

> > although I expect I'll get flamed for suggesting it.

> Yes, and deservedly so. After all, this results in code that is written
> in a language that is entirely different from C++.

> Seriously, if this is what's needed, then it's what's needed. No grounds
> for flames here.

The real problem with the above is program maintenance.  What happens if
the macro is buried in some header file (probably the case), and I write
code which uses it without knowing it?  Or should I systematically check
for null, just on the odd chance that someone might have defined such a
macro?

The problem is even worse with libraries.  I'm willing to bet that the
implementor of the standard library isn't prepared for this if the macro
happens to be active when I instantiate one of his templates.  Not to
mention something called the one definition rule when the library
contains object files which use the template, and were compiled by the
vendor before delivery to you (obviously without your special macro).

--
James Kanze                         mailto:James.Kanze@dresdner-bank.com
Conseils en informatique orientée objet/
                        Beratung in objektorientierter Datenverarbeitung
Ziegelhüttenweg 17a, 60598 Frankfurt, Germany  Tel. +49 (069) 63 19 86 27








Author: Roland Booker <rbooker@baynetworks.com>
Date: 1999/06/23
> Roland Booker wrote:
>>
>> >If your application relies on detecting failed allocations and
>> >dealing with them, you are out of luck.
>>
>> If a standard full memory test were run during stress testing on a
>> system all of whose processes had been coded in standard C++, what
>> would be the expected result?

James Kuyper wrote:

> Something. Your question seems under-specified. It depends upon the
> implementation and the program and where the lack of memory is detected.
> With luck, a program might run out of memory during dynamic allocation,
> rather than static or automatic allocation. If so, and a new_handler has
> been defined or std::bad_alloc is caught, it might be able to do
> something program-defined to cope with the situation gracefully.
> Otherwise, something implementation-specific happens.

Suppose the debugger tracks to a new that doesn't return; then another
process, possibly even a system process crashes.  Suppose this has been
happening for more than a month?  Suppose the product gets shipped
anyway because it only happens once a day and the machine boots back up
in about 4 minutes?  Or suppose the process just hangs on the new,
like processes already do if the network is not responding?

Murphy's Law just seems anecdotal.

The delay while new shuffles for memory is not a robustness measure,
it is one more place where interaction between processes can cause
system-wide difficulties; and it is introduced into the environment by
this attribute of the C++ standard.

Real time designers now have to be more careful with the use of new.

The challenge of dealing with(out) the return of new is one more place
that will make it difficult to move into the dialect of the C++ standard
from its predecessors, eg, almost all the *alloc routines return NULL
on allocation failure.

At some point in the future, I may have the opportunity to write
test programs that demonstrate these problems; but not yet, as there
isn't a conforming compiler.

This is a serious change in a sensitive area of system function.
A period of deprecation of the older practice would seem in order,
rather than just saying new doesn't have to return null and
incorporating that design into the libraries.





Author: "John D. Hickin" <hickin@Hydro.CAM.ORG>
Date: 1999/06/18
Harvey Taylor wrote:
>

>         Under an olde MSVC the new handler was:
>                 typedef int (__cdecl * _PNH)( size_t );

This seems to be a strictly Microsoft invention. Indeed, there are two
functions, set_new_handler and _set_new_handler, that MSVC implements.
set_new_handler uses function pointers of the form void (*)().
Moreover, the Annotated C++ Reference Manual (1990) defines
set_new_handler to use pointers of this type.

>
>         Under the CD2 the new handler was:
>                 typedef void (*new_handler)();
>

which, as noted above, is the same as it was for _ARM_ C++.

>         The new handler seems to be the same under the ISO standard.
>         ie.     typedef void (*new_handler)();
>
>         If you are in an organization with a ton of legacy code which
>         expects to see a NULL returned on memory allocation failure,
>         is it true these are the only options?
>                 1) change N lines of code.
>                 2) ignore memory allocation failures.
>                 3) replace the global new/delete functions.

4) Save your old compilers. I'm not trying to be facetious here.
Changing compilers should be viewed no differently than changing
anything else in a project; you should go through the entire test plan
even if the code compiled without so much as a warning. Where I work we
have some systems whose test plans take a few man-months to execute;
saving the old compiler is definitely a cost-effective alternative.

>
>         BTW, if the new_handler takes void, it doesn't know how much
>         memory to free up, assuming it is doing garbage collection,
>         which means it could go round the "fail allocation, call
>         new_handler, free a bit of memory, return" loop N times, which
>         should make for some _interesting_ performance issues.
>

In single-threaded code a global variable suffices to communicate the
number of bytes desired. In MT code there is a definite chance that some
other thread might grab any storage that you freed so you might as well
free up a large amount.
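
A minimal sketch of the single-threaded variant, with a replaced
global operator new parking the request size in a global before it
loops on the handler (all of the names here are illustrative):

#include <cstddef>
#include <cstdlib>
#include <new>

static std::size_t requested_bytes = 0;   // size of the failed request

static void free_some_memory()
{
    // A real handler would consult requested_bytes and try to release
    // at least that much; this sketch has nothing to release, so it
    // reports failure (otherwise the loop below would never end).
    throw std::bad_alloc();
}

void* operator new(std::size_t size) throw(std::bad_alloc)
{
    requested_bytes = size;
    for (;;) {
        if (void* p = std::malloc(size))
            return p;
        std::new_handler h = std::set_new_handler(0);  // peek at the handler
        std::set_new_handler(h);                       // and put it back
        if (h == 0)
            throw std::bad_alloc();
        h();                                 // may free memory or throw
    }
}

void operator delete(void* p) throw()
{
    std::free(p);
}

int main()
{
    std::set_new_handler(free_some_memory);
    int* p = new int[100];    // on failure, goes through the loop above
    delete [] p;
    return 0;
}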

Regards, John.





Author: "Greg Colvin" <address.suppressed@by.request>
Date: 1999/06/18
Harvey Taylor <het@despam.pangea.ca> wrote in message
news:37696DF7.6542@despam.pangea.ca...
>
> Greetings,
> I see with some perturbation in this month's C++ Report the article
> by Michael Ball & Steve Clamage on dealing with the transition to
> the ISO standard new & delete, specifically the exception(s) raised
> when memory allocation fails.
>
> I had (foolishly) believed this transition would be painless
> because a long, long time ago it was possible under MSVC to define
> a new_handler which returned NULL and thus get the effect of
> std::nothrow without changing N thousands, (millions?) of lines

That's more calls to new than I have ever heard of in one place.
Are you really checking for null on every single one of them?

> of code.
>
> Alas it seems this is not the case, but I would like to verify
> the situation.

The situation is as you fear, and worse.  Mea culpa, in part,
but we all discussed this at great length, and I still think we
made the right choice.  There just wasn't any truly painless
alternative.

We found the set_new_handler approach to be a fragile hack, and
we feared the code bloat of the compiler having to check for both
null returns and bad_alloc exceptions at every call to operator
new.  Some people wanted to eliminate the returning null altogether,
others hated throwing an exception, but we decided in the end to
support both approaches with a distinct syntax.  So be thankful,
it could have been worse.

> Under an olde MSVC the new handler was:
> typedef int (__cdecl * _PNH)( size_t );

And in the current one I think.  If your code doesn't need to port
away from Microsoft compilers you may be able to replace this handler
rather than the standard one and get the behavior you wanted.

> Under the CD2 the new handler was:
> typedef void (*new_handler)();
>
> The new handler seems to be the same under the ISO standard.
> ie. typedef void (*new_handler)();
>
> If you are in an organization with a ton of legacy code which
> expects to see a NULL returned on memory allocation failure,
> is it true these are the only options?
> 1) change N lines of code.

That's one way.  If you are really desperate you can

   #define new new(nothrow)

although I expect I'll get flamed for suggesting it.

It may be that you can leave most of your existing code untouched
and just place some catch(bad_alloc&) statements at a few strategic
places, but it all depends...

> 2) ignore memory allocation failures.

That's another.  I have never succeeded in getting a Microsoft OS
to run out of memory without crashing anyway, and I'm told that
when AIX runs out of memory it just starts randomly killing
processes.  This is a deliberate design decision: malloc in AIX
only maps some virtual memory, which rarely fails -- it doesn't
commit the memory until it is actually addressed.  I'll probably
get flamed for suggesting being an ostrich, but on Windows or AIX
or any similar system I'd just let the operator new throw, if ever
it does, and try to write your program so that it dies a reasonably
painless death.

> 3) replace the global new/delete functions.

Nope.  The non-nothrow versions of global new are not allowed to
return null, and the compiler is not required to check for null.

Also, I have found that replacing global operator new doesn't
work anyway if you are using DLLs and the Microsoft C++ library.

> BTW, if the new_handler takes void, it doesn't know how much
> memory to free up, assuming it is doing garbage collection,
> which means it could go round the "fail allocation, call
> new_handler, free a bit of memory, return" loop N times, which
> should make for some _interesting_ performance issues.

Yep.  The new handler is rarely all that useful, except as a last
ditch attempt to avoid dying.  As I hinted above, on some systems
it might just as well call abort.





Author: clamage@eng.sun.com (Steve Clamage)
Date: 1999/06/18
Harvey Taylor <het@despam.pangea.ca> writes:

> I see with some perturbation in this month's C++ Report the article
> by Michael Ball & Steve Clamage on dealing with the transition to
> the ISO standard new & delete, specifically the exception(s) raised
> when memory allocation fails.

> I had (foolishly) believed this transition would be painless
> because a long, long time ago it was possible under MSVC to define
> a new_handler which returned NULL and thus get the effect of
> std::nothrow without changing N thousands, (millions?) of lines
> of code.

During the development of the standard, the committee wrestled
long and hard with the problem of breaking user code. For a
time, drafts of the standard allowed the throw/nothrow
decision to be a property of the new-handler, and even had
a footnote recommending that practice. Some C++ vendors,
seeking to keep current with a changing specification, released
libraries that worked that way.

It was soon realized that such a specification made it impossible
for a user program to know whether operator new would throw an
exception or return a null pointer on allocation failure. A
portable program would either have to install and uninstall a
new-handler with known properties around every new-expression,
or write code that accepted both behaviors:
    try {
        T* p = new T;
        if( p == NULL ) {
            ... handle failure
        }
    }
    catch( const bad_alloc& ) {
        ... handle failure
    }

In no case could an existing program expect to continue to
work unchanged.

The only rational choices seemed to be to leave new-expressions
unchanged from the ARM, which the committee did not want to
do, or adopt the solution we have now.

> Alas it seems this is not the case, but I would like to verify
> the situation.

> If you are in an organization with a ton of legacy code which
> expects to see a NULL returned on memory allocation failure,
> is it true these are the only options?
>  1) change N lines of code.
>  2) ignore memory allocation failures.
>  3) replace the global new/delete functions.

Your number 3 is not an option in a standard-conforming program.
You can replace the global new and delete, but they MUST
have the same exception behavior as the standard versions.
If not, the results of your program are undefined. It is possible
that the C++ runtime library will quit working, since it will
use new/delete internally, and assume standard behavior.

The C++ Report column (June 1999, "Dark Corners") lists all the
options we could think of, along with sample code revisions.
We were unable to think of any proactive solution that did not
require touching every use of the global operator new.

We explain in the article that you might consider ignoring the
failures. In many common situations, there isn't anything you
can do about a failed allocation, and attempting to recover
gracefully might be worse than just letting the program abort.

If your application relies on detecting failed allocations and
dealing with them, you are out of luck.

--
Steve Clamage, stephen.clamage@sun.com








Author: Pete Becker <petebecker@acm.org>
Date: 1999/06/18
Greg Colvin wrote:
>
>
> That's one way.  If you are really desparate you can
>
>    #define new new(nothrow)
>
> although I expect I'll get flamed for suggesting it.
>

Yes, and deservedly so. After all, this results in code that is written
in a language that is entirely different from C++.

Seriously, if this is what's needed, then it's what's needed. No grounds
for flames here.

--
Pete Becker
Dinkumware, Ltd.
http://www.dinkumware.com








Author: Darron Shaffer <darron.shaffer@beasys.com>
Date: 1999/06/18
Harvey Taylor <het@despam.pangea.ca> writes:

> Greetings,
>  I see with some perturbation in this month's C++ Report the article
>  by Michael Ball & Steve Clamage on dealing with the transition to
>  the ISO standard new & delete, specifically the exception(s) raised
>  when memory allocation fails.
>
>  I had (foolishly) believed this transition would be painless
>  because a long, long time ago it was possible under MSVC to define
>  a new_handler which returned NULL and thus get the effect of
>  std::nothrow without changing N thousands, (millions?) of lines
>  of code.
>
>  Alas it seems this is not the case, but I would like to verify
>  the situation.
>
(snip)
>
>  If you are in an organization with a ton of legacy code which
>  expects to see a NULL returned on memory allocation failure,
>  is it true these are the only options?
>   1) change N lines of code.

This fixes your problem.

>   2) ignore memory allocation failures.

This MAY fix your problem, depending on your OS & application.

>   3) replace the global new/delete functions.
>

It is illegal to return NULL from ::operator new(size_t).  Even if you
do this and your compiler generates code that doesn't call constructors
on NULL pointers, you will still be in danger from other libraries that
expect new to throw on allocation failures.

Third party libraries are the reason that a global throw/NULL setting
is unworkable.

Solution (4), catching memory exceptions at strategic places in your
code, is another possibility.
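
For instance, a minimal sketch of solution (4), with run_job standing
in for existing code that allocates freely with plain new:

#include <new>

// Hypothetical legacy code path that uses plain new and may throw.
int run_job()
{
    int* data = new int[1024];
    data[0] = 1;
    int result = data[0];
    delete [] data;
    return result;
}

int safe_run_job()
{
    try {
        return run_job();
    }
    catch (const std::bad_alloc&) {
        // One strategic recovery point instead of a check at every new:
        // log, release resources, or fail just this piece of work.
        return -1;
    }
}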

--
 __  __  _    Enterprise Middleware Solutions Darron J Shaffer
 _ ) ___ _\   BEA Systems Inc.   Sr. Software Engineer
 __) __    \  4965 Preston Park Blvd, Ste 500 darron.shaffer@beasys.com
              Plano, TX 75093     Voice: (972) 943-5137
              http://www.beasys.com  Fax:   (972) 943-5111









Author: hinnant@_anti-spam_metrowerks.com (Howard Hinnant)
Date: 1999/06/19
In article <3769B7D7.DC4AE63A@acm.org>, Pete Becker <petebecker@acm.org> wrote:

> Greg Colvin wrote:
> >
> >
> > That's one way.  If you are really desparate you can
> >
> >    #define new new(nothrow)
> >
> > although I expect I'll get flamed for suggesting it.
> >
>
> Yes, and deservedly so. After all, this results in code that is written
> in a language that is entirely different from C++.
>
> Seriously, if this is what's needed, then it's what's needed. No grounds
> for flames here.

One should be careful.
Consider placement new:

#define new new(nothrow)
....
T a;
new (&a) T(x);

It preprocesses to:

T a;
new(nothrow) (&a) T(x);

It'll probably break a few standard headers.  That may or may not be
acceptable for your code.

-Howard

<not intended to be a flame>





Author: Harvey Taylor <het@despam.pangea.ca>
Date: 1999/06/19
Pete Becker wrote:
> Greg Colvin wrote:
> >
> > That's one way.  If you are really desparate you can
> >    #define new new(nothrow)
> > although I expect I'll get flamed for suggesting it.
> >
>
> Yes, and deservedly so. After all, this results in code that is written
> in a language that is entirely different from C++.
>
> Seriously, if this is what's needed, then it's what's needed. No grounds
> for flames here.
>

 I considered adding this sort of a hack solution to my original
 post but decided not to; not for any reasons of coding purity,
 but because using it would preclude easily combining old (expects NULL)
 style code and new style (nothrow) code.
<ciao>
-het


--
 "Simplicity is the ultimate sophistication."  - old Apple logo

   Harvey Taylor     het@despam.pangea.ca





Author: Roland Booker <rbooker@baynetworks.com>
Date: 1999/06/19
>If your application relies on detecting failed allocations and
>dealing with them, you are out of luck.

If a standard full memory test were run during stress testing on a
system all of whose processes had been coded in standard C++, what
would be the expected result?





Author: clamage@eng.sun.com (Steve Clamage)
Date: 1999/06/20
Roland Booker <rbooker@baynetworks.com> writes:

>>If your application relies on detecting failed allocations and
>>dealing with them, you are out of luck.

>If a standard full memory test were run during stress testing on a
>system all of whose processes had been coded in standard C++, what
>would be the expected result?

I don't understand your question. Without knowing anything about
the test code or the system under test, how could anyone
predict anything?

My "out of luck" comment above applied to the specific case of
code written to an earlier language version but compiled with a
standard-conforming implementation. The original question was
about getting the old program behavior (null return from a
failed allocation request) without modifying the program code.
That you cannot do, but you can get the old behavior with
relatively minor code modifications.

--
Steve Clamage, stephen.clamage@sun.com





Author: Roland Booker <rbooker@baynetworks.com>
Date: 1999/06/21
As a practical matter, bugs can be expected whenever code in this area
is touched, whether the decision is to go with new(nothrow) and work
with the new libraries or to change ptr != 0 checks to try-catch bad_alloc.
It may seem simple, but the failure is catastrophic, and the change
has a lot in common with the void main vs. int main issue (this is not
a troll) that keeps coming up here and in the moderated group from time
to time.








Author: Harvey Taylor <het@despam.pangea.ca>
Date: 1999/06/17
Greetings,
 I see with some perturbation in this month's C++ Report the article
 by Michael Ball & Steve Clamage on dealing with the transition to
 the ISO standard new & delete, specifically the exception(s) raised
 when memory allocation fails.

 I had (foolishly) believed this transition would be painless
 because a long, long time ago it was possible under MSVC to define
 a new_handler which returned NULL and thus get the effect of
 std::nothrow without changing N thousands, (millions?) of lines
 of code.

 Alas it seems this is not the case, but I would like to verify
 the situation.

 Under an olde MSVC the new handler was:
  typedef int (__cdecl * _PNH)( size_t );

 Under the CD2 the new handler was:
  typedef void (*new_handler)();

 The new handler seems to be the same under the ISO standard.
 ie. typedef void (*new_handler)();

 If you are in an organization with a ton of legacy code which
 expects to see a NULL returned on memory allocation failure,
 is it true these are the only options?
  1) change N lines of code.
  2) ignore memory allocation failures.
  3) replace the global new/delete functions.

 BTW, if the new_handler takes void, it doesn't know how much
 memory to free up, assuming it is doing garbage collection,
 which means it could go round the "fail allocation, call
 new_handler, free a bit of memory, return" loop N times, which
 should make for some _interesting_ performance issues.

<cordially>
-het


--
 "Simplicity is the ultimate sophistication."  - old Apple logo

   Harvey Taylor     het@despam.pangea.ca

