Topic: A BIG bug in the language definition (was: Void pointer problem)
Author: rfg@netcom.com (Ronald F. Guilmette)
Date: Mon, 1 Nov 1993 09:20:22 GMT
In article <2a51okINNjjp@emx.cc.utexas.edu> jamshid@emx.cc.utexas.edu (Jamshid Afshar) writes:
>In article <CF5MB7.M6v@ucc.su.oz.au>,
>John Max Skaller <maxtal@physics.su.OZ.AU> wrote:
>> Unfortunately, I think delete might be wrong. Perhaps it should have
>>been a virtual non-static member. A slight modification
>>to the above example shows why:
>>
>> struct B { virtual ~B() {} };
>> struct D : public B {
>>     void *operator new(size_t);
>>     void operator delete(void*);
>> };
>> B* b = new D;
>> delete b; // core dump
>>
>>'new' always requires the complete object type, so being static
>>is fine for it. But while a delete calls the correct
>>destructor via the virtual destructor mechanism, and while
>>delete deletes a pointer to the complete object, again via
>>the virtual destructor mechanism, the wrong delete
>>operator is called.
>[possible solutions to this problem deleted]
>
>Hold your horses, C++ actually gets this right.
Not hardly.
>I think the ARM is explicit enough about this to even satisfy Ron ;-).
Not hardly.
>See the end of ARM 12.5.
OK. I've seen it and I get the point. But the point is plainly nonsensical.
The point made at the end of 12.5 is that delete() functions are
*not* called where they ought to be called (i.e. where any sane person
would expect them to be called, at the point where you see a delete
expression) but rather, compilers are expected to generate implicit calls
to delete() functions WITHIN DESTRUCTORS. That is plainly nonsensical,
and the results obtained from implementations which do this are plainly
nonsensical.
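Just so we're all clear on what that scheme actually looks like, here is
a rough sketch (ordinary C++, with invented names like `__free_flag'...
real implementations use their own internal conventions) of what a
cfront-style implementation effectively makes of a destructor:

struct X {
    void operator delete (void*);
    ~X ();
};

// What the programmer writes as X::~X() effectively becomes a function
// taking a hidden flag which says whether to deallocate:
void __X_dtor (X* __this, int __free_flag)
{
    // ... the user-written destructor body runs here ...
    if (__free_flag)
        X::operator delete (__this);  // the implicit call, INSIDE the dtor
}
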
Let's say I have a particular class type `B' and that I wish to ensure that
no user of B is ever able to create or destroy a B out in the heap. (This
seems like a reasonable thing to want to do in certain circumstances.) Now
consider the following code, in which I ensure this by giving B its own
new and delete operators, and by making them private to B:
#include <stddef.h>

struct B
{
private:
    void *operator new (size_t);
    void operator delete (void*);
public:
    ~B () { }
};

struct D : public B { ~D () { } };

void
fubar ()
{
    D d;    // error!!??
}
Note that nowhere in this code do I make ANY attempt to either allocate or
deallocate ANY sort of object ON THE HEAP.
So this code should be fine, right? Wrong! Most compilers issue an error
on the code shown above BECAUSE B::delete IS PRIVATE! (Per 12.5, the
destructor for D itself contains an implicit, conditional call to the
inherited B::delete, and that implicit call is where the access check
fires... even though no delete expression appears anywhere in the code.)
Now you tell me... If I am NEVER allocating or deallocating anything on
the heap, then (by definition) I am NEVER using any new or delete operators,
yes? Thus, it should make no difference whatsoever what the accessibility
of the new and/or delete operators is, right?
But most implementations *do* gripe about the fact that the delete() operator
is private! Is this the most absurd nonsense you have ever seen in your
life or what? You be the judge.
I happen to think that this is absolutely the most blatant example of a
case in which the current C++ language definition is absolutely, positively,
and without doubt BADLY BROKEN and UTTERLY RIDICULOUS.
The problem quite clearly is that calls to delete operators WHICH SHOULD
BE GENERATED INLINE AT THE POINT OF A DELETE EXPRESSION are instead
generated (implicitly) AS THE VERY LAST THING IN THE BODY OF EACH AND
EVERY DESTRUCTOR... EVEN IN CASES WHERE THOSE CALLS WILL NEVER BE
EXERCISED.
(This is not only patently ridiculous, it is potentially quite wasteful
of code space also.)
Is there *any* reason why 12.5 encourages implementors to generate the
delete() calls within the bodies of destructors rather than inline at
the points where delete expressions actually appear? Well, all I can
say is that if anyone ever finds ANY reason for such nonsense, please
clue me in. As far as I can tell this is not only bad from a language
design standpoint, it is also utterly pointless. The calls to delete()
should be generated where we see the delete expressions. Period.
Nothing else makes any sense.
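For the record, there is nothing exotic about the obvious alternative.
For a simple class with no virtual destructor, a delete expression could
just be expanded at the call site, as if the programmer had written the
following (a sketch only; the polymorphic case needs the virtual
destructor mechanism on top of this, but the call to the delete()
operator still belongs here, at the call site):

// `delete p;' where p is a B*:
if (p != 0) {
    p->~B ();                  // run the destructor...
    B::operator delete (p);    // ...then call delete() RIGHT HERE
}
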
And while we are on the subject, let me also note that a perfectly
analogous bit of nonsense also applies in the case of constructors and
`new' operators. Are calls to `new' operators generated where we would
intuitively expect them to be... i.e. at the points where we see `new'
expressions? No. Of course not. That would be too sensible. Instead
there are some mumbo-jumbo rules which (in effect) force implementations
to do the same silly thing that cfront does, i.e. generate calls to
`new' operators as the first thing within each constructor function.
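Concretely, that amounts to something like the following (again just a
sketch with invented names, assuming X declares its own operator new;
the hidden null-pointer test is the part to notice):

// What the programmer writes as X::X() effectively becomes:
X* __X_ctor (X* __this)
{
    if (__this == 0)           // called from a `new' expression?
        __this = (X*) X::operator new (sizeof (X));
    // ... the user-written constructor body runs here ...
    return __this;
}

// `new X' then compiles to roughly:     __X_ctor (0);
// and a local `X x;' to something like: __X_ctor (&x);
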
Again, this is wasteful of code space (especially in cases where I have
no intention whatsoever of EVER using ANY heap space) both in terms of
the size of the generated code for constructors, which end up being bloated,
*and* in terms of the amount of extra "standard library" stuff which may
get linked into my program. (I get the library version of ::new linked
in whether or not I have any intention of ever using it! How does *that*
strike all you folks out there doing embedded programming and trying to
squeeze your code into some small memory footprint?) The same goes for
::delete. If I have so much as one destructor anywhere in my program, then
I will get the library version of ::delete linked in whether I want it or not.
Is this a good way to design a language feature?? You be the judge.
Oh yes, and lest I forget, let me point out that this whole ball of
silliness is also wasteful in terms of execution time, as well as code
space. The calls to the `new' and `delete' operators which C++ compilers
now secretly (and quietly) embed at the beginnings of constructors and at
the ends of destructors (respectively) are *only* called when the object
being constructed or destructed resides in the heap. How do the constructor
and destructor functions know whether they were called for a "heap resident"
object or not? Simple, the compiler arranges to pass them ADDITIONAL
hidden parameters WHOSE VALUES MUST BE CHECKED (to see if the constructor
should call `new' or the destructor should call `delete' respectively).
Both the additional parameter passing *and* these additional checks take
time. And in both cases, this overhead could be totally eliminated if
a more obvious and straightforward approach to new/construction and
delete/destruction were permitted in the language (i.e. if it were NOT
the case that all implementations were required to be exactly as silly
as cfront with respect to these things).
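To spell out where the time goes, compare the two schemes at a single,
utterly ordinary destruction site (same invented names as in the
sketches above):

// cfront-style scheme: every destructor call passes a flag, and
// every destructor body executes a test of that flag:
__X_dtor (&x, 0);     // stack object: flag passed, tested, ignored

// call-site scheme: no flag, no test; delete() calls simply do
// not appear at sites which never deallocate:
__X_dtor (&x);        // stack object: nothing extra at all
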
So to summarize, the current rules with respect to construction & new
and destruction & delete cause the normal accessibility rules of the
language to get totally bent (as illustrated by my example above).
Additionally, the current way of doing things in this area is highly
non-intuitive, and is also quite inefficient, both in terms of space
and speed.
'nuff said.
I might mention also that I pointed out to Margaret Ellis some time ago
(over in comp.lang.c++) that I felt that examples like the one I gave
above (only for constructors/new rather than destructors/delete) clearly
illustrated a serious bug in cfront (and other implementations). I guess
she was just too flabbergasted by the realization of just how messed up
things really are in this area, because she never even responded.
P.S. While it *is* true that most C++ implementations *do* indeed issue
an (unwarranted?) error on my code example (above), I have found that one
particular PC-based implementation (which I shall not name) apparently
does not... but that is only because of a separate bug in that
implementation. (It doesn't even realize that class-specific delete
operators are supposed to be inherited, so it never even tries to use
B::delete. It erroneously uses ::delete instead... and ::delete
definitely *is* accessible, at all points in the code example. Thus no
error is issued.)
--
-- Ronald F. Guilmette, Sunnyvale, California -------------------------------
------ domain address: rfg@netcom.com ---------------------------------------
------ uucp address: ...!uunet!netcom.com!rfg -------------------------------