Topic: operator new -vs- operator new[]
Author: Christopher Eltschka <celtschk@physik.tu-muenchen.de>
Date: 1998/07/06
Gary Mussar wrote:
[...]
> >- that an implementation that decided to use that for allocations
> > with new (not malloc or new[]) could still be conforming
> > (this could be wrong due to something I'm missing *in the standard*).
>
> An implementation can do what ever it wants for allocation. The standard
> doesn't enforce anything. There are at least 2 standard methods for
> tracking the number of instances for new[] when used with simple
> OS allocators like malloc. (One involves allocating additional memory
> for a header to save the number of instances, the other involves using
> associative arrays with the memory address as the key.) There are
> some implementations that use different memory pools for new and
> new[]. This is all stuff that must be ported to allow the implementation
> to be used on an embedded system.
Well, I'll be more precise: my claim was that an implementation
using such an allocation scheme could conform to the standard
and have a delete that actually frees the memory (that is, subsequent
new calls may return the same memory again), without storing the size
of (non-array) new-allocated objects anywhere (unless I missed
something).
However, I've now found a counterexample to that, so I must
admit that I really did miss something (although my claim was still correct,
since it explicitly contained the "unless" clause ;-)).
The following program is AFAIK fully conforming, but cannot
really free the block without storing its size somehow.
#include <cstring>   // for std::memset
#include <new>       // for ::operator delete

class Whatever {};

int main()
{
    // create a new Whatever:
    Whatever* p = new Whatever;
    // Now destruct it in place
    p->~Whatever();
    // destroy whatever the destructor may have left in the memory
    std::memset(p, 0, sizeof(Whatever));
    // and finally free the memory
    operator delete(p);
}
The memset is there because one could imagine that the destructor
leaves the size in the now-raw memory for operator delete. However,
anything inside the allocated memory gets destroyed by the memset,
so operator delete must get the size from elsewhere, which means
operator new must have stored it somewhere for operator delete to find.
(But I again want to make my "unless" clause: I again might have
overlooked something...)
[...]
I don't think it's worth arguing about the other points any more,
since I've now found my main assumption to be wrong.
Author: "Gary Mussar" <Gary.Mussar.mussar@nt.com>
Date: 1998/07/03
Christopher Eltschka wrote in message <359A397C.1328C48D@physik.tu-muenchen.de>...
>> 1) Throw the runt piece away. Unfortunately, this usually means you have
>> lost it forever. If the user frees their piece of memory, it cannot be coalesced
>> into a larger piece.
>
>Why not? If you have a header, you probably can afford a single bit
>in it which tells you if the block is unusable. Then on freeing the
>preceding block, you just look at the block following, and if it
>has the "unusable" flag set, merge it into your freed block.
>The size of the unusable block is well known.
Sigh. I don't want to debate the decisions made by the various OS
manufacturers. In the particular OS I have described, there are no
free bits. All memory is tracked as being on the free list or belonging
to some thread/process. Keeping runt pieces on the free list adds
a significant amount of time to the search for an appropriate size
piece of memory and is something to be avoided. (BTW, this is a
commercially available, embedded OS, not something we wrote
ourselves and not something we would be modifying.) More complex
schemes can be created but there are always tradeoffs. It sounds like
you believe you have a scheme of low cost, high performance, trackable
memory management. I hope you package it and make some money off
it. Once it is ubiquitous, you can then lobby the C++ standard
committee to change the requirements of the C++ allocators to
require/use your memory management API.
>However, if there are header blocks at the beginning, it doesn't
>make much sense not to store the size, since not storing the size
>would only be done to *avoid* those headers.
In this particular OS, you do not have the option of excluding a
header. The header is used for tracking and sanity testing in
a realtime system. The size of the header is constrained by the
alignment requirements. Any allocated piece of memory will have
16 byte alignment (which is why the header is a multiple of 16
bytes and allocations are rounded to a 16 byte boundary). This
"feature" of the memory subsystem would normally imply that
the application had better not require more stringent alignment
than 16 bytes (although it is possible to build an overlay to the
system that could provide this).
>A memory allocation system such as I had in mind would be able to use
>every block of available memory (where "block" is the size to which
>you round up an allocation of 1). An example of such a heap
>management would be the Turbo Pascal 6.0 heap:
Did Turbo Pascal 6.0 heap management handle multi-threading?
Gary Mussar <mussar@nortel.ca> Nortel
Phone: +1-613-763-4937 FAX: +1-613-763-9406
Author: Christopher Eltschka <celtschk@physik.tu-muenchen.de>
Date: 1998/07/03
Gary Mussar wrote:
>
> Christopher Eltschka wrote in message <359A397C.1328C48D@physik.tu-muenchen.de>...
> >> 1) Throw the runt piece away. Unfortunately, this usually means you have
> >> lost it forever. If the user frees their piece of memory, it cannot be coalesced
> >> into a larger piece.
> >
> >Why not? If you have a header, you probably can afford a single bit
> >in it which tells you if the block is unusable. Then on freeing the
> >preceding block, you just look at the block following, and if it
> >has the "unusable" flag set, merge it into your freed block.
> >The size of the unusable block is well known.
>
> Sigh. I don't want to debate the decisions made by the various OS
> manufacturers. In the particular OS I have described, there are no
> free bits. All memory is tracked as being on the free list or belonging
> to some thread/process. Keeping runt pieces on the free list adds
> a significant amount of time to the search for an appropriate size
> piece of memory and is something to be avoided. (BTW, this is a
> commercially available, embedded OS, not something we wrote
> ourselves and not something we would be modifying.) More complex
> schemes can be created but there are always tradeoffs. It sounds like
> you believe you have a scheme of low cost, high performance, trackable
> memory management. I hope you package it and make some money off
> it. Once it is ubiquitous, you can then lobby the C++ standard
> committee to change the requirements of the C++ allocators to
> require/use your memory management API.
>
Well, I always thought that the usual way to handle heaps in C(++)
would be to allocate larger blocks from the OS and do the heap
management inside the block yourself, if only for efficiency
reasons. OS memory allocations will probably be more costly in every
case, since to access memory not already attached to a process, there
will likely be a context switch (and a second one back to the program
after allocation). That sounds like a lot of overhead just for allocating
a few bytes.
Moreover, if you let the OS handle all memory allocations, and
the OS stores the memory block size anyway, there's no reason for
you to store the size again - the OS will use its own size anyway,
so for this case I'm obviously correct (although in a different way).
I'm not arguing for a change in the heap management API;
rather, I claim that (unless I'm missing something) the *current*
standard allows such a heap management (that is, an implementation
using such a heap management could be conforming). This is
independent of what any existing implementation does or any
existing OS/ABI/whatever demands.
To make it clear:
I argue:
- that it is possible to do such a heap management in principle
(Proof: TP 6.0 used it)
- that an implementation that decided to use that for allocations
with new (not malloc or new[]) could still be conforming
(this could be wrong due to something I'm missing *in the standard*).
I don't argue
- that this heap management is a good one, an efficient one,
superior to other heap management strategies, applicable to
every system, and so on.
- that the standard should be changed to allow such a heap management
with new[] and malloc() (and possibly to allow it with new,
if I'm wrong with my second point)
I can imagine that, for example, on embedded systems with tight memory,
a heap management that avoids headers where they are not needed could
be an advantage; however, I don't know whether there is any drawback
to this on embedded systems (maybe the added complexity of
different strategies for new and malloc would be unacceptable).
> >However, if there are header blocks at the beginning, it doesn't
> >make much sense not to store the size, since not storing the size
> >would only be done to *avoid* those headers.
>
> In this particular OS, you do not have the option of excluding a
> header. The header is used for tracking and sanity testing in
> a realtime system. The size of the header is constrained by the
> alignment requirements. Any allocated piece of memory will have
> 16 byte alignment (which is why the header is a multiple of 16
> bytes and allocations are rounded to a 16 byte boundary). This
> "feature" of the memory subsystem would normally imply that
> the application had better not require more stringent alignment
> than 16 bytes (although it is possible to build an overlay to the
> system that could provide this).
If the header is added by the OS, it's nothing the C++ compiler
has to care about. In particular, the code generated by new
will *not* have to store the size.
But this is no argument against what I wrote anyway, since this
OS isn't a definition of what the C++ standard allows
(if we were discussing a *requirement*, things would be
quite different).
BTW, in the environment that OS runs on, you are probably not
too concerned about your memory consumption, are you?
>
> >A memory allocation system such as I had in mind would be able to use
> >every block of available memory (where "block" is the size to which
> >you round up an allocation of 1). An example of such a heap
> >management would be the Turbo Pascal 6.0 heap:
>
> Did Turbo Pascal 6.0 heap management handle multi-threading?
Does the C++ standard require implementations to support multi-threading?
(BTW, with appropriate locks, every heap management becomes MT-safe
- it's only slow then ;-))
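A sketch of that lock-wrapping idea (the std::mutex here is a present-day
convenience used purely for illustration; plain malloc/free stand in for
the non-MT-safe heap, and the function names are made up):

#include <cstddef>
#include <cstdlib>
#include <mutex>

// One global lock serializes every call into the underlying,
// non-thread-safe allocator.
static std::mutex heap_mutex;

void* locked_allocate(std::size_t n)
{
    std::lock_guard<std::mutex> guard(heap_mutex);
    return std::malloc(n);
}

void locked_deallocate(void* p)
{
    std::lock_guard<std::mutex> guard(heap_mutex);
    std::free(p);
}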
Author: Christopher Eltschka <celtschk@physik.tu-muenchen.de>
Date: 1998/07/01
Gary Mussar wrote:
[...]
> In many systems, the memory management code allocates an invisible
> header on the piece of memory to track how large it is, track the owner,
> track the memory pool used, etc. When the system allocates a piece of
> memory, it allocates a slightly larger piece to account for this header. If
> the system is splitting a piece of free memory into 2 smaller parts (one
> for the user and the remainder kept on the free list), it makes absolutely no
> sense to keep pieces that cannot be used for minimum allocations. The
> size of the piece for minimum allocation must include the size of the tracking
> header.
>
> In the case I was mentioning, the header was 16 bytes long and all allocations
> were rounded to 16 byte boundaries. So if the user asked for an allocation of
> 2 bytes, the system would need a piece of memory that was at least 18
> bytes long. This would get rounded to 32 bytes. If the piece of free memory was
> 256 bytes long, the system would split off the 32 byte part and leave a 224 byte
> piece on the free list. If the piece of free memory was 48 bytes long, the system
> wouldn't bother splitting it up since the remaining piece of 16 bytes would
> only be large enough to hold the tracking header and no user data.
>
> At this point you can:
>
> 1) Throw the runt piece away. Unfortunately, this usually means you have
> lost it forever. If the user frees their piece of memory, it cannot be coalesced
> into a larger piece.
Why not? If you have a header, you probably can afford a single bit
in it which tells you if the block is unusable. Then on freeing the
preceding block, you just look at the block following, and if it
has the "unusable" flag set, merge it into your freed block.
The size of the unusable block is well known.
However, if there are header blocks at the beginning, it doesn't
make much sense not to store the size, since not storing the size
would only be done to *avoid* those headers. I guess you wouldn't
care anyway about the memory that might be saved this way, if you have
plenty of it. And only in such systems are detailed headers, which
contain more than the size, reasonable (if there's not much
memory, you won't bother to divide it further into pools, for example).
A memory allocation system such as I had in mind would be able to use
every block of available memory (where "block" is the size to which
you round up an allocation of 1). An example of such a heap
management would be the Turbo Pascal 6.0 heap: here, no sizes of used
blocks were stored; only the free blocks were kept in a free list,
which was written directly into the free blocks (plus three pointers
in static memory). The size of the block was determined by the size
of the list items. Of course, such a model is not applicable to
malloc or to new[], but IMHO it would be to new. It would also
make sense for new, since you might allocate quite small objects
- something which probably won't happen too often with new[].
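C++ itself already offers one hook for this idea: a class whose only
member operator delete takes a second size_t parameter gets the size
handed back by the delete-expression, so nothing has to be stored per
block. A sketch (the Node class and the malloc/free bodies are just
placeholders):

#include <cstddef>
#include <cstdlib>

struct Node {
    static void* operator new(std::size_t n)  { return std::malloc(n); }
    static void  operator delete(void* p, std::size_t n)
    {
        // n == sizeof(Node) here, supplied by the delete-expression,
        // not read back out of any per-block header.
        (void)n;
        std::free(p);
    }
    int value;
};

void example()
{
    Node* p = new Node;   // calls Node::operator new(sizeof(Node))
    delete p;             // calls Node::operator delete(p, sizeof(Node))
}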
[...]
Author: "Gary Mussar" <Gary.Mussar.mussar@nt.com>
Date: 1998/07/01
Christopher Eltschka wrote in message <3597B88C.A18CB841@physik.tu-muenchen.de>...
>I don't see any advantage in allocating that extra memory. Indeed,
>you can't use it for allocations of more than 16 or 32 bytes. But if
>you add it to the allocated block, you can't allocate it for
>*anything*. Therefore I can't see any advantage for this.
>
>Rounding up to a 16 byte boundary would be no problem, since
>this is a predictable function, that is, given a requested size (and
>only that), you can calculate the real size.
In many systems, the memory management code allocates an invisible
header on the piece of memory to track how large it is, track the owner,
track the memory pool used, etc. When the system allocates a piece of
memory, it allocates a slightly larger piece to account for this header. If
the system is splitting a piece of free memory into 2 smaller parts (one
for the user and the remainder kept on the free list), it makes absolutely no
sense to keep pieces that cannot be used for minimum allocations. The
size of the piece for minimum allocation must include the size of the tracking
header.
In the case I was mentioning, the header was 16 bytes long and all allocations
were rounded to 16 byte boundaries. So if the user asked for an allocation of
2 bytes, the system would need a piece of memory that was at least 18
bytes long. This would get rounded to 32 bytes. If the piece of free memory was
256 bytes long, the system would split off the 32 byte part and leave a 224 byte
piece on the free list. If the piece of free memory was 48 bytes long, the system
wouldn't bother splitting it up since the remaining piece of 16 bytes would
only be large enough to hold the tracking header and no user data.
At this point you can:
1) Throw the runt piece away. Unfortunately, this usually means you have
lost it forever. If the user frees their piece of memory, it cannot be coalesced
into a larger piece.
2) Find some way to keep track of all the runt pieces.
3) Give the user just a little more than they asked for. This is happening
anyway because of the rounding. The same mechanism takes care of both.
As strange as it may be, quite a few OSs do 3).
Gary Mussar <mussar@nortel.ca> Nortel
Phone: +1-613-763-4937 FAX: +1-613-763-9406
Author: clamage@Eng.Sun.COM (Steve Clamage)
Date: 1998/06/29
AllanW@my-dejanews.com writes:
>In article <35928158.3DC99BA7@physik.tu-muenchen.de>,
> Christopher Eltschka <celtschk@physik.tu-muenchen.de> wrote:
>>
>> AllanW@my-dejanews.com wrote:
>>
>> [...]
>>
>> > There is no standard way of finding the size of a memory block given only
>> > its address. (Should there be? operator new, new[], malloc(), etc.
>> > already have to maintain this someplace anyway; it would be a trivial
>> > problem for library developers to add some standard way of getting the
>> > size... But would that encourage bad coding practices?)
>>
>> Is new really required to store the size in every case?
>There are many special cases where this is not required. But I assume
>(is that going to get me in trouble? :-) that doing so would be more
>trouble for the compiler writer than it's worth.
As always, let's be clear about the difference between the
promises and restrictions in the language definition, and
what an implementation might do. Let's just consider the
allocation and deallocation of raw storage, without the
other odds and ends, like exceptions and new-handlers.
The requirements on operator new are that properly-aligned
space of at least the requested size is allocated, and the
space requested does not get reallocated without an intervening
delete. (If more space is allocated than is requested, no
valid program can detect that it happened. Hence, "at least".)
The requirements on operator delete are ... none! (Well, it
can't trash allocated memory outside the scope of a valid
delete request.) An implementation (or a programmer) could
provide an operator delete that does nothing. That might
even make sense in an application environment where it was
known that returning allocated storage was not necessary.
So if the operator delete does nothing, the matching operator
new does not have to store the allocated size anywhere.
Now for more realistic cases.
1. Memory management consists of lists of fixed-size memory
blocks. An allocation request is satisfied by supplying a
fixed-size block that is big enough. The size of the block
is implicit based on the group it is attached to (somehow).
The actual requested size is not relevant and is not stored
anywhere. This is actually a very good style of memory
management. As I describe it here, if you ask for more
memory than the largest-available predefined block,
allocation fails. A real implementation would pick some
reasonable maximum preallocated size, and resort to
more traditional means for bigger requests.
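A sketch of that first scheme, hypothetical and stripped to its core (one
pool of one fixed block size; blocks obtained from malloc are simply
recycled through the free list and never handed back):

#include <cstddef>
#include <cstdlib>

// All blocks in the pool have the same size, so neither the requested nor
// the allocated size has to be stored with any individual block.
class FixedPool {
    union Block { Block* next; char raw[32]; };  // 32-byte blocks, chosen arbitrarily
    Block* free_list;
public:
    FixedPool() : free_list(0) {}

    void* allocate(std::size_t n)
    {
        if (n > sizeof(Block))
            return 0;                            // a real allocator would fall back here
        if (free_list) {
            Block* b = free_list;
            free_list = b->next;
            return b;
        }
        return std::malloc(sizeof(Block));       // grow the pool one block at a time
    }

    void deallocate(void* p)
    {
        Block* b = static_cast<Block*>(p);       // the size is implicit: it's a pool block
        b->next = free_list;
        free_list = b;
    }
};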
2. Memory-management uses the "buddy system" as explained
in Knuth. (DEC VAX/VMS actually used that system as one
of its options.) The amount of allocated space is implicit in
the address. The requested size is not stored anyplace, and
neither is the allocated size. This memory management technique
is too inefficient for general-purpose use.
There are more requirements on new-expressions and delete-
expressions (like keeping an element count so that destructors
can be called correctly), but those occur at a higher level.
I've discussed those issues in an earlier article.
--
Steve Clamage, stephen.clamage@sun.com
Author: Christopher Eltschka <celtschk@physik.tu-muenchen.de>
Date: 1998/06/29
Gary Mussar wrote:
[...]
> I'm not implying that you are advocating poor practices. I do want to
> point out that rounding the allocated size but saving the actual request
> size doesn't address the second reason I mentioned which is the
> allocator not wanting to leave useless small chunks of memory in the
> pool after splitting off a piece. In the example I gave, the memory
> manager would round up to a 16 byte boundary and would not leave
> a piece smaller than 32 bytes. This means that the actual size
> of the block may be 0-47 bytes longer than the requested size.
> If you saved just the requested size, you would not have enough
> information to determine the real size.
I don't see any advantage in allocating that extra memory. Indeed,
you can't use it for allocations of more than 16 or 32 bytes. But if
you add it to the allocated block, you can't allocate it for
*anything*. Therefore I can't see any advantage for this.
Rounding up to a 16 byte boundary would be no problem, since
this is a predictable function, that is, given a requested size (and
only that), you can calculate the real size.
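For instance, a trivial sketch of that pure function for a 16-byte
boundary (the function name is made up):

#include <cstddef>

// Round a requested size up to the next multiple of 16.
std::size_t rounded_size(std::size_t requested)
{
    return (requested + 15) & ~static_cast<std::size_t>(15);
}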
Author: AllanW@my-dejanews.com
Date: 1998/06/27
In article <35928158.3DC99BA7@physik.tu-muenchen.de>,
Christopher Eltschka <celtschk@physik.tu-muenchen.de> wrote:
>
> AllanW@my-dejanews.com wrote:
>
> [...]
>
> > There is no standard way of finding the size of a memory block given only
> > its address. (Should there be? operator new, new[], malloc(), etc.
> > already have to maintain this someplace anyway; it would be a trivial
> > problem for library developers to add some standard way of getting the
> > size... But would that encourage bad coding practices?)
>
> Is new really required to store the size in every case?
There are many special cases where this is not required. But I assume
(is that going to get me in trouble? :-) that doing so would be more
trouble for the compiler writer than it's worth.
[Snip explanation of the specific cases]
> So the only case where the size needs to be stored would be for
> objects without non-trivial destructor and/or a deallocation function,
> since only those can be deleted with a pointer to incomplete type.
Or arrays of those types.
> One could even remove the need to store the size in that case by making
> that size a symbol at class definition time, and referring to that
> symbol at the delete expression. This is possible, since delete
> with base class pointers is only allowed for virtual destructors,
> and virtual destructors are never trivial (since the implicitly
> declared destructor isn't virtual, if you don't inherit from
> a class with virtual destructor, there must be an explicitly
> declared destructor in the inheritance graph and only implicit
> destructors can be trivial - see 12.4/3).
This only works if you're deleting a non-array object.
> So we really have three different requirements:
>
> malloc(): must store the size for free()
> new: need not store the size, since delete can calculate it
> new[]: need not store the size as well, but must store the element count
If you consider malloc to allocate an array of chars (but aligned in a way
appropriate for any variable type), then in the first case, saving the
size is the same as saving the number of chars. This is conceptually true
for the third case as well, since the original requested size (not
including any required overhead) can always be converted into the element
count and vice-versa.
> Anything I've missed?
I'm afraid so. The programmer can replace operator new[] with code that
returns suitable memory for the new-expression. In this case, the
replacement code will be required to save the size somewhere, so that
the programmer's replacement operator delete[] can work correctly. But
the programmer doesn't have to provide any mechanism for the
delete-expression to determine the number of elements. Therefore, the
new-expression must save the size or the number of elements somewhere,
*independent* of the size maintained by operator new[].
That was the point of my original post. The seemingly-redundant size
(or count) has to be stored somewhere. It's naive (but easy) to think
that one count can serve both purposes.
There is one more consideration. Doing this imposes a burden on the
programmer to get the new/new[] and delete/delete[] usage just right,
otherwise the results will be quite unpleasant. Of course, the
programmer already has this burden, but I've noticed that in many
shops they assume that the damage from this error is minimal.
(See several threads in this group about assumed behavior when one uses
delete instead of delete[], even though the only thing that the
standard says is that it's "undefined.")
Author: David R Tribble <david.tribble@noSPAM.central.beasys.com>
Date: 1998/06/26
I, David R Tribble wrote:
>> A variation of the second scheme is to save the size of the new'd
>> memory block (which contains an array of objects) with the block
>> itself, and to compute the number of elements by dividing the total
>> size of the block by the size of the individual array elements.
>> This has the possible advantage that, since ::new probably stores
>> the size of the new'd object with the object (as described above),
>> this doesn't require any extra storage when new'ing an array. In
>> other words, the count of the array elements is not actually stored
>> anywhere as such, but is computed at delete time from the size of
>> the array (which is stored somewhere) and the size of the element
>> type.
Gary Mussar <Gary.Mussar.mussar@nt.com> wrote:
> You would get a rude awakening with some memory managers that I have
> used. What is invisibly stored by ::new is usually the real size of
> the piece of memory allocated, not the size that was asked for. Some
> memory managers will round requests up to a particular boundary and/or
> not split a block of available memory if the remaining piece is less
> than a "useful" size. (e.g. One memory manager would round to a
> 16 byte boundary and wouldn't split a block if the left over piece
> would be less than 32 bytes).
Pete Becker wrote:
> Most memory managers round block sizes up to some convenient size.
> That may be something nearby like the next larger multiple of 8, or it
> may be something further away like the next power of 2. The size of
> the block does not tell you how many objects the block actually holds.
Both of these are correct.
However, a memory manager that rounds up the size of the allocated
block but stores the actual request size with the block could use
the divide-by-sizeof method to determine the number of array items
in the block. Obviously, it would have to round up the stored size
when it did the actual block deallocation.
I'm not advocating any particular strategy, I'm just pointing out
a possible reason why using 'delete' where 'delete[]' should be used
sometimes works.
And BTW, I'm certainly not advocating using 'delete' where 'delete[]'
should be used; I believe in correct coding.
Along this line of discussion, and strictly for argument's sake,
we could extend the syntax of C++ to catch these kinds of bugs.
Consider stealing an idea from the Java syntax:
void func()
{
    Foo[] a;          // 'a' points to an array of Foo
    a = new Foo[10];  // Okay
    delete[] a;       // Okay
    a = new Foo;      // Error
    delete a;         // Error
}
The declaration of 'a' defines it to be a pointer to an array,
similar to 'Foo * a', but specifically telling the compiler that
it is an array pointer. (To avoid misinterpretation, we state
that 'a' is a pointer to the first element of an array of Foo,
rather than a pointer to an entire Foo array, so that '*a' and
'a[i]' still work as before.)
The first pair of 'new' and 'delete' allocate and deallocate an
array correctly, so the compiler doesn't complain. The second
pair, however, attempt to 'new' and 'delete' a non-array object,
which could be caught and flagged by the compiler.
The only problem with this extended syntax that I can see is
that it confuses the meaning of abstract declarators such as
'(Foo[])', which could mean either '(Foo*)', pointer-to-Foo (the
present meaning) or '(Foo[])', pointer-to-array-of-Foo.
-- David R. Tribble, david.tribble@central.beasys.com --
Author: "Gary Mussar" <Gary.Mussar.mussar@nt.com>
Date: 1998/06/27
David R Tribble wrote:
> Gary Mussar <Gary.Mussar.mussar@nt.com> wrote:
> > You would get a rude awakening with some memory managers that I have
> > used. What is invisibly stored by ::new is usually the real size of
> > the piece of memory allocated, not the size that was asked for. Some
> > memory managers will round requests up to a particular boundary and/or
> > not split a block of available memory if the remaining piece is less
> > than a "useful" size. (e.g. One memory manager would round to a
> > 16 byte boundary and wouldn't split a block if the left over piece
> > would be less than 32 bytes).
>
> However, a memory manager that rounds up the size of the allocated
> block but stores the actual request size with the block could use
> the divide-by-sizeof method to determine the number of array items
> in the block. Obviously, it would have to round up the stored size
> when it did the actual block deallocation.
I'm not implying that you are advocating poor practices. I do want to
point out that rounding the allocated size but saving the actual request
size doesn't address the second reason I mentioned which is the
allocator not wanting to leave useless small chunks of memory in the
pool after splitting off a piece. In the example I gave, the memory
manager would round up to a 16 byte boundary and would not leave
a piece smaller than 32 bytes. This means that the actual size
of the block may be 0-47 bytes longer than the requested size.
If you saved just the requested size, you would not have enough
information to determine the real size.
--
Gary Mussar <mussar@nortel.ca> Phone: (613) 763-4937
Nortel FAX: (613) 763-9406
Author: David R Tribble <david.tribble@noSPAM.central.beasys.com>
Date: 1998/06/24
AllanW@my-dejanews.com writes:
>> But this is true anyway, because the library already has to hold on
>> to the length of the object somewhere. ...
>> The compiler has to leave this information somewhere else, not part
>> of the info block defined by the standard global operator new,
>> because that might not be what got called. So it has to store it in
>> ... um ...
>> How the heck does this work, anyway?
Steve Clamage wrote:
> ...
> In the case of an array allocation, the new-expression must
> save the number of elements someplace so that number can be
> found later.
> ...
> I know of two common ways to save the number of array elements:
> [1] in a separate associative array, or [2] by requesting more
> storage than is needed and saving it there.
> ...
>
> [2] To store the count with the allocated data, ask operator new
> for m extra bytes, store the count at the address A returned,
> > and use the value A+m as the pointer returned to the program.
> The first object gets created at address A+m, not at address A.
> A delete-expression gets a value B. It retrieves the count from
> address B-m, calls N destructors relative to address B, then
> calls operator delete using address B-m.
A variation of the second scheme is to save the size of the new'd
memory block (which contains an array of objects) with the block
itself, and to compute the number of elements by dividing the total
size of the block by the size of the individual array elements.
This has the possible advantage that, since ::new probably stores
the size of the new'd object with the object (as described above),
this doesn't require any extra storage when new'ing an array. In
other words, the count of the array elements is not actually stored
anywhere as such, but is computed at delete time from the size of
the array (which is stored somewhere) and the size of the element
type.
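In outline (and ignoring for the moment the rounding issue raised
elsewhere in this thread), that recovery step would be nothing more than
the following; the function name is made up for illustration:

#include <cstddef>

// Recover the element count from a stored block size, assuming the
// stored size is exactly n * sizeof(T).
template <class T>
std::size_t element_count(std::size_t stored_block_size)
{
    return stored_block_size / sizeof(T);
}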
This is another argument for using 'delete[]' on arrays instead
of 'delete', since the 'delete[]' operator must know that it has
to compute the number of elements in the array being deleted
(so that it can call the right number of destructors) instead of
simply assuming it's deleting only a single object.
I suspect that most implementations use just this sort of scheme,
which is why bad code that calls 'delete' when it should be
calling 'delete[]' still manages to call the right number of
destructors and free the correct number of bytes; such
implementations probably use the same code for both 'delete' and
'delete[]', and both always do the division.
void test()
{
    Foo* a;
    Foo* b;
    a = new Foo[10];
    // Allocates 10*sizeof(Foo) bytes, and
    // Calls Foo::Foo() 10 times
    b = new Foo[10];
    // Ditto
    delete[] a;
    // Calls Foo::~Foo() 10 times, and
    // Deallocates 10*sizeof(Foo) bytes
    delete b;
    // ERROR! But probably still...
    // Calls Foo::~Foo() 10 times, and
    // Deallocates 10*sizeof(Foo) bytes
}
-- David R. Tribble, david.tribble@central.beasys.com --
C++, the PL/1 of the 90s.
Author: clamage@Eng.Sun.COM (Steve Clamage)
Date: 1998/06/24
"Paul D. DeRocco" <pderocco@ix.netcom.com> writes:
>Steve Clamage wrote:
>>
>> It matters both in theory and in practice. Operator new[] was
>> added to C++ so that array allocation could use a different
>> memory pool from single-object allocation. If an implementation
>> takes advantage of that (and I must assume that the implementors
>> who asked for the enhancement do so), your program will fail
>> if you mismatch the forms of new and delete.
>I thought it was because in the case of an array of objects with
>destructors, it needs to be told to look for the length of the array
>somewhere, so that it knows how many times to call the destructor. The
>different operator allows this to be done for arrays, without burdening
>single allocations with the same overhead.
We might be talking about different things. Some history
might help.
It has always been the rule in C++ that you create a single
heap object with "new" and delete it with "delete"; and that
you create a C-style array of objects with "new[]" and delete
it with "delete[]".
In very early C++, there was only one "operator new" that
served for all allocations, and one matching "operator delete".
For an array, the programmer must specify how many objects
were wanted, and the "[n]" syntax did that. For deleting
an array, the programmer was required to supply the same
value of n in the delete expression, e.g.
delete [n] p;
At that time (pre-1989) it mattered in practice whether you
matched up the new and delete only if the object had a destructor
that needed to be run. The language rule said you needed to
match them up, but programmers found that code still worked
if they ignored the rule for simple types:
char *p = new char[1000];
...
delete p; // wrong, but it worked
Because having to supply the object count was inconvenient
and error-prone, in about 1989 the delete syntax was changed
to remove the object count. The implementation was required
to remember the count. The programmer still needed to know
whether one object or an array was allocated, and to use the
correct form of delete.
If the C++ runtime system had to keep track of the size
of arrays, why not eliminate the two different forms
entirely? The reason is that storing and retrieving the
size would have to be done for every allocation, adding
noticeable overhead to allocation of single objects. For
arrays of objects with destructors, the overhead would
be insignificant.
For simple types, compilers usually still didn't do anything
special for delete[], since there was still only one form
of operator new that served both single objects and arrays.
That is, allocating char or int arrays was made just as
efficient as allocating a single object of the same size.
The code above continued to work, but was still technically
(or pedantically) wrong.
Early in the C++ standardization process, some users and
implementors wanted the capability to specify different memory
pools for arrays (presumed to be large) than for single objects
(presumed not to be large). The pools in some cases were to be
in different categories of physical memory. Graphics workstations
were the driving force, where arrays can be very large indeed.
Once operator new[]() and operator delete[]() were introduced,
the "wrong" code above became wrong in practice as well as
in theory. If a system chose to implement separate memory
pools, using the wrong form of delete could not possibly
produce a correct result.
How much of a burden is this on programmers? Not much.
Usually you know that you are allocating a single object
or an array, and it is obvious from the program design.
When it isn't obvious, you have at least two choices that
remove the burden of remembering or checking:
1. Use an array class -- a single object -- instead of a
C-style array.
2. Always allocate a C-style array instead of single object.
If you just need one object, use an array size of 1.
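For instance (illustrative only, with std::vector standing in for any
array class):

#include <vector>

void example()
{
    // 1. A single array object instead of a C-style array:
    std::vector<int>* v = new std::vector<int>(100);
    delete v;            // plain delete: v points to exactly one object

    // 2. Always allocate a C-style array, even for one element:
    int* p = new int[1];
    delete[] p;          // always the array form
}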
--
Steve Clamage, stephen.clamage@sun.com
Author: jkanze@otelo.ibmmail.com
Date: 1998/06/24
In article <358F3999.57D04BF1@ix.netcom.com>,
"Paul D. DeRocco" <pderocco@ix.netcom.com> wrote:
>
> Steve Clamage wrote:
> >
> > It matters both in theory and in practice. Operator new[] was
> > added to C++ so that array allocation could use a different
> > memory pool from single-object allocation. If an implementation
> > takes advantage of that (and I must assume that the implementors
> > who asked for the enhancement do so), your program will fail
> > if you mismatch the forms of new and delete.
>
> I thought it was because in the case of an array of objects with
> destructors, it needs to be told to look for the length of the array
> somewhere, so that it knows how many times to call the destructor. The
> different operator allows this to be done for arrays, without burdening
> single allocations with the same overhead.
This is probably the reason why the new operator behaves differently if
it is allocating an array, and why you have two distinct delete operators.
It is probably not the reason why the operator new and operator new[] are
two distinct functions, which is what Steve Clamage is talking about.
--
James Kanze +33 (0)1 39 23 84 71 mailto: kanze@gabi-soft.fr
+49 (0)69 66 45 33 10 mailto: jkanze@otelo.ibmmail.com
GABI Software, 22 rue Jacques-Lemercier, 78000 Versailles, France
Conseils en informatique orientée objet --
-- Beratung in objektorientierter Datenverarbeitung
Author: "Gary Mussar" <Gary.Mussar.mussar@nt.com>
Date: 1998/06/25
David R Tribble wrote:
> A variation of the second scheme is to save the size of the new'd
> memory block (which contains an array of objects) with the block
> itself, and to compute the number of elements by dividing the total
> size of the block by the size of the individual array elements.
> This has the possible advantage that, since ::new probably stores
> the size of the new'd object with the object (as described above),
> this doesn't require any extra storage when new'ing an array.
You would get a rude awakening with some memory managers that I have used.
What is invisibly stored by ::new is usually the real size of the piece
of memory allocated, not the size that was asked for. Some memory
managers will round requests up to a particular boundary and/or
not split a block of available memory if the remaining piece is less
than a "useful" size. (e.g. One memory manager would round to a
16 byte boundary and wouldn't split a block if the left over piece
would be less than 32 bytes).
--
Gary Mussar <mussar@nortel.ca> Phone: (613) 763-4937
Nortel FAX: (613) 763-9406
Author: AllanW@my-dejanews.com
Date: 1998/06/25
In article <359045BA.7677@noSPAM.central.beasys.com>,
david.tribble@noSPAM.central.beasys.com wrote:
>
> AllanW@my-dejanews.com writes:
> >> But this is true anyway, because the library already has to hold on
> >> to the length of the object somewhere. ...
> >> The compiler has to leave this information somewhere else, not part
> >> of the info block defined by the standard global operator new,
> >> because that might not be what got called. So it has to store it in
> >> ... um ...
> >> How the heck does this work, anyway?
>
> Steve Clamage wrote:
> > ...
> > In the case of an array allocation, the new-expression must
> > save the number of elements someplace so that number can be
> > found later.
> > ...
> > I know of two common ways to save the number of array elements:
> > [1] in a separate associative array, or [2] by requesting more
> > storage than is needed and saving it there.
> > ...
> >
> > [2] To store the count with the allocated data, ask operator new
> > for m extra bytes, store the count at the address A returned,
> > and use the value A+m as the pointer returned to the program.
> > The first object gets created at address A+m, not at address A.
> > A delete-expression gets a value B. It retrieves the count from
> > address B-m, calls N destructors relative to address B, then
> > calls operator delete using address B-m.
>
> A variation of the second scheme is to save the size of the new'd
> memory block (which contains an array of objects) with the block
> itself, and to compute the number of elements by dividing the total
> size of the block by the size of the individual array elements.
If you're saying that new T[5] would save 5*sizeof(T) instead of 5, then
the two are essentially equivalent.
But if you're saying that the internal size, stashed away *somewhere*
by operator new, is also used to determine how many objects to
destruct -- well, this is exactly what I realized CANNOT work properly,
as I wrote the words quoted above.
It seems "obvious" that operator new works this way, for t=new T[3]:
- Calculate the size needed for 3 T's; let's call this S.
- Find some memory. We'll call the address t.
- Store S somewhere. This could be in some construct deep inside
the library's implementation of operator new[] and operator
delete[].
- Call T's default constructor on t, t+sizeof(T), t+sizeof(T)*2.
Then delete[]t would do this:
- Look up the value of S. THIS IS WHERE THE PROBLEM COMES IN.
- Calculate the number of elements allocated as S/sizeof(T).
- Since there are 3 elements in the array, call T's destructor on
t, t+sizeof(T), t+sizeof(T)*2.
- Call operator delete[] to return the memory to the free pool.
But this fails when operator new[] is replaced. What happens instead:
- Calculate the size needed for 3 T's; let's call this S.
- Call the replacement operator new[](S) to get some memory; call the
address t. The replacement operator new[] is responsible for saving
the memory block size somewhere.
- Call T's default constructor on t, t+sizeof(T), t+sizeof(T)*2.
Then delete[]t would do this:
- Look up the value of S in the bookkeeping kept by the standard
library's operator new[]. But this block was never recorded
there, so the lookup fails.
There is no standard way of finding the size of a memory block given only
it's address. (Should there be? operator new, new[], malloc(), etc.
already have to maintain this someplace anyway; it would be a trivial
problem for library developers to add some standard way of getting the
size... But would that encourage bad coding practices?)
Steve Clamage said that the number of elements could be stored in either
an associative array, or in a hidden int at the beginning of the block.
This makes a lot of sense. It's very natural to suppose that the length
maintained by operator new et al. is that same number, but this is WRONG.
The operator new[] function must store the size of the block in one place, so
that it can delete it later, but the generated code that calls operator
new[] must stash the number of elements (or a block size used to calculate
the number of elements) in another place so that it can delete the proper
number of elements WITHOUT having special knowledge of the block size or
internals of operator new[]. Storing one number in one place simply can't
work in the general case.
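A sketch of how that generated code might do its half of the job (all of
the names here - create_array, destroy_array, Widget - are invented, and a
real compiler would also pad the offset to satisfy alignment):

#include <cstddef>
#include <new>

struct Widget { ~Widget() {} };

Widget* create_array(std::size_t n)
{
    // Ask the (possibly replaced) operator new[] for extra room, store the
    // element count there, and hand the caller an offset pointer.
    void* raw = operator new[](sizeof(std::size_t) + n * sizeof(Widget));
    *static_cast<std::size_t*>(raw) = n;
    Widget* first = reinterpret_cast<Widget*>(
                        static_cast<char*>(raw) + sizeof(std::size_t));
    for (std::size_t i = 0; i != n; ++i)
        new (first + i) Widget;                  // construct each element in place
    return first;
}

void destroy_array(Widget* first)
{
    char* raw = reinterpret_cast<char*>(first) - sizeof(std::size_t);
    std::size_t n = *reinterpret_cast<std::size_t*>(raw);
    while (n--)
        (first + n)->~Widget();                  // destroy in reverse order
    operator delete[](raw);                      // return exactly what operator new[] handed out
}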
> This is another argument for using 'delete[]' on arrays instead
> of 'delete',
Was this controversial?
> since the 'delete[]' operator must know that it has
> to compute the number of elements in the array being deleted
> (so that it can call the right number of destructors) instead of
> simply assuming it's deleting only a single object.
That type of thinking leads to statements such as "you can use operator
delete on an array of chars, instead of operator delete[], because chars
have no destructors." Unfortunately this works on most platforms most
of the time, and some platforms all of the time -- but it's still wrong.
> I suspect that most implementations use just this sort of scheme,
> which is why bad code that calls 'delete' when it should be
> calling 'delete[]' still manages to call the right number of
> destructors and free the correct number of bytes; such
> implementations probably use the same code for both 'delete' and
> 'delete[]', and both always do the division.
I don't believe that VC++ ever worked this way. Version 5 certainly
doesn't.
> void test()
> {
> Foo * a;
> Foo * b;
>
> a = new Foo[10];
> // Allocates 10*sizeof(Foo) bytes, and
> // Calls Foo::Foo() 10 times
>
> b = new Foo[10];
> // Ditto
>
> delete[] a;
> // Calls Foo::~Foo() 10 times, and
> // Deallocates 10*sizeof(Foo) bytes
>
> delete b;
> // ERROR! But probably still...
> // Calls Foo::~Foo() 10 times, and
> // Deallocates 10*sizeof(Foo) bytes
> }
I tried this on Microsoft Visual C++ version 5.0. I created a class
Foo that reports every constructor and destructor on std::cout. The
result: test() called the constructor 20 times and called the
destructor 11 times. I wasn't surprised.
Author: Pete Becker <petebecker@acm.org>
Date: 1998/06/25
David R Tribble wrote:
>
> A variation of the second scheme is to save the size of the new'd
> memory block (which contains an array of objects) with the block
> itself, and to compute the number of elements by dividing the total
> size of the block by the size of the individual array elements.
> This has the possible advantage that, since ::new probably stores
> the size of the new'd object with the object (as described above),
> this doesn't require any extra storage when new'ing an array. In
> other words, the count of the array elements is not actually stored
> anywhere as such, but is computed at delete time from the size of
> the array (which is stored somewhere) and the size of the element
> type.
Most memory managers round block sizes up to some convenient size. That
may be something nearby like the next larger multiple of 8, or it may be
something further away like the next power of 2. The size of the block
does not tell you how many objects the block actually holds.
Author: Christopher Eltschka <celtschk@physik.tu-muenchen.de>
Date: 1998/06/25
AllanW@my-dejanews.com wrote:
[...]
> There is no standard way of finding the size of a memory block given only
> its address. (Should there be? operator new, new[], malloc(), etc.
> already have to maintain this someplace anyway; it would be a trivial
> problem for library developers to add some standard way of getting the
> size... But would that encourage bad coding practices?)
Is new really required to store the size in every case?
Author: "Paul D. DeRocco" <pderocco@ix.netcom.com>
Date: 1998/06/23
Steve Clamage wrote:
>
> It matters both in theory and in practice. Operator new[] was
> added to C++ so that array allocation could use a different
> memory pool from single-object allocation. If an implementation
> takes advantage of that (and I must assume that the implementors
> who asked for the enhancement do so), your program will fail
> if you mismatch the forms of new and delete.
I thought it was because in the case of an array of objects with
destructors, it needs to be told to look for the length of the array
somewhere, so that it knows how many times to call the destructor. The
different operator allows this to be done for arrays, without burdening
single allocations with the same overhead.
--
Ciao,
Paul
Author: AllanW@my-dejanews.com
Date: 1998/06/21
In article <3588CE82.34579541@physik.tu-muenchen.de>,
Christopher Eltschka <celtschk@physik.tu-muenchen.de> wrote:
> However, what about the following:
>
> int main()
> {
> typedef int array2[2];
> array2* a;
> a=new array2;
> delete a; // or delete[] ?
> }
>
> Does this use new (since there's no [...] in the new expression),
> or new[] (since the type is really an array type)? Therefore
> should the array be deleted with delete or delete[]? Or maybe the
> new expression here is invalid in itself?
IN THEORY, the answer matters a bit, but IN PRACTICE it probably doesn't,
because type int has no destructor.
But what about this?
struct C {
    C();
    C(const C&);
    ~C();
    const C& operator=(const C&);
    // ...
};
int main()
{
typedef C array2[2];
array2* a;
a=new array2;
delete[] a; // or delete a;
}
My instinct says that "delete[] a;" is correct. Anyone out there with a
spec who would like to confirm or deny this?
Author: AllanW@my-dejanews.com
Date: 1998/06/21
In article <6mbpre$ct7$1@nnrp1.dejanews.com>,
jkanze@otelo.ibmmail.com wrote:
> Luckily, in modern C++, there is never any excuse to allocate an array
> anyway -- just use vector. Note that even if dynamic allocation is
> required (for explicitly controlled lifetime),
>
> new vector< int >( 5 ) ;
>
> still uses plain delete.
And
vector<int> *i = new vector<int>[5];
still uses delete[]. I'd be astonished to find a C-type array of vectors in
any real code, though.
Your point about rarely using C-style arrays is a good one. In fact, I think
that when teaching C++, C-style arrays should be considered an "advanced"
topic. In beginning C++ classes, when the topic of arrays comes up, students
should be introduced to vector<> and string -- and that's all. This is
exactly opposite the way it's being taught right now -- the STL, if it is
taught at all, is reserved for the last 2 weeks of the "advanced C++" class.
Of course, it may be difficult to explain that a quoted literal is not an
object of type string, but can be assigned to one, and so on. But for me,
the best part about the newest C++ draft standard is that it reduces the
need to do the most error-prone things (pointer arithmetic and array
processing) while still leaving it part of the language for those times
when it's really necessary.
Author: stephen.clamage@sun.com (Steve Clamage)
Date: 1998/06/21
AllanW@my-dejanews.com writes:
>In article <3588CE82.34579541@physik.tu-muenchen.de>,
> Christopher Eltschka <celtschk@physik.tu-muenchen.de> wrote:
>>
>> Does this use new (since there's no [...] in the new expression),
>> or new[] (since the type is really an array type)? Therefore
>> should the array be deleted with delete or delete[]? Or maybe the
>> new expression here is invalid in itself?
>IN THEORY, the answer matters a bit, but IN PRACTICE it probably doesn't,
>because type int has no destructor.
It matters both in theory and in practice. Operator new[] was
added to C++ so that array allocation could use a different
memory pool from single-object allocation. If an implementation
takes advantage of that (and I must assume that the implementors
who asked for the enhancement do so), your program will fail
if you mismatch the forms of new and delete.
If you allocate an array (whether or not the [] appears in
the new-expression) you must use "delete[]" or "operator delete[]"
and not plain "delete" or "operator delete" to deallocate.
--
Steve Clamage, stephen.clamage@sun.com
Author: clamage@Eng.Sun.COM (Steve Clamage)
Date: 1998/06/21
Esa Pulkkinen <esap@cs.tut.fi> writes:
>The point I tried to convey was that a new-expression, when used with an
>array type, could return a pointer to the array (which has type of the form
>"T (*)[i]", with all the dimensions available) and not a pointer to the
>first element of the array (which has type of form "T *") - then there
>would be no distinction between different forms of delete-expression (or
>new-expression), and the extra syntax and the associated semantics could
>be removed. But (as I said) this means arrays couldn't support array
>sizes that are determined at run-time.
>This is really analogous to the fact that an expression "new vector<T>"
>returns "vector<T>*" and not "vector<T>::iterator". This also means 'T'
>can't vary at run time, even if vector's iterators were independent of
>the parameter.
>It's just natural to think arrays should work the same.
The problem is that built-in arrays are inherited from C and
do not have the right properties. That is, they have neither
value nor reference semantics, but some of each, and have
inconsistent behavior in seemingly-similar usages.
I don't think it is possible to arrange for arrays to have the
right properties and maintain C compatibility. All the suggestions
I've seen fail on some point.
OTOH, it has always been possible in C++ to define your own
array type that has nice properties and is reasonably efficient.
The C++ standard library now has several to choose from.
--
Steve Clamage, stephen.clamage@sun.com
---
[ comp.std.c++ is moderated. To submit articles, try just posting with ]
[ your news-reader. If that fails, use mailto:std-c++@ncar.ucar.edu ]
[ --- Please see the FAQ before posting. --- ]
[ FAQ: http://reality.sgi.com/austern_mti/std-c++/faq.html ]
Author: clamage@Eng.Sun.COM (Steve Clamage)
Date: 1998/06/21 Raw View
sbnaran@bardeen.ceg.uiuc.edu (Siemel Naran) writes:
>>In the case of an array allocation, the new-expression must
>>save the number of elements someplace so that number can be
>>found later.
Please notice the phrases "new-expression" and "array allocation".
>[A]
>It's not necessary for the program to store the space of a
>fixed size array:
> int main() { int a[10]; }
That is not a new-expression.
>[B]
>It's also not necessary for the compiler to store the
>size of a single object that was dynamically allocated:
> int main() { int* a=new int; delete a; }
That is not an array allocation.
>[C]
>By contrast, in
> int main() { int* a=new int[10]; delete[] a; }
>the program must store the size of the array
Voila! An array allocation via a new-expression.
>[D]
>And in the hypothetical
> int main() { int (*a)[10]=new int[10]; delete a; }
>the compiler doesn't need to store the size of the
>array. There is only one object, just like in [B].
>And when the program reaches the operator delete
>statement, it knows it is deleting a fixed size array,
>like in [A], whose size is known at compile time, so
>it also doesn't need to look up the size.
Not so, on at least three counts. Refer to draft standard
sections 5.3.4 "New" and 5.3.5 "Delete".
1. The type of
new int[10]
is int*, and it allocates an array of 10 ints. The
assignment to 'a' should be rejected by the compiler
as type mis-match. For example, my compiler gives the
message "Error: Cannot use int* to initialize int(*)[10]".
2. If you used a cast to perform the assignment anyway,
the delete-expression has undefined behavior, because
the expression does not have the same type as the
new-expression that allocated it.
3. When an array is allocated, either explicitly with []
syntax as in D, or implicitly via a typedef as in
typedef int ia10[10];
... new ia10 ...
an array is allocated and the "delete[]" syntax must be
used to delete it. The compiler is not required to diagnose
the error.
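A short sketch of points 1 and 3, using the ia10 typedef above:
  int main()
  {
      // Point 1: ill-formed; new int[10] yields int*, not int (*)[10].
      // int (*a)[10] = new int[10];
      // Point 3: a typedef'd array type is still an array allocation.
      typedef int ia10[10];
      int* p = new ia10;   // the result has type int*
      delete[] p;          // delete[] is required; plain delete here would be
                           // an error the compiler need not diagnose
  }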
--
Steve Clamage, stephen.clamage@sun.com
---
[ comp.std.c++ is moderated. To submit articles, try just posting with ]
[ your news-reader. If that fails, use mailto:std-c++@ncar.ucar.edu ]
[ --- Please see the FAQ before posting. --- ]
[ FAQ: http://reality.sgi.com/austern_mti/std-c++/faq.html ]
Author: Christopher Eltschka <celtschk@physik.tu-muenchen.de>
Date: 1998/06/22 Raw View
Steve Clamage wrote:
>
> sbnaran@bardeen.ceg.uiuc.edu (Siemel Naran) writes:
>
> >>In the case of an array allocation, the new-expression must
> >>save the number of elements someplace so that number can be
> >>found later.
>
> Please notice the phrases "new-expression" and "array allocation".
>
> >[A]
> >It's not necessary for the program to store the space of a
> >fixed size array:
> > int main() { int a[10]; }
>
> That is not a new-expression.
That's true - but it's there to clarify the point he's making later.
>
> >[B]
> >It's also not necessary for the compiler to store the
> >size of a single object that was dynamically allocated:
> > int main() { int* a=new int; delete a; }
>
> That is not an array allocation.
Same argument as above.
>
> >[C]
> >By contrast, in
> > int main() { int* a=new int[10]; delete[] a; }
> >the program must store the size of the array
>
> Voila! An array allocation via a new-expression.
Exactly. Guess why?
>
> >[D]
> >And in the hypothetical
> > int main() { int (*a)[10]=new int[10]; delete a; }
> >the compiler doesn't need to store the size of the
> >array. There is only one object, just like in [B].
> >And when the program reaches the operator delete
> >statement, it knows it is deleting a fixed size array,
> >like in [A], whose size is known at compile time, so
> >it also doesn't need to look up the size.
>
> Not so, on at least three counts. Refer to draft standard
> sections 5.3.4 "New" and 5.3.5 "Delete".
Refer to your favourite dictionary, word "hypothetical" ;-)
What he describes is *not* current C++, but "wishful C++" - he
argues that if C++ worked this (logical) way, there would be no
need to store the element count. Also note that this would
*not* give problems with C compatibility, since C didn't have
new/delete, and since a in this case is not of type pointer to int
or array of int, but pointer to array of int.
However, there would be a problem with C++ compatibility, of
course (since the expression new int[10] can only return int*
_or_ int(*)[10], the second alternative would of course break the
first one). However, without prior C++ practice, the cleaner way would
have been the second one, plus an extra syntax for variable-size
allocations, say a=new[10] int (this would also correspond better
with the delete[] a syntax).
However, it's far too late now, and it was probably far too late
even when standardisation began.
>
> 1. The type of
> new int[10]
> is int*, and it allocates an array of 10 ints. The
> assignment to 'a' should be rejected by the compiler
> as type mis-match. For example, my compiler gives the
> message "Error: Cannot use int* to initialize int(*)[10]".
That's current C++, true. However it's not "wishful C++".
>
> 2. If you used a cast to perform the assignment anyway,
> the delete-expression has undefined behavior, because
> the expression does not have the same type as the
> new-expression that allocated it.
Correct (and I think nobody really wants this point to be
different)
>
> 3. When an array is allocated, either explicitly with []
> syntax as in D, or implicitly via a typedef as in
> typedef int ia10[10];
> ... new ia10 ...
> an array is allocated and the "delete[]" syntax must be
> used to delete it. The compiler is not required to diagnose
> the error.
Again, you're arguing today's real C++, not "wishful C++".
[ comp.std.c++ is moderated. To submit articles, try just posting with ]
[ your news-reader. If that fails, use mailto:std-c++@ncar.ucar.edu ]
[ --- Please see the FAQ before posting. --- ]
[ FAQ: http://reality.sgi.com/austern_mti/std-c++/faq.html ]
Author: sbnaran@bardeen.ceg.uiuc.edu (Siemel Naran)
Date: 1998/06/20 Raw View
>In the case of an array allocation, the new-expression must
>save the number of elements someplace so that number can be
>found later.
[A]
It's not necessary for the program to store the size of a
fixed size array:
int main() { int a[10]; }
Since a is known to have size 10 at compile time, the
destructor main::~main() knows to delete 10 integers,
calling int::~int() for each int in reverse order.
[B]
It's also not necessary for the compiler to store the
size of a single object that was dynamically allocated:
int main() { int* a=new int; delete a; }
Since "a" was created with operator new as opposed to
operator new[], it is only one int with size=sizeof(int),
and so the delete statement knows exactly how many
destructors to call (just one) and how much space to
delete.
[C]
By contrast, in
int main() { int* a=new int[10]; delete[] a; }
the program must store the size of the array
(10*sizeof(int) bytes) because the size of the array
is only known at run time. This is the only way the
delete[] statement knows how many destructors to call
and how much space to delete.
[D]
And in the hypothetical
int main() { int (*a)[10]=new int[10]; delete a; }
the compiler doesn't need to store the size of the
array. There is only one object, just like in [B].
And when the program reaches the operator delete
statement, it knows it is deleting a fixed size array,
like in [A], whose size is known at compile time, so
it also doesn't need to look up the size.
--
----------------------------------
Siemel B. Naran (sbnaran@uiuc.edu)
----------------------------------
---
[ comp.std.c++ is moderated. To submit articles, try just posting with ]
[ your news-reader. If that fails, use mailto:std-c++@ncar.ucar.edu ]
[ --- Please see the FAQ before posting. --- ]
[ FAQ: http://reality.sgi.com/austern_mti/std-c++/faq.html ]
Author: "Paul D. DeRocco" <pderocco@ix.netcom.com>
Date: 1998/06/19 Raw View
AllanW@my-dejanews.com wrote:
>
> But this is true anyway, because the library already has to hold on to
> the length of the object somewhere. Once you've got the mechanism to
> associate extra information (the original allocation length) with each
> allocated block, it's no big deal to add one more item (say the
> per-item length) to that same list.
[snipped change-of-heart]
The library doesn't necessarily hold the length of the object. It may
pass each "operator new" or "malloc" call directly to the OS. Even if
the RTL remembers the size of each block, it may only do so with a
certain granularity, so for very small structures, the number of items
might not be determinable from the size of the memory block. Thus, extra
size info must be stored for new'ed arrays. And it would be a shame to
require this same overhead (typically four bytes on a 32-bit machine) to
be tacked on to every non-array allocation.
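For instance (numbers purely illustrative): if the runtime rounds every
request up to 16-byte granules, new char[5] and new char[12] both end up
in identical 16-byte blocks, so the element count cannot be recovered from
the block size alone.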
--
Ciao,
Paul
---
[ comp.std.c++ is moderated. To submit articles, try just posting with ]
[ your news-reader. If that fails, use mailto:std-c++@ncar.ucar.edu ]
[ --- Please see the FAQ before posting. --- ]
[ FAQ: http://reality.sgi.com/austern_mti/std-c++/faq.html ]
Author: clamage@Eng.Sun.COM (Steve Clamage)
Date: 1998/06/19 Raw View
AllanW@my-dejanews.com writes:
>But this is true anyway, because the library already has to hold on to the
>length of the object somewhere. ...
>The compiler has to leave this information somewhere else, not part of the
>info block defined by the standard global operator new, because that might
>not be what got called. So it has to store it in ... um ...
>How the heck does this work, anyway?
First let's remember that a new-expression involves more than
just calling operator new, but operator new has a single task:
allocate raw storage. Similarly, a delete expression involves
more than just calling operator delete, but operator delete
has a single task: if it does anything at all, it makes raw
storage available for later re-use.
The result of a new-expression is to get storage from an
operator new, and if that succeeds, call the needed constructors;
if one of them exits via an exception, any already-constructed
objects are destroyed and the matching operator delete is called.
In the case of an array allocation, the new-expression must
save the number of elements someplace so that number can be
found later.
The result of a delete-expression is to call needed destructors
in reverse order, then call an operator delete.
For an array deletion, it needs to find the number of elements
that were allocated so that the right number of objects get
destroyed.
I know of two common ways to save the number of array elements:
in a separate associative array, or by requesting more storage
than is needed and saving it there.
A separate associative array (indexed by the address being
allocated or deleted) is safer but less efficient. It is safer,
because
1. it is less likely to be overwritten by off-by-one indexing by
the program into the allocated data, and
2. it can verify that the address passed in was allocated by
a new[] expression and has not already been deleted.
The implementation is not required to perform these checks,
but it seems like a nice thing to do.
To store the count with the allocated data, ask operator new
for m extra bytes, store the count at the address A returned,
and use the value A+m as the pointer returned to the program.
The first object gets created at address A+m, not at address A.
A delete-expression gets a value B. It retrieves the count from
address B-m, calls N destructors relative to address B, then
calls operator delete using address B-m.
The value m is an implementation constant, the size of a
large-enough integer type plus any extra bytes to ensure
alignment is preserved for any array of objects.
With either scheme, operator delete is passed the same pointer to
raw storage that was returned by operator new. Those operators
neither know nor care how that storage was used.
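Something along these lines (a sketch only, not any particular compiler's
code generation; the element type T and the helper names are made up) shows
the prefix scheme in ordinary C++:
  #include <cstddef>
  #include <new>

  struct T { ~T() { } };                  // any element type with a destructor

  // "m": extra bytes reserved in front of the array for the element count.
  // This sketch simply assumes sizeof(std::size_t) keeps T suitably aligned.
  const std::size_t m = sizeof(std::size_t);

  // Roughly what a new-expression "new T[n]" has to do
  // (error handling during construction omitted):
  T* create_array(std::size_t n)
  {
      void* a = operator new[](m + n * sizeof(T));    // raw storage at A
      *static_cast<std::size_t*>(a) = n;              // save the count at A
      T* first = reinterpret_cast<T*>(static_cast<char*>(a) + m);  // A + m
      for (std::size_t i = 0; i < n; ++i)
          new (first + i) T;                          // construct each element
      return first;                                   // the program sees A + m
  }

  // Roughly what a delete-expression "delete[] p" has to do:
  void destroy_array(T* b)                            // b is the value B
  {
      if (b == 0) return;
      char* a = reinterpret_cast<char*>(b) - m;       // back to A = B - m
      std::size_t n = *reinterpret_cast<std::size_t*>(a);
      while (n > 0)
          b[--n].~T();                                // destructors in reverse order
      operator delete[](a);                           // release raw storage at A
  }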
--
Steve Clamage, stephen.clamage@sun.com
---
[ comp.std.c++ is moderated. To submit articles, try just posting with ]
[ your news-reader. If that fails, use mailto:std-c++@ncar.ucar.edu ]
[ --- Please see the FAQ before posting. --- ]
[ FAQ: http://reality.sgi.com/austern_mti/std-c++/faq.html ]
Author: Esa Pulkkinen <esap@cs.tut.fi>
Date: 1998/06/19 Raw View
[about new[] and delete[] being required to match]
> Esa Pulkkinen <esap@cs.tut.fi> writes:
> >Easy: Make array operator new return an object with type T(*)[i]. Then
> >the delete operator knows exactly how it should deallocate the object by
> >looking at the type of the object [just like it always does...]. Then
> >you could write:
stephen.clamage@sun.com (Steve Clamage) writes:
> Ummm, operator new doesn't know anything about types. It returns
> raw storage of type void*. Similarly, operator delete knows
> nothing about types, and receives a void* pointing to raw storage.
Ack. My mistake. Replace "operator new" with "new-expression" and
"operator delete" with "delete-expression" in my post and it might make
more sense. Sorry about that, I should have known better.
The point I tried to convey was that new-expression when used with an
array type could return pointer to the array (which has type of the form
"T (*)[i]", with all the dimensions available) and not a pointer to the
first element of the array (which has type of form "T *") - then there
would be no distinction between different forms of delete-expression (or
new-expression), and the extra syntax and the associated semantics could
be removed. But (as I said) this means arrays couldn't support array
sizes that are determined at run-time.
This is really analogous to the fact that an expression "new vector<T>"
returns "vector<T>*" and not "vector<T>::iterator". This also means 'T'
can't vary at run time, even if vector's iterators were independent of
the parameter.
It's just natural to think arrays should work the same.
> Your solution would seem to require a separate operator new
> and operator delete for every type (and array size) that
> get allocated. Is that what you had in mind?
No.
--
Esa Pulkkinen | C++ programmers do it virtually
E-Mail: esap@cs.tut.fi | everywhere with class, resulting
WWW : http://www.cs.tut.fi/~esap/ | in multiple inheritance.
---
[ comp.std.c++ is moderated. To submit articles, try just posting with ]
[ your news-reader. If that fails, use mailto:std-c++@ncar.ucar.edu ]
[ --- Please see the FAQ before posting. --- ]
[ FAQ: http://reality.sgi.com/austern_mti/std-c++/faq.html ]
Author: sbnaran@bardeen.ceg.uiuc.edu (Siemel Naran)
Date: 1998/06/19 Raw View
>int main()
>{
> typedef int array2[2];
> array2* a;
> a=new array2;
> delete a; // or delete[] ?
>}
I did try this. The typedef is just a synonym, sort of
like a macro. "new array2" is equivalent to "new int[2]",
and the assignment a=new array2; is still rejected as a
type mismatch.
The weird thing is that both
delete a;
delete[] (int*)a;
destroy both objects (I've verified that int::~int()
is called for both objects in both cases).
One reason why we might want to create a fixed-size array
using operator new is that we want to utilize a user
defined operator new[] or use memory from the heap as
opposed to the stack.
int a [200000]; // created on stack
int (*a)[200000]=new ...; // created on heap
(Do I have the words stack and heap mixed up in the above?)
But unfortunately C++, like C, allows array decay -- the
transformation of int[N] into int*. From this perspective,
a single object that is an array of objects is the same
thing as an array of objects. This conversion loses some
amount of type safety. For example, in,
template <int N>
int action(int (&a)[N], int (&b)[N]);
the statement
int a[10], b[20]; return action(a,b);
will be flagged as an error at compile time. But here
there is no compile-time error:
int action(int* a, int* b, const int N);
int a[10], b[20]; return action(a,b,10);
We can make our own classes to enforce the needed type
safety. For example:
template <int N>
class Vec { int d_vec[N]; ... };
which basically adds a level of safety on top of C++'s
built-in array type by preventing array decay.
But it would be nice if the compiler's built-in array
had this level of safety.
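Fleshed out so it compiles (the function bodies here are only illustrative),
the point looks like this:
  // N is part of the reference-to-array type, so a size mismatch is a
  // compile-time error.
  template <int N>
  int action(int (&a)[N], int (&b)[N])
  {
      int s = 0;
      for (int i = 0; i < N; ++i) s += a[i] + b[i];
      return s;
  }

  // The decayed-pointer version accepts any two pointers; the size
  // mismatch goes undetected at compile time.
  int action(int* a, int* b, const int N)
  {
      int s = 0;
      for (int i = 0; i < N; ++i) s += a[i] + b[i];
      return s;
  }

  int main()
  {
      int a[10] = { 0 }, b[20] = { 0 };
      // return action(a, b);      // error: N deduced as both 10 and 20
      return action(a, b, 10);     // compiles: the arrays decay to int*
  }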
--
----------------------------------
Siemel B. Naran (sbnaran@uiuc.edu)
----------------------------------
---
[ comp.std.c++ is moderated. To submit articles, try just posting with ]
[ your news-reader. If that fails, use mailto:std-c++@ncar.ucar.edu ]
[ --- Please see the FAQ before posting. --- ]
[ FAQ: http://reality.sgi.com/austern_mti/std-c++/faq.html ]
Author: "Paul D. DeRocco" <pderocco@ix.netcom.com>
Date: 1998/06/18 Raw View
Esa Pulkkinen wrote:
>
> As an added bonus, the compiler knows the number of elements in the
> array at compile time, so it doesn't even need to store it anywhere
> (:-). Of course, had C++ taken this route, there would have been a
> need for a variable-sized array type long before STL's vector<> came
> along.
The compiler doesn't necessarily know the length of an array to be
deleted. It does if the deletion happens as a result of an exception
thrown by an element's constructor, but when the delete operator is
explicitly called by the programmer, the length info is generally
nowhere in sight.
--
Ciao,
Paul
[ comp.std.c++ is moderated. To submit articles, try just posting with ]
[ your news-reader. If that fails, use mailto:std-c++@ncar.ucar.edu ]
[ --- Please see the FAQ before posting. --- ]
[ FAQ: http://reality.sgi.com/austern_mti/std-c++/faq.html ]
Author: Christopher Eltschka <celtschk@physik.tu-muenchen.de>
Date: 1998/06/18 Raw View
Ron Natalie wrote:
>
> Siemel Naran wrote:
>
> >
> >
> > In ALLOCATE ONE OBJECT g++ complains about the cast in the
> > call to operator new. I have to use
> > a=(Type(*)[2])new Type[2];
> >
> > What I'd like to know is
> > (1) Whether the cast above is required by the rules.
>
> Yes it is. But it's still wrong. Your new there allocates
> an array of two Type objects, just like your previous simple
> examples.
>
> The type of (new Type[2]) is Type*. [ POINTER TO Type ]
> The type of Type (*a)[2] is Type (*)[2] [ POINTER TO ARRAY[2] OF TYPE].
>
> > They are subtly different. The problem is the idiocy of
> the array type in C and C++ which is only half implemented,
> and then to compensate for it, arrays are silently converted
> to pointers to their first element all over the place.
>
However, what about the following:
int main()
{
typedef int array2[2];
array2* a;
a=new array2;
delete a; // or delete[] ?
}
Does this use new (since there's no [...] in the new expression),
or new[] (since the type is really an array type)? Therefore
should the array be deleted with delete or delete[]? Or maybe the
new expression here is invalid in itself?
[...]
[ comp.std.c++ is moderated. To submit articles, try just posting with ]
[ your news-reader. If that fails, use mailto:std-c++@ncar.ucar.edu ]
[ --- Please see the FAQ before posting. --- ]
[ FAQ: http://reality.sgi.com/austern_mti/std-c++/faq.html ]
Author: Ron Natalie <ron@sensor.com>
Date: 1998/06/18 Raw View
Christopher Eltschka wrote:
>
> {
> typedef int array2[2];
> array2* a;
> a=new array2;
> delete a; // or delete[] ?
> }
>
> Does this use new (since there's no [...] in the new expression),
> or new[] (since the type is really an array type)? Therefore
> should the array be deleted with delete or delete[]? Or maybe the
> new expression here is invalid in itself?
>
You should use delete[]. The new expression takes a type
and does the "array" thing whenever the type is specified
with the [] syntax (i.e., a = new T[4];) or when the type
is an array such as in your case.
[ comp.std.c++ is moderated. To submit articles, try just posting with ]
[ your news-reader. If that fails, use mailto:std-c++@ncar.ucar.edu ]
[ --- Please see the FAQ before posting. --- ]
[ FAQ: http://reality.sgi.com/austern_mti/std-c++/faq.html ]
Author: jkanze@otelo.ibmmail.com
Date: 1998/06/18 Raw View
In article <3588CE82.34579541@physik.tu-muenchen.de>,
Christopher Eltschka <celtschk@physik.tu-muenchen.de> wrote:
> int main()
> {
> typedef int array2[2];
> array2* a;
> a=new array2;
> delete a; // or delete[] ?
> }
>
> Does this use new (since there's no [...] in the new expression),
> or new[] (since the type is really an array type)?
> Therefore
> should the array be deleted with delete or delete[]? Or maybe the
> new expression here is invalid in itself?
Typedefs make no difference; it is the underlying type that counts.
So the assignment is ill-formed, since new int[2] returns a pointer
to the first element (int*) and not a pointer to an array2.
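Written with an int* on the left, the example compiles -- and it still
requires delete[]:
  int main()
  {
      typedef int array2[2];
      int* a = new array2;   // array new; the result has type int*, not array2*
      delete[] a;            // so delete[], not plain delete, must be used
  }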
If you think about it, the alternative would be horrible:
typedef int array[ 5 ] ;
int* a ;
if ( someCondition() )
a = new array ;
else
a = new int[ 10 ] ;
//
delete [] /*???*/ a ;
I tried to work out some simple rule once, and failed. You have to
understand the C++ type system; there is no way around it. Consider
things like:
new int (*[5])( int ) ;
and new int (*)[ 5 ] ;
The first requires delete[] (array of pointers to functions), the second
plain delete (a scalar pointer to an array of int's). Now throw in a few
typedef's, and see just how confused you can make it.
Luckily, in modern C++, there is never any excuse to allocate an array
anyway -- just use vector. Note that even if dynamic allocation is
required (for explicitly controlled lifetime),
new vector< int >( 5 ) ;
still uses plain delete.
--
James Kanze +33 (0)1 39 23 84 71 mailto: kanze@gabi-soft.fr
+49 (0)69 66 45 33 10 mailto: jkanze@otelo.ibmmail.com
GABI Software, 22 rue Jacques-Lemercier, 78000 Versailles, France
Conseils en informatique orientée objet --
-- Beratung in objektorientierter Datenverarbeitung
[ comp.std.c++ is moderated. To submit articles, try just posting with ]
[ your news-reader. If that fails, use mailto:std-c++@ncar.ucar.edu ]
[ --- Please see the FAQ before posting. --- ]
[ FAQ: http://reality.sgi.com/austern_mti/std-c++/faq.html ]
Author: stephen.clamage@sun.com (Steve Clamage)
Date: 1998/06/17 Raw View
Esa Pulkkinen <esap@cs.tut.fi> writes:
>> Ron Natalie wrote:
>> > You should always pair delete[] to new[]. This is one of
>> > those stupidities of the language as well, primarily historical.
>"Paul D. DeRocco" <pderocco@ix.netcom.com> writes:
>> I can't see how this could be avoided, without burdening every
>> allocation of a single object with the same overhead (typically a prefix
>> containing a count) that allocating an array has.
>Easy: Make array operator new return an object with type T(*)[i]. Then
>the delete operator knows exactly how it should deallocate the object by
>looking at the type of the object [just like it always does...]. Then
>you could write:
Ummm, operator new doesn't know anything about types. It returns
raw storage of type void*. Similarly, operator delete knows
nothing about types, and receives a void* pointing to raw storage.
Your solution would seem to require a separate operator new
and operator delete for every type (and array size) that
get allocated. Is that what you had in mind?
--
Steve Clamage, stephen.clamage@sun.com
[ comp.std.c++ is moderated. To submit articles, try just posting with ]
[ your news-reader. If that fails, use mailto:std-c++@ncar.ucar.edu ]
[ --- Please see the FAQ before posting. --- ]
[ FAQ: http://reality.sgi.com/austern_mti/std-c++/faq.html ]
Author: Ron Natalie <ron@sensor.com>
Date: 1998/06/17 Raw View
Steve Clamage wrote:
> Ummm, operator new doesn't know anything about types. It returns
> raw storage of type void*. Similarly, operator delete knows
> nothing about types, and receives a void* pointing to raw storage.
>
>
Depends on which new you're talking about. Certainly the
operator itself knows about types; it gets a type name
as its argument.
[ comp.std.c++ is moderated. To submit articles, try just posting with ]
[ your news-reader. If that fails, use mailto:std-c++@ncar.ucar.edu ]
[ --- Please see the FAQ before posting. --- ]
[ FAQ: http://reality.sgi.com/austern_mti/std-c++/faq.html ]
Author: clamage@Eng (Steve Clamage)
Date: 1998/06/17 Raw View
Ron Natalie <ron@sensor.com> writes:
>Steve Clamage wrote:
>> Ummm, operator new doesn't know anything about types. It returns
>> raw storage of type void*. Similarly, operator delete knows
>> nothing about types, and receives a void* pointing to raw storage.
>>
>>
>Depends which new you're talking about. Certainly the
>operator itself knows about types, it gets a type name
>as its argument.
The term "operator new" means only the set of overloaded
functions that allocate raw storage and return a void*.
The word "new" is also a keyword that introduces a new-expression.
The new-expression knows about types, and among other things
causes the appropriate version of operator new to be called.
The type information is lost when calling the function.
Remember that programmers are allowed to replace the standard
library versions of operator new, so the implementation cannot
depend on being able to pass extra information to operator
new. The public interface is the only available interface.
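For example, a program is free to plug in replacements as simple as this
sketch (the printf calls are only there to show what the functions actually
receive):
  #include <cstdio>
  #include <cstdlib>
  #include <new>

  // Replacement array forms: all either function ever sees is a byte
  // count or a raw pointer -- no types, no element counts.
  void* operator new[](std::size_t size)
  {
      std::printf("new[] asked for %lu bytes\n", (unsigned long)size);
      void* p = std::malloc(size);
      if (p == 0) throw std::bad_alloc();
      return p;
  }

  void operator delete[](void* p)
  {
      std::printf("delete[] given %p\n", p);
      std::free(p);
  }

  int main()
  {
      int* a = new int[10];   // the new-expression does any count bookkeeping
      delete[] a;             // ... and undoes it before calling the function above
  }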
--
Steve Clamage, stephen.clamage@sun.com
[ comp.std.c++ is moderated. To submit articles, try just posting with ]
[ your news-reader. If that fails, use mailto:std-c++@ncar.ucar.edu ]
[ --- Please see the FAQ before posting. --- ]
[ FAQ: http://reality.sgi.com/austern_mti/std-c++/faq.html ]
Author: AllanW@my-dejanews.com
Date: 1998/06/17 Raw View
In article <3585C058.422EA418@ix.netcom.com>,
"Paul D. DeRocco" <pderocco@ix.netcom.com> wrote:
>
> Ron Natalie wrote:
> >
> > You should always pair delete[] to new[]. This is one of
> > those stupidities of the language as well, primarily historical.
>
> I can't see how this could be avoided, without burdening every
> allocation of a single object with the same overhead (typically a prefix
> containing a count) that allocating an array has. When you call
> delete[], you are telling the compiler that you are giving it a pointer
> to an array, and that it is to find the array length in whatever hidden
> location new[] normally stashes it.
But this is true anyway, because the library already has to hold on to the
length of the object somewhere. Once you've got the mechanism to associate
extra information (the original allocation length) with each allocated block,
it's no big deal to add one more item (say the per-item length) to that
same list.
...Except, of course, that the application can override operator new and
operator delete, and even use placement new. And yet somehow
struct myClass { /* ... */ };
void* operator new[](size_t, const char*); // the placement form matched by new("Place") T[n]
int main() {
myClass *x = new("Place") myClass[5];
// ...
delete[] x;
}
has to know to delete 5 myClass objects -- not 6, not 4, but exactly 5, and
it doesn't really matter where the storage came from. So it can't store the
number of objects, or the object size, along with the memory-chunk size,
because they might not be at all related. So forget what I just said.
The compiler has to leave this information somewhere else, not part of the
info block defined by the standard global operator new, because that might
not be what got called. So it has to store it in ... um ...
How the heck does this work, anyway?
[ comp.std.c++ is moderated. To submit articles, try just posting with ]
[ your news-reader. If that fails, use mailto:std-c++@ncar.ucar.edu ]
[ --- Please see the FAQ before posting. --- ]
[ FAQ: http://reality.sgi.com/austern_mti/std-c++/faq.html ]
Author: sbnaran@bardeen.ceg.uiuc.edu (Siemel Naran)
Date: 1998/06/15 Raw View
Summary: What to do if the basic type of an object is an
array of objects.
We all know that when you create one object, you use
operator new. And when you create an array of objects,
even if that array has 0 or 1 objects, you use operator
new[]. Like this:
struct Type
{
Type() { cout << "Type::Type()" << endl; }
~Type() { cout << "Type::~Type()" << endl; }
};
int main()
{
Type* a;
// ALLOCATE ONE OBJECT
a=new Type; // calls Type::Type()
delete a; // calls Type::~Type()
cout << endl;
// ALLOCATE AN ARRAY OF OBJECTS
a=new Type[3]; // calls Type::Type() 3 times
delete[] a; // calls Type::~Type() 3 times
cout << endl;
}
But what if the basic type of the object is an array
of objects?
int main()
{
Type (*a)[2];
// ALLOCATE ONE OBJECT
a=new Type[2]; // calls Type::Type() 2 times
// to access objects, use (*a)[i]
delete a; // calls Type::~Type() 2 times
// note that we don't use operator delete[]
cout << endl;
// ALLOCATE AN ARRAY OF OBJECTS
a=new Type[3][2]; // calls Type::Type() 2*3=6 times
// to access objects, use a[i][j]
delete[] a; // calls Type::~Type() 2*3=6 times
cout << endl;
}
In ALLOCATE ONE OBJECT g++ complains about the cast in the
call to operator new. I have to use
a=(Type(*)[2])new Type[2];
What I'd like to know is
(1) Whether the cast above is required by the rules.
(2) Whether the function main() above is correct.
After all, in ALLOCATE ONE OBJECT, it looks like
we're using operator new[] and hence we ought to
use operator delete[]
(3) Whether the order of constructor calls is guaranteed.
a[0][0], a[0][1], a[1][0], a[1][1], a[2][0], a[2][1]
And the order of destructors is the reverse.
Thanks.
--
----------------------------------
Siemel B. Naran (sbnaran@uiuc.edu)
----------------------------------
[ comp.std.c++ is moderated. To submit articles, try just posting with ]
[ your news-reader. If that fails, use mailto:std-c++@ncar.ucar.edu ]
[ --- Please see the FAQ before posting. --- ]
[ FAQ: http://reality.sgi.com/austern_mti/std-c++/faq.html ]
Author: Ron Natalie <ron@sensor.com>
Date: 1998/06/15 Raw View
Siemel Naran wrote:
>
>
> In ALLOCATE ONE OBJECT g++ complains about the cast in the
> call to operator new. I have to use
> a=(Type(*)[2])new Type[2];
>
> What I'd like to know is
> (1) Whether the cast above is required by the rules.
Yes it is. But it's still wrong. Your new there allocates
an array of two Type objects, just like your previous simple
examples.
The type of (new Type[2]) is Type*. [ POINTER TO Type ]
The type of Type (*a)[2] is Type (*)[2] [ POINTER TO ARRAY[2] OF TYPE].
They are subtly different. The problem is the idiocy of
the array type in C and C++ which is only half implemented,
and then to compensate for it, arrays are silently converted
to pointers to their first element all over the place.
> (2) Whether the function main() above is correct.
> After all, in ALLOCATE ONE OBJECT, it looks like
> we're using operator new[] and hence we ought to
> use operator delete[]
You should always pair delete[] to new[]. This is one of
those stupidities of the language as well, primarily historical.
> (3) Whether the order of constructor calls is guaranteed.
> a[0][0], a[0][1], a[1][0], a[1][1], a[2][0], a[2][1]
> And the order of destructors is the reverse.
Yes, the order is guaranteed as you have specified.
Both construction and destruction.
[ comp.std.c++ is moderated. To submit articles, try just posting with ]
[ your news-reader. If that fails, use mailto:std-c++@ncar.ucar.edu ]
[ --- Please see the FAQ before posting. --- ]
[ FAQ: http://reality.sgi.com/austern_mti/std-c++/faq.html ]