Topic: free store management
Author: "Balog Pal" <pasa@lib.hu>
Date: 2000/10/03
"Francis Glassborow" <francis.glassborow@ntlworld.com> wrote
> You continually make posts that suggest that you do not understand the
> needs of programmers in other domains.
No, they do not suggest that. Why should my understanding of what other groups
need or desire influence what I need or desire?
> One of the major considerations
> those of us involved in actually standardising C++ have to make is the
> broad needs of the whole C++ community, that includes those who have to
> work in many resource constricted environments.
Sure. And it should also include the others who want to see fewer bugs. Everyone
tries to express what he wants, and some mix comes out. It appears to me that
the other groups referred to had the major impact on the outcome (though they
may, or may not, be in the majority among C++ users). I think the other side is
also worth at least a hearing.
> When you make
> suggestions please spend a little time thinking about how they might
> impact other people's work.
I do think about them too. But the desirable way to discuss such issues, IMHO,
would be for them to come out and say: stop, I have plenty of projects that work
fine now but would drop below acceptable performance with that change to the
memory manager. Maybe I missed some comments in the recent thread, but I've seen
exactly zero such comments.
The oft-cited 'you don't pay for what you don't use' principle seems too
abstract to me to be the major guideline in all issues. And in the actual
discussion, some of the major factors that influenced the decision (described by
Mr. Koenig) may simply no longer apply these days.
Or let's look at that price issue the other way around: I now pay (in terms of
the possibility of bugs) for new/delete being interchangeable with malloc/free.
I don't need that feature, thank you, and I guess (judging from guidelines
written by noble and respected people saying we shouldn't mix memory allocation
that way) quite a few other people don't use it either.
The facts offered in support of the current behaviour of delete are just not
convincing enough. And I don't see that the problem lies with me alone, either.
> There are many groups with quite different priorities to yours.
And I never implied they should shut up or put my problems before theirs.
And honestly, Francis, isn't this the very forum where the plebs can express
their constructive opinions about what is problematic in standard C++? Or are
only Panglossian comments allowed?
Paul
---
[ comp.std.c++ is moderated. To submit articles, try just posting with ]
[ your news-reader. If that fails, use mailto:std-c++@ncar.ucar.edu ]
[ --- Please see the FAQ before posting. --- ]
[ FAQ: http://reality.sgi.com/austern_mti/std-c++/faq.html ]
Author: Francis Glassborow <francis.glassborow@ntlworld.com>
Date: 2000/10/03
In article <39d903e5@andromeda.datanet.hu>, Balog Pal <pasa@lib.hu>
writes
>Or let's look at that price issue the other way around: I now pay (in terms of
>the possibility of bugs) for new/delete being interchangeable with malloc/free.
>I don't need that feature, thank you, and I guess (judging from guidelines
>written by noble and respected people saying we shouldn't mix memory allocation
>that way) quite a few other people don't use it either.
1) new/delete is not interchangeable with malloc/free, and I do not
understand where you got that idea.
2) Currently I can provide optimised class based memory allocation for
single instances of a class. Any tinkering with the global behaviour of
delete has serious repercussions on that.
3) new/delete and new[]/delete[] are low-level tools, and as such should be
rarely, if ever, used at application level. They should be used by
class/library implementors, who should have the skill to use them
correctly, and who are often concerned about efficiency because what
they do impacts all the layers built on their work.
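Point 2 refers to class-based allocation without showing code; here is a hedged sketch (all names invented, not from the thread) of the kind of per-class free-list allocator whose behaviour any global change to delete would have to preserve:

```cpp
#include <cstddef>
#include <new>

// Hypothetical example: a class that recycles its own blocks through a free
// list, via class-scope operator new/delete.
struct Node {
    int value;
    Node* next;   // payload; also guarantees the block can hold a free-list link

    static void* operator new(std::size_t sz) {
        if (sz == sizeof(Node) && free_list) {
            void* p = free_list;
            free_list = *static_cast<void**>(p);   // pop a recycled block
            return p;
        }
        ++heap_allocs;                             // bookkeeping for the demo
        return ::operator new(sz);
    }
    static void operator delete(void* p, std::size_t sz) {
        if (sz != sizeof(Node)) { ::operator delete(p); return; }
        *static_cast<void**>(p) = free_list;       // push onto the free list
        free_list = p;
    }

    static void* free_list;
    static int heap_allocs;
};
void* Node::free_list = nullptr;
int Node::heap_allocs = 0;
```

A global rule that let scalar and array deletes be mixed would have to interact correctly with such class-level schemes, which is the repercussion point 2 warns about.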
If you have not noticed it said before, perhaps that is because it has
been said gently, but, certainly as far as I am concerned, the only
conceivable change that I would wish to entertain is the possibility, at
some future date, of supporting base-type pointers to homogeneous arrays
of types derived from the base. (This could be done if the vtable, or
equivalent, included the size of the derived type.)
Francis Glassborow Association of C & C++ Users
64 Southfield Rd
Oxford OX4 1PA +44(0)1865 246490
All opinions are mine and do not represent those of any organisation
Author: "Trevor L. Jackson, III" <fullmoon@aspi.net>
Date: 2000/10/03
Francis Glassborow wrote:
> In article <39d8e892@andromeda.datanet.hu>, Balog Pal <pasa@lib.hu>
> writes
> >You're probably writing non-interactive programs. When the 'system idle
> >process' gets an average 97+% of all the CPU time even if you sit there
> >working hard, profiling becomes something not really wanted. And when speed
> >is an issue, I always find the bottleneck is I/O (disk, network), or remote
> >programs (database server). Scanning a table to see if a pointer was array
> >or not? Hardly anyone would notice.
>
> You continually make posts that suggest that you do not understand the
> needs of programmers in other domains. One of the major considerations
> those of us involved in actually standardising C++ have to make is the
> broad needs of the whole C++ community, that includes those who have to
> work in many resource constricted environments. When you make
> suggestions please spend a little time thinking about how they might
> impact other people's work. There are many groups with quite different
> priorities to yours.
This seems to imply that correctness is not a property of interest to those who
work in resource-constrained environments. From my experience in that type of
work this implication is false. In spite of small memories and even smaller CPU
clocks, correctness _vastly_ outweighs performance. So an interface designed to
promote correctness is far more valuable than an interface designed to save a
word of memory at the cost of doubling the size of the interface.
Consider that one of the worst sins a designer can commit is premature
optimization. IMHO this sin has been grievously committed against the C++ heap
manager. I maintain this based on my experience implementing ANSI-standard heap
managers for C (not C++, but not that far either). Performance is cheap.
Correctness is extremely expensive.
Let's see: as our software ages, what happens to its quality? <frown> What
happens to the speed/size of the hardware we run it on? <smile> Think those
trends matter?
Consider also that in every case in which heap performance matters, one can quite
easily optimize the performance of the necessary dynamic memory management with a
dedicated memory manager (alloca() is often even better ;-). Thus crippling the
interface to the general heap manager on the basis of performance is not
appropriate.
Author: "Balog Pal" <pasa@lib.hu>
Date: 2000/10/02
"Francis Glassborow" <francis.glassborow@ntlworld.com> wrote
> >Safety should have been the number one priority always, but 'efficiency'
> >and similar stuff came up on top in so many cases. Today computers
> >around have tons of hardly used resources, and almost no one ever
> >bothers to profile for performance, or in many cases even to compile a
> >release build. While we have tons of bug-infested programs around. It's
> >quite time to re-evaluate the priorities, and assign the appropriate
> >weights.
>
> For a successful commercial language, the number one priority is to
> persuade programmers to use it. There is very little point in having a
> wonderful, safe language if users do not want it.
Sure. For a new language that is a big requirement. But for existing and
already popular languages the requirements change. C++ is not starting from
scratch: a zillion programmers use it already, and tons of code exists. That
will not fade away if you start to shift the design toward a more secure one.
Certainly if you just add stuff to the existing mix, you need to persuade
programmers to pick it up and use the new instead of the old (like
xxx_cast<>). But if you do not add it at all, how do we get any progress at
all?
> Perhaps you should try Eiffel, or Modula 3. I am told that both are much
> safer than C++, but neither are used anything like so widely. Ever
> wonder why?
You imply it is because they're safe? I doubt it. When it comes to popularity
and spread, reason is often just a small force, and chance rules (or sometimes
some subtle interests).
Did C become popular for being a good, well-designed language? No way. Is it
good just because of the boom it had? And if it were that good, why do people
bother to revise it, or derive new languages from it, as Bjarne did?
Eiffel and Modula may be excellent languages, but with too little support few
people move to them, which leads to even less support, and so on.
Having more security in other languages is hardly a good excuse for not
adding it to C++. C++ has support, has programmers, and a great deal of
software is written in C++. As we go forward in time, more and more complex
software is written, and the more we need extra tools to manage risk.
As I see it, C++ should extend to help the big applications, and not try to
reach down to lower levels instead. At the low levels programmers will
likely choose something lower-level anyway, or add just enough assembly that
you'll hardly tell it is still C++. So the race had better happen in the right
direction.
Paul
Author: "Balog Pal" <pasa@lib.hu>
Date: 2000/10/02
<alan_griffiths@my-deja.com> wrote
> > > If you do not like new and delete, use new[] and delete[].
....
> > But other people around use both, and sometimes that
> > leads to actual problems.
>
> Which are not solved by making the delete operators interchangeable.
Yes, they are. If either form is as good as the other, they can use whichever
delete they like without worrying, just as happened when the delete[count]
requirement was lifted.
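For readers joining mid-thread, the non-interchangeability being debated is the current rule that each allocation form has exactly one matching release form; a small sketch (helper names are ours, not from the thread):

```cpp
#include <cstddef>

// The two forms are distinct today: new[] must meet delete[], new must meet delete.
int* make_buffer(std::size_t n) { return new int[n](); } // array form, zero-filled
void release_buffer(int* p)     { delete[] p; }          // must be delete[]

int* make_single(int v)         { return new int(v); }   // scalar form
void release_single(int* p)     { delete p; }            // must be plain delete

// Mixing the forms (plain delete on make_buffer's result, or delete[] on
// make_single's) is undefined behaviour -- exactly the class of bug the
// proposal in this thread wants to define away.
```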
> If
> the "raw" memory management APIs are being used the code is hard to
> validate for memory leaks and exception safety. (I'm prepared to bet it
> isn't exception safe, and leaks are probably only avoided because of
> dynamic checks - purify, boundschecker, etc.)
Hm? I can't really follow that, as a general idea at least. As for exceptions,
you're right; I don't know many people who treat exception safety as a
routine task and shape their code accordingly. But that problem does not
disturb me for now, as I enforce handling of all real exceptions (those
actually expected), and I treat 'not enough memory' as a nonexistent
condition. For a very practical reason: I see no rationale in making the 10%
of code I write aware of failing memory allocation while the 90% of the code
picked up from libraries and similar places will die on that condition anyway.
And I know that the memory really is always there. And if not, the program will
go down from the new_handler.
> > Safety should have been the number one priority always, but 'efficiency'
> > and similar stuff came up on top in so many cases.
>
> You may very well believe this, but you are not forced to use C++.
Actually I'm not only forced to use C++, but also some selected set of
compilers. ;-)
> Modular or Eiffel would be more in accord with your requirements.
But that is not the point. I'd like to see a safer C++, not to emigrate to
another language. 'Use some other language' is easy to say, but even switching
to another C++ compiler is not an easy task.
> > Today computers around have tons of hardly used resources, and almost no
> > one ever bothers to profile for performance, or in many cases even to
> > compile a release build.
>
> That is not my experience, and I doubt that it applies in general.
> Every non-trivial program I've written in the last decade has had
> performance targets and has been profiled.
You're probably writing non-interactive programs. When the 'system idle
process' gets an average 97+% of all the CPU time even if you sit there working
hard, profiling becomes something not really wanted. And when speed is an
issue, I always find the bottleneck is I/O (disk, network), or remote
programs (database server). Scanning a table to see if a pointer was array
or not? Hardly anyone would notice.
I remember MSVC had a serious 'bug' in memory management. I discovered it by
accident when I ported the relevant part of MFC to Sun and had a bug, or
missing implementation, in the AFX_STATE mutexing. The already-written code was
supposed to run, but it mysteriously deadlocked at memory allocation. It turned
out the MFC version of new looked like this:
void* __cdecl operator new(size_t nSize)
{
    void* pResult;
    AFX_MODULE_THREAD_STATE* pState = AfxGetModuleThreadState();
    _PNH pfnNewHandler = pState->m_pfnNewHandler;
    do
    {
        pResult = malloc(nSize);
    } while (pResult == NULL && pfnNewHandler != NULL &&
             (*pfnNewHandler)(nSize) != 0);
    return pResult;
}
As you see, to save a few lines of source, the new-handler pointer is
obtained before the first attempt at allocation, which will always succeed
anyway. That AfxGetModuleThreadState() call looks like an easy one, but it is
a real heavyweight: it locks a critical section, scans several lists, calls a
system API to get thread-local storage, etc. And that on every call to new.
It was that way in MSVC 4.0 and 4.1, and very likely in its predecessors. It
made it into plenty of code without being discovered, until DDJ published an
article about heap managers and ran some tests to compare them. A test doing
nothing but mass memory allocations certainly uncovered the problem, and it
was fixed in, I think, 4.2 and later. Without that article we might still have
it in a quite widely used compiler.
The memory manager should certainly not run with the brakes pressed to the
metal, but practice shows it can travel far that way without being noticed.
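The fix the text implies can be sketched in a few lines: do the expensive per-module lookup only on the (rare) failure path. This is our illustration, not the actual MFC patch; `lookup_handler_expensively` is an invented stand-in for `AfxGetModuleThreadState()->m_pfnNewHandler`:

```cpp
#include <cstdlib>
#include <cstddef>

using NewHandler = int (*)(std::size_t);

int g_handler_lookups = 0;                 // instrumentation for the demo

static NewHandler lookup_handler_expensively() {
    ++g_handler_lookups;                   // imagine locks and list scans here
    return nullptr;                        // demo: no handler installed
}

void* checked_alloc(std::size_t n) {
    void* p = std::malloc(n);
    while (p == nullptr) {                 // failure path only
        NewHandler h = lookup_handler_expensively();
        if (h == nullptr || h(n) == 0)
            return nullptr;                // no handler, or the handler gave up
        p = std::malloc(n);                // handler freed something; retry
    }
    return p;
}
```

The common path pays one malloc and nothing else, which is exactly the property the original MFC code lost.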
> > While we have tons of bug-infested programs around. It's quite time to
> > re-evaluate the priorities, and assign the appropriate weights.
>
> And I'd place a large weight on training the 'other people around' to
> use the language effectively.
Sure, they're trained. But that is the task of Sisyphus. Also, we have only a
few dozen compilers and tens of thousands of programmers, so solving such
issues in the compiler should pay off.
> The free store management 'problem' would be better solved by the
> introduction of an effective collection of memory management 'smart
> pointers' into the library. (Follow the links below for some examples.)
When you start from scratch, it is certainly easy. When you also have a
decade's work to maintain and extend, it becomes a problem.
Paul
Author: Francis Glassborow <francis.glassborow@ntlworld.com>
Date: 2000/10/02
In article <39d8e892@andromeda.datanet.hu>, Balog Pal <pasa@lib.hu>
writes
>You're probably writing non-interactive programs. When the 'system idle
>process' gets an average 97+% of all the CPU time even if you sit there
>working hard, profiling becomes something not really wanted. And when speed
>is an issue, I always find the bottleneck is I/O (disk, network), or remote
>programs (database server). Scanning a table to see if a pointer was array
>or not? Hardly anyone would notice.
You continually make posts that suggest that you do not understand the
needs of programmers in other domains. One of the major considerations
those of us involved in actually standardising C++ have to make is the
broad needs of the whole C++ community, that includes those who have to
work in many resource constricted environments. When you make
suggestions please spend a little time thinking about how they might
impact other people's work. There are many groups with quite different
priorities to yours.
Francis Glassborow Association of C & C++ Users
64 Southfield Rd
Oxford OX4 1PA +44(0)1865 246490
All opinions are mine and do not represent those of any organisation
Author: alan_griffiths@my-deja.com
Date: 2000/09/26
In article <045001c024eb$795ce720$aa0e38c3@bpnt>,
"Balog Pal" <pasa@lib.hu> wrote:
>
> "Francis Glassborow" <francis.glassborow@ntlworld.com> wrote
> > If you do not like new and delete, use new[] and delete[].
>
> In my personal code I don't ever use new[]. If I need a collection I use a
> collection. Also, it is a very rare case I use new too, and those rare cases
> are quite well guarded.
Which is as it should be - memory management is done by classes whose
purpose is to manage memory (i.e. containers and/or smart pointers).
> But other people around use both, and sometimes that
> leads to actual problems.
Which are not solved by making the delete operators interchangeable. If
the "raw" memory management APIs are being used the code is hard to
validate for memory leaks and exception safety. (I'm prepared to bet it
isn't exception safe, and leaks are probably only avoided because of
dynamic checks - purify, boundschecker, etc.)
> If the argument that passing a count to delete[] is error-prone was
> considered, the similar argument about mixing up the operators should be
> considered many times more seriously. And this one is hardly my own strange
> observation; plenty of people have run into this problem all around the
> world.
>
> Safety should have been the number one priority always, but 'efficiency' and
> similar stuff came up on top in so many cases.
You may very well believe this, but you are not forced to use C++.
Modula or Eiffel would be more in accord with your requirements.
> Today computers around have tons of hardly used resources, and almost no one
> ever bothers to profile for performance, or in many cases even to compile a
> release build.
That is not my experience, and I doubt that it applies in general.
Every non-trivial program I've written in the last decade has had
performance targets and has been profiled.
> While we have tons of bug-infested programs around. It's quite time to
> re-evaluate the priorities, and assign the appropriate weights.
And I'd place a large weight on training the 'other people around' to
use the language effectively.
The free store management 'problem' would be better solved by the
introduction of an effective collection of memory management 'smart
pointers' into the library. (Follow the links below for some examples.)
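A minimal sketch (ours, not taken from the linked examples) of the kind of ownership class meant here; the destructor issues the delete, so application code contains none:

```cpp
// Minimal single-owner smart pointer sketch; the classes behind the links in
// the signature below are the real articles.
template <typename T>
class ScopedPtr {
public:
    explicit ScopedPtr(T* p) : p_(p) {}
    ~ScopedPtr() { delete p_; }              // the release happens exactly once, here
    T& operator*() const { return *p_; }
    T* get() const { return p_; }
private:
    ScopedPtr(const ScopedPtr&);             // declared, never defined:
    ScopedPtr& operator=(const ScopedPtr&);  // copying is forbidden
    T* p_;
};
```

With such a class the scalar/array mismatch cannot arise in client code, because client code never spells out delete at all.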
--
Alan Griffiths (alan.griffiths@experian.com, +44 115 934 4517)
Senior Systems Consultant, Experian
C++ links: http://www.accu.org/ http://www.boost.org/
C++ links: http://www.octopull.demon.co.uk/arglib/
Author: Andrew Koenig <ark@research.att.com>
Date: 2000/09/24
wmm> I don't know what Bjarne had in mind here; I quoted the ARM
wmm> to describe the thinking of the person who actually made the
wmm> decision at a time fairly close to when the decision was
wmm> made, but anything beyond the actual words he wrote (at least
wmm> on this particular subject, which I don't recall hearing him
wmm> discuss elsewhere) is just pure speculation on my part.
The following is not speculation.
Originally, C++ did not store the size of an allocated area at all.
When you deleted an array, you had to supply the count yourself, as
delete [n] ptr;
If you didn't supply [n], the default was 1.
There was a general desire to lift this requirement. However,
we felt that if we did so, we had to
maintain compatibility with C, to the extent that
implementations that wished to allow users to allocate
storage with malloc and free it with delete, or allocate
it with new and free it with free, would be able to do
so efficiently;
use the underlying malloc/free for allocation without
any change to the code; and
impose no overhead at all for single-element allocation.
wmm> Obviously it's possible to use some sort of associative
wmm> array to keep track of such information apart from the
wmm> program's data storage -- I'm guessing that's what you
wmm> were referring to -- but that's a fair amount of overhead
wmm> (which is why most or all implementations these days keep
wmm> track of the number of elements and such in storage that
wmm> immediately precedes the object's address).
The original implementation did use an associative array to keep
track of sizes. The overhead was significant, so we compromised
by requiring users to distinguish between arrays and scalars,
but not to remember the array's size.
I argued at the time that this strategy was ill advised, but
I was in the minority.
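Koenig's associative-array scheme can be sketched in a few lines; this is our illustration in modern C++ with invented names, purely to show where the overhead lives:

```cpp
#include <cstdlib>
#include <cstddef>
#include <map>

// Side-table bookkeeping: sizes live in an associative container outside the
// allocated blocks, as in the original implementation described above.
static std::map<void*, std::size_t> g_sizes;

void* tracked_alloc(std::size_t n) {
    void* p = std::malloc(n);
    if (p) g_sizes[p] = n;        // one map insertion per allocation...
    return p;
}

void tracked_free(void* p) {
    g_sizes.erase(p);             // ...and one lookup-plus-erase per release
    std::free(p);
}

std::size_t tracked_size(void* p) {
    return g_sizes.at(p);         // how a delete expression could learn the count
}
```

The map traffic on every single allocation is the "significant" overhead that motivated the compromise: callers distinguish delete from delete[], and only the array form pays for a stored count.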
--
Andrew Koenig, ark@research.att.com, http://www.research.att.com/info/ark
Author: hnsngr@sirius.com (Ron Hunsinger)
Date: 25 Sep 00 01:05:14 GMT
In article <yu99bsxfp3a8.fsf@europa.research.att.com>, Andrew Koenig
<ark@research.att.com> wrote:
> I argued at the time that this strategy was ill advised, but
> I was in the minority.
Just out of curiosity, what strategy did you argue for?
-Ron Hunsinger
Author: "Balog Pal" <pasa@lib.hu>
Date: 2000/09/25
"James Kuyper" <kuyper@wizard.net> wrote
> > > This would be a serious incompatibility. In addition, the extra
> > > information required for each object allocated on the free store could
> > > easily increase the space overhead significantly.
> >
> > Well, that is something to consider. The info we need is exactly 1 bit per
> > allocated block. Wow. Certainly lots of implementations would need more than
>
> Do you know a convenient, efficient way to add only 1 bit to the size of an
> allocated object? The absolute minimum that's consistent with the standard is
> 1 byte. On any machine with alignment issues, the minimum number of bytes
> added is equal to the alignment of the pointer.
You forget that any heap manager already uses some memory for bookkeeping,
like a bit that marks a block free or occupied. It may already be rounded up
to a bigger amount, thus having plenty of space left. Also, there are other
strategies, like holding a bitmap, and a zillion other ways. Never
underestimate the guys writing the 'implementation'. Memory managers have a
huge background; I'd guess many a seasoned programmer has written more than
one implementation himself.
Sure, I can't say the feature would be without cost. It has a cost. The
question is whether the cost is worth the gain or not, in which situations,
and what the distribution of those situations is. I find the overall quality
of the software that hits the shelves quite bad to extremely bad, so I'd
welcome anything that lessens the bug count.
BTW, can I find statistics somewhere about the estimated cost of using buggy
software, in lost time, data, direct consequences, etc.? I expect billions of
dollars.
Paul
Author: Francis Glassborow <francis.glassborow@ntlworld.com>
Date: 2000/09/25
In article <045001c024eb$795ce720$aa0e38c3@bpnt>, Balog Pal
<pasa@lib.hu> writes
>Safety should have been the number one priority always, but 'efficiency'
>and similar stuff came up on top in so many cases. Today computers
>around have tons of hardly used resources, and almost no one ever
>bothers to profile for performance, or in many cases even to compile a
>release build. While we have tons of bug-infested programs around. It's
>quite time to re-evaluate the priorities, and assign the appropriate
>weights.
For a successful commercial language, the number one priority is to
persuade programmers to use it. There is very little point in having a
wonderful, safe language if users do not want it.
Perhaps you should try Eiffel, or Modula 3. I am told that both are much
safer than C++, but neither are used anything like so widely. Ever
wonder why?
Francis Glassborow Association of C & C++ Users
64 Southfield Rd
Oxford OX4 1PA +44(0)1865 246490
All opinions are mine and do not represent those of any organisation
Author: James Kuyper <kuyper@wizard.net>
Date: 2000/09/25
Balog Pal wrote:
>
> "Michiel Salters" <salters@lucent.com> wrote
...
> > Well, a common counterexample is a C string allocated to fit.
>
> People, be serious, please. Array is an array, and a string is a different
> thing. And if you'd not have the autocount, in those situations using
Incorrect. In some languages there's a difference, but in C a string is
a null-terminated array of characters. The same issue applies to any
array where a special value stored in the array is used to indicate the
end of the array. For instance, argv[]: argv[argc] is required to be 0.
It's not uncommon to use a similar technique with arrays storing
integers or floating point values as well.
...
> > Say I've
> > determined a length L substring somewhere I'd like to copy. Before
> > std::string I'd allocate L+1 bytes, copy, 0-terminate it and then
> > forget all about L.
>
> Well, who said you must forget L+1? You knew it, so you can keep it. And if
> you plan to mess with the buffer, you likely need checks not to exceed the
> original buffer.
No one said that you must forget it, just that the use of the
terminating value means you don't need to remember the length. Since you
don't need to remember it, it's wasteful of time (programmer's
development time as well as CPU time) to keep track of it. In
particular, you don't need the size to perform the checks you describe;
just make sure you stop your loops at the terminating value.
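The point reads even simpler in code; a small sketch of a loop that recovers the length from the terminating value alone, with no stored count anywhere:

```cpp
#include <cstddef>

// Length recovered purely from the sentinel -- the same convention as C
// strings and argv[argc] == 0 discussed above.
std::size_t sentinel_length(const char* s) {
    std::size_t n = 0;
    while (s[n] != '\0')    // stop at the terminating value, not at a count
        ++n;
    return n;
}
```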
Author: Pierre Baillargeon <pb@artquest.net>
Date: 2000/09/22
wmm@fastdial.net wrote:
>
> In article <39C772F9.BA6A30C2@artquest.net>,
> Pierre Baillargeon <pb@artquest.net> wrote:
> > Steve Clamage wrote:
> > >
> > I did not mean arbitrary in that sense, but I do find interesting that
> > you would think that. Arbitrary means to me that the arguments put
> > forward are not very convincing.
>
> Which means that your definition of "arbitrary" is fundamentally
> subjective -- different people can find a given argument
> persuasive or not, depending on their background, perspective,
> etc. I'd be pretty willing to bet that, for every decision you
> categorize as "arbitrary," you could find a number of people on
> the Committee who _did_ find the arguments "convincing."
Yes, all decisions can be put on a scale of "arbitrariness". But in some
cases the same arguments are found convincing for one case and not for
others. I now realize that my "arbitrariness detector" is triggered by
inconsistency. In this case: space overhead (keeping the array count in two
places vs. keeping an array-vs-pointer flag), and error-proneness (providing
the wrong count vs. calling the wrong delete operator).
> Actually, this decision was made at AT&T before the Committee
> was formed (see ARM, p. 65 [in my first-printing copy]). Some
> quotes:
Go ahead, blame someone else! ;)
>
> ==================
>
> The user is required to specify when an array is deleted. The
> reason for this is to avoid requiring the implementation to
> store information specifying whether a chunk of memory
> allocated by operator new() is an array or not. This can be a
> minor nuisance for the user, but the alternative would imply a
> difference from the C object layout. This would be a serious
> incompatibility.
I do not understand the point about the object layout, maybe you could
shed more light? As far as I know, the information is kept outside of
the object in some invisible implementation specific area. Furthermore,
C did not have a new operator, so I fail to see where compatibility
comes into play.
> In addition, the extra information required
> for each object allocated on the free store could easily
> increase the space overhead significantly. The alternative of
> making an array into a proper self-describing object was also
> rejected for C compatibility reasons...
I fail to buy the "significantly increased space overhead" argument. A
single bit being significant overhead? That sounds like hyperbole: we're
talking about a single bit here. Due to other requirements, all
implementations already do some form of bookkeeping which provides space to
store that bit either for free (using low-order bits of hidden pointers or of
memory-chunk size information) or at little cost. In a pooled implementation,
the bit can even be shared among dozens of allocations. The same goes for
bitmap-based schemes.
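The "free bit" claim can be made concrete with a small sketch (invented names, our illustration): because block sizes are multiples of the alignment, the low-order bit of a stored size word is always zero and can carry the array flag:

```cpp
#include <cstdint>

// Alignment is assumed to be at least 2, so bit 0 of any stored block size is
// otherwise always zero and can be borrowed for the scalar-vs-array flag.
constexpr std::uintptr_t kArrayFlag = 1;

std::uintptr_t pack_header(std::uintptr_t size, bool is_array) {
    return size | (is_array ? kArrayFlag : 0);   // smuggle the flag into bit 0
}
std::uintptr_t header_size(std::uintptr_t word) { return word & ~kArrayFlag; }
bool header_is_array(std::uintptr_t word)       { return (word & kArrayFlag) != 0; }
```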
> Earlier definitions of C++ required users to specify the
> number of elements in the array being deleted... This led to
> clumsy code and errors, so that burden was shifted to the
> implementations.
So if those programmers had been as sloppy when calling the delete
operators, we would have a single one? And had they been diligent, we would
still be passing the count? And you were arguing that the decision was *not*
arbitrary, right? ;)
Author: Pierre Baillargeon <pb@artquest.net>
Date: 2000/09/22 Raw View
Francis Glassborow wrote:
>
> In article <39C6778C.20F588F2@artquest.net>, Pierre Baillargeon
> <pb@artquest.net> writes
> >IOW, if you are unable to provide the number of element at deletion,
> >just how did you use your array?
>
> exactly as an old style C-string (array of char) does it, by having a
> terminating value. I know that I can over-write an array of char up to
> the '\0', to go further I must know the original size.
>
But then you *do* have the array size... or do you overallocate arrays
bounded by a sentinel value? That sounds like using buggy software as a
design criterion to me. Otherwise, I fail to see how this contradicts my
claim that you must know the size of the array to be able to use it.
---
Author: "Balog Pal" <pasa@lib.hu>
Date: 2000/09/22 Raw View
"Michiel Salters" <salters@lucent.com> wrote
> > Maybe I'm really a strange programmer, but I have *never* allocated an
> > array and didn't track its size. What is the point of having an array if
> > you don't know how many items you have got? Then again, that could
> > explain all these buffer overflow attacks we're hearing about...
>
> > IOW, if you are unable to provide the number of element at deletion,
> > just how did you use your array?
>
> Well, a common counterexample is a C string allocated to fit.
People, be serious, please. An array is an array, and a string is a
different thing. And if you didn't have the autocount, in those
situations using an "array" would just stand out pretty well. You still
have operator new, malloc, strdup, and similar stuff.
> Say I've
> determined a length L substring somewhere I'd like to copy. Before
> std::string I'd allocate L+1 bytes, copy, 0-terminate it and then
> forget all about L.
Well, who said you must forget L+1? You knew it, so you can keep it. And
if you plan to mess with the buffer, you likely need checks not to
exceed the original buffer anyway.
> Ok, I could provide the length but not efficiently.
Certainly not as efficiently as the compiler.
> In similar cases though, I might not be able at all, e.g if I've
> removed a small part of the string.
When you want to model a situation, you should do that precisely. If you
foresee problems with some structure you use the existing alternatives that
do not have it, right?
Paul
---
Author: "Balog Pal" <pasa@lib.hu>
Date: 2000/09/22 Raw View
"Francis Glassborow" <francis.glassborow@ntlworld.com> wrote
> >So, in worst case the memory overhead is 4 / (4 + 4 + 4) = 33%.
>
> This is speculative. I have many small math classes whose size is 4 or 8
> bytes. These usually have their own operator new/delete managing a pool
> of memory.
Yeah, that is what we are talking about. Everyone who uses many small
items uses some allocator, and does not allocate every item with new.
Even without the byte overhead new is very expensive: it will scan a
list to find a block, do bookkeeping, and lock a mutex around all that
too... adding some extra bits to that is next to nothing.
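The class-specific pool being discussed might look roughly like this.
A minimal C++17 sketch; the type name, slot layout, and chunk size are
all illustrative, not taken from any real library:

```cpp
#include <cassert>
#include <cstddef>
#include <new>

// Small math objects take fixed-size slots from a static free list
// instead of hitting the general-purpose heap on every allocation.
struct Vec2 {
    float x, y;
    static void* operator new(std::size_t) {
        if (!free_list_) grow();
        Slot* s = free_list_;          // pop a slot off the free list
        free_list_ = s->next;
        return s;
    }
    static void operator delete(void* p) noexcept {
        Slot* s = static_cast<Slot*>(p);  // push the slot back
        s->next = free_list_;
        free_list_ = s;
    }
private:
    union Slot { Slot* next; unsigned char storage[sizeof(float) * 2]; };
    static inline Slot* free_list_ = nullptr;
    static void grow() {               // carve one chunk of fresh slots
        Slot* chunk = static_cast<Slot*>(::operator new(sizeof(Slot) * 64));
        for (std::size_t i = 0; i < 64; ++i) {
            chunk[i].next = free_list_;
            free_list_ = &chunk[i];
        }
        // A real pool would track chunks so they can be released; this
        // sketch deliberately leaks them at program exit.
    }
};
```

Allocation and deallocation are now a couple of pointer moves, and a
freed slot is handed straight back to the next `new Vec2`.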
> This is easy and efficient. If I always have to deal with a
> counter as well, that not only significantly changes the size, but
> seriously impacts on how I can manage non-array dynamic objects. The
> cost is substantial in both space and performance.
IMHO no one argues it would be good to get rid of the delete[] auto
counting. Just as well, the little overhead cost of a uniform delete
would pay off. Now you must bookkeep the types you allocate. As some
APIs may tie you to using either scalar or vector delete, using only []
everywhere is not a viable option in all cases; it sounds more like a
bad excuse.
> Note also that I can delete a single object through a base class
> pointer, I cannot do this for an array.
Yeah, to save the implementation another piece of bookkeeping. I hope the
impacts of those issues will be re-evaluated in the next revision of C++. I
see tons of memory around, and eagerly need correct programs to fill it.
Paul
---
Author: "Balog Pal" <pasa@lib.hu>
Date: 2000/09/22 Raw View
<wmm@fastdial.net> wrote
> > As I said in another reply, I fail to
> > see how one can use an array without knowing its size, thus the size
> > should always be available when deletion is needed.
>
> An example of this kind of thing is a NUL-terminated character
> string.
You can't use unknown values with new[], so that does not really apply
to the discussion.
> You obviously know how big it is when you allocate it,
Well, then what are we talking about? '-)
> Another possible error is related to program change. (This is
> obviously poor coding, but perfect programs don't have errors,
> right?:-) Consider code like:
>
> X* p = new X[20];
> int num_elements = 20;
That is not 'poor coding', it is extremely <*^%$&^$%> bad, broken and
also tasteless.
const int num_elements = 20;
X* p = new X[num_elements];
is the acceptable way.
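The same principle scales up: keep the element count in exactly one
place, so resizing the allocation can never drift out of sync with a
second copy of the number. A small sketch with a hypothetical type name:

```cpp
#include <cassert>
#include <cstddef>

// The count is defined once; both the allocation and any bounds checks
// read it from that single spot. (Copying is not handled in this
// sketch; a real type would delete or define the copy operations.)
struct IntBuffer {
    static constexpr std::size_t count = 20;  // single source of truth
    int* data = new int[count];               // allocation uses it...
    ~IntBuffer() { delete[] data; }           // ...delete[] needs no count
};
```

Changing `count` to 30 now updates every use at once, the failure mode
described in the quoted example.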
> Now you decide you need 30 elements; you change the allocation
> and overlook the need to change the number you remember.
Having written 20 in the middle of your code is bad enough. Writing it
twice is just unacceptable. Bugs like you write are just waiting for
their time.
> The point is that if you have two pieces of information to keep
> in sync
That is actually one piece of information that you arbitrarily forked.
Did I ever mention redundancy is evil?
> (the address and the number of elements for the delete),
> it offers an opportunity for bugs that isn't there if you only
> have one piece of information.
Sure, for delete[] the automated and unmissable count provided by the
compiler is good. Too bad the code I see around uses quite a low
percentage of arrays with new, so the chance of using the wrong delete
is much higher than using a bad count would be.
Forcing the use of only new[] never appeared to me as a solution until
it was mentioned here, but I doubt it could be applied to the existing
code base and habits; it would be almost like switching to 'drive on
the left' in car traffic.
> Actually, this decision was made at AT&T before the Committee
> was formed (see ARM, p. 65 [in my first-printing copy]). Some
> quotes:
>
> ==================
>
> The user is required to specify when an array is deleted. The
> reason for this is to avoid requiring the implementation to
> store information specifying whether a chunk of memory
> allocated by operator new() is an array or not. This can be a
> minor nuisance for the user, but the alternative would imply a
> difference from the C object layout.
Oh yeah, that nuisance appears not so minor in practice (especially if
you're not the programmer who didn't bother with syncing, but the one
who has to find out why the program crashes in some rare situations at
random locations), and that C compatibility is way behind us. That
argument was good back then, but today?
> This would be a serious incompatibility.
> In addition, the extra information required
> for each object allocated on the free store could easily
> > increase the space overhead significantly.
Well, that is something to consider. The info we need is exactly 1 bit
per allocated block. Wow. Certainly lots of implementations would need
more than that, possibly a whole int for all blocks. Is that really
'considerable' today? Not back in time, I mean today literally. I
believe allocators are in the standard library precisely to suppress
overhead for small blocks, and they're used even now where small
allocations are expected. And in the other cases the programmer just
does not care whether the heap manager overhead is 4 or 6 or 10 bytes.
> The alternative of
> making an array into a proper self-describing object was also
> rejected for C compatibility reasons...
C compatibility was not considered in other cases (which I think is
generally good).
I think half-assed solutions tend to be bad; it's better to make up
one's mind to be compatible or to drop compatibility, then walk the
proper way, gaining all the benefits along with paying the price.
Paul
---
[ comp.std.c++ is moderated. To submit articles, try just posting with ]
[ your news-reader. If that fails, use mailto:std-c++@ncar.ucar.edu ]
[ --- Please see the FAQ before posting. --- ]
[ FAQ: http://reality.sgi.com/austern_mti/std-c++/faq.html ]
Author: Francis Glassborow <francis.glassborow@ntlworld.com>
Date: 2000/09/22 Raw View
In article <003c01c024ba$45f958e0$aa0e38c3@bpnt>, Balog Pal
<pasa@lib.hu> writes
>Yeah, that is what we are talking about. Everyone who uses many small
>items uses some allocator, and does not allocate every item with new.
>Even without the byte overhead new is very expensive: it will scan a
>list to find a block, do bookkeeping, and lock a mutex around all that
>too... adding some extra bits to that is next to nothing.
Of course I allocate every dynamic element with new, it just calls a
user defined operator new. As it stands I can happily do that, and it is
not expensive, or nothing like as expensive as having to manage every
object as an array.
I think the chance of the Standards Committees revisiting the issue are
as close to zero as makes no difference. If you do not like new and
delete, use new[] and delete[].
Francis Glassborow Association of C & C++ Users
64 Southfield Rd
Oxford OX4 1PA +44(0)1865 246490
All opinions are mine and do not represent those of any organisation
---
Author: wmm@fastdial.net
Date: 2000/09/22 Raw View
In article <003e01c024ba$49ce55b0$aa0e38c3@bpnt>,
"Balog Pal" <pasa@lib.hu> wrote:
> <wmm@fastdial.net> wrote
>
> > > As I said in another reply, I fail to
> > > see how one can use an array without knowing its size, thus the
> > > size should always be available when deletion is needed.
> >
> > An example of this kind of thing is a NUL-terminated character
> > string.
>
> You can't use unknown values with new[], so that does not really
> apply to the discussion.
>
> > You obviously know how big it is when you allocate it,
>
> Well, then what we're talking about? '-)
We're talking about allocating a string of 41 bytes, filling
in 40 of them with data and the 41st with 0, and then just
passing around the pointer to the first byte. The length is
known at allocation time, but the deletion may be in a
completely different place where the length is not known
(unless you waste the time to scan the string).
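That usage pattern can be sketched in a few lines. The function names
are illustrative; the point is that the length exists only at the
allocation site, while the deleting code sees a bare char*:

```cpp
#include <cassert>
#include <cstring>

// The length is known here...
char* copy_substring(const char* src, std::size_t len) {
    char* s = new char[len + 1];
    std::memcpy(s, src, len);
    s[len] = '\0';    // ...and the terminator takes over from here on
    return s;
}

// ...and completely unknown here.
std::size_t consume_and_free(char* s) {
    std::size_t n = std::strlen(s);  // rediscovered only by scanning
    delete[] s;                      // no count needed from the caller
    return n;
}
```

Requiring a count at deletion would force every holder of the pointer
to carry the length along, exactly the burden the ARM describes.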
> > Another possible error is related to program change. (This is
> > obviously poor coding, but perfect programs don't have errors,
> > right?:-) Consider code like:
> >
> > X* p = new X[20];
> > int num_elements = 20;
>
> That is not 'poor coding', it is extremely <*^%$&^$%> bad, broken
> and also tasteless.
>
> const int num_elements = 20;
> X* p = new X[num_elements];
>
> is the acceptable way.
Yes, of course. But have you never seen it in someone else's
code? I certainly have. The point was that there will always
be poor coders, and not allowing them to make that particular
error seems like a good thing to me. Besides, I was responding
to a question asking what kinds of errors might result from
requiring the programmer to keep track of the number of
elements allocated; this is such an error.
> > Now you decide you need 30 elements; you change the allocation
> > and overlook the need to change the number you remember.
>
> Having written 20 in the middle of your code is bad enough. Writing
> it twice is just unacceptable. Bugs like you write are just waiting
> for their time.
A variation on this, suggested by your complaint (and also one
I have seen), occurs when the size for the allocation is given
by a constant in a header file. Often such header files have
a large number of such constants, sometimes with cryptic names.
I have seen code that does "p = new X[ELEMENT_COUNT];" in one
place and "delete [MAX_ELEMENTS] p;" elsewhere.
Again, nearly all bugs can be avoided by smart people and good
programming practices -- but sometimes those are in short
supply.
> > The point is that if you have two pieces of information to keep
> > in sync
>
> That is actually one piece of information you arbitrarily forked.
No, it wasn't I who forked it; that's been the subject of this
whole discussion -- keeping both the count and the pointer, as
early versions of C++ required.
> Did I ever
> mention redundancy is evil?
The "multiple copy" problem is well-known.
> > (the address and the number of elements for the delete),
> > it offers an opportunity for bugs that isn't there if you only
> > have one piece of information.
>
> Sure, for delete[] the automated and unmissable count provided by the
> compiler is good. Too bad the code I see around uses quite a low
> percentage of arrays with new, so the chance of using the wrong delete
> is much higher than using a bad count would be.
> Forcing the use of only new[] never appeared to me as a solution until
> it was mentioned here, but I doubt it could be applied to the existing
> code base and habits; it would be almost like switching to 'drive on
> the left' in car traffic.
>
> > Actually, this decision was made at AT&T before the Committee
> > was formed (see ARM, p. 65 [in my first-printing copy]). Some
> > quotes:
> >
> > ==================
> >
> > The user is required to specify when an array is deleted. The
> > reason for this is to avoid requiring the implementation to
> > store information specifying whether a chunk of memory
> > allocated by operator new() is an array or not. This can be a
> > minor nuisance for the user, but the alternative would imply a
> > difference from the C object layout.
>
> Oh yeah, that nuisance appears not so minor in practice (especially
> if you're not the programmer who didn't bother with syncing, but the
> one who has to find out why the program crashes in some rare
> situations at random locations), and that C compatibility is way
> behind us. That argument was good back then, but today?
C compatibility may not be important to you, but it is to a
lot of people. Besides, we're talking about a decision that
was made then -- why shouldn't we talk about the reasons that
went into the decision then? And since the decision was made,
there is lots of code written on the basis of that decision.
C++ has a history, for better or for worse, that can't be
easily thrown away. If you want to start fresh, I believe
that Microsoft is in the process of standardizing C#.
> > This would be a serious incompatibility.
> > In addition, the extra information required
> > for each object allocated on the free store could easily
> > increase the space overhead significantly.
>
> Well, that is something to consider. The info we need is exactly 1
> bit per allocated block. Wow. Certainly lots of implementations would
> need more than that, possibly a whole int for all blocks. Is that
> really 'considerable' today? Not back in time, I mean today literally.
> I believe allocators are in the standard library precisely to suppress
> overhead for small blocks, and they're used even now where small
> allocations are expected. And in the other cases the programmer just
> does not care whether the heap manager overhead is 4 or 6 or 10 bytes.
Four or eight (or even sixteen, on some architectures) extra
bytes per allocation may not matter to you, but even with
modern huge memories it adds up over the millions of
allocations some applications perform. I've seen some
applications legitimately overflow several gigs of swap
with allocated data; folks who write and use those programs
aren't going to be quite so cavalier about an extra 16 bytes
per allocation.
--
William M. Miller, wmm@fastdial.net
Vignette Corporation (www.vignette.com)
Sent via Deja.com http://www.deja.com/
Before you buy.
---
Author: wmm@fastdial.net
Date: 2000/09/23 Raw View
In article <003d01c024ba$483c9220$aa0e38c3@bpnt>,
"Balog Pal" <pasa@lib.hu> wrote:
> "Michiel Salters" <salters@lucent.com> wrote
>
> > > Maybe I'm really a strange programmer, but I have *never*
> > > allocated an array and didn't track its size. What is the point
> > > of having an array if you don't know how many items you have got?
> > > Then again, that could explain all these buffer overflow attacks
> > > we're hearing about...
> >
> > > IOW, if you are unable to provide the number of elements at
> > > deletion, just how did you use your array?
> >
> > Well, a common counterexample is a C string allocated to fit.
>
> People, be serious, please. An array is an array, and a string is a
> different thing.
I'm perfectly serious. A NUL-terminated character string is
just one (very common) example of using a distinguished value
to mark the end of an array. Another one is an array of
pointers, where the last element is NULL.
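That second terminator convention, familiar from argv and envp, can be
shown in a few lines (function name is illustrative):

```cpp
#include <cassert>

// An array of pointers whose last element is a null pointer can be
// walked with no stored count at all: the sentinel bounds the loop.
int count_entries(const char* const* v) {
    int n = 0;
    while (v[n] != nullptr) ++n;
    return n;
}
```

Nothing but the bare pointer is passed around, yet every consumer can
still find the end of the array.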
> > Say I've
> > determined a length L substring somewhere I'd like to copy. Before
> > std::string I'd allocate L+1 bytes, copy, 0-terminate it and then
> > forget all about L.
>
> Well, who said you must forget L+1? You knew it, so you can keep it.
> And if you plan to mess with the buffer, you likely need checks not to
> exceed the original buffer.
The typical use for terminated uncounted arrays, like C strings
and an array of pointers, is to create them, fill them in, and
then treat them as read-only until you're finished with them
and delete them. The distinguished value at the end of the
array makes it easy to iterate over the members and not fall
off. Why should you have to remember the length? Typically
that means using a "{length, ptr}" data structure. Compilers
usually generate worse code to pass such a thing into or
out of a function than a bare pointer, and your source code
that references the array is cluttered up with the member
selection syntax instead of just the pointer name.
You're quite right in saying that if you expect to change
the contents of the array you need to keep track of the
maximum size. But there are many cases in which the length
of the array is only _needed_ to allocate and initialize it;
after that, the only reason to carry it around would be if the
language required you to supply it in order to delete the
array, which happily C++ does not require any more.
--
William M. Miller, wmm@fastdial.net
Vignette Corporation (www.vignette.com)
---
Author: James Kuyper <kuyper@wizard.net>
Date: 2000/09/23 Raw View
Balog Pal wrote:
>
> <wmm@fastdial.net> wrote
...
> > An example of this kind of thing is a NUL-terminated character
> > string.
>
> You can't use unknown values with new[], so that does not really apply in
> the discussion.
Why not? As wmm said:
> > You obviously know how big it is when you allocate it,
>
> Well, then what we're talking about? '-)
Whether or not to continue keeping track of it after it's been
allocated. In many contexts, there's no need to.
> > This would be a serious incompatibility.
> > In addition, the extra information required
> > for each object allocated on the free store could easily
> > increase the space overhead significantly.
>
> Well, that is something to consider. The info we need is exactly 1 bit per
> allocated block. Wow. Certainly lots of implementations would need more than
Do you know a convenient efficient way to add only 1 bit to size of an
allocated object? The absolute minimum that's consistent with the
standard is 1 byte. On any machine with alignment issues, the minimum
number of bytes added is equal to the alignment of the pointer.
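That rounding can be made concrete. A sketch with an illustrative
function name: a "1 bit" header stored in front of each block has to be
padded out to the allocation's alignment, so the real per-block cost is
a whole alignment unit.

```cpp
#include <cassert>
#include <cstddef>

// Round a header size up to the strictest fundamental alignment, the
// minimum padding a general-purpose allocator must respect.
std::size_t padded_header_size(std::size_t header_bytes) {
    const std::size_t a = alignof(std::max_align_t);
    return (header_bytes + a - 1) / a * a;  // round up to a multiple of a
}
```

On a typical machine `alignof(std::max_align_t)` is 8 or 16, so the
single bit costs 8 or 16 bytes per allocation.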
---
Author: wmm@fastdial.net
Date: 2000/09/23 Raw View
In article <39CA6C85.182E98EA@artquest.net>,
Pierre Baillargeon <pb@artquest.net> wrote:
> >
> > ==================
> >
> > The user is required to specify when an array is deleted. The
> > reason for this is to avoid requiring the implementation to
> > store information specifying whether a chunk of memory
> > allocated by operator new() is an array or not. This can be a
> > minor nuisance for the user, but the alternative would imply a
> > difference from the C object layout. This would be a serious
> > incompatibility.
>
> I do not understand the point about the object layout, maybe you could
> shed more light? As far as I know, the information is kept outside of
> the object in some invisible implementation specific area. Furthermore,
> C did not have a new operator, so I fail to see where compatibility
> comes into play.
I don't know what Bjarne had in mind here; I quoted the ARM
to describe the thinking of the person who actually made the
decision at a time fairly close to when the decision was
made, but anything beyond the actual words he wrote (at least
on this particular subject, which I don't recall hearing him
discuss elsewhere) is just pure speculation on my part. I'm
guessing that he may have been thinking in terms of
self-describing pointers, where a pointer is not just an
address but is really a struct containing an address and
descriptor information, like scalar vs array and the number
of elements in the array. Such pointers would obviously
not permit interoperation with C code (such as the
operating system) that expected bare-address pointers, that
assumed pointers inside structs occupy only the number of
bytes required to represent an address, etc. (Whether or
not C has a "new" operator, it's still desirable to pass
data and pointers into and out of C code.)
Obviously it's possible to use some sort of associative
array to keep track of such information apart from the
program's data storage -- I'm guessing that's what you
were referring to -- but that's a fair amount of overhead
(which is why most or all implementations these days keep
track of the number of elements and such in storage that
immediately precedes the object's address).
> > In addition, the extra information required
> > for each object allocated on the free store could easily
> > increase the space overhead significantly. The alternative of
> > making an array into a proper self-describing object was also
> > rejected for C compatibility reasons...
>
> I fail to buy the "significantly increased space overhead" argument. A
> single bit being significant overhead? That sounds like an hyperbole:
> we're talking about a single bit here. Due to other requirements, all
> implementations already have some form of book-keeping done which
> provides space to store that bit either for free (using low-order bits
> in hidden pointers or memory chunk size information) or at little cost.
> In pooled implementations, the bit can even be shared among dozens of
> allocations. Same thing for bitmap based schemes.
I think he was referring to keeping the information with
either the pointer or the allocated memory; alignment
requirements would increase your "one bit" to between two
and sixteen bytes, depending on the machine architecture.
(Remember that in many C++ implementations in those days,
and even some today, the actual memory allocation mechanism
is the same "malloc" used by C; the C++ implementation
doesn't have access to those bits you're talking about
using.)
> > Earlier definitions of C++ required users to specify the
> > number of elements in the array being deleted... This led to
> > clumsy code and errors, so that burden was shifted to the
> > implementations.
>
> So if those programmers had been as error-prone when calling the
> delete operators, we would have a single one? And had they been
> diligent, we would still be passing the count? And you were arguing
> that the decision was *not* arbitrary, right? ;)
That's right, it wasn't arbitrary. At the time the decision
was made, there were several years of experience with the
requirement for programmers to remember the array size in
order to delete it. Analysis of that experience said that
people tended to make errors about the size and found it
inconvenient to carry it around when otherwise all they
needed was the pointer. In language design, those kinds of
experiences generally indicate that you didn't get something
quite right the first time around.
I think you can reasonably question whether the right change
was made, or even whether the cure is worse than the problem.
But I don't think you can legitimately say that the decision
was arbitrary. It was based on careful consideration of the
experience and alternatives.
--
William M. Miller, wmm@fastdial.net
Vignette Corporation (www.vignette.com)
[ Send an empty e-mail to c++-help@netlab.cs.rpi.edu for info ]
[ about comp.lang.c++.moderated. First time posters: do this! ]
Author: James Kuyper <kuyper@wizard.net>
Date: 2000/09/23 Raw View
Pierre Baillargeon wrote:
>
> Francis Glassborow wrote:
...
> > exactly as an old style C-string (array of char) does it, by having a
> > terminating value. I know that I can over-write an array of char up to
> > the '\0', to go further I must know the original size.
> >
>
> But then you *do* have the array size... or do you overallocate arrays
> bounded by sentinel value? That sounds like using buggy software as a
Why not? Allocate the maximum space you might need, fill with the actual
amount you do need. It's more wasteful than allocating exactly the
amount you need, but that's not always convenient to do. Even if the
array was originally allocated exactly the right size, that doesn't mean
it still is. Obviously, if you don't keep track of the size, you
shouldn't expand it, but it's perfectly safe to shrink the array.
> design criterion to me. Otherwise, I fail to see how this contradict my
> claim that you must know the size of the array to be able to use it.
You could find the size of the array by searching for the sentinel
value; but that's not the same as knowing the size. For one thing, it's
a lot slower.
Author: "Balog Pal" <pasa@lib.hu>
Date: 2000/09/23 Raw View
"Francis Glassborow" <francis.glassborow@ntlworld.com> wrote
> Of course I allocate every dynamic element with new, it just calls a
> user defined operator new.
And does that user-defined new call the original new on a 1-to-1 basis,
or do you allocate a pool, with your custom new just handing out a slot
from your pool? If the first, did you ever profile your program? On
several systems you'll get terrible results, especially in a
multithreaded build. Memory is generally cheaper than processing power
in this respect, if the items are really dynamic by nature.
Also, how many bytes of potentially wasted memory compensate for a
single bug?
> As it stands I can happily do that, and it is not expensive,
There are pretty well optimized heap managers that handle small and
large block allocations differently, as even the most basic memory
overhead may come out as 'unacceptable' when you want to allocate
single bytes.
> I think the chance of the Standards Committees revisiting the issue are
> as close to zero as makes no difference.
Too bad.
> If you do not like new and delete, use new[] and delete[].
In my personal code I don't ever use new[]. If I need a collection I
use a collection. It is also a very rare case that I use new at all,
and those rare cases are quite well guarded. But other people around
use both, and sometimes that leads to actual problems. If the argument
that passing a count to delete[] is error-prone was considered, the
similar argument about mixing up the operators should be considered
many times more seriously. And this is hardly my own strange
observation; plenty of people have run into this problem all around
the world.
Safety should always have been the number one priority, but
'efficiency' and similar concerns came out on top in so many cases.
Today's computers have tons of barely used resources, and almost no one
ever bothers to profile for performance, or in many cases even to
compile a release build. Meanwhile we have tons of bug-infested
programs around. It's high time to re-evaluate the priorities and
assign the appropriate weights.
Paul
---
[ comp.std.c++ is moderated. To submit articles, try just posting with ]
[ your news-reader. If that fails, use mailto:std-c++@ncar.ucar.edu ]
[ --- Please see the FAQ before posting. --- ]
[ FAQ: http://reality.sgi.com/austern_mti/std-c++/faq.html ]
Author: Francis Glassborow <francis.glassborow@ntlworld.com>
Date: 2000/09/21 Raw View
In article <39C6778C.20F588F2@artquest.net>, Pierre Baillargeon
<pb@artquest.net> writes
>IOW, if you are unable to provide the number of elements at deletion,
>just how did you use your array?
exactly as an old style C-string (array of char) does it, by having a
terminating value. I know that I can over-write an array of char up to
the '\0', to go further I must know the original size.
Francis Glassborow Association of C & C++ Users
64 Southfield Rd
Oxford OX4 1PA +44(0)1865 246490
All opinions are mine and do not represent those of any organisation
Author: Francis Glassborow <francis.glassborow@ntlworld.com>
Date: 2000/09/21 Raw View
In article <8q5opf$486$1@nnrp1.deja.com>, rado42 <rado42@my-deja.com>
writes
>So, in worst case the memory overhead is 4 / (4 + 4 + 4) = 33%.
This is speculative. I have many small math classes whose size is 4 or 8
bytes. These usually have their own operator new/delete managing a pool
of memory. This is easy and efficient. If I always have to deal with a
counter as well, that not only significantly changes the size, but
seriously impacts on how I can manage non-array dynamic objects. The
cost is substantial in both space and performance.
Note also that I can delete a single object through a base class
pointer, I cannot do this for an array.
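The kind of per-class pool Francis describes might be sketched as below. This is a minimal, hypothetical illustration (the class name, layout, and free-list scheme are mine, not his): a small math class whose operator new/delete recycle fixed-size nodes, with no per-object element count anywhere.

```cpp
#include <cstddef>
#include <new>

// Hypothetical 16-byte math class with its own fixed-size pool, as a
// sketch of the class-specific operator new/delete described above.
class Vec2 {
public:
    Vec2(double x, double y) : x_(x), y_(y) {}
    double x() const { return x_; }

    static void* operator new(std::size_t sz) {
        // Pop a recycled node off the free list; fall back to the heap.
        if (free_list_) {
            void* p = free_list_;
            free_list_ = free_list_->next;
            return p;
        }
        return ::operator new(sz);
    }
    static void operator delete(void* p, std::size_t) {
        // Push the node back onto the free list; no count is stored.
        Node* n = static_cast<Node*>(p);
        n->next = free_list_;
        free_list_ = n;
    }

private:
    struct Node { Node* next; };
    static Node* free_list_;
    double x_, y_;
};
Vec2::Node* Vec2::free_list_ = 0;
```

If new[] were also required to record an element count, each such node would grow and the simple uniform-size recycling above would no longer work unchanged, which is the cost being objected to.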
Francis Glassborow Association of C & C++ Users
64 Southfield Rd
Oxford OX4 1PA +44(0)1865 246490
All opinions are mine and do not represent those of any organisation
Author: wmm@fastdial.net
Date: 2000/09/21 Raw View
In article <39C772F9.BA6A30C2@artquest.net>,
Pierre Baillargeon <pb@artquest.net> wrote:
> Steve Clamage wrote:
> >
> > Just because you don't agree with a decision doesn't mean it was
> > arbitrary. I don't know of any decision being based on a coin-toss,
> > lottery, or other arbitrary decision mechanism. All decisions I
> > know about were reached by evaluating alternatives and voting.
> >
>
> I did not mean arbitrary in that sense, but I do find it interesting that
> you would think that. Arbitrary means to me that the arguments put
> forward are not very convincing.
Which means that your definition of "arbitrary" is fundamentally
subjective -- different people can find a given argument
persuasive or not, depending on their background, perspective,
etc. I'd be pretty willing to bet that, for every decision you
categorize as "arbitrary," you could find a number of people on
the Committee who _did_ find the arguments "convincing."
> As I said in another reply, I fail to
> see how one can use an array without knowing its size, thus the size
> should always be available when deletion is needed.
An example of this kind of thing is a NUL-terminated character
string. You obviously know how big it is when you allocate it,
but the use of a distinguished value as a terminator means that
you don't have to carry around the length throughout the
processing. If it's essentially read-only until it's deleted,
and if it's easy to tell when you're done when you're iterating
through the elements, it's just extra baggage to have to keep
track of the number of elements after the array is allocated
and initialized.
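The sentinel idea above can be shown in a few lines. This is only a sketch of the technique (the function name is mine): the length is recovered from the terminator alone, so nothing else ever carries it around.

```cpp
#include <cstddef>

// Recover the length of a NUL-terminated array from its sentinel;
// no separate count is stored or passed anywhere.
std::size_t terminated_length(const char* s) {
    const char* p = s;
    while (*p != '\0') ++p;   // scan to the distinguished value
    return static_cast<std::size_t>(p - s);
}
```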
> I've read that the original way to delete an array passed the count but
> it was found to be "error-prone", without ever saying what errors people
> were doing. The only error I could think of would be to pass a "fill
> count" instead of a "capacity count". Furthermore, it is trivial to
> write a debug version that does keep the count and check that the
> correct one is passed in, finding all errors in a snap.
Another possible error is related to program change. (This is
obviously poor coding, but perfect programs don't have errors,
right?:-) Consider code like:
X* p = new X[20];
int num_elements = 20;
Now you decide you need 30 elements; you change the allocation
and overlook the need to change the number you remember.
The point is that if you have two pieces of information to keep
in sync (the address and the number of elements for the delete),
it offers an opportunity for bugs that isn't there if you only
have one piece of information.
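One standard defence against that drift is to keep the two pieces of information in a single place. The struct below is a hypothetical sketch (name and layout are mine): the count is set from the same expression that sizes the allocation, so changing 20 to 30 in one spot cannot desynchronize them.

```cpp
#include <cstddef>

// A single source of truth: the count is recorded at the same point
// the allocation is sized, so the two cannot drift apart.
struct IntBuffer {
    explicit IntBuffer(std::size_t n) : data(new int[n]), count(n) {}
    ~IntBuffer() { delete[] data; }
    int* data;
    std::size_t count;
    // (A real version would also suppress copying to avoid double delete.)
};
```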
> The paradoxical thing is, just in case any reader didn't read my original
> comments, I don't mind that the array new operator keeps the count for
> me. Quite the contrary, I would prefer having a single new operator! It
> is just that it seems that opposing arguments prevailed in each case:
>
> - Keeping count for array of classes was not seen as "overhead", and
> passing it in was seen as "error prone", even though detecting errors is
> easy.
>
> - Having two operator-delete was necessary to avoid "overhead", and
> calling the wrong one was seen as "easily detectable".
>
> That, to me, is the arbitrary bit. Or, using your definition, should I
> say that the composition of the committee that led to the voting was
> arbitrary? Of course, this is my arbitrary opinion.
Actually, this decision was made at AT&T before the Committee
was formed (see ARM, p. 65 [in my first-printing copy]). Some
quotes:
==================
The user is required to specify when an array is deleted. The
reason for this is to avoid requiring the implementation to
store information specifying whether a chunk of memory
allocated by operator new() is an array or not. This can be a
minor nuisance for the user, but the alternative would imply a
difference from the C object layout. This would be a serious
incompatibility. In addition, the extra information required
for each object allocated on the free store could easily
increase the space overhead significantly. The alternative of
making an array into a proper self-describing object was also
rejected for C compatibility reasons...
Earlier definitions of C++ required users to specify the
number of elements in the array being deleted... This led to
clumsy code and errors, so that burden was shifted to the
implementations.
==========================
--
William M. Miller, wmm@fastdial.net
Vignette Corporation (www.vignette.com)
Sent via Deja.com http://www.deja.com/
Before you buy.
Author: llewelly.@@edevnull.dot.com
Date: 2000/09/21 Raw View
Pierre Baillargeon <pb@artquest.net> writes:
[snip]
> Maybe I'm really a strange programmer, but I have *never* allocated an
> array and didn't track its size. What is the point of having an array if
> you don't know how many items you have got? Then again, that could
> explain all these buffer overflow attacks we're hearing about...
>
> What's more, most of my arrays are buried inside std::vector, which does
> track the size of the array, automatically, and I've never felt any pain
> over it.
>
> IOW, if you are unable to provide the number of elements at deletion,
> just how did you use your array?
[snip]
The original decision to not require the size of the array to be
passed to delete[] was probably made around 1989 or so; relatively
early in C++'s history.
Wrapping nearly every pointer(0) in a class is reflex for most people on
this list, but I doubt it was a widespread technique at the
time. (With the exception of these newsgroups, I do not think it is a
widespread technique *now*.)
I agree that if every pointer to a dynamic array is wrapped, having
new[] and delete[] track the size is superfluous - unnecessary
overhead, because the size is already tracked elsewhere. However,
even today there are many programmers (and worse, many widely used
APIs (shame on them!)) that do not wrap pointers to dynamic arrays.
Like many of the design decisions of C++, it is best considered in the
context of the time it was made.
Notes:
(0) I include use of vector<>, etc, as a way to 'wrap a
pointer to a dynamic array.', even though vector<> does not
necessarily use new[]. In most cases, vector<> is the best way
to wrap a dynamic array.
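In the footnote's terms, wrapping a dynamic array in vector<> might look like the sketch below (the function is a made-up example, not from the thread): the size travels with the data, and neither new[] nor delete[] appears in user code.

```cpp
#include <cstddef>
#include <vector>

// vector<> as the wrapper: the element count lives inside the object,
// and deallocation is automatic, so delete[] never appears here.
std::vector<int> make_squares(std::size_t n) {
    std::vector<int> v(n);
    for (std::size_t i = 0; i < n; ++i)
        v[i] = static_cast<int>(i * i);
    return v;
}
```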
Author: Francis Glassborow <francis.glassborow@ntlworld.com>
Date: 2000/09/18 Raw View
In article <39C23265.25097C8A@artquest.net>, Pierre Baillargeon
<pb@artquest.net> writes
>I just wish that when people ask why some borderline feature has been
>designed like it is, committee members admit that while some
>justifications can be given, many decisions ended up being simply
>arbitrary.
We often do, this was not one of those times. Exactly how do you propose
that the programmer track the size of a dynamically created array in a
simple way? And anything else will be error prone.
We were well aware that, if you really cared you could decide to use
only arrays (of one if necessary) and pay the performance cost. But the
way it is allows you a choice. This was not, IMO, a borderline decision
but one made on clearly stated general principles (you do not pay for
what you do not want).
Francis Glassborow Association of C & C++ Users
64 Southfield Rd
Oxford OX4 1PA +44(0)1865 246490
All opinions are mine and do not represent those of any organisation
Author: Steve Clamage <stephen.clamage@sun.com>
Date: 19 Sep 00 04:07:30 GMT Raw View
Pierre Baillargeon wrote:
>
> Stephen Clamage wrote:
> >
> > On 10 Sep 2000 11:10:47 -0400, Pierre Baillargeon <pb@artquest.net>
> > wrote:
> >
> > >I wonder then why the element count was not passed in by the caller as
> > >well. Isn't it also an "overhead", and that some people may not want to
> > >pay for it?
> >
> > As explained in the ARM and in D&E, keeping track of the element count
> > may be difficult for the programmer, and is quite error prone.
> > Keeping track of whether you allocated an array or a single object
> > normally is not a problem, and if it is, you can choose always to
> > allocate an array (possibly of one element). The user interface issues
> > are not comparable.
>
> To be honest, I very well knew all that. But it just bothers me when I
> read the justification for some of the decision in the standard. IMO,
> this one falls into the bag of arbitrary decisions: keeping the count
> was considered "error prone" (even though it is common practice in C
> with roll-your-own dynamic arrays),
I don't understand your argument. In C, you have only malloc and free.
When you use free, you don't need to know the element count. What is
the comparable C issue that leads you to think that requiring an element
count for delete[] imposes no extra inconvenience for the C++ programmer?
>
> I just wish that when people ask why some borderline feature has been
> designed like it is, committee members admit that while some
> justifications can be given, many decisions ended up being simply
> arbitrary.
You'll have to be more specific in this complaint. It is quite common
for explanations in the ARM, C++PL, D&E -- and by individual committee
members -- to describe decisions as trade-offs or judgement calls.
Just because you don't agree with a decision doesn't mean it was
arbitrary. I don't know of any decision being based on a coin-toss,
lottery, or other arbitrary decision mechanism. All decisions I
know about were reached by evaluating alternatives and voting.
--
Steve Clamage, stephen.clamage@sun.com
Author: Pierre Baillargeon <pb@artquest.net>
Date: 2000/09/19 Raw View
Francis Glassborow wrote:
>
> In article <39C23265.25097C8A@artquest.net>, Pierre Baillargeon
> <pb@artquest.net> writes
> >I just wish that when people ask why some borderline feature has been
> >designed like it is, committee members admit that while some
> >justifications can be given, many decisions ended up being simply
> >arbitrary.
>
> We often do, this was not one of those times. Exactly how do you propose
> that the programmer track the size of a dynamically created array in a
> simple way? And anything else will be error prone.
Maybe I'm really a strange programmer, but I have *never* allocated an
array and didn't track its size. What is the point of having an array if
you don't know how many items you have got? Then again, that could
explain all these buffer overflow attacks we're hearing about...
What's more, most of my arrays are buried inside std::vector, which does
track the size of the array, automatically, and I've never felt any pain
over it.
IOW, if you are unable to provide the number of elements at deletion,
just how did you use your array?
Author: Pierre Baillargeon <pb@artquest.net>
Date: 2000/09/19 Raw View
Steve Clamage wrote:
>
> Just because you don't agree with a decision doesn't mean it was
> arbitrary. I don't know of any decision being based on a coin-toss,
> lottery, or other arbitrary decision mechanism. All decisions I
> know about were reached by evaluating alternatives and voting.
>
I did not mean arbitrary in that sense, but I do find it interesting that
you would think that. Arbitrary means to me that the arguments put
forward are not very convincing. As I said in another reply, I fail to
see how one can use an array without knowing its size, thus the size
should always be available when deletion is needed.
I've read that the original way to delete an array passed the count but
it was found to be "error-prone", without ever saying what errors people
were doing. The only error I could think of would be to pass a "fill
count" instead of a "capacity count". Furthermore, it is trivial to
write a debug version that does keep the count and check that the
correct one is passed in, finding all errors in a snap.
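The debug version Pierre has in mind could be sketched roughly as follows. Everything here is hypothetical (helper names, the int-only specialization, the global map); it is not a feature of any compiler, just the bookkeeping idea: record the count at allocation and verify the caller's count at deletion.

```cpp
#include <cstddef>
#include <map>

// Hypothetical debug-mode helpers: remember each array's element count
// at allocation, and verify the count the caller claims at deletion.
static std::map<void*, std::size_t> g_counts;

int* debug_new_array(std::size_t n) {
    int* p = new int[n];
    g_counts[p] = n;
    return p;
}

// Returns false (instead of freeing) when the claimed count is wrong,
// catching the "fill count vs. capacity count" class of error.
bool debug_delete_array(int* p, std::size_t claimed) {
    std::map<void*, std::size_t>::iterator it = g_counts.find(p);
    if (it == g_counts.end() || it->second != claimed)
        return false;
    g_counts.erase(it);
    delete[] p;
    return true;
}
```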
The paradoxical thing is, just in case any reader didn't read my original
comments, I don't mind that the array new operator keeps the count for
me. Quite the contrary, I would prefer having a single new operator! It
is just that it seems that opposing arguments prevailed in each case:
- Keeping count for array of classes was not seen as "overhead", and
passing it in was seen as "error prone", even though detecting errors is
easy.
- Having two operator-delete was necessary to avoid "overhead", and
calling the wrong one was seen as "easily detectable".
That, to me, is the arbitrary bit. Or, using your definition, should I
say that the composition of the committee that led to the voting was
arbitrary? Of course, this is my arbitrary opinion.
Author: rado42 <rado42@my-deja.com>
Date: 2000/09/19 Raw View
[snip]
I didn't see anybody mentioning the 'practical' consideration:
How much are you actually paying for new[] instead of new?
The answer, of course, depends on the amount of additional
memory required for the array size (typically 4 bytes for a
32-bit compiler), compared to the 'rest' of the memory
being used (typically 4 bytes for the pointer itself + sizeof
the item[s] you are allocating, most probably rounded up in at
least 4-byte increments). I will also mention here the overhead
that a typical memory manager has of its own - at least the size
of the block being allocated, and typically even more - this adds
another 4 bytes.
So, in the worst case the memory overhead is 4 / (4 + 4 + 4) = 33%.
Even this looks acceptable to me. OTOH, I believe that
the 'average' size of a class with a non-trivial destructor
is much bigger - e.g. 30-40 bytes.
Comparing the 4 bytes of overhead to 40 bytes works out to
an overhead of less than 10%. I think that this price is more than
acceptable, compared to the problems that can arise due to
inappropriate misuse of delete and delete[].
Sure, one can always use new[] and delete[], but this is somewhat
uncomfortable. Things would have been much simpler if there
was only one pair of constructs.
Unfortunately, it's too late for any changes. And, of course,
it's better to use STL containers than arrays anyway ;-)
Radoslav Getov
Author: comeau@panix.com (Greg Comeau)
Date: 2000/09/20 Raw View
In article <39C772F9.BA6A30C2@artquest.net>,
Pierre Baillargeon <pb@artquest.net> wrote:
>Arbitrary means to me that the arguments put
>forward are not very convincing. As I said in another reply, I fail to
>see how one can use an array without knowing its size, thus the size
>should always be available when deletion is needed.
Well, in that case, what _you_ are saying is arbitrary.
Have you never used a C-like string? Then I bet you've
used an array w/o knowing its size. Many times.
I have. I've also used other types of arrays w/o knowing
their size.
>I've read that the original way to delete an array passed the count but
>it was found to be "error-prone", without ever saying what errors people
>were doing. The only error I could think of would be to pass a "fill
>count" instead of a "capacity count". Furthermore, it is trivial to
>write a debug version that does keep the count and check that the
>correct one is passed in, finding all errors in a snap.
Even if that was the only error, the bottom line is that housekeeping
needs to be done, somewhere. It can be automated by the implementation,
or you can do it every single lousy time by hand.
>The paradoxical thing is, just in case any reader didn't read my orginal
>comments, I don't mind that the array new operator keeps the count for
>me. Quite the contrary, I would prefer having a single new operator! It
>is just that it seems that opposing arguments prevailed in each case:
>
>- Keeping count for array of classes was not seen as "overhead", and
>passing it in was seen as "error prone", even though detecting errors is
>easy.
>
>- Having two operator-delete was necessary to avoid "overhead", and
>calling the wrong one was seen as "easily detectable".
Oh, it's overhead for the new, that can't be denied, but it's
overhead that must be paid either way, so it's no worse overhead.
So there is no opposition or arbitrariness here.
- Greg
--
Comeau Computing / Comeau C/C++ ("so close" 4.2.44 betas starting)
TRY Comeau C++ ONLINE at http://www.comeaucomputing.com/tryitout
Email: comeau@comeaucomputing.com / WEB: http://www.comeaucomputing.com
Author: Michiel Salters <salters@lucent.com>
Date: 2000/09/20 Raw View
Pierre Baillargeon wrote:
> Maybe I'm really a strange programmer, but I have *never* allocated an
> array and didn't track its size. What is the point of having an array if
> you don't know how many items you have got? Then again, that could
> explain all these buffer overflow attacks we're hearing about...
> IOW, if you are unable to provide the number of elements at deletion,
> just how did you use your array?
Well, a common counterexample is a C string allocated to fit. Say I've
determined a length-L substring somewhere that I'd like to copy. Before
std::string I'd allocate L+1 bytes, copy, 0-terminate it, and then
forget all about L. OK, I could provide the length, but not efficiently.
In similar cases, though, I might not be able to at all, e.g. if I've
removed a small part of the string.
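Michiel's allocate-to-fit pattern can be sketched as below (the helper name is mine, for illustration): L sizes the allocation and is then discarded, since the terminator is the only size information the caller keeps.

```cpp
#include <cstring>
#include <cstddef>

// Copy a length-L substring into a buffer allocated to fit, then
// forget L: the '\0' terminator is the only size information kept.
char* copy_substring(const char* src, std::size_t L) {
    char* s = new char[L + 1];
    std::memcpy(s, src, L);
    s[L] = '\0';
    return s;   // freed later with delete[] s, with no count supplied
}
```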
Michiel Salters
Author: Pierre Baillargeon <pb@artquest.net>
Date: 2000/09/17 Raw View
Stephen Clamage wrote:
>
> On 10 Sep 2000 11:10:47 -0400, Pierre Baillargeon <pb@artquest.net>
> wrote:
>
> >I wonder then why the element count was not passed in by the caller as
> >well. Isn't it also an "overhead", and that some people may not want to
> >pay for it?
>
> As explained in the ARM and in D&E, keeping track of the element count
> may be difficult for the programmer, and is quite error prone.
> Keeping track of whether you allocated an array or a single object
> normally is not a problem, and if it is, you can choose always to
> allocate an array (possibly of one element). The user interface issues
> are not comparable.
>
To be honest, I very well knew all that. But it just bothers me when I
read the justification for some of the decisions in the standard. IMO,
this one falls into the bag of arbitrary decisions: keeping the count
was considered "error prone" (even though it is common practice in C
with roll-your-own dynamic arrays), while remembering if a particular
pointer was an array or not was not, even though that too is easy to
get wrong. Worse: some implementations don't care, so an error may go
unnoticed for years, until a new compiler is used.
I just wish that when people ask why some borderline feature has been
designed like it is, committee members would admit that while some
justifications can be given, many decisions ended up being simply
arbitrary.
IOW, instead of simply saying "here's why", saying "here are some
facts, but in the end it's arbitrary".
OTOH, I know it's a pet peeve, just ignore me.
Author: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk)
Date: 08 Sep 00 14:47:51 GMT Raw View
4 Sep 2000 08:42:28 -0400, Balog Pal (mh) <pasa@lib.hu> pisze:
> I'm sure it was considered by the committee. Why was that idea rejected?
I don't know, but using vector<> instead of raw arrays should make
delete[] unnecessary in end programs, leaving delete[] only for
implementations of vector<> and other containers.
--
__("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
\__/
^^ SYGNATURA ZASTĘPCZA
QRCZAK
Author: Francis Glassborow <francis.glassborow@ntlworld.com>
Date: 08 Sep 00 14:48:47 GMT Raw View
In article <39ada046@andromeda.datanet.hu>, Balog Pal (mh) <pasa@lib.hu>
writes
>That would keep some bugs away that can't be caught at compile time.
>And the cost is a mere sizeof(pointer) overhead per array, and a little
>slower delete. Even the syntax could be left as is, any delete working
>in similar way.
>
>I'm sure it was considered by the committee. Why was that idea rejected?
It was, and over the early history of C++ (1980s) various
alternatives were tried. The reason it is as it is, is the principle that
you should not pay for what you do not use. Anyone who wants this extra
security simply never uses new/delete but uses new[]/delete[] in all
cases (with a dimension of one where only a single object is required).
Francis Glassborow Association of C & C++ Users
64 Southfield Rd
Oxford OX4 1PA +44(0)1865 246490
All opinions are mine and do not represent those of any organisation
Author: llewelly.@@edevnull.dot.com
Date: 08 Sep 00 14:50:21 GMT Raw View
"Balog Pal (mh)" <pasa@lib.hu> writes:
> This is probably a FAQ, but I don't recall to see an answer.
>
> We all know that pointers got from new must be freed by delete and from
> new[] by delete[].
> But I think it's still quite common to mess up by accident, and explode
> the program by doing otherwise.
Funny. I almost never use new[] or delete[].
>
> Why standard C++ still keep this dangerous 'feature'? Some books say the
> free store manager keeps the number of items related to the pointer, and
> stuff like that. Is that enough reason? That same manager could store it
> elsewhere. Like keep a map of pointers and item counts. A single delete
> would look up the map, if the pointer found, proceeding to array delete,
> using the item count, and making single delete otherwise. Or use whatever
> implementation the compiler vendor likes, just behave correctly.
>
> That would keep some bugs away that can't be caught at compile time.
> And the cost is a mere sizeof(pointer) overhead per array, and a little
> slower delete. Even the syntax could be left as is, any delete working
> in a similar way.
>
> I'm sure it was considered by the committee. Why was that idea
> rejected?
This is covered in D&E, 10.5.1 (page 218 in my copy).
[snip]
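The map-based scheme quoted above (and discussed in D&E) might look roughly like this. Everything here is hypothetical — the helper names, the int-only specialization, the global map — and no real implementation works this way; it only illustrates why a single unified delete is possible but costs a lookup and an extra map entry per array.

```cpp
#include <cstddef>
#include <map>

// Sketch of the quoted proposal: new[] registers the pointer in a map,
// and a single unified "delete" consults the map to decide array vs.
// single object.
static std::map<void*, std::size_t> g_array_counts;

int* tracked_new(std::size_t n, bool is_array) {
    int* p = new int[n];            // allocate uniformly for the sketch
    if (is_array)
        g_array_counts[p] = n;      // the per-array cost being debated
    return p;
}

bool tracked_delete(int* p) {       // returns whether it was an array
    std::map<void*, std::size_t>::iterator it = g_array_counts.find(p);
    bool was_array = (it != g_array_counts.end());
    if (was_array)
        g_array_counts.erase(it);
    delete[] p;                     // uniform allocation, uniform free
    return was_array;
}
```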
Author: Ron Natalie <ron@sensor.com>
Date: 08 Sep 00 14:50:49 GMT Raw View
"Balog Pal (mh)" wrote:
>
> Why standard C++ still keep this dangerous 'feature'?
It's a relic of the "let's not make the C++ way of doing things any
more inefficient than the old C way" attitude. Stroustrup even mentions
it in passing in The C++ Programming Language (you don't even need to
dig up D&E for this):
The special destruction operator for arrays, delete[], isn't
logically necessary. However, suppose the implementation of
the freestore had been required to hold sufficient information
for every object to tell if it was an individual or an array.
The user could have been relieved of a burden, but that obligation
would have imposed significant time and space overheads on some
C++ implementations.
The "some C++ implementations" are pretty much those that implement new
as a very thin wrapper around malloc, as most early ones did.
Author: Steve Clamage <stephen.clamage@sun.com>
Date: 08 Sep 00 14:51:13 GMT Raw View
"Balog Pal (mh)" wrote:
>
> This is probably a FAQ, but I don't recall to see an answer.
>
> We all know that pointers got from new must be freed by delete and from
> new[] by delete[].
> But I think it's still quite common to mess up by accident, and explode
> the program by doing otherwise.
>
> Why standard C++ still keep this dangerous 'feature'?
C++ users, especially those doing computer graphics, requested the
ability to provide different global memory pools for arrays (presumably
large allocations) than for single objects (presumably small allocations).
It would be possible for the runtime system to keep track of which
allocator was used, whichever form of delete-expression was used. A C++
implementation could choose to do that, and allow your program to work
when you used the wrong form of delete-expression.
Providing that capability requires storing extra information for EVERY
memory allocation, even when you don't need or want the feature.
Wherever that extra data is stored, it represents a memory overhead
that some programs cannot tolerate. For that reason, the language
definition places the burden on the programmer instead of on the
runtime system.
If you are concerned about the problem, you can choose always to
allocate arrays (possibly an array of one object), and never allocate
non-array objects. That way, you explicitly accept the additional memory
overhead without imposing it on other programs.
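The opt-in policy Steve describes can be shown in a few lines. The struct and factory below are made-up examples of the pattern: every allocation is an array (possibly of one), so delete[] is always the correct form and the mismatch cannot occur.

```cpp
// Always-allocate-arrays policy: even a "single" object is an array
// of one, so every deallocation uniformly uses delete[].
struct Widget { int id; };

Widget* make_widget(int id) {
    Widget* p = new Widget[1];   // array of one, by policy
    p->id = id;
    return p;                    // every caller frees with delete[]
}
```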
--
Steve Clamage, stephen.clamage@sun.com
Author: "Bill Wade" <bill.wade@stoner.com>
Date: 08 Sep 00 14:51:59 GMT Raw View
"Balog Pal (mh)" <pasa@lib.hu> wrote in message
news:39ada046@andromeda.datanet.hu...
> This is probably a FAQ, but I don't recall to see an answer.
>
> We all know that pointers got from new must be freed by delete and from
> new[] by delete[].
> But I think it's still quite common to mess up by accident, and explode
> the program by doing otherwise.
>
> Why standard C++ still keep this dangerous 'feature'?
In general C has the attitude that you don't pay for what you don't use.
C++ picked up much of that attitude. There are exceptions. Making free(0)
legal probably added about 1% to the cost of free() in a typical 1990
compiler.
C++ could have been written so that
{
auto_ptr<Base> ptr = foo(); // foo() returns new Derived[100];
ptr.get()[23].SomeBaseFunc();
}
would "do the right thing" even when Base had no virtual functions and
Derived was an incomplete type in this context. However the space and time
penalties would have been significant.
C++ allows compilers on some platforms to generate faster code by
promising:
delete is not used with arrays.
delete[] is not used with derived types (static type != dynamic type).
operator[] is not used with derived types.
delete is used with derived types only when ~base() is virtual.
delete is not used with static or auto objects.
All of these promises make it easy to invoke undefined behavior. Defining
the behavior for any of these cases adds to compiler complexity and on many
platforms causes a performance hit. I believe the committee decided that
developers would be willing to put up with the required programming
discipline in order to get the performance benefits.
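One of the promises above — that delete through a base pointer is only used when the base destructor is virtual — looks like this in code (a made-up minimal example; the counter is only there so the behavior can be checked):

```cpp
// With a virtual destructor in Base, deleting a Derived through a
// Base* is well-defined: ~Derived runs, then ~Base.
struct Base {
    virtual ~Base() { ++destroyed; }
    static int destroyed;
};
int Base::destroyed = 0;

struct Derived : Base {
    ~Derived() { }   // Base::~Base still runs afterwards
};
```

Without the `virtual`, the same `delete` through a `Base*` would be undefined behavior — exactly the discipline-for-performance trade described above.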
Your vendor can provide non-portable support for the "wrong" delete. If
you think this feature is useful, try to convince your vendor.
Author: "Mike Wahler" <mkwahler@mkwahler.net>
Date: 08 Sep 00 14:52:29 GMT Raw View
Balog Pal (mh) <pasa@lib.hu> wrote in message
news:39ada046@andromeda.datanet.hu...
> This is probably a FAQ, but I don't recall seeing an answer.
>
> We all know that pointers got from new must be freed by delete and from
> new[] by delete[].
> But I think it's still quite common to mess up by accident, and explode the
> program by doing otherwise.
> program by doing otherwise.
>
> Why does standard C++ still keep this dangerous 'feature'? Some books say the
> free store manager keeps the number of items related to the pointer, and
> stuff like that. Is that reason enough? That same manager could store it
> elsewhere, e.g. keep a map of pointers and item counts. A single delete
> would look up the map and, if the pointer is found, proceed to an array
> delete using the item count, making a single delete otherwise. Or use
> whatever implementation the compiler vendor likes, just behave correctly.
>
> That would keep some bugs away that can't be caught at compile time. And
> the cost is a mere sizeof(pointer) overhead per array, and a slightly
> slower delete. Even the syntax could be left as is, with any delete
> working in a similar way.
>
> I'm sure it was considered by the committee. Why was that idea rejected?
Probably because efficiency was considered more important than
the convenience of the coder. :-) Anyone who has ever dealt with
the horribly inefficient string handling in BASIC would probably
gladly assume the responsibility for using language constructs correctly
in the interest of performance.
$.02
-Mike
Author: Gregory Bond <gnb@hellcat.itga.com.au>
Date: 12 Sep 00 08:07:44 GMT
Pierre Baillargeon <pb@artquest.net> writes:
> I wonder then why the element count was not passed in by the caller as
> well. Isn't it also an "overhead" that some people may not want to pay for?
I have a very dim memory that in the original C++PL 1st edition version
of the language, you actually did have to specify the number of
elements to delete[]!
(Or was that c-with-classes?)
Author: Stephen Clamage <stephen.clamage@sun.com>
Date: 12 Sep 00 08:07:52 GMT
On 10 Sep 2000 11:10:47 -0400, Pierre Baillargeon <pb@artquest.net>
wrote:
>Francis Glassborow wrote:
>>
>> In article <39ada046@andromeda.datanet.hu>, Balog Pal (mh) <pasa@lib.hu>
>> writes
>> >That would keep some bugs away that can't be caught at compile time.
>> >And the cost is a mere sizeof(pointer) overhead per array, and a little
>> >slower delete. Even the syntax could be left as is, any delete working
>> >in similar way.
>> >
>> >I'm sure it was considered by the committee. Why was that idea rejected?
>>
>> It was, and over the early history of C++ (1980s) various different
>> variations were tried. The reason it is as it is, is the principle that
>> you should not pay for what you do not use. Anyone who wants this extra
>> security simply never uses new/delete but uses new[]/delete[] in all
>> cases (with a dimension of one where only a single object is required)
>
>I wonder then why the element count was not passed in by the caller as
>well. Isn't it also an "overhead" that some people may not want to pay for?
As explained in the ARM and in D&E, keeping track of the element count
may be difficult for the programmer, and is quite error prone.
Keeping track of whether you allocated an array or a single object
normally is not a problem, and if it is, you can choose always to
allocate an array (possibly of one element). The user interface issues
are not comparable.
As for the overhead, the system needs to keep track of the number of
elements only if the type has a non-trivial destructor. For simple
types, there is no overhead.
---
Steve Clamage, stephen.clamage@sun.com
Author: Pierre Baillargeon <pb@artquest.net>
Date: 10 Sep 2000 11:10:47 -0400
Francis Glassborow wrote:
>
> In article <39ada046@andromeda.datanet.hu>, Balog Pal (mh) <pasa@lib.hu>
> writes
> >That would keep some bugs away that can't be caught at compile time.
> >And the cost is a mere sizeof(pointer) overhead per array, and a little
> >slower delete. Even the syntax could be left as is, any delete working
> >in similar way.
> >
> >I'm sure it was considered by the committee. Why was that idea rejected?
>
> It was, and over the early history of C++ (1980s) various different
> variations were tried. The reason it is as it is, is the principle that
> you should not pay for what you do not use. Anyone who wants this extra
> security simply never uses new/delete but uses new[]/delete[] in all
> cases (with a dimension of one where only a single object is required)
I wonder then why the element count was not passed in by the caller as
well. Isn't it also an "overhead" that some people may not want to pay for?
What's more, if one is worried about the cost of storing the array count
for non-arrays, one could always fall back to the malloc/free form
instead (maybe wrapping it in a template to cast the return value). Or
one could write a specialized allocator. After all, if memory footprint
is really important for the programmer, that is the only portable way of
ensuring the minimal memory usage possible.
As for the suggestion of always using the array version, it does not
solve the original problem: since both syntaxes are valid, one can still
very well call the wrong one, since the compiler accepts both.
Author: James Kuyper <kuyper@wizard.net>
Date: Sun, 10 Sep 2000 19:41:36 GMT
Pierre Baillargeon wrote:
>
> Francis Glassborow wrote:
...
> > It was, and over the early history of C++ (1980s) various different
> > variations were tried. The reason it is as it is, is the principle that
> > you should not pay for what you do not use. Anyone who wants this extra
> > security simply never uses new/delete but uses new[]/delete[] in all
> > cases (with a dimension of one where only a single object is required)
>
> I wonder then why the element count was not passed in by the caller as
> well. Isn't it also an "overhead" that some people may not want to pay for?
The difference is that it's an unavoidable overhead for arrays; someone
has to keep track of the length, whether it's the user or the
implementation.
> What's more, if one is worried about the cost of storing the array count
> for non-arrays, one could always fall back to the malloc/free form
> instead (maybe wrapping it in a template to cast the return value). Or
What advantage does that have over the non-array version of new/delete?
> one could write a specialized allocator. After all, if memory footprint
> is really important for the programmer, that is the only portable way of
> ensuring the minimal memory usage possible.
>
> As for the suggestion of always using the array version, it does not
> solve the original problem: since both syntaxes are valid, one can still
> very well call the wrong one, since the compiler accepts both.
Yes, but you can establish coding standards which prohibit the use of
the non-array version, and it's pretty easy to automatically check for
the use of that version.
By the way, my newsserver prohibits posting to multiple moderated
newsgroups. How did your message get through?
Author: Francis Glassborow <francis.glassborow@ntlworld.com>
Date: 11 Sep 00 15:39:33 GMT
In article <39B93860.A7F8C8FB@artquest.net>, Pierre Baillargeon
<pb@artquest.net> writes
>I wonder then why the element count was not passed in by the caller as
>well. Isn't it also an "overhead" that some people may not want to pay for?
I believe that was also tried experimentally.
>
>What's more, if one is worried about the cost of storing the array count
>for non-arrays, one could always fall back to the malloc/free form
>instead (maybe wrapping it in a template to cast the return value). Or
>one could write a specialized allocator. After all, if memory footprint
>is really important for the programmer, that is the only portable way of
>ensuring the minimal memory usage possible.
Frankly that is ridiculous. If you start using malloc and free, you have
to remember to use placement new and explicit dtor calls.
>
>As for the suggestion of always using the array version, it does not
>solve the original problem: since both syntaxes are valid, one can still
>very well call the wrong one, since the compiler accepts both.
Yes, but with a correct modern C++ programming style that actually
becomes fairly unlikely. delete should be done in one of two places:
1) in a dtor
2) very close to a use of new without any intervening code that can
throw.
The second choice should only be used in extremis because it is still
vulnerable to bad things when someone touches the code.
Basically, learn to write clean code.
Francis Glassborow Association of C & C++ Users
64 Southfield Rd
Oxford OX4 1PA +44(0)1865 246490
All opinions are mine and do not represent those of any organisation
Author: llewelly.@@edevnull.dot.com
Date: 11 Sep 00 16:02:08 GMT
Pierre Baillargeon <pb@artquest.net> writes:
[snip]
> I wonder then why the element count was not passed in by the caller as
> well. Isn't it also an "overhead" that some people may not want to pay for?
[snip]
That was tried, but it was found to be too error-prone. See D&E,
10.5.1 (pg 218 in my copy.)