Topic: operator new[] and very large arrays
Author: fw@deneb.enyo.de (Florian Weimer)
Date: Fri, 20 May 2005 19:22:32 GMT
* Howard Hinnant:
> The committee has looked at this issue and their reply is here:
>
> http://www.open-std.org/jtc1/sc22/wg21/docs/cwg_closed.html#256
If I read this correctly, the committee has decided that the behavior
is undefined. Is this correct?
In this case, implementations would be allowed to throw
std::bad_alloc, which is better than nothing (i.e. better than
defined behavior in which no exception may be thrown).
---
[ comp.std.c++ is moderated. To submit articles, try just posting with ]
[ your news-reader. If that fails, use mailto:std-c++@ncar.ucar.edu ]
[ --- Please see the FAQ before posting. --- ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html ]
Author: fw@deneb.enyo.de (Florian Weimer)
Date: Tue, 11 Jan 2005 05:39:20 GMT
* John Nagle:
> Florian Weimer wrote:
>
>> Consider the following program:
>> struct foo
>> {
>> char data[16];
>> };
>> foo* bar()
>> {
>> size_t size = size_t(-1) / sizeof(foo) + 2;
>> return new foo[size];
>> }
>> In at least one implementation, bar() does not raise std::bad_alloc,
>> but returns a pointer to sizeof(foo) bytes on the heap (which is
>> obviously not sufficiently large to store the whole array).
>
> This is clearly a defect of the implementation. It's not
> a standards problem.
Okay. What should the implementation do instead? Is this really
permitted by the standard?
When interpreted literally, the standard doesn't seem to give
permission to raise std::bad_alloc. This exception can be raised by
the allocation function, but that function only receives a size_t
value, and in the example above, this value looks completely harmless
(after the wrap-around, which the allocation function cannot detect).
Therefore, std::bad_alloc would have to be raised by the
new-expression itself. However, the standard does not allow this, if
I read it correctly.
---
Author: nospam@nospam.ucar.edu ("Thomas Mang")
Date: Tue, 11 Jan 2005 05:36:39 GMT
"John Nagle" <nagle@animats.com> wrote in message
news:fElEd.267$8Z1.201@newssvr14.news.prodigy.com...
> Florian Weimer wrote:
>
> > Consider the following program:
> >
> > struct foo
> > {
> > char data[16];
> > };
> >
> > foo* bar()
> > {
> > size_t size = size_t(-1) / sizeof(foo) + 2;
> > return new foo[size];
> > }
> >
> > In at least one implementation, bar() does not raise std::bad_alloc,
> > but returns a pointer to sizeof(foo) bytes on the heap (which is
> > obviously not sufficiently large to store the whole array).
>
> This is clearly a defect of the implementation. It's not
> a standards problem.
>
How must an implementation behave if the array overhead added to
sizeof(foo) * num_objects_to_create overflows std::size_t?
E.g.: char* p = new char[std::numeric_limits<std::size_t>::max()]; // UB?
char* p2 = new char[100]; // Still UB if the array overhead is very large
on a (dumb) implementation?
Thomas
---
Author: hinnant@metrowerks.com (Howard Hinnant)
Date: Tue, 11 Jan 2005 05:38:49 GMT
In article <fElEd.267$8Z1.201@newssvr14.news.prodigy.com>,
nagle@animats.com (John Nagle) wrote:
> Florian Weimer wrote:
>
> > Consider the following program:
> >
> > struct foo
> > {
> > char data[16];
> > };
> >
> > foo* bar()
> > {
> > size_t size = size_t(-1) / sizeof(foo) + 2;
> > return new foo[size];
> > }
> >
> > In at least one implementation, bar() does not raise std::bad_alloc,
> > but returns a pointer to sizeof(foo) bytes on the heap (which is
> > obviously not sufficiently large to store the whole array).
>
> This is clearly a defect of the implementation. It's not
> a standards problem.
The committee has looked at this issue and their reply is here:
http://www.open-std.org/jtc1/sc22/wg21/docs/cwg_closed.html#256
-Howard
---
Author: ron@sensor.com (Ron Natalie)
Date: Tue, 11 Jan 2005 05:38:03 GMT
Florian Weimer wrote:
>
> In at least one implementation, bar() does not raise std::bad_alloc,
> but returns a pointer to sizeof(foo) bytes on the heap (which is
> obviously not sufficiently large to store the whole array).
You're overflowing size_t inside the allocator (though I suspect you
knew that). You've exceeded an implementation limit. While 5.3.4
would presume that you can feed any garbage value into new and expect
all errors to be reflected with std::bad_alloc, the implicit behavior
of the allocator is defined by the standard, and there is a defined
overflow behavior that is occurring.
I can't get worked up over this; what would you expect to happen if
the allocator succeeded in allocating it? The behavior of an
allocation in excess of what can be represented in size_t isn't well
defined.
---
Author: nagle@animats.com (John Nagle)
Date: Wed, 12 Jan 2005 07:36:35 GMT
Howard Hinnant wrote:
>>>In at least one implementation, bar() does not raise std::bad_alloc,
>>>but returns a pointer to sizeof(foo) bytes on the heap (which is
>>>obviously not sufficiently large to store the whole array).
>
> The committee has looked at this issue and their reply is here:
>
> http://www.open-std.org/jtc1/sc22/wg21/docs/cwg_closed.html#256
I'd argue that since the implementation does the multiply,
the implementation is responsible for ensuring that the
multiply does not overflow, or at least that overflow is
detected. It's much easier to detect this once, in the
implementation of "new", than in all the places that
call "new".
Note that this is a source of exploitable security
vulnerabilities, so refusing to fix it may be
viewed as contributory negligence.
John Nagle
---
Author: ron@sensor.com (Ron Natalie)
Date: Fri, 14 Jan 2005 02:21:26 GMT
John Nagle wrote:
> I'd argue that since the implementation does the multiply,
> the implementation is responsible for insuring that the
> multiply does not overflow, or at least that overflow is
> detected.
Only in the sense that the implementation generates code to
implement language constructs; the algorithm is spelled out
in the standard.
> It's much easier to detect this once, in the
> implementation of "new", than in all the places that
> call "new".
>
The fact that something makes good sense along these lines
(or any software engineering consideration) has never been
a justification in C++ standards making.
---
Author: fw@deneb.enyo.de (Florian Weimer)
Date: Mon, 10 Jan 2005 02:17:13 GMT
Consider the following program:
    struct foo
    {
        char data[16];
    };

    foo* bar()
    {
        size_t size = size_t(-1) / sizeof(foo) + 2;
        return new foo[size];
    }
In at least one implementation, bar() does not raise std::bad_alloc,
but returns a pointer to sizeof(foo) bytes on the heap (which is
obviously not sufficiently large to store the whole array).
This violates the requirement of 5.3.4(10) (especially if there was a
user-defined version of operator new[]), but the standard does not
offer a way to signal that there is a problem. Allowing
implementations to throw std::bad_alloc in this situation seems to be
a reasonable solution.
Unfortunately, returning a pointer to a heap area which is too small
can result in a vulnerable application that permits code injection (as
opposed to a denial-of-service issue because of uncontrolled memory
consumption). Similar bugs were discovered in the Sun RPC library and
reported to be exploitable. Some vendors have fixed errors in the C
calloc function (see <http://cert.uni-stuttgart.de/advisories/calloc.php>,
which contains further references).
---
Author: nagle@animats.com (John Nagle)
Date: Mon, 10 Jan 2005 03:18:58 GMT
Florian Weimer wrote:
> Consider the following program:
>
> struct foo
> {
> char data[16];
> };
>
> foo* bar()
> {
> size_t size = size_t(-1) / sizeof(foo) + 2;
> return new foo[size];
> }
>
> In at least one implementation, bar() does not raise std::bad_alloc,
> but returns a pointer to sizeof(foo) bytes on the heap (which is
> obviously not sufficiently large to store the whole array).
This is clearly a defect of the implementation. It's not
a standards problem.
John Nagle
Animats
---