Topic: Defect in std::stable_sort / std::stable_partition?


Author: caj@cs.york.ac.uk (chris jefferson)
Date: Wed, 22 Jun 2005 14:57:54 GMT
I've been trying quite hard to implement stable_sort and
stable_partition based on the standard, and have been finding it very
hard. I've decided the major problem is that what is written in the
standard is far too imprecise to be useful at all.

The line I have the biggest problem with is:

"if enough extra memory is available, it is N log N".

All existing implementations seem to implement this as:

Given a range [first,last), try to allocate as large a buffer as
possible of type iterator_traits<iterator>::value_type, and then using
this buffer sort/partition the range.

The problem with this implementation is that the only time I've
personally run out of memory was while stable_sorting a
vector<vector<int> >. While sizeof(vector<int>) is usually very small,
when the buffer was filled with copies of my vector<int>s, memory ran
out, vector<int>'s copy constructor threw bad_alloc, and I found myself
with a half-sorted array and no way of forcing stable_sort into its
low-memory mode on purpose, so I had to write my own.

So, I have a few questions:

1) Is it valid for an implementation to catch a bad_alloc generated
during a copy or construction, recover, and then use a lower-memory
version? Or is it necessary to always stop at any exception and (after
a little tidying up) pass the exception back up to user code?
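For what it's worth, the recovery I have in mind for question 1 would look something like this (a hypothetical sketch, not taken from any real implementation; try_copy_to_buffer is a name I made up):

```cpp
#include <new>
#include <vector>

// Hypothetical sketch: copy the range into the temporary buffer inside
// a try block. If an element's copy constructor throws bad_alloc,
// destroy the partial copies and report failure, so the caller can fall
// back to the in-place N (log N)^2 algorithm instead of propagating.
template <typename Iter, typename T>
bool try_copy_to_buffer(Iter first, Iter last, T* buffer) {
    T* cur = buffer;
    try {
        for (Iter it = first; it != last; ++it, ++cur)
            ::new (static_cast<void*>(cur)) T(*it);  // copy may throw bad_alloc
    } catch (const std::bad_alloc&) {
        while (cur != buffer) (--cur)->~T();         // tidy up partial copies
        return false;                                // caller uses low-memory version
    }
    return true;                                     // buffer is ready to merge with
}
```

Whether an implementation is even allowed to swallow the exception like this is exactly what I'm asking.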

2) Assume the first thing I try to do is allocate a buffer of size
[first,last), and that fails. At that point, all the existing
implementations I've seen then try a buffer of half that size, and
continue halving down to size 1. It seems to me that if a buffer of
size [first,last) can't be allocated then we are in a low-memory
situation, and scratching around allocating other-sized buffers
probably isn't wise. Would it be standard-conforming to try allocating
only once, then give up?
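To illustrate, the halving strategy those implementations use looks roughly like this (a sketch only; acquire_buffer is a made-up name, and real libraries route this through get_temporary_buffer):

```cpp
#include <cstddef>
#include <cstdlib>
#include <utility>

// Sketch of the buffer-acquisition loop: request `count` elements'
// worth of raw memory, and halve the request on failure until an
// allocation succeeds or the size reaches zero. Returns the pointer
// and the element count actually obtained (nullptr/0 on total failure).
template <typename T>
std::pair<T*, std::ptrdiff_t> acquire_buffer(std::ptrdiff_t count) {
    for (; count > 0; count /= 2) {
        T* p = static_cast<T*>(std::malloc(count * sizeof(T)));
        if (p) return std::pair<T*, std::ptrdiff_t>(p, count);
    }
    return std::pair<T*, std::ptrdiff_t>(static_cast<T*>(0), 0);
}
```

My question is whether the `count /= 2` retries are worth anything, or whether one failed attempt should simply mean "use the in-place algorithm".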

3) Assuming bad_alloc can't be caught and dealt with inside the
algorithm (and I have a feeling it probably can't be... but perhaps
that could be allowed in a future version of the standard?), it seems
that the current behaviour is the only possible option (other than
always forcing the low-memory version for non-PODs, which probably
isn't a good idea, or having the algorithm sort/partition an array of
pointers and then swap the originals into place).
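That last alternative, sorting an array of indices and then swapping the originals into place, could be sketched like so (a hypothetical illustration, not a drop-in stable_sort; the extra memory is N indices rather than N copies of the value type):

```cpp
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <vector>

// Sketch: stable_sort an index array (cheap to copy regardless of
// sizeof(T)), then apply the resulting permutation to the original
// range cycle by cycle using swap, so no full copies of T are made.
template <typename T, typename Compare>
void stable_sort_via_indices(std::vector<T>& v, Compare comp) {
    std::vector<std::size_t> idx(v.size());
    std::iota(idx.begin(), idx.end(), std::size_t(0));
    // Equal elements keep their index order, which preserves stability.
    std::stable_sort(idx.begin(), idx.end(),
        [&](std::size_t a, std::size_t b) { return comp(v[a], v[b]); });
    // idx[i] is the source position of the element that belongs at i;
    // walk each permutation cycle, swapping elements into place.
    std::vector<bool> done(v.size(), false);
    for (std::size_t i = 0; i < v.size(); ++i) {
        if (done[i]) continue;
        std::size_t cur = i;
        while (idx[cur] != i) {
            std::swap(v[cur], v[idx[cur]]);
            done[cur] = true;
            cur = idx[cur];
        }
        done[cur] = true;
    }
}
```

Of course, this changes the observable behaviour (elements are swapped rather than copied), which is presumably why implementations don't do it by default.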

In that case it seems the standard could do with some slight rewording
to make it clearer what actually happens. I'm having difficulty coming
up with what that wording should be, however...

Thank you,

Chris

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html                       ]