Topic: Containers and library issue 69 (vector/contiguity)
Author: pdimov@mmltd.net (Peter Dimov)
Date: Tue, 8 Jan 2002 15:20:45 GMT
"Jim Barry" <jim.barry@bigfoot.com> wrote in message news:<1010406031.12502.0.nnrp-01.3e31ffea@news.demon.co.uk>...
> "Peter Dimov" <pdimov@mmltd.net> wrote:
> > A dedicated member function is more flexible than &v[0]:
>
> Quite possibly, but &v[0] works already without respecifying vector. All
> it requires is the formalisation of contiguity as delivered by the TC.
True. Strictly it's v.empty() ? 0 : &v[0], but it works.
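That guarded idiom, written out as a minimal sketch (the function name is illustrative, not from any proposal):

```cpp
#include <vector>

// Guarded form of &v[0]: returns a null pointer instead of invoking
// undefined behavior when the vector is empty.
double* buffer_ptr(std::vector<double>& v)
{
    return v.empty() ? 0 : &v[0];
}
```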
On the other hand the original thread was about a (probably
hypothetical) std::vector that has good reasons (segmented
architecture) to not be contiguous at all times.
> And I have to question the utility of the schizophrenic container that
> you describe. It seems to me that either vector should be contiguous or
> it shouldn't, not somewhere in between.
I believe it's the other way around: the utility of the "contiguity at
all times" is questionable, given a "contiguity at a specific time"
feature. Do you have an example?
--
Peter Dimov
Multi Media Ltd.
---
[ comp.std.c++ is moderated. To submit articles, try just posting with ]
[ your news-reader. If that fails, use mailto:std-c++@ncar.ucar.edu ]
[ --- Please see the FAQ before posting. --- ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html ]
Author: jk@steel.orel.ru (Eugene Karpachov)
Date: Tue, 8 Jan 2002 16:38:59 GMT
On Tue, 8 Jan 2002 06:59:25 GMT, Ken Alverson wrote:
>"Jim Barry" <jim.barry@bigfoot.com> wrote in message
>news:1010406031.12502.0.nnrp-01.3e31ffea@news.demon.co.uk...
>> "Peter Dimov" <pdimov@mmltd.net> wrote:
>> > A dedicated member function is more flexible than &v[0]:
>>
>> Quite possibly, but &v[0] works already without respecifying vector.
>
>If you need another argument for the addition of a member function, consider
>the case where (for whatever reason) the vector is replaced with a different
>container. &v.front() and &*v.begin() still compile for virtually all
>containers (though may fail due to type differences in associative
>containers), and &v[0] still compiles for all indexable containers (vector,
>deque, map, etc). Now the code will fail unpredictably at runtime.
But note that when you write &v[0] you really want a container with a
contiguous memory layout. You would not substitute deque etc. for
vector if contiguous data is what you need, so this has nothing to do
with flexibility.
Still, I agree that &v[0] is kind of ugly; furthermore, a member
function is more appropriate for a functional style of programming,
e.g. as an argument for binders, adaptors, etc.
--
jk
---
Author: "Ken Alverson" <Ken@Alverson.com>
Date: Tue, 8 Jan 2002 22:14:05 GMT
"Eugene Karpachov" <jk@steel.orel.ru> wrote in message
news:slrna3la95.1mm.jk@localhost.localdomain...
> On Tue, 8 Jan 2002 06:59:25 GMT, Ken Alverson wrote:
> >"Jim Barry" <jim.barry@bigfoot.com> wrote in message
> >news:1010406031.12502.0.nnrp-01.3e31ffea@news.demon.co.uk...
> >> "Peter Dimov" <pdimov@mmltd.net> wrote:
> >> > A dedicated member function is more flexible than &v[0]:
> >>
> >> Quite possibly, but &v[0] works already without respecifying vector.
> >
> >If you need another argument for the addition of a member function, consider
> >the case where (for whatever reason) the vector is replaced with a different
> >container. &v.front() and &*v.begin() still compile for virtually all
> >containers (though may fail due to type differences in associative
> >containers), and &v[0] still compiles for all indexable containers (vector,
> >deque, map, etc). Now the code will fail unpredictably at runtime.
>
> But note that when you provide &v[0] you are really want to provide
> container with contiguous data memory layout. You will not want to
> substitute vector with deque etc. if contiguous data is what you want,
> so here is nothing to do with flexibility.
You're assuming the programmer knows &v[0] is used in the code. If you're
working on a team project, or you're a maintenance programmer, or it just
slipped your mind, you wouldn't necessarily know that.
Ken
---
Author: "Jim Barry" <jim.barry@bigfoot.com>
Date: Wed, 9 Jan 2002 15:56:32 GMT
Peter Dimov wrote:
> On the other hand the original thread was about a (probably
> hypothetical) std::vector that has good reasons (segmented
> architecture) to not be contiguous at all times.
Yes, and my response would be that segmented architectures present a
curious and increasingly less important problem that is in any case
better solved by deque than by vector.
> I believe it's the other way around: the utility of the
> "contiguity at all times" is questionable, given a "contiguity
> at a specific time" feature. Do you have an example?
I think it runs deeper than the simple need to interface with C APIs.
The standard containers embody various trade-offs. For example,
std::list trades random-access iterators for efficient insertion
anywhere in the sequence. In contrast, std::vector takes on rather
inefficient insertion characteristics in order to provide the fastest
possible random-access iterators, generally pointers. Somewhere in
between lies std::deque.
To me, the whole point of std::vector is that it is a container based on a
dynamically allocated array. In theory there is no need ever to use
new[] and delete[] directly, because std::vector can be used instead
without loss of performance. That can no longer be true if std::vector
does not allocate contiguously.
- Jim
---
Author: "Jim Barry" <jim.barry@bigfoot.com>
Date: Mon, 7 Jan 2002 21:40:24 GMT
"Peter Dimov" <pdimov@mmltd.net> wrote:
> A dedicated member function is more flexible than &v[0]:
Quite possibly, but &v[0] works already without respecifying vector. All
it requires is the formalisation of contiguity as delivered by the TC.
And I have to question the utility of the schizophrenic container that
you describe. It seems to me that either vector should be contiguous or
it shouldn't, not somewhere in between.
- Jim
---
Author: "Ken Alverson" <Ken@Alverson.com>
Date: Tue, 8 Jan 2002 06:59:25 GMT
"Jim Barry" <jim.barry@bigfoot.com> wrote in message
news:1010406031.12502.0.nnrp-01.3e31ffea@news.demon.co.uk...
> "Peter Dimov" <pdimov@mmltd.net> wrote:
> > A dedicated member function is more flexible than &v[0]:
>
> Quite possibly, but &v[0] works already without respecifying vector.
If you need another argument for the addition of a member function, consider
the case where (for whatever reason) the vector is replaced with a different
container. &v.front() and &*v.begin() still compile for virtually all
containers (though may fail due to type differences in associative
containers), and &v[0] still compiles for all indexable containers (vector,
deque, map, etc). Now the code will fail unpredictably at runtime.
I'd prefer a compile time error.
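A sketch of how that compile-time error could be arranged today with a free function (contiguous_data is a hypothetical name, not part of any standard or proposal):

```cpp
#include <vector>

// Hypothetical helper: defined only for std::vector, so replacing the
// container with deque or map turns every call site into a compile-time
// error rather than an unpredictable runtime failure.
template<class T, class A>
T* contiguous_data(std::vector<T, A>& v)
{
    return v.empty() ? 0 : &v[0];
}
```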
Also, assuming vector<bool> is left alone, you could disallow
vector<bool>::c_arr(), while you can't disallow &vector<bool>::operator[]...
Ken
---
Author: pdimov@mmltd.net (Peter Dimov)
Date: Thu, 3 Jan 2002 19:15:09 GMT
"Jim Barry" <jim.barry@bigfoot.com> wrote in message news:<AhDY7.28526$Zg2.2944869@news11-gui.server.ntli.net>...
> Ken Alverson wrote:
> > I've always thought that if we are to assume an array layout
> > for the vector, we should have a function vector::c_arr() to
> > get the pointer to the internal array (mirroring but not
> > totally equivalent to basic_string::c_str()). &*v.begin()
> > just looks ugly and hacky to me (&v.front() seems marginally
> > better).
>
> I find &v[0] palatable enough, so I don't see any need for an additional
> member function. I would also find a guarantee of contiguity for
> basic_string extremely useful, but somehow I doubt that's on the cards.
A dedicated member function is more flexible than &v[0]:
* It may be specified to work for an empty vector.
* It may be specified to potentially cause reallocation.
* It may throw.
IOW the &v[0] interface requires that the vector elements are always
contiguous, whereas a member function interface might mean that the
vector elements are contiguous only after the member function has been
called (until the next reallocation occurs.)
--
Peter Dimov
Multi Media Ltd.
---
Author: Pete Becker <petebecker@acm.org>
Date: Thu, 27 Dec 2001 00:56:43 GMT
Ken Alverson wrote:
>
> I've always thought that if we are to assume an array layout for the vector,
> we should have a function vector::c_arr() to get the pointer to the internal
> array (mirroring but not totally equivalent to basic_string::c_str()).
That is, mirroring but fundamentally different from basic_string::c_str.
<g> basic_string is not required to use contiguous storage. c_str was
added to provide an efficient conversion to a contiguous array of
characters.
--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)
---
Author: "Ken Alverson" <Ken@Alverson.com>
Date: Thu, 27 Dec 2001 08:17:48 GMT
"Pete Becker" <petebecker@acm.org> wrote in message
news:3C2A6860.2C833307@acm.org...
> Ken Alverson wrote:
> >
> > I've always thought that if we are to assume an array layout for the vector,
> > we should have a function vector::c_arr() to get the pointer to the internal
> > array (mirroring but not totally equivalent to basic_string::c_str()).
>
> That is, mirroring but fundamentally diferent from basic_string::c_str.
> <g> basic_string is not required to use contiguous storage. c_str was
> added to provide an efficient conversion to a contiguous array of
> characters.
Yes, I was trying to say as much without expanding my parenthetical to a
full paragraph ;)
Ken
---
Author: pdimov@mmltd.net (Peter Dimov)
Date: Thu, 27 Dec 2001 17:02:28 GMT
"Ken Alverson" <Ken@Alverson.com> wrote in message news:<a04a4n$av3$1@eeyore.INS.cwru.edu>...
> I've always thought that if we are to assume an array layout for the vector,
> we should have a function vector::c_arr() to get the pointer to the internal
> array (mirroring but not totally equivalent to basic_string::c_str()).
Or std::vector<>::data():
T * std::vector<T>::data()
T const * std::vector<T>::data() const // ?
post: v.data() + i == &v[i] for every i such that 0 <= i < v.size().
> &*v.begin() just looks ugly and hacky to me (&v.front() seems marginally
> better).
And both are undefined behavior when the vector is empty.
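The postcondition can be exercised directly; a sketch, with data_ptr standing in for the proposed (and at this point hypothetical) member:

```cpp
#include <cstddef>
#include <vector>

// Stand-in for the proposed vector<>::data(); not in the 1998/2003 standard.
template<class T>
T* data_ptr(std::vector<T>& v)
{
    return v.empty() ? 0 : &v[0];
}

// Check the proposed postcondition:
// data() + i == &v[i] for every i with 0 <= i < v.size().
template<class T>
bool postcondition_holds(std::vector<T>& v)
{
    for (std::size_t i = 0; i < v.size(); ++i)
        if (data_ptr(v) + i != &v[i])
            return false;
    return true;
}
```

Note that the postcondition is vacuously satisfied by an empty vector, which is what makes a null return value legal there.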
--
Peter Dimov
Multi Media Ltd.
---
Author: "Joe Gottman" <joegottman@worldnet.att.net>
Date: Fri, 28 Dec 2001 14:43:20 CST
"Peter Dimov" <pdimov@mmltd.net> wrote in message
news:7dc3b1ea.0112270618.7502f5af@posting.google.com...
> "Ken Alverson" <Ken@Alverson.com> wrote in message
news:<a04a4n$av3$1@eeyore.INS.cwru.edu>...
> > I've always thought that if we are to assume an array layout for the vector,
> > we should have a function vector::c_arr() to get the pointer to the internal
> > array (mirroring but not totally equivalent to basic_string::c_str()).
>
> Or std::vector<>::data():
>
> T * std::vector<T>::data()
> T const * std::vector<T>::data() const // ?
>
> post: v.data() + i == &v[i] for every i such that 0 <= i < v.size().
>
> > &*v.begin() just looks ugly and hacky to me (&v.front() seems marginally
> > better).
>
> And both are undefined behavior when the vector is empty.
>
How would you have vector::data() act when the vector is empty? Would it
still be undefined, or would you return a null pointer?
---
Author: "James Kuyper Jr." <kuyper@wizard.net>
Date: Sat, 29 Dec 2001 03:48:07 CST
Joe Gottman wrote:
...
> How would you have vector::data() act when the vector is empty? Would it
> still be undefined, or would you return a null pointer?
By analogy, the specification for vector::data() should contain
wording like 21.3.6p3 for basic_string::data(): "... If size() is zero,
the member returns a non-null pointer that is copyable and can have zero
added to it."
---
Author: "Ken Alverson" <Ken@Alverson.com>
Date: Sat, 29 Dec 2001 03:48:07 CST
"Joe Gottman" <joegottman@worldnet.att.net> wrote in message
news:uCPW7.312797$W8.11576669@bgtnsc04-news.ops.worldnet.att.net...
> >
> > > &*v.begin() just looks ugly and hacky to me (&v.front() seems marginally
> > > better).
> >
> > And both are undefined behavior when the vector is empty.
> >
>
> How would you have vector::data() act when the vector is empty? Would it
> still be undefined, or would you return a null pointer?
I think undefined is a poor choice; we should assume that the results will
be passed on to C library calls at some point, so it would seem desirable
that, for example, the following line would perform as expected:
memset( v.data(), 0, v.size()*sizeof(T) );
even if v is empty (in which case the expected behavior would be a no-op). I
can't find any documentation at the moment about how memset (or similar)
handles NULL as an input; that may be a valid choice.
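For what it's worth, the C library rules require a valid pointer even when the length is zero, so a guarded sketch of that line might look like this (zero_fill is an illustrative name; T is assumed to be a plain-old-data type, since memset-ing arbitrary objects is not meaningful):

```cpp
#include <cstring>
#include <vector>

// Zero a vector's elements via memset. The guard matters: passing a null
// pointer to memset is undefined even with a length of zero, so the empty
// case must skip the call entirely.
template<class T>
void zero_fill(std::vector<T>& v)
{
    if (!v.empty())
        std::memset(&v[0], 0, v.size() * sizeof(T));
}
```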
I don't like .data() over .c_arr() for the purely psychological reason that
.c_arr() dissuades use outside of C interop. From my experience, far too
many people use string::data() over string::c_str() because they think it's
"better" based purely on the name and then get burned when it doesn't do
what they expect (in this case, null terminate). Null termination isn't an
issue here, but I do think dissuading raw pointer manipulation for people
who are likely to get it wrong is worthwhile.
Ken
---
Author: pdimov@mmltd.net (Peter Dimov)
Date: Sat, 29 Dec 2001 16:58:10 CST
"Joe Gottman" <joegottman@worldnet.att.net> wrote in message news:<uCPW7.312797$W8.11576669@bgtnsc04-news.ops.worldnet.att.net>...
> "Peter Dimov" <pdimov@mmltd.net> wrote in message
> news:7dc3b1ea.0112270618.7502f5af@posting.google.com...
> >
> > T * std::vector<T>::data()
> > T const * std::vector<T>::data() const // ?
> >
> > post: v.data() + i == &v[i] for every i such that 0 <= i < v.size().
> >
> > > &*v.begin() just looks ugly and hacky to me (&v.front() seems marginally
> > > better).
> >
> > And both are undefined behavior when the vector is empty.
> >
>
> How would you have vector::data() act when the vector is empty? Would it
> still be undefined, or would you return a null pointer?
The above definition covers the case. There is no precondition, so
data() must work and return a valid pointer. Of course the only thing
that is guaranteed about this pointer is that it can be copied, so
NULL is a legal return value.
--
Peter Dimov
Multi Media Ltd.
---
Author: "Joe Gottman" <joegottman@worldnet.att.net>
Date: Sat, 29 Dec 2001 16:58:19 CST
"James Kuyper Jr." <kuyper@wizard.net> wrote in message
news:3C2D1773.490C3523@wizard.net...
> By analogy, the specification for vector::data() should contain
> wording like 21.3.6p3 for basic_string::data(): "... If size() is zero,
> the member returns a non-null pointer that is copyable and can have zero
> added to it."
>
I think having vector::data() return an assignable pointer would be
extremely dangerous. If the user does assign a value to it, that value will
be reachable ONLY by using data(), not by using iterators, operator[](),
etc. Any call to push_back will cause the value assigned via data() to be
clobbered. Plus, many implementations of vector allocate space for objects
when reserve() is called, but only call their constructors when objects are
actually added. In this case, calling data() on an empty vector would
return a pointer to an object that hasn't been constructed yet.
Joe Gottman
---
Author: Kalle Olavi Niemitalo <kon@iki.fi>
Date: Sun, 30 Dec 2001 20:37:42 CST
Let's say I have a vector whose size is zero and capacity is
nonzero. I then call data(), push_back() and again data().
I think both calls of data() should return the same address,
because the vector was not reallocated in the meantime.
---
Author: "James Kuyper Jr." <kuyper@wizard.net>
Date: Sun, 30 Dec 2001 20:37:51 CST
Joe Gottman wrote:
>
> "James Kuyper Jr." <kuyper@wizard.net> wrote in message
> news:3C2D1773.490C3523@wizard.net...
> > By analogy, the specification for vector::data() should contain
> > wording like 21.3.6p3 for basic_string::data(): "... If size() is zero,
> > the member returns a non-null pointer that is copyable and can have zero
> > added to it."
> >
>
> I think having vector::data() return an assignable pointer would be
> extremely dangerous. ...
I was assuming that vector::data() was meant to be analogous to
basic_string::data(). In that case, the pointer it would return would
not be assignable. That is, the following code would not be legal:
T *pT = new T[10];
std::vector<T> v;
v.data() = pT;
To make the returned pointer assignable, data() would have to return a
reference to the pointer, rather than the pointer itself. However, I
suspect that what you actually meant was something like this:
v.push_back(T());
*v.data() = T();
That would also not be legal, since by analogy data() would return a
'const T*', not a 'T*'.
...
> clobbered. Plus, many implementations of vector allocate space for objects
> when reserve() is called, but only call their constructors when objects are
> actually added. ...
Of course; that's the only legal way to do it. T must have a copy
constructor (see 23p3), but there is no requirement that T have any
other kind of constructor. Without a value to copy from, there's no
legal way for std::vector<T> to call a constructor of T (specializations
of std::vector<> for types known to have other types of constructors
could use them, though there's no obvious reason why they should).
> ... In this case, calling data() on an empty vector would
> return a pointer to an object that hasn't been constructed yet.
Of course; if data() is to be added as a member to std::vector, in
analogy with std::basic_string<>::data(), then there must be analogous
restrictions on its use. See 21.3p5, the specifications with regard to
data() in tables 38 through 43, and 21.3.6p4.
---
Author: James Kanze <kanze@gabi-soft.de>
Date: Tue, 1 Jan 2002 10:13:53 CST
"Joe Gottman" <joegottman@worldnet.att.net> writes:
|> "James Kuyper Jr." <kuyper@wizard.net> wrote in message
|> news:3C2D1773.490C3523@wizard.net...
|> > By analogy, the specification for vector::data() should
|> > contain wording like 21.3.6p3 for basic_string::data(): "... If
|> > size() is zero, the member returns a non-null pointer that is
|> > copyable and can have zero added to it."
|> I think having vector::data() return an assignable pointer would
|> be extremely dangerous. If the user does assign a value to it, that
|> value will be reachable ONLY by using data(), not by using
|> iterators, operator[](), etc.
Why would this be necessarily the case?
|> Any call to push_back will cause the value assigned via data() to be
|> clobbered.
Calling push_back may invalidate the returned pointer, just as it
currently may invalidate references and iterators.
|> Plus, many implementations of vector allocate space for objects when
|> reserve() is called, but only call their constructors when objects
|> are actually added.
I hope that's the case for all implementations, since it is a
requirement of the standard.
|> In this case, calling data() on an empty vector would return a
|> pointer to an object that hasn't been constructed yet.
Calling data() on an empty vector will result in a pointer that cannot
be dereferenced, in any case. What the result will actually be depends
on what is finally standardized: undefined behavior, null pointer, the
wording proposed by James Kuyper, or something else.
In any case, the returned pointer can only be valid for the range
[data()...data()+size()).
My personal feelings tend toward a T const* as the return type, but I
can understand those who want a T*, in order to interface with existing
C code, say for something like the following:
std::vector< char > hostname( MAXHOSTNAMELEN ) ;
gethostname( hostname.data(), hostname.size() ) ;
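A self-contained version of that sketch (fill_buffer is a hypothetical stand-in for a C API like gethostname, so the example compiles anywhere; &hostname[0] plays the role of the proposed data()):

```cpp
#include <cstddef>
#include <cstring>
#include <vector>

// Hypothetical stand-in for a C API that writes into a caller-supplied
// buffer, in the style of gethostname(buf, len).
int fill_buffer(char* buf, std::size_t len)
{
    std::strncpy(buf, "example-host", len); // pads the rest with '\0'
    return 0;
}

void sketch()
{
    std::vector<char> hostname(256);
    fill_buffer(&hostname[0], hostname.size()); // would be hostname.data()
}
```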
--
James Kanze mailto:kanze@gabi-soft.de
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
Ziegelhüttenweg 17a, 60598 Frankfurt, Germany Tel. +49(0)179 2607481
---
Author: "Jim Barry" <jim.barry@bigfoot.com>
Date: Thu, 3 Jan 2002 04:57:10 GMT
Ken Alverson wrote:
> I've always thought that if we are to assume an array layout
> for the vector, we should have a function vector::c_arr() to
> get the pointer to the internal array (mirroring but not
> totally equivalent to basic_string::c_str()). &*v.begin()
> just looks ugly and hacky to me (&v.front() seems marginally
> better).
I find &v[0] palatable enough, so I don't see any need for an additional
member function. I would also find a guarantee of contiguity for
basic_string extremely useful, but somehow I doubt that's on the cards.
- Jim
---
Author: cmd@gmrc.gecm.com (Chris Dearlove)
Date: Tue, 18 Dec 2001 16:19:47 GMT
Chris Newton (chrisnewton@no.junk.please.btinternet.com) wrote:
: However, does tying the logical entity vector to its physical
: requirements so tightly mean that useful flexibility will be sacrificed
: in exchange for little immediate gain?
It's not little immediate gain, it's a substantial immediate gain
when you need to interface to C or C-like (e.g. Fortran) functions
which need contiguity, but you want the advantages of vector in
your C++ code. This is a real requirement for a significant number
of users.
The approach you suggest may have been superior if it had been done
earlier. However there is an immediate need which cannot be expected
to wait until the next standard round. It's a compromise, leaving
something good now rather than better later, pretty much the case
with many other parts of the C++ library, and even the language
itself. (I think this could be added to the lists of things which
are regarded as "the spirit of C++".) I do realise that fixing this
now prevents your approach from taking off, because once the guarantee
is there for vector it will be unreasonable to remove it.
(Actually, at the time I needed this the issue hadn't been ruled on, and
also I was new to C++ and unaware of it. I assumed there was no
guarantee of contiguity and created my own array class. With the
vector guarantee I could now use vector and get a superior solution
- my array class could be improved; for a start, it's not STL compliant
- for no effort.)
---
Author: jthill_@mac.com (Jim Hill)
Date: Wed, 19 Dec 2001 19:00:28 GMT
cmd@gmrc.gecm.com (Chris Dearlove) wrote in message news:<9vn4i1$7t2$1@miranda.gmrc.gecm.com>...
> I assumed there was no guarantee of contiguity and
> created my own array class.
Devil's Advocate time:
() One way, std::vector works everywhere. The other way, anything
using vector doesn't work everywhere.
() One way, it's easily fixable (like, just put the best of what's
out there now into boost). The other way ... I don't see a good
possibility.
() One way, code that needs contiguous vectors won't port to machines
that segment their address spaces. The other way, it still won't
port.
What does this change _actually_ buy us, again?
Jim
---
Author: Matthew Austern <austern@research.att.com>
Date: Thu, 20 Dec 2001 01:23:38 GMT
"Chris Newton" <chrisnewton@no.junk.please.btinternet.com> writes:
> First of all, I agree that most current library implementations
> implement vector using an array, and therefore storage is contiguous in
> these cases. I also realise that many well-regarded authors already
> write with the assumption that this is the case, rightly or wrongly. In
> this light, standardising the contiguity requirement is an attractive
> option.
>
> However, does tying the logical entity vector to its physical
> requirements so tightly mean that useful flexibility will be sacrificed
> in exchange for little immediate gain?
>
> I am aware of at least one suggestion for a standard library
> implementation to run on older Intel x86 processors. For those not
> familiar with it, memory on such machines is divided into 64K segments,
> and addressed by specifying a segment and an offset into it. A frequent
> irritation on such machines was that even if you had more than 64K of
> memory, a native C or C++ array couldn't conveniently be bigger than 64K
> in size, and larger arrays were not supported by most compilers.
We were aware that adding the contiguity requirement would rule out
such implementations, and decided that it was worth the price.
I can't speak for the reasoning of everyone in the LWG, but mine went
something like this: a vector<> implementation that did allow more
elements than could fit into physical memory (given a particular
memory model) would be a tricky thing. It would probably look more
like a deque<> than like an ordinary vector<>, in things like
performance characteristics and invalidation guarantees. A container
like that might well be useful, but I didn't think it would be all
that useful to call it a vector. I thought it would be better to make
it clear that a vector must be (as it is in today's implementation) a
single contiguous block of memory, and that more complicated
containers are called something different.
---
Author: Pete Becker <petebecker@acm.org>
Date: Thu, 20 Dec 2001 01:55:06 GMT
Matthew Austern wrote:
>
> I can't speak for the reasoning of everyone in the LWG, but mine went
> something like this: a vector<> implementation that did allow more
> elements than could fit into physical memory (given a particular
> memory model) would be a tricky thing. It would probably look more
> like a deque<> than like an ordinary vector<>, in things like
> performance characteristics and invalidation guarantees. A container
> like that might well be useful, but I didn't think it would be all
> that useful to call it a vector. I thought it would be better to make
> it clear that a vector must be (as it is in today's implementation) a
> single contiguous block of memory, and that more complicated
> containers are called something different.
>
It's not a matter of fitting into physical memory. If a system has a
couple of megabytes of flat address space, implemented with only 16K of
physical RAM backed by hard disk and lots of swapping, a conforming
vector can use all of that address space.
Nor is it a matter of having a single contiguous block of memory,
whatever that means. <g>
The proposed change to the standard requires (I hope) that elements of a
vector have contiguous addresses. That is, getting to the next element
requires only a pointer increment.
On a segmented architecture with 64K segments this doesn't preclude
vectors with more than 64K of data. Pointer manipulations are more
efficient if the data is restricted to 64K, but it's straightforward for
a compiler to implement pointers that access more than that (Borland and
MS and everyone else who targeted the 8086 supported 'huge' pointers
that did exactly that -- basically, when pointer arithmetic runs off the
end of a segment it adjusts the segment selector -- the details don't
really affect this discussion, but if someone wants to know, I can go
into them).
--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)
Author: "David Abrahams" <david.abrahams@rcn.com>
Date: Thu, 20 Dec 2001 02:46:30 GMT Raw View
"Pete Becker" <petebecker@acm.org> wrote in message
news:3C21447E.C9689071@acm.org...
> On a segmented architecture with 64K segments this doesn't preclude
> vectors with more than 64K of data. Pointer manipulations are more
> efficient if the data is restricted to 64K, but it's straightforward for
> a compiler to implement pointers that access more than that (Borland and
> MS and everyone else who targeted the 8086 supported 'huge' pointers
> that did exactly that -- basically, when pointer arithmetic runs off the
> end of a segment it adjusts the segment selector -- the details don't
> really affect this discussion, but if someone wants to know, I can go
> into them).
Maybe you've forgotten that those 'huge' pointers only work if your element
size is a power of 2...
-Dave
Author: "Anthony Williams" <anthwil@nortelnetworks.com>
Date: Thu, 20 Dec 2001 18:49:23 GMT Raw View
"David Abrahams" <david.abrahams@rcn.com> wrote in message
news:9vriqk$inn$1@bob.news.rcn.net...
>
> "Pete Becker" <petebecker@acm.org> wrote in message
> news:3C21447E.C9689071@acm.org...
> > On a segmented architecture with 64K segments this doesn't preclude
> > vectors with more than 64K of data. Pointer manipulations are more
> > efficient if the data is restricted to 64K, but it's straightforward for
> > a compiler to implement pointers that access more than that (Borland and
> > MS and everyone else who targeted the 8086 supported 'huge' pointers
> > that did exactly that -- basically, when pointer arithmetic runs off the
> > end of a segment it adjusts the segment selector -- the details don't
> > really affect this discussion, but if someone wants to know, I can go
> > into them).
>
> Maybe you've forgotten that those 'huge' pointers only work if your
> element size is a power of 2...
It is possible to implement them without that restriction, just slower.
Anthony
--
Anthony Williams
Software Engineer, Nortel Networks Optical Components Ltd
The opinions expressed in this message are not necessarily those of my
employer
Author: cmd@gmrc.gecm.com (Chris Dearlove)
Date: Thu, 20 Dec 2001 18:49:58 GMT Raw View
Jim Hill (jthill_@mac.com) wrote:
: What does this change _actually_ buy us, again?
To be able to get the improvements of a C++ vector rather than
a raw array in a C++ program, and operate on the data in the array
without copying it, using a function designed to operate on
contiguous data (typically a C or Fortran library function, for
example the BLAS library).
Of the three parts of the above I hope we're agreed on two of them
(using a vector, not having to copy into and out of it to use it).
The third is a requirement only some people have, but those who
have it have it strongly. The same is probably true of the people
with segmented memory who need large vectors. If it weren't possible
to satisfy both groups simultaneously then it would be necessary
to decide who to disappoint. It may be the perceived importance
of the two groups, or the suggestions others have made that the
latter group can be satisfied by other means, that was the
deciding factor in accepting the change, I don't know.
Just for the record I originally posted just pointing out that an
implied suggestion that the change had no value isn't the case.
That's all I'm doing again. I accept there are arguments the other
way too. Of course I don't have to argue for the change, I won that
one before I even knew about it. (I lost - here only - a suggestion
to mandate the format of complex - as Fortran does - which would
be similarly valuable to me and, my testing the waters suggested,
some but not enough others.)
Author: Pete Becker <petebecker@acm.org>
Date: Thu, 20 Dec 2001 18:50:24 GMT Raw View
David Abrahams wrote:
>
> "Pete Becker" <petebecker@acm.org> wrote in message
> news:3C21447E.C9689071@acm.org...
> > On a segmented architecture with 64K segments this doesn't preclude
> > vectors with more than 64K of data. Pointer manipulations are more
> > efficient if the data is restricted to 64K, but it's straightforward for
> > a compiler to implement pointers that access more than that (Borland and
> > MS and everyone else who targeted the 8086 supported 'huge' pointers
> > that did exactly that -- basically, when pointer arithmetic runs off the
> > end of a segment it adjusts the segment selector -- the details don't
> > really affect this discussion, but if someone wants to know, I can go
> > into them).
>
> Maybe you've forgotten that those 'huge' pointers only work if your element
> size is a power of 2...
>
No, I haven't forgotten it. It's not true. Data elements that cross a
64K boundary don't pose any problems in real mode. They're mildly
problematic in protected mode, but a compiler can pad elements so that a
sufficiently large array will exactly fit into 64K bytes. Compiler
writers didn't do that, but that doesn't mean it can't be done.
--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)
Author: "Stephen Howe" <SPAMstephen.howeGUARD@tnsofres.com>
Date: Thu, 20 Dec 2001 18:50:52 GMT Raw View
"David Abrahams" <david.abrahams@rcn.com> wrote in message
news:9vriqk$inn$1@bob.news.rcn.net...
> Maybe you've forgotten that those 'huge' pointers only work if your
> element size is a power of 2...
That depends on the vendor. Microsoft allowed elements of arbitrary size
as long as the array is smaller than 128K, since you can always position
the array so that the single 64K boundary falls between elements. For
arrays bigger than 128K, the element size must be a power of 2 (as you
can't straddle more than one 64K boundary if the element size is not a
power of 2). All of this assumes some sort of "normalised" pointer
arithmetic where the segment portion changes infrequently.
With later versions of their compiler, Borland allowed "unnormalised"
pointers; the power-of-2 restriction on element size for arrays bigger
than 128K is gone, but at the cost of more expensive pointer
arithmetic.
Stephen Howe
Author: "Carl Daniel" <cpdaniel@pacbell.net>
Date: Thu, 20 Dec 2001 19:20:05 GMT Raw View
"David Abrahams" <david.abrahams@rcn.com> wrote in message
news:9vriqk$inn$1@bob.news.rcn.net...
>
> "Pete Becker" <petebecker@acm.org> wrote in message
> news:3C21447E.C9689071@acm.org...
> > On a segmented architecture with 64K segments this doesn't preclude
> > vectors with more than 64K of data.>
>
> Maybe you've forgotten that those 'huge' pointers only work if your
> element size is a power of 2...
>
Perhaps that limitation exists for some implementation, but it's not
inherent to the definition of segmented pointers. As long as the elements
are each themselves less than 64K-16 bytes in size (in the case of x86 real
mode), it's always possible to construct a pointer to the (n+1)th element
from a pointer to the nth element (using the "minimize offset" variant of
pointer arithmetic for such pointers).
-cd
Author: "James Kuyper Jr." <kuyper@wizard.net>
Date: Fri, 21 Dec 2001 00:22:44 GMT Raw View
Jim Hill wrote:
...
> Devil's Advocate time:
>
> () One way, std::vector works everywhere. The other way, anything
> using vector doesn't work everywhere.
?? Both under the current rules, and with the proposed resolution of DR
69, all code that uses std::vector correctly (i.e. in accordance with
either the old rules or the proposed new ones, respectively) works under
all conforming implementations - by definition.
I think you're not referring to std::vector in general, but to a
particular way of using std::vector.
> () One way, it's easily fixable (like, just put the best of what's
> out there now into boost). The other way ... I don't see a good
> possibility.
>
> () One way, code that needs contiguous vectors won't port to machines
> that segment their address spaces. The other way, it still won't
> port.
I may be exceptionally tired, but I found that a little too vague. Could
you identify which way you think falls into each category, and why?
Author: Gabriel Dos Reis <dosreis@cmla.ens-cachan.fr>
Date: Fri, 21 Dec 2001 13:17:17 CST Raw View
cmd@gmrc.gecm.com (Chris Dearlove) writes:
[...]
| (I lost - here only - a suggestion
| to mandate the format of complex - as Fortran does -
C99 already does that. I'm not sure whether compatibility reasons may
force C++ to adopt a similar position. But one thing is certain: C++
mandates a Cartesian representation for std::complex<>.
--
Gabriel Dos Reis, dosreis@cmla.ens-cachan.fr
Author: jthill_@mac.com (Jim Hill)
Date: Sat, 22 Dec 2001 03:00:51 CST Raw View
"James Kuyper Jr." <kuyper@wizard.net> wrote in message news:<3C227F8D.CC19D40B@wizard.net>...
> I may be exceptionally tired, but I found that a little too vague.
It's this simple: Big-Flat-Vector codes won't run on machines that
can't handle them efficiently, no matter what the C++ standard says. A
vendor would have to put real work into implementing anything *but*
BFVs, and what for? If there's no good reason it won't happen, and if
there /is/ a good reason, well, does that make ruling it out a good
idea?
Jim
Author: "James Kuyper Jr." <kuyper@wizard.net>
Date: Sat, 22 Dec 2001 17:43:49 CST Raw View
Jim Hill wrote:
>
> "James Kuyper Jr." <kuyper@wizard.net> wrote in message news:<3C227F8D.CC19D40B@wizard.net>...
> > I may be exceptionally tired, but I found that a little too vague.
>
> It's this simple: Big-Flat-Vector codes won't run on machines that
> can't handle them efficiently, no matter what the C++ standard says. A
Sure they will - people run inefficient code all the time; either
because they don't know it's inefficient, or because they (rightly or
wrongly; it depends upon the context) don't care about the inefficiency,
or because they're unaware of the alternatives.
In C++ there are always alternatives, including in this case inventing
your own container class that has the characteristics you want it to
have. You can choose either contiguity, or the ability to have a vector
with more elements than will fit in the largest available single block
of memory controlled by the allocator it is built with; whether or not
the proposed resolution of DR 69 is ever approved.
> vendor would have to put real work into implementing anything *but*
> BFVs, and what for? If there's no good reason won't happen, and if
> there /is/ a good reason, well, does that make ruling it out a good
> idea?
I don't care whether or not the proposed resolution to DR 69 is a good
idea; I'm agnostic on that issue. All I'm worrying about right now is
your earlier statement:
> () One way, std::vector works everywhere. The other way, anything
> using vector doesn't work everywhere.
Could you please identify which way is the "one way", and which way is
"the other way", and then please explain why you think "using vector
doesn't work everywhere" under "the other way". I can see how mis-using
std::vector<> won't necessarily work everywhere, but that's not a
situation that's unique to std::vector<>; it's true of every feature of
the standard.
Author: brangdon@cix.co.uk (Dave Harris)
Date: Sat, 22 Dec 2001 17:43:53 CST Raw View
kuyper@wizard.net (James Kuyper Jr.) wrote (abridged):
> As the standard is currently written, it's possible to meet
> all complexity requirements with a vector consisting of
> multiple equal-sized blocks, whose total size is larger
> than any single block of memory that can be allocated.
Could you say some more about this? I don't see how such a multi-block
vector would differ from a deque. How does it provide stronger guarantees
than a deque?
Dave Harris, Nottingham, UK | "Weave a circle round him thrice,
brangdon@cix.co.uk | And close your eyes with holy dread,
| For he on honey dew hath fed
http://www.bhresearch.co.uk/ | And drunk the milk of Paradise."
Author: "James Kuyper Jr." <kuyper@wizard.net>
Date: Sat, 22 Dec 2001 21:59:04 CST Raw View
Dave Harris wrote:
>
> kuyper@wizard.net (James Kuyper Jr.) wrote (abridged):
> > As the standard is currently written, it's possible to meet
> > all complexity requirements with a vector consisting of
> > multiple equal-sized blocks, whose total size is larger
> > than any single block of memory that can be allocated.
>
> Could you say some more about this? I don't see how such a multi-block
> vector would differ from a deque. How does it provide stronger guarantees
> than a deque?
By managing the elements in the container in a slightly different way
than deque does. The internals for handling the memory blocks would be
quite similar to those of deque, except that new blocks are never
inserted before begin().
The first way in which std::vector<> provides better guarantees than
std::deque is with respect to reserve(). Insertions in std::vector<>
cannot cause reallocations so long as they do not push size() higher
than the value given in the most recent call to reserve(). I don't see
any problem with performing reservations on a multi-block vector. I
haven't thought as much about the implementation of std::deque as I have
about std::vector<>, but it seems to me that if you use a circular
list of memory blocks, there's no reason why it couldn't have a
reserve() as well. The standard, however, doesn't require it.
The second way is that whenever you insert or erase an element of a
std::vector<>, all iterators for elements before the erased/inserted
element remain valid; that's not true for deque, except for insertions
and erasures at the very end. This improved validity guarantee is
purchased at the cost of higher complexity requirements for inserts and
erases in the first half of a std::vector<>, compared to std::deque<>.
That trade-off can be performed just as easily for multi-block vectors
as for single-block vectors.
I'm not arguing that this is a very important issue. I'm only pointing
out that there is a difference in std::vector<>'s favor. Given the sheer
number of C++ programs that have been written, I'm sure that there's at
least a few programs out there which depend upon these guarantees. I'd
not care to guess how many.
Author: brangdon@cix.co.uk (Dave Harris)
Date: Sun, 23 Dec 2001 03:24:25 CST Raw View
chrisnewton@no.junk.please.btinternet.com (Chris Newton) wrote (abridged):
> On the other hand, you seem to propose (as, IIRC, does Scott Meyers in
> "Effective STL") that we pass in something like &*v.begin() to
> interfaces such as the above, and let them access the vector's data
> directly. If we're going to do that, we might as well abandon any
> pretense that the container is an abstract entity, fix the
> implementation entirely and save ourselves several pages of standard.
I disagree. Even with the contiguity constraint, there is still scope for
variation in implementations. For example, the growth policy; whether the
allocator takes up space in each instance; whether end() is deduced from
size() or vice versa. There is much fine detail about what to make inline
and what not, where the best trade-off may depend on the platform. For
that matter, the best implementation may not be expressible as C++ at all.
Vendors are allowed to use non-portable magic.
That said, I agree that the std containers are designed with a specific
class of implementations in mind, and I think it would have been better if
they had been named for that class. Eg perhaps "deque" should have been
called "segmented_vector", and "deque" itself reserved for a stack-like
adaptor that could be based on either segmented_vector or list. This is
roughly what you proposed earlier. I suspect it is not worth trying to
correct this retrospectively now, though.
> The proposed change to the standard would render such a
> vector implementation non-conforming, which would be a shame since
> it solves a genuine problem and currently fits in without
> introducing any new ones.
How do you feel about vendors augmenting the std containers with extra
containers of their own, tuned to the problems posed by their hardware? Eg
they could provide a standard conforming std::vector with a lowish maximum
size, and a vendor-specific segmented_vector with a higher maximum size
but no contiguity guarantee.
It's a bit of a cop-out, but we /can/ always write our own containers. The
standard ones don't have to fill every conceivable need. Personally if I
am working on a 16-bit platform I would be content to suffer 16-bit limits
in std::vector. It seems more natural than giving up the contiguity
guarantee (which I see as useful, even fundamental).
Dave Harris, Nottingham, UK | "Weave a circle round him thrice,
brangdon@cix.co.uk | And close your eyes with holy dread,
| For he on honey dew hath fed
http://www.bhresearch.co.uk/ | And drunk the milk of Paradise."
Author: "Ken Alverson" <Ken@Alverson.com>
Date: Tue, 25 Dec 2001 21:06:30 GMT Raw View
chrisnewton@no.junk.please.btinternet.com (Chris Newton) wrote:
> On the other hand, you seem to propose (as, IIRC, does Scott Meyers in
> "Effective STL") that we pass in something like &*v.begin() to
> interfaces such as the above, and let them access the vector's data
> directly. If we're going to do that, we might as well abandon any
> pretense that the container is an abstract entity, fix the
> implementation entirely and save ourselves several pages of standard.
I've always thought that if we are to assume an array layout for the vector,
we should have a function vector::c_arr() to get the pointer to the internal
array (mirroring but not totally equivalent to basic_string::c_str()).
&*v.begin() just looks ugly and hacky to me (&v.front() seems marginally
better).
Ken
Author: "Chris Newton" <chrisnewton@no.junk.please.btinternet.com>
Date: Tue, 25 Dec 2001 21:06:33 GMT Raw View
"Dave Harris" <brangdon@cix.co.uk> wrote...
> How do you feel about vendors augmenting the std
> containers with extra containers of their own, tuned
> to the problems posed by their hardware? Eg they
> could provide a standard conforming std::vector with
> a lowish maximum size, and a vendor-specific
> segmented_vector with a higher maximum size
> but no contiguity guarantee.
I have no problem with that idea. There are always going to be
platform-specific requirements, and it may well not be appropriate to
force all implementations to support them by mandating that support in
the standard.
OTOH, I'd be interested to know whether some refactoring of the existing
standard container framework could reduce the impact of using such
non-portable extensions. For example, if my "two layer" idea were
adopted in some form, code could still be written using portable
high-level container interfaces, and only the lower-level implementation
modified to use a platform-specific extension.
> It's a bit of a cop-out, but we /can/ always write our own
> containers. The standard ones don't have to fill every
> conceivable need. Personally if I am working on a 16-bit
> platform I would be content to suffer 16-bit limits in
> std::vector.
Of course, we can't predict that everyone will be so content.
Flexibility is surely the name of the game.
> It seems more natural than giving up the contiguity
> guarantee (which I see as useful, even fundamental).
But the question is, can we have it both ways?
Merry Christmas,
Chris
Author: "Chris Newton" <chrisnewton@no.junk.please.btinternet.com>
Date: Mon, 17 Dec 2001 15:53:17 GMT Raw View
Dear all,
With reference to library issue 69 (the lack of a current requirement
for vector's storage to be contiguous), I would like to ask for a quick
"pause for thought". I think the implications of the proposed change are
deeper than I've seen discussed anywhere at present. I'd like to know
what those involved in developing the standard library think of the
reasoning below, particularly if this really has all been considered
privately before, and rejected for whatever reason.
First of all, I agree that most current library implementations
implement vector using an array, and therefore storage is contiguous in
these cases. I also realise that many well-regarded authors already
write with the assumption that this is the case, rightly or wrongly. In
this light, standardising the contiguity requirement is an attractive
option.
However, does tying the logical entity vector to its physical
requirements so tightly mean that useful flexibility will be sacrificed
in exchange for little immediate gain?
I am aware of at least one suggestion for a standard library
implementation to run on older Intel x86 processors. For those not
familiar with it, memory on such machines is divided into 64K segments,
and addressed by specifying a segment and an offset into it. A frequent
irritation on such machines was that even if you had more than 64K of
memory, a native C or C++ array couldn't conveniently be bigger than 64K,
and most compilers didn't support larger arrays at all.
As the standard reads today, the suggestion was that a conforming vector
container could be implemented on such platforms even if the compiler's
support for primitive arrays limited them to 64K, and this would allow
large items of sequential data to be stored without the hassle
programmers used to endure. The requirement to make a vector's storage
contiguous would probably (depending on what "contiguous" means) render
such a library implementation non-conforming. (I wasn't personally
involved with the development and haven't investigated this case in
detail, so if there's a flaw in the logic about implementing vector on
this platform, please inform rather than flaming.)
This exact example is something of an aside, but I feel that it
illustrates a general point. The containers provided by the standard
library at the moment are expressed in logical terms, albeit often
tailored to an expected underlying data structure. The suggestion to
force a vector's storage to be contiguous seems to force an underlying
array storage method (please correct me if I've missed some alternative)
and effectively mandates the implementation of the container and not
just its interface. That breaks down horribly in the face of unusual
architectures, be they old ones such as the above example, specialised
hardware outside the mainstream, or as yet unknown new systems that C++
will support in future. I hope no-one here would disagree with the idea
that separating interface from implementation is a Good Thing.
If the standards committee feels that the omission of a contiguity
requirement is a serious failing (I've yet to be convinced), and that it
is appropriate to restrict implementations of the containers in the
library, perhaps a move to a more separated interface and implementation
is in order. We already have adapters such as stack and queue in the
library. If this issue is truly a serious one, then surely it's worth
considering providing a level of concrete data structures (array, singly
linked list, doubly linked list, red-black tree, etc.) and then
providing the standard container interfaces (vector, deque, map, maybe
also the adapters) in terms of these lower-level containers? This
approach has a number of clear advantages over the status quo.
1. It improves visibility; the developer can now write abstract code
using high level interfaces, but have confidence in exactly what's going
on behind the scenes. I *hate* libraries that don't tell me what's going
on behind the scenes *if* I decide I want to know.
2. It separates properties of the containers that are based on their
logical nature (e.g., indexing, associative interfaces, efficiencies,
etc.) from the properties that are really based on their implementations
(e.g., contiguity of storage, whether a list provides forward or
bidirectional iterators, etc.).
3. It provides a more loosely coupled framework, into which more modular
new additions can be placed without affecting anything else. This might
be relevant to the addition of hashed containers to the library, for
example; I can see a lot of potential for needless code duplication if
you have to implement map and hash_map as monolithic entities.
I realise that any changes along these lines would be a major
undertaking, but it seems that the library group are at least prepared
to consider including such undertakings in C++0x. If the
containers/iterators/algorithms in the standard library are intended to
be an extensible framework, perhaps the extra layer suggested here would
make life easier for those who provide such extensions (be they slist
and hash_map, or be they developers on unusual platforms with unique
characteristics).
Obviously this isn't based on a concrete implementation I'm using at
present, though parts of it are inspired by them. I'd just like to know
whether the serious library implementors can see potential here, or
whether they are already at a level where the advantages are illusory
anyway.
Thanks for reading,
Chris
Author: Michiel Salters<Michiel.Salters@cmg.nl>
Date: Mon, 17 Dec 2001 16:41:31 GMT Raw View
In article <9vilkv$7jn$1@helle.btinternet.com>, Chris Newton says...
>
>Dear all,
>
>With reference to library issue 69 (the lack of a current requirement
>for vector's storage to be contiguous), I would like to ask for a quick
>"pause for thought". I think the implications of the proposed change are
>deeper than I've seen discussed anywhere at present. I'd like to know
>what those involved in developing the standard library think of the
>reasoning below, particularly if this really has all been considered
>privately before, and rejected for whatever reason.
[SNIP]
>I am aware of at least one suggestion for a standard library
>implementation to run on older Intel x86 processors. For those not
>familiar with it, memory on such machines is divided into 64K segments,
>and addressed by specifying a segment and an offset into it. A frequent
>irritation on such machines was that even if you had more than 64K of
>memory, a native C or C++ array couldn't conveniently be bigger than 64K
>in size, and larger arrays were not supported by most compilers.
>
>As the standard reads today, the suggestion was that a conforming vector
>container could be implemented on such platforms even if the compiler's
>support for primitive arrays limited them to 64K, and this would allow
>large items of sequential data to be stored without the hassle
>programmers used to endure. The requirement to make a vector's storage
>contiguous would probably (depending on what "contiguous" means) render
>such a library implementation non-conforming. (I wasn't personally
>involved with the development and haven't investigated this case in
>detail, so if there's a flaw in the logic about implementing vector on
>this platform, please inform rather than flaming.)
AFAIK, the requirement is on std::vector using std::allocator. A pre-386
C++ implementation can use nonstandard allocators with basically the same
guarantees except for vector being contiguous, or it can implement pointers
as 32-bit entities (also known as the "huge" memory model on some compilers).
>This exact example is somewhat of an aside, but I feel that it
>illustrates a general point. The containers provided by the standard
>library at the moment are expressed in logical terms, albeit often
>tailored to an expected underlying data structure. The suggestion to
>force a vector's storage to be contiguous seems to force an underlying
>array storage method (please correct me if I've missed some alternative)
>and effectively mandates the implementation of the container and not
>just its interface. That breaks down horribly in the face of unusual
>architectures, be they old ones such as the above example, specialised
>hardware outside the mainstream, or as yet unknown new systems that C++
>will support in future. I hope no-one here would disagree with the idea
>that separating interface from implementation is a Good Thing.
No - but I _do_ think that a common interface must take precedence over
hypothetical implementation concerns. C++ needs an alternative for
malloc/realloc, and we've decided to call it std::vector<>. People who
don't need the contiguous requirement may consider std::deque which
offers similar complexities. People programming against a C API
which has a ( T* array, size_t array_size ) interface need
std::vector<T> - and I'm one of those people.
>If the standards committee feels that the omission of a contiguity
>requirement is a serious failing (I've yet to be convinced), and that it
>is appropriate to restrict implementations of the containers in the
>library, perhaps a move to a more separated interface and implementation
>is in order. We already have adapters such as stack and queue in the
>library. If this issue is truly a serious one, then surely it's worth
>considering providing a level of concrete data structures (array, singly
>linked list, doubly linked list, red-black tree, etc.) and then
>providing the standard container interfaces (vector, deque, map, maybe
>also the adapters) in terms of these lower-level containers? This
>approach has a number of clear advantages over the status quo.
>
>1. It improves visibility; the developer can now write abstract code
>using high level interfaces, but have confidence in exactly what's going
>on behind the scenes. I *hate* libraries that don't tell me what's going
>on behind the scenes *if* I decide I want to know.
I don't buy that. I think std::sort is an example to the contrary; you
really don't want to establish which sorting routine it uses and when.
That's for the library writer to determine, based on his understanding
of the target.
>2. It separates properties of the containers that are based on their
>logical nature (e.g., indexing, associative interfaces, efficiencies,
>etc.) from the properties that are really based on their implementations
>(e.g., contiguity of storage, whether a list provides forward or
>bidirectional iterators, etc.).
How do you decouple efficiency from implementation? I'd like a list with
O(1) insertion and O(1) indexing, but that's not going to happen. The
properties of a list are based on its typical implementation.
>3. It provides a more loosely coupled framework, into which more modular
>new additions can be placed without affecting anything else. This might
>be relevant to the addition of hashed containers to the library, for
>example; I can see a lot of potential for needless code duplication if
>you have to implement map and hash_map as monolithic entities.
I really think that's a(nother) QOI issue, not something the standard
should tackle. And besides, taking the current containers and the new
hash_maps, with their typical implementations, do you agree there's a
logical one-to-one mapping of interface and implementation? Which parts of
the implementation of container 1 can be reused for container 2? I'd say
std::deque can already reuse std::vector. I suspect exposing the underlying
model of std::deque is more likely to break that reuse than to improve it.
However, I'm not a library implementor, so I certainly can't speak
with authority.
Regards,
--
Michiel Salters
Consultant Technical Software Engineering
CMG Trade, Transport & Industry
Michiel.Salters@cmg.nl
---
[ comp.std.c++ is moderated. To submit articles, try just posting with ]
[ your news-reader. If that fails, use mailto:std-c++@ncar.ucar.edu ]
[ --- Please see the FAQ before posting. --- ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html ]
Author: "James Kuyper Jr." <kuyper@wizard.net>
Date: Tue, 18 Dec 2001 00:51:57 GMT Raw View
Michiel Salters wrote:
>
> In article <9vilkv$7jn$1@helle.btinternet.com>, Chris Newton says...
> >
> >Dear all,
> >
> >With reference to library issue 69 (the lack of a current requirement
> >for vector's storage to be contiguous), I would like to ask for a quick
...
> AFAIK, the requirement is on std::vector using std::allocator. A pre-386
There currently is no requirement of contiguity. The proposed resolution
refers to std::vector<T,Allocator>, for an arbitrary Allocator; it is
not specific to std::allocator, and uses a definition of contiguity
which is applicable to arbitrary Allocators.
...
> malloc/realloc, and we've decided to call it std::vector<>. People who
> don't need the contiguous requirement may consider std::deque which
> offers similar complexities. People programming against a C API
However, the complexities are not the same; in some cases the complexity
guarantees for std::vector<> are superior to those for std::deque<>.
> which has a ( T* array, size_t array_size ) interface need
> std::vector<T> - and I'm one of those people.
You could use std::valarray<T>, which does provide the contiguity
guarantees you need for that purpose.
---
[ comp.std.c++ is moderated. To submit articles, try just posting with ]
[ your news-reader. If that fails, use mailto:std-c++@ncar.ucar.edu ]
[ --- Please see the FAQ before posting. --- ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html ]
Author: "James Kuyper Jr." <kuyper@wizard.net>
Date: Tue, 18 Dec 2001 01:01:40 GMT Raw View
Chris Newton wrote:
...
> With reference to library issue 69 (the lack of a current requirement
> for vector's storage to be contiguous), I would like to ask for a quick
...
> I am aware of at least one suggestion for a standard library
> implementation to run on older Intel x86 processors. For those not
> familiar with it, memory on such machines is divided into 64K segments,
> and addressed by specifying a segment and an offset into it. A frequent
> irritation on such machines was that even if you had more than 64K of
> memory, a native C or C++ array couldn't conveniently be bigger than 64K
> in size, and larger arrays were not supported by most compilers.
This is just part of a more general problem. The proposed resolution
would make it impossible to implement a vector that is too large to be
stored in the largest single block of memory that the given Allocator
instance is capable of allocating. As the standard is currently written,
it's possible to meet all complexity requirements with a vector
consisting of multiple equal-sized blocks, whose total size is larger
than any single block of memory that can be allocated.
It's been suggested that anybody needing such a container should use
std::deque<>, but the complexity guarantees for std::deque<> are
different from those for std::vector<>, and in some cases the guarantees
for std::vector<> are better.
Author: "Chris Newton" <chrisnewton@no.junk.please.btinternet.com>
Date: Tue, 18 Dec 2001 02:13:03 GMT Raw View
"Michiel Salters" <Michiel.Salters@cmg.nl> wrote...
> AFAIK, the requirement is on std::vector using std::allocator.
> A pre-386 C++ implementation can use nonstandard allocators
> with basically the same guarantees except for vector being
> contiguous, or it can implement pointers as 32-bit entities
> (also known as the "huge" memory model on some compilers).
Sorry, you've lost me. AFAIK, there is currently no requirement for
contiguity; that is the motivation for library issue 69.
> I _do_ think that a common interface must take precedence
> over hypothetical implementation concerns. C++ needs an
> alternative for malloc/realloc, and we've decided to call it
> std::vector<>.
Not yet, we haven't; that's kind of the point. The current definition of
vector is broader than that. Its interface is still common, and it
allows implementation concerns that, as demonstrated by my x86 example,
are not hypothetical at all. The proposed change to the standard would
render such a vector implementation non-conforming, which would be a
shame since it solves a genuine problem and currently fits in without
introducing any new ones.
> People who don't need the contiguous requirement may
> consider std::deque which offers similar complexities.
> People programming against a C API which has a
> ( T* array, size_t array_size ) interface need
> std::vector<T> - and I'm one of those people.
Surely you don't need to convert between vector's interface and an array
form that often? Constructing a vector from an array or copying a
vector's contents into an array are both one-liners, so the only concern
here can be efficiency.
On the other hand, you seem to propose (as, IIRC, does Scott Meyers in
"Effective STL") that we pass in something like &*v.begin() to
interfaces such as the above, and let them access the vector's data
directly. If we're going to do that, we might as well abandon any
pretense that the container is an abstract entity, fix the
implementation entirely and save ourselves several pages of standard.
Actually, that's not a bad alternative. If we're not going to truly
abstract the nature of the standard library containers, I'd rather
everything was made explicit. But one way or the other, I'd just like to
see a decision made about what the standard containers (and to an
extent, algorithms) are meant to represent, and then everything changed
to reflect that philosophy. Let's not just fudge the standard because
vector doesn't currently have a convenient property that some people
would like it to have.
> >If the standards committee feels that the omission
> >of a contiguity requirement is a serious failing (I've
> >yet to be convinced), and that it is appropriate to
> >restrict implementations of the containers in the
> >library, perhaps a move to a more separated interface
> >and implementation is in order. We already have
> >adapters such as stack and queue in the library. If
> >this issue is truly a serious one, then surely it's worth
> >considering providing a level of concrete data structures
> >(array, singly linked list, doubly linked list, red-black
> >tree, etc.) and then providing the standard container
> >interfaces (vector, deque, map, maybe also the adapters)
> >in terms of these lower-level containers? This approach
> >has a number of clear advantages over the status quo.
> >
> >1. It improves visibility; the developer can now write
> >abstract code using high level interfaces, but have
> >confidence in exactly what's going on behind the scenes.
> >I *hate* libraries that don't tell me what's going on
> >behind the scenes *if* I decide I want to know.
>
> I don't buy that. I think std::sort is an example to the
> contrary; you really don't want to establish which sorting
> routine it uses and when. That's for the library writer to
> determine, based on his understanding of the target.
On the contrary; this is exactly my point. std::sort is an excellent
example of why I, the programmer writing the application, *must* know
what is going on under the hood. The problem with letting the library
writer decide everything is that I may know things about my data that
the library author didn't, and that knowledge could easily impact the
optimal choice of sort algorithm.
If std::sort is going to do a quicksort variation, then please document
it as such. That way, I can use it if appropriate, and know to choose
another option if not. But don't make vague hints about "N log N
comparisons on the average" and "If the worst case behaviour is
important then [some other vaguely defined algorithms] should be used
instead". Now I have no idea whether my data is likely to be "worst
case", nor do I know how bad that worst case will be. You have removed
my ability to make a sound judgement based on full possession of the
facts, and that cannot be a good thing.
> >2. It separates properties of the containers that are
> >based on their logical nature (e.g., indexing,
> >associative interfaces, efficiencies, etc.) from the
> >properties that are really based on their implementations
> >(e.g., contiguity of storage, whether a list provides
> >forward or bidirectional iterators, etc.).
>
> How do you decouple efficiency from implementation? I'd
> like a list with O(1) insertion and O(1) indexing, but that's
> not going to happen. The properties of a list are based on
> its typical implementation.
Sometimes (see the x86 example again) there are multiple implementations
possible, each within the same complexity, but with differing
properties. Normally, you'd just prefer the fastest option of course,
but in my example you might prefer an alternative that provided a larger
maximum capacity, even though it is slower by a small constant factor.
At present, the abstractions defined in the standard allow for this sort
of flexibility, because they don't directly define the implementation
details. It would be even nicer if a mechanism existed to defer the
choice to the end programmer, but at least the status quo provides the
flexibility. As soon as you start mandating implementation details,
you've taken that away.
> >3. It provides a more loosely coupled framework, into
> >which more modular new additions can be placed
> >without affecting anything else. This might be relevant
> >to the addition of hashed containers to the library, for
> >example; I can see a lot of potential for needless code
> >duplication if you have to implement map and hash_map
> >as monolithic entities.
>
> I really think that's a(nother) QOI issue, not something the
> standard should tackle. And besides, taking the current
> containers and the new hash_maps, with their typical
> implementations, do you agree there's a logical one-to-one
> mapping of interface and implementation? Which parts of
> the implementation of container 1 can be reused for
> container 2? I'd say std::deque can already reuse
> std::vector. I guess exposing the underlying model of
> std::deque is more likely to break that reuse than improve
> reuse.
The scope for reuse would obviously depend upon the particular
container, and how many possible implementations for a given abstract
interface were likely to exist. I was envisaging examples where much of
the logic would be common whatever the underlying implementation:
checking for duplicates in associative containers, inserting a sequence
defined by a pair of iterators, range-checking, etc.
The trick is going to be isolating the requirements and guarantees that
depend on the implementation, e.g., do we need a hash function or a
comparison function to implement this version of set? However, looking
at the way templates are used at the moment, and things like Andrei
Alexandrescu's policy class ideas, it struck me that this might not be
so hard after all.
> However, I'm not a library implementor, so I certainly
> can't speak with authority.
Neither am I; that's why I posted. :-)
It's quite possible that I'm chasing ghosts here. I just thought that,
since the issue of how the containers work has been raised anyway, I'd
throw a little wood on the fire. My personal feeling is that the
containers, algorithms and iterators in the library have a lot of
potential, but it's not being realised at the moment because
everything's "partly abstract".
As a result, we have issues like the sort example above, where the
intent is almost clear, but not reliable. How many C++ programmers, even
at the expert level, do you think could actually tell you what
guarantees you had when you used for_each or transform?
IMHO, it would be useful to either separate the abstraction that the
containers represent completely from their implementation, or to drop
the pretense of abstraction and just call things what they are. Either
way, I hope the exact implementations ultimately being used will be
accessible to the programmer if he needs to know, which isn't quite the
case at present.
Regards,
Chris
Author: "David Abrahams" <david.abrahams@rcn.com>
Date: Tue, 18 Dec 2001 04:08:20 GMT Raw View
"Chris Newton" <chrisnewton@no.junk.please.btinternet.com> wrote in message
news:9vilkv$7jn$1@helle.btinternet.com...
> Dear all,
>
> With reference to library issue 69 (the lack of a current requirement
> for vector's storage to be contiguous), I would like to ask for a quick
> "pause for thought". I think the implications of the proposed change are
> deeper than I've seen discussed anywhere at present. I'd like to know
> what those involved in developing the standard library think of the
> reasoning below, particularly if this really has all been considered
> privately before, and rejected for whatever reason.
>
> First of all, I agree that most current library implementations
> implement vector using an array, and therefore storage is contiguous in
> these cases. I also realise that many well-regarded authors already
> write with the assumption that this is the case, rightly or wrongly. In
> this light, standardising the contiguity requirement is an attractive
> option.
>
> However, does tying the logical entity vector to its physical
> requirements so tightly mean that useful flexibility will be sacrificed
> in exchange for little immediate gain?
>
> I am aware of at least one suggestion for a standard library
> implementation to run on older Intel x86 processors. For those not
> familiar with it, memory on such machines is divided into 64K segments,
> and addressed by specifying a segment and an offset into it. A frequent
> irritation on such machines was that even if you had more than 64K of
> memory, a native C or C++ array couldn't conveniently be bigger than 64K
> in size, and larger arrays were not supported by most compilers.
FWIW, I brought that very issue up (without much commitment to it) in the
committee when the contiguity question was raised. It didn't seem to make
any impression, though...
Regards,
Dave