Topic: N3533 - C++ Concurrent Queues - Streamlined Interface


Author: Dominic Burghuber <dmb.youcangetme@googlemail.com>
Date: Wed, 3 Apr 2013 05:19:40 -0700 (PDT)
It appears to me that you can streamline the interface of the N3533
concurrent queue implementation (buffer_queue.h) to forgo the value_pop &
plain push methods.

When dealing with values you are still trying to obtain a value, hence
try_push & try_pop + wait_push & wait_pop (& perhaps the non-blocking
variations) should be sufficient.

value_pop can become an overload of try_pop that takes no parameters &
returns a pair (the value & an op code) for a non-op-code-throwing version,
while a version that throws the op code can offer a try_pop returning a
default-constructed value (i.e. no op code).

Pushing is in effect try_push or wait_push. Regardless, there doesn't
really seem to me to be much need for a plain push(), & thus not much need
for value_pop & plain push variations polluting the interface.

--

---
You received this message because you are subscribed to the Google Groups "ISO C++ Standard - Future Proposals" group.
To unsubscribe from this group and stop receiving emails from it, send an email to std-proposals+unsubscribe@isocpp.org.
To post to this group, send email to std-proposals@isocpp.org.
Visit this group at http://groups.google.com/a/isocpp.org/group/std-proposals/?hl=en.





Author: Dominic Burghuber <dmb.youcangetme@googlemail.com>
Date: Wed, 3 Apr 2013 05:21:49 -0700 (PDT)



Also, as a side note, there should be a try_emplace, & a
wait_and_emplace(), & perhaps the non-blocking version too...



Author: Lawrence Crowl <crowl@googlers.com>
Date: Thu, 4 Apr 2013 15:52:23 -0700
On 4/3/13, Dominic Burghuber <dmb.youcangetme@googlemail.com> wrote:
> It appears to me that you can streamline the interface of the
> n3533 concurrent queue implementation (buffer_queue.h) to forgo
> the value_pop & plain push methods.
>
> When dealing with values you are still trying to obtain a value,
> hence try_push & try_pop + wait_push & wait_pop (& perhaps the
> non-blocking variations) should be sufficient.

I think all we are really talking about is that the differences
between wait_pop/value_pop and push/wait_push are more about how
the caller wants to structure the code than about the functionality
of the queue.

> value_pop can become an overload of try_pop that takes no
> parameters & returns a pair (the value & an op code) for a
> non-op-code-throwing version, while throwing the op code can
> offer a try_pop that returns a default constructed value (i.e. no
> op code).

We have been discussing this in another thread.  The needed
concept is a type that contains a (potentially) optional status
and a (definitely) optional value.  Pairs do not work because they
require a potentially expensive default construction.

> pushing is in effect try_push or wait_push. Regardless there
> doesn't really seem to me like much need for a plain push(), &
> thus not much need for value_pop & plain push variations polluting
> the interface.

Some folks are really going to want to write

  w = q.value_pop();
  x = q.value_pop();
  y = q.value_pop();
  z = q.value_pop();

rather than

  if ( queue_op_status::closed == q.wait_pop(w))
    throw (queue_op_status::closed);
  if ( queue_op_status::closed == q.wait_pop(x))
    throw (queue_op_status::closed);
  if ( queue_op_status::closed == q.wait_pop(y))
    throw (queue_op_status::closed);
  if ( queue_op_status::closed == q.wait_pop(z))
    throw (queue_op_status::closed);

Unless we have an alternative, in the form of something like the
type mentioned above, I advocate keeping the two interfaces.

--
Lawrence Crowl



Author: Lawrence Crowl <crowl@googlers.com>
Date: Thu, 4 Apr 2013 15:55:00 -0700
On 4/3/13, Dominic Burghuber <dmb.youcangetme@googlemail.com> wrote:
> Also, as a side note, there should be a try_emplace, & a
> wait_and_emplace(), & perhaps the non-blocking version too...

Good point.  The current interface requires elements to have default
constructors.  My current implementation default constructs an array
of elements as the storage.  So, emplace would have to destroy the
element and then reinitialize it.  It's not clear to me that the
result is better.  I think we need more discussion.

--
Lawrence Crowl



Author: Dominic Burghuber <dmb.youcangetme@googlemail.com>
Date: Sat, 6 Apr 2013 21:09:43 -0700 (PDT)



On Thursday, April 4, 2013 11:52:23 PM UTC+1, Lawrence Crowl wrote:
>
>
> I think all we are really talking about is that the differences
> between wait_pop/value_pop and push/wait_push are more about how
> the caller wants to structure the code than about the functionality
> of the queue.
>

Granted, flexibility of choice is good. Though I wonder how much of a use
case there is for *trying* to obtain a value without checking whether you
got it before using it. I had implemented an exception-throwing overloaded
version of the function too (as you had).


> We have been discussing this in another thread.  The needed
> concept is a type that contains a (potentially) optional status
> and a (definitely) optional value.  Pairs do not work because they
> require a potentially expensive default construction.
>
>
I wasn't aware of this pair default-construction expense; thank you for
highlighting it. I would then hope for some kind of return value
optimisation, but with what the operation is doing, that doesn't appear
possible without forgoing exception safety :-(


>
> Unless we have an alternative, in the form of something like the
> type mentioned above, I advocate keeping the two interfaces.
>
>
>
I've actually ditched the fixed-size-of-elements option from my
implementations, so there is no waiting to push now. I did this because:
- A lot of standard containers are dynamically sized, and should you want
to impose a restriction on them, the outer scope is typically used to
control this.
- Again, this trims the container interface (not essential, but typically
nice, & less mental expense during usage).
- Though fixing the size may be an initial optimisation, hitting the limit
will cause a slowdown. After profiling I could fix a size bigger than any
ever possibly reached, but then I'm wasting memory. Also, with the right
allocator, diverting away from OS-level allocation is still possible. With
this method I'm going to have a more responsive queue during run-time,
avoid expense on the non-fixed usage, and likely end up saving on my
memory budget anyway.


Another side note (I made one earlier in this thread): I now prefer to
avoid a concurrent queue/stack synchronised with mutexes.

For the benefit of some readers: the problem with a queue protected by
mutual exclusion is that when you try to push an item and hit contention,
your thread swaps out. Then the popping thread (e.g. a background worker
picking items off the queue) gets an item and your thread swaps back in.
Threads swapping in and out just to push/add an item creates unnecessary
thrashing due to context switching. It may not be an issue in some
systems, but if you're looking for performance it's not ideal, especially
if the queue is on a critical path in your code.

And Lawrence: I think that if it came to a choice of picking one
implementation over the other to push for standardisation of a concurrent
queue, the lock-free version is the way to go (please!)

Regards,




Author: Dominic Burghuber <dmb.youcangetme@googlemail.com>
Date: Sat, 6 Apr 2013 21:37:40 -0700 (PDT)



On Thursday, April 4, 2013 11:55:00 PM UTC+1, Lawrence Crowl wrote:
>
> On 4/3/13, Dominic Burghuber <dmb.you...@googlemail.com <javascript:>>
> wrote:
> > Also, as a side note, there should be a try_emplace, & a
> > wait_and_emplace(), & perhaps the non-blocking version too...
>
> Good point.  The current interface requires elements to have default
> constructors.  My current implementation default constructs an array
> of elements as the storage.  So, emplace would have to destroy the
> element and then reinitialize it.  It's not clear to me that the
> result is better.  I think we need more discussion.
>
>
>
I suppose a choice here may be to expose more user control over the
allocation of the underlying buffer store, as with, for example,
std::vector's reserve, a construction overload for elements of a default
value, maybe initialiser-list construction, etc. In other words, default
construction gives an empty std::concurrent_queue (or whatever the name
would be) wrapper, while other constructor versions allow more user
control of the underlying allocation. So the underlying buffer allocation
itself is delayed/dependent on user instruction. Emplacement is then there
and part of that choice too - it would be on the user: if they decided to
default-construct elements, then destroying the element and reinitialising
it is the consequence of emplacement.

Of course, some of what I said there depends on how you are allocating the
backing store. With a raw buffer it can provide options pretty much like a
vector, which I like, but that is somewhat tied to the functionality of
having a fixed-size queue. If you didn't want that feature then you would
perhaps prefer a deque-like approach (e.g. multiple vectors allocating
chunks). Furthermore, if node-based, things like reservation can't really
apply; however, with a list-like implementation you may be able to get
finer-grained locking control on the head/tail than you could with a raw
buffer.

Forgive me if I went a little off topic, but back on it: I think
emplacement is a pretty good/nice option to have as a user. Whether you
think it is worth refactoring your design over is of course up to you. I
personally probably would try, given an underlying contiguous buffer
implementation.

Regards,




Author: "Vicente J. Botet Escriba" <vicente.botet@wanadoo.fr>
Date: Sun, 07 Apr 2013 21:40:27 +0200

On 07/04/13 06:09, Dominic Burghuber wrote:
>
>
> On Thursday, April 4, 2013 11:52:23 PM UTC+1, Lawrence Crowl wrote:
>
>
>     I think all we are really talking about is that the differences
>     between wait_pop/value_pop and push/wait_push are more about how
>     the caller wants to structure the code than about the functionality
>     of the queue.
>
>
> Granted, flexibility of choice is good. Though I wonder how much use
> case there is for _trying_ to obtain a value without checking whether
> you got it or not before using it. I had implemented an exception
> throwing overloaded version of the function too (as you had too).
wait_pop and value_pop have the additional difference that they can be
applied to different types (DefaultConstructible vs. NothrowMoveConstructible).
>
>     We have been discussing this in another thread.  The needed
>     concept is a type that contains a (potentially) optional status
>     and a (definitely) optional value.  Pairs do not work because they
>     require a potentially expensive default construction.
>
>
> I wasn't aware of this pair default construction expense, thank you
> for highlighting it. I would then hope for some kind of return value
> optimisation, but with what the operation is doing, that doesn't
> appear like it can be done without forgoing exception safety :-(
>
Exception safety must be ensured by requiring that the type is nothrow
move constructible.
>
>
>
>     Unless we have an alternative, in the form of something like the
>     type mentioned above, I advocate keeping the two interfaces.
>
>
>
> I've actually ditched the fixed size of elements option from my
> implementations, so there is no waiting to push now. I did this because:
> - A lot of standard containers are dynamically sized, and should you
> want to impose a restriction on them, the outer scope is typically
> used to control this.
> - Again this trims the container interface (not essential, but
> typically nice, & less mental expense during usage).
> - Though fixing size may be an initial optimisation, hitting the limit
> will cause slow down. After profiling I could fix a size bigger than
> that ever possibly reached, but then I'm wasting memory. Also, with
> the right allocator diverting away from OS-level allocation is still
> possible. With this method I'm going to have a more responsive queue
> during run-time, avoid expense on the non-fixed usage, and likely end
> up saving on my memory budget anyway.
>
Could you show your interface?
>
> Another side note (I made one earlier in this thread): I'm preferring
> now to avoid a concurrent queue/stack synchronised with mutexes.
>
> For the benefits of some readers; The problem with a queue protected
> with mutual exclusion is that when you try to push an item and hit
> contention, your thread swaps out. Then the popping thread (e.g. a
> background worker picking items off the queue) gets an item and your
> thread swaps back in. Thread swapping in and out just to push/add an
> item creates unnecessary thrashing due to switching. It may not be an
> issue in some systems, but if you're looking for performance it's not
> ideal, especially if the queue is on a critical path in your code.

A variation that solves this issue consists in adding a congested state,
and exiting from the congested state only when there are enough empty
places in the queue. I don't know if this can be seen as QOI or needs to
be specified.

>
> & Lawrence; I think that if it came to a choice of picking one
> implementation over the other to push for standardisation of a
> concurrent queue, the lockless version is the way to go (please!)
>
I don't know if this is inherent to lock-free queues, but Boost.Lockfree
introduces some additional requirements on the stored type:

- T must have a trivial assignment operator
- T must have a trivial destructor

In addition, it is not yet clear how to provide lock-free implementations
for unbounded queues, at least in the worst case.

--=20

---=20
You received this message because you are subscribed to the Google Groups "=
ISO C++ Standard - Future Proposals" group.
To unsubscribe from this group and stop receiving emails from it, send an e=
mail to std-proposals+unsubscribe@isocpp.org.
To post to this group, send email to std-proposals@isocpp.org.
Visit this group at http://groups.google.com/a/isocpp.org/group/std-proposa=
ls/?hl=3Den.



--------------070906090106080106060709
Content-Type: text/html; charset=ISO-8859-1

<html>
  <head>
    <meta content="text/html; charset=ISO-8859-1"
      http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <div class="moz-cite-prefix">Le 07/04/13 06:09, Dominic Burghuber a
      &eacute;crit&nbsp;:<br>
    </div>
    <blockquote
      cite="mid:518a55e8-0613-4364-879a-fb42d6300b27@isocpp.org"
      type="cite"><br>
      <br>
      On Thursday, April 4, 2013 11:52:23 PM UTC+1, Lawrence Crowl
      wrote:
      <blockquote class="gmail_quote" style="margin: 0;margin-left:
        0.8ex;border-left: 1px #ccc solid;padding-left: 1ex;"><br>
        I think all we are really talking about is that the differences
        <br>
        between wait_pop/value_pop and push/wait_push are more about how
        <br>
        the caller wants to structure the code than about the
        functionality
        <br>
        of the queue.
        <br>
      </blockquote>
      <div><br>
        Granted, flexibility of choice is good. Though I wonder how much
        use case there is for <u>trying</u> to obtain a value without
        checking whether you got it or not before using it. I had
        implemented an exception throwing overloaded version of the
        function too (as you had too).<br>
      </div>
    </blockquote>
    wait_pop and pop_value have the additional difference that they can
    be applied to different types
    (DefaultConstructible/NoThrowMoveConstructible)<br>
    <blockquote
      cite="mid:518a55e8-0613-4364-879a-fb42d6300b27@isocpp.org"
      type="cite">
      <div>&nbsp;</div>
      <blockquote class="gmail_quote" style="margin: 0;margin-left:
        0.8ex;border-left: 1px #ccc solid;padding-left: 1ex;">We have
        been discussing this in another thread. &nbsp;The needed
        <br>
        concept is a type that contains a (potentially) optional status
        <br>
        and a (definitely) optional value. &nbsp;Pairs do not work because
        they
        <br>
        require a potentially expensive default construction.
        <br>
        <br>
      </blockquote>
      <div>&nbsp;<br>
        I wasn't aware of this pair default construction expense, thank
        you for highlighting it. I would then hope for some kind of
        return value optimisation, but with what the operation is doing,
        that doesn't appear like it can be done without forgoing
        exception safety :-(<br>
        <br>
      </div>
    </blockquote>
    Exception-safety must be ensured by requiring tha the type is no
    throw move constructible.<br>
    <blockquote
      cite="mid:518a55e8-0613-4364-879a-fb42d6300b27@isocpp.org"
      type="cite">
      <blockquote class="gmail_quote" style="margin: 0;margin-left:
        0.8ex;border-left: 1px #ccc solid;padding-left: 1ex;">
        <br>
        <br>
        Unless we have an alternative, in the form of something like the
        <br>
        type mentioned above, I advocate keeping the two interfaces.
        <br>
        <br>
        <br>
      </blockquote>
      <div><br>
        I've actually ditched the fixed size of elements option from my
        implementations, so there is no waiting to push now. I did this
        because:<br>
        - A lot of standard containers&nbsp; are dynamically sized, and
        should you want to impose a restriction on them, the outer scope
        is typically used to control this.<br>
        - Again this trims the container interface (not essential, but
        typically nice, &amp; less mental expense during usage).<br>
        - Though fixing size may be an initial optimisation, hitting the
        limit will cause slow down. After profiling I could fix a size
        bigger then that ever possibly reached, but then I'm wasting
        memory. Also, with the right allocator diverting away from
        OS-level allocation is still possible. With this method I'm
        going to have a more responsive queue during run-time, avoid
        expense on the non-fixed usage, and likely end up saving on my
        memory budget anyway.<br>
        <br>
      </div>
    </blockquote>
    Could you show your interface?<br>
    <blockquote
      cite="mid:518a55e8-0613-4364-879a-fb42d6300b27@isocpp.org"
      type="cite">
      <div><br>
        Another side note (I made one earlier in this thread): I'm
        preferring now to avoid a concurrent queue/stack synchronised
        with mutexes.<br>
        <br>
        For the benefits of some readers; The problem with a queue
        protected with mutual exclusion is that when you try to push an
        item and hit contention, your thread swaps out. Then the popping
        thread (e.g. a background worker picking items off the queue)
        gets an item and your thread swaps back in. Thread swapping in
        and out just to push/add an item creates unnecessary thrashing
        due to switching. It may not be an issue in some systems, but if
        you're looking for performance it's not ideal, especially if the
        queue is on a critical path in your code.&nbsp; <br>
      </div>
    </blockquote>
    <br>
    A variation that solves this issue consists of adding a congested
    state, and exiting from the congested state only when there are
    enough empty places on the queue. I don't know if this can be seen
    as QOI or needs to be specified.<br>
    <br>
    <blockquote
      cite="mid:518a55e8-0613-4364-879a-fb42d6300b27@isocpp.org"
      type="cite">
      <div><br>
        &amp; Lawrence; I think that if it came to a choice of picking
        one implementation over the other to push for standardisation of
        a concurrent queue, the lockless version is the way to go
        (please!)<br>
        <br>
      </div>
    </blockquote>
    I don't know if this is inherent to lock-free queues but
    Boost.LockFree introduces some additional requirements to the stored
    type<br>
    <br>
  </body>
  <ul>
    <li class="listitem">
      <p>T must have a trivial assignment operator</p>
    </li>
    <li class="listitem">
      <p>T must have a trivial destructor </p>
    </li>
  </ul>
  <body bgcolor="#FFFFFF" text="#000000">
    In addition, it is not yet clear how to provide lock-free
    implementations for unbounded queues, at least in the worst case.<br>
  </body>
</html>


-- <br />
&nbsp;<br />
--- <br />
You received this message because you are subscribed to the Google Groups &quot;ISO C++ Standard - Future Proposals&quot; group.<br />
To unsubscribe from this group and stop receiving emails from it, send an email to std-proposals+unsubscribe@isocpp.org.<br />
To post to this group, send email to std-proposals@isocpp.org.<br />
Visit this group at <a href="http://groups.google.com/a/isocpp.org/group/std-proposals/?hl=en">http://groups.google.com/a/isocpp.org/group/std-proposals/?hl=en</a>.<br />
&nbsp;<br />
&nbsp;<br />


.


Author: Lawrence Crowl <crowl@googlers.com>
Date: Mon, 8 Apr 2013 17:29:41 -0700
Raw View
On 4/6/13, Dominic Burghuber <dmb.youcangetme@googlemail.com> wrote:
> I've actually ditched the fixed size of elements option from
> my implementations, so there is no waiting to push now. I did
> this because:
>
> - A lot of standard containers are dynamically sized, and should
> you want to impose a restriction on them, the outer scope is
> typically used to control this.
>
> - Again this trims the container interface (not essential, but
> typically nice, & less mental expense during usage).
>
> - Though fixing size may be an initial optimisation, hitting the
> limit will cause slow down. After profiling I could fix a size
> bigger than that ever possibly reached, but then I'm wasting
> memory. Also, with the right allocator diverting away from
> OS-level allocation is still possible. With this method I'm going
> to have a more responsive queue during run-time, avoid expense
> on the non-fixed usage, and likely end up saving on my memory
> budget anyway.

My paper proposed a queue concept that admits both fixed-size and
unbounded queues.  The reason I did that is because allocation can
be a significant run-time cost and can itself block.  Sometimes
preallocating is the way to go.

Having a need for fixed-size queues does not imply excluding
unbounded queues.
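
A sketch may make the preallocation argument concrete (this is an illustrative toy of
mine, not code from the paper): a fixed-capacity ring whose storage is allocated once
in the constructor, so a push never touches the allocator and therefore can never
block inside it. Only the mutex-guarded try_ operations are shown.

```cpp
#include <cstddef>
#include <mutex>
#include <utility>
#include <vector>

// Fixed-capacity queue sketch: all storage is allocated up front, so
// try_push/try_pop never call the allocator after construction.
template <class T>
class fixed_ring_queue {
public:
    explicit fixed_ring_queue(std::size_t capacity) : buf_(capacity) {}

    bool try_push(T v) {
        std::lock_guard<std::mutex> lock(mtx_);
        if (size_ == buf_.size()) return false;  // full: fail, don't grow
        buf_[(head_ + size_) % buf_.size()] = std::move(v);
        ++size_;
        return true;
    }

    bool try_pop(T& out) {
        std::lock_guard<std::mutex> lock(mtx_);
        if (size_ == 0) return false;            // empty: fail, don't wait
        out = std::move(buf_[head_]);
        head_ = (head_ + 1) % buf_.size();
        --size_;
        return true;
    }

private:
    std::mutex mtx_;
    std::vector<T> buf_;
    std::size_t head_ = 0;
    std::size_t size_ = 0;
};
```

An unbounded variant would differ only in growing the buffer instead of failing,
which is exactly where the hidden allocation cost comes back in.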

> Another side note (I made one earlier in this thread): I'm
> preferring now to avoid a concurrent queue/stack synchronised
> with mutexes.
>
> For the benefits of some readers; The problem with a queue
> protected with mutual exclusion is that when you try to push an
> item and hit contention, your thread swaps out. Then the popping
> thread (e.g. a background worker picking items off the queue)
> gets an item and your thread swaps back in.  Thread swapping in
> and out just to push/add an item creates unnecessary thrashing
> due to switching. It may not be an issue in some systems, but if
> you're looking for performance it's not ideal, especially if the
> queue is on a critical path in your code.

We also have an implementation of a lock-free queue.  In either the
mutex-based implementations or the lock-free implementations, you
avoid swapping out the thread by using the non-blocking operations.
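
The non-blocking pattern might look like the following sketch (`tiny_queue` and
`push_spinning` are hypothetical names for illustration, not N3533 API): on a
failed try_push the thread stays runnable and yields or does other useful work,
rather than sleeping in the kernel on a condition variable.

```cpp
#include <cstddef>
#include <deque>
#include <mutex>
#include <thread>

// Minimal stand-in queue exposing only non-blocking operations.
struct tiny_queue {
    explicit tiny_queue(std::size_t c) : cap(c) {}
    bool try_push(int v) {
        std::lock_guard<std::mutex> lock(m);
        if (d.size() >= cap) return false;
        d.push_back(v);
        return true;
    }
    bool try_pop(int& out) {
        std::lock_guard<std::mutex> lock(m);
        if (d.empty()) return false;
        out = d.front();
        d.pop_front();
        return true;
    }
    std::mutex m;
    std::deque<int> d;
    std::size_t cap;
};

// On failure the producer yields (or, better, does other useful work) and
// retries; it is never parked on a wait queue inside the container.
inline void push_spinning(tiny_queue& q, int v) {
    while (!q.try_push(v)) {
        std::this_thread::yield();
    }
}
```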

> & Lawrence; I think that if it came to a choice of picking one
> implementation over the other to push for standardisation of a
> concurrent queue, the lockless version is the way to go (please!)

I would be very much surprised if we could get away with only one
standard implementation.

--
Lawrence Crowl




.


Author: Dominic Burghuber <dmb.youcangetme@googlemail.com>
Date: Thu, 11 Apr 2013 06:26:33 -0700 (PDT)
Raw View


Sorry for the delayed reply, & thank you for yours.

On Sunday, April 7, 2013 8:40:27 PM UTC+1, Vicente J. Botet Escriba wrote:
>
> On 07/04/13 06:09, Dominic Burghuber wrote:
>
> wait_pop and pop_value have the additional difference that they can be
> applied to different types (DefaultConstructible/NoThrowMoveConstructible)
>

....

> Exception-safety must be ensured by requiring that the type is no-throw
> move constructible.
>

Therein lies the key reason for value_pop() - allowing the value to be
moved out of the queue rather than copied. Having reviewed N3533, I must
have overlooked it initially.

> Could you show your interface?
>

I could, but now that I see the reason for value_pop(), it won't differ
far from the implementation accompanying N3533. Where I am now is
looking to move on (there's much to be implemented in the project I'm
working on). The solution I have gone for is a quick wrap of the
std::queue interface inside a concurrent_queue object, allowing for a
std::list/std::deque underlying container, & allocators. The queue is
optionally bounded (i.e. a limit of 0 elements means no limit on size).
It now has value semantics in addition to wait & try variants, & the
ability to open & close. Also, as it wraps existing std containers, I
can use emplace_back to add to the queue. Then you have queries (empty,
full, size, limit, closed) & the ability to reset the limit (this makes
push/emplace_back commands fail (if try) or wait (if wait_push or simply
push/emplace) until enough has been popped that a waiting thread can be
notified by the not_full condition).
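
A minimal sketch of such a wrapper, assuming one plausible shape (the names
concurrent_queue, limit, and closed come from the description above; the body is
a guess, with the wait_* variants, emplace, and the container/allocator
parameters omitted for brevity; the notify calls mark where waiters would wake):

```cpp
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <queue>
#include <utility>

template <class T>
class concurrent_queue {
public:
    // limit == 0 means unbounded, as described above.
    explicit concurrent_queue(std::size_t limit = 0) : limit_(limit) {}

    void close() {
        std::lock_guard<std::mutex> lock(mtx_);
        closed_ = true;
        not_empty_.notify_all();
        not_full_.notify_all();
    }

    bool closed() const {
        std::lock_guard<std::mutex> lock(mtx_);
        return closed_;
    }

    bool full() const {
        std::lock_guard<std::mutex> lock(mtx_);
        return limit_ != 0 && q_.size() >= limit_;
    }

    bool try_push(T v) {
        std::lock_guard<std::mutex> lock(mtx_);
        if (closed_ || (limit_ != 0 && q_.size() >= limit_)) return false;
        q_.push(std::move(v));
        not_empty_.notify_one();  // wake a wait_pop, if any
        return true;
    }

    bool try_pop(T& out) {
        std::lock_guard<std::mutex> lock(mtx_);
        if (q_.empty()) return false;
        out = std::move(q_.front());
        q_.pop();
        not_full_.notify_one();   // wake a wait_push, if any
        return true;
    }

private:
    mutable std::mutex mtx_;
    std::condition_variable not_empty_, not_full_;
    std::queue<T> q_;
    std::size_t limit_;
    bool closed_ = false;
};
```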

I'd like the time to tackle a direct implementation, but the solution
I've gone for provides enough to move on out in the trenches here. It's
not ideal, but if it bites later in profiling I can then look to change
its guts. Hopefully, once standardised, it will be simple to switch to
the language's version as well.

I'm adding a lock-free queue soon, but...


> A variation that solves this issue consists of adding a congested
> state, and exiting from the congested state only when there are
> enough empty places on the queue. I don't know if this can be seen
> as QOI or needs to be specified.
>

.... that point (above) is very interesting. I'm going to look into it,
so maybe I will be tackling a direct implementation after all. I'll have
to review it and evaluate. Thank you for pointing it out to me.
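
The congested-state idea above can be sketched as a small hysteresis gate (the
names and thresholds here are purely illustrative): pushers are turned away once
occupancy reaches a high-water mark and re-admitted only after it falls back to
a low-water mark, rather than the instant a single slot frees up.

```cpp
#include <cstddef>

class congestion_gate {
public:
    congestion_gate(std::size_t high, std::size_t low)
        : high_(high), low_(low) {}

    // Called with the current queue size before each push attempt.
    bool admit(std::size_t size) {
        if (congested_) {
            if (size <= low_) congested_ = false;  // enough space again
        } else if (size >= high_) {
            congested_ = true;                     // enter congested state
        }
        return !congested_;
    }

private:
    std::size_t high_, low_;
    bool congested_ = false;
};
```

The hysteresis band between the two marks is what stops producers from
thrashing in and out of the "full" condition one element at a time.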


> I don't know if this is inherent to lock-free queues, but
> Boost.LockFree introduces some additional requirements on the stored
> type:
>
>    - T must have a trivial assignment operator
>    - T must have a trivial destructor
>

I'll also look into these requirements as I evaluate the lock-free
solution.
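
Those two requirements can be checked up front with standard type traits; a
rough sketch (the trait name is mine). The restriction exists because a
lock-free implementation typically recycles raw storage slots without running
destructors on them.

```cpp
#include <string>
#include <type_traits>

// Compile-time check mirroring the two quoted Boost.LockFree requirements.
template <class T>
struct lockfree_storable
    : std::integral_constant<bool,
          std::is_trivially_copy_assignable<T>::value &&
          std::is_trivially_destructible<T>::value> {};

static_assert(lockfree_storable<int>::value, "int qualifies");
static_assert(!lockfree_storable<std::string>::value,
              "std::string owns memory and does not qualify");
```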

Regards,

Dominic




.


Author: Dominic Burghuber <dmb.youcangetme@googlemail.com>
Date: Thu, 11 Apr 2013 06:57:30 -0700 (PDT)
Raw View



On Tuesday, April 9, 2013 1:29:41 AM UTC+1, Lawrence Crowl wrote:


> My paper proposed a queue concept that admits both fixed-size and
> unbounded queues.  The reason I did that is because allocation can
> be a significant run-time cost and can itself block.  Sometimes
> preallocating is the way to go.
>
> Having a need for fixed-size queues does not imply excluding
> unbounded queues.
>
>
I saw that in my implementation, and now in yours. In my arena (soft
real-time systems), pre-allocation is often the way to go.


>
> We also have an implementation of a lock-free queue.  In either the
> mutex-based implementations or the lock-free implementations, you
> avoid swapping out the thread by using the non-blocking operations.
>
>
But you try & perhaps fail, then try again later, & so on. You look to
the lock-free solution to "fire & forget" in a sense. I kept my
mutex-based implementation around, as it will probably have its place.

Provided usage keeps a mutex out of contention, the cost is minimal. Say
you've got multiple threads dropping items onto the queue (& perhaps
multiple threads picking items off it). The lockless queue should have
threads swapping out only when they run out of work, not in order to
gain access to a locked shared state.
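
The "fire & forget" behaviour is easiest to see in a single-producer /
single-consumer ring, a common lock-free sketch (my illustration, not the N3533
implementation): each side publishes its index with a release store and observes
the other with an acquire load, so neither side ever sleeps on a mutex.

```cpp
#include <atomic>
#include <cstddef>
#include <utility>

// SPSC ring buffer using free-running indices; N must be a power of two
// so the wrap-around can be done with a mask.
template <class T, std::size_t N>
class spsc_ring {
    static_assert((N & (N - 1)) == 0, "N must be a power of two");
public:
    // Producer side: fails (rather than blocks) when the ring is full.
    bool try_push(T v) {
        const std::size_t t = tail_.load(std::memory_order_relaxed);
        if (t - head_.load(std::memory_order_acquire) == N) return false;
        buf_[t & (N - 1)] = std::move(v);
        tail_.store(t + 1, std::memory_order_release);  // publish
        return true;
    }

    // Consumer side: fails (rather than blocks) when the ring is empty.
    bool try_pop(T& out) {
        const std::size_t h = head_.load(std::memory_order_relaxed);
        if (h == tail_.load(std::memory_order_acquire)) return false;
        out = std::move(buf_[h & (N - 1)]);
        head_.store(h + 1, std::memory_order_release);  // free the slot
        return true;
    }

private:
    T buf_[N];
    std::atomic<std::size_t> head_{0}, tail_{0};
};
```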


> I would be very much surprised if we could get away with only one
> standard implementation.
>
>
 That is good to hear.

Thanks for your time Lawrence.






.