Topic: Comments on N3533 - C++ Concurrent Queues
Author: "Vicente J. Botet Escriba" <vicente.botet@wanadoo.fr>
Date: Sat, 23 Mar 2013 16:55:32 +0100
Hi,
I would like to share some thoughts about this proposal.
*Closed queues*
While closing a queue is a user action, the threads that see this state
need to take an exceptional action, and so I wonder if it wouldn't be
better to signal this state using an exception.
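As a minimal sketch of the exception-based alternative (queue_closed and toy_queue are invented names for illustration; N3533 itself reports this case via queue_op_status::closed):

```cpp
#include <deque>
#include <mutex>
#include <stdexcept>

// Hypothetical exception type signalling an operation on a closed queue.
struct queue_closed : std::runtime_error {
    queue_closed() : std::runtime_error("operation on closed queue") {}
};

template <class Element>
class toy_queue {
    std::deque<Element> items_;
    bool closed_ = false;
    std::mutex m_;
public:
    void close() {
        std::lock_guard<std::mutex> lk(m_);
        closed_ = true;
    }
    void push(const Element& e) {
        std::lock_guard<std::mutex> lk(m_);
        if (closed_)
            throw queue_closed{};  // signal the closed state by exception
        items_.push_back(e);
    }
};
```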
*Non-waiting operations*
This would allow replacing the queue_op_status returned by the proposed
operations with a bool indicating whether the operation succeeded.
|bool try_push(const Element&);|
|bool try_push(Element&&);|
|bool try_pop(Element&);|
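A minimal single-threaded sketch of this bool-returning interface (toy_queue is an invented name; a real implementation would also take a lock):

```cpp
#include <deque>

template <class Element>
class toy_queue {
    std::deque<Element> items_;
public:
    bool try_push(const Element& e) {
        items_.push_back(e);  // an unbounded toy never reports full
        return true;
    }
    bool try_pop(Element& out) {
        if (items_.empty())
            return false;  // instead of queue_op_status::empty
        out = items_.front();
        items_.pop_front();
        return true;
    }
};
```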
*Non-blocking operations*
The case for queue_op_status::busy could be conveyed by the bool without
too much loss of information for the user.
Concerning the name of the nonblocking operations we could use a
no_block_tag
bool try_push(no_block_tag, const |Element|& elem);
bool try_pop(no_block_tag, |Element|& elem);
or use it as a suffix
bool try_push_no_block(const |Element|& elem);
bool try_pop_no_block(|Element|& elem);
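The tag-dispatch spelling could look like the following sketch (no_block_tag and toy_queue are invented names; std::try_to_lock stands in for whatever non-blocking mechanism a real implementation would use):

```cpp
#include <deque>
#include <mutex>

struct no_block_tag {};
constexpr no_block_tag no_block{};

template <class Element>
class toy_queue {
    std::deque<Element> items_;
    std::mutex m_;
public:
    bool try_push(no_block_tag, const Element& e) {
        std::unique_lock<std::mutex> lk(m_, std::try_to_lock);
        if (!lk.owns_lock())
            return false;  // would have blocked: report failure instead
        items_.push_back(e);
        return true;
    }
    bool try_pop(no_block_tag, Element& out) {
        std::unique_lock<std::mutex> lk(m_, std::try_to_lock);
        if (!lk.owns_lock() || items_.empty())
            return false;
        out = items_.front();
        items_.pop_front();
        return true;
    }
};
```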
*Blocking operations*
I suggest renaming the wait_<operation> functions to wait_and_<operation>:
|void wait_and_push(const Element&);|
|void wait_and_push(Element&&);|
|void wait_and_pop(Element&);|
There would then be no need for a plain push operation.
value_pop should not be available if the Element type is not nothrow movable.
|Element queue::value_pop();|
I wonder if, in order to have uniform naming, this operation shouldn't
be renamed wait_and_pop.
For types that can throw while moving the return type could be a
shared_ptr. The drawback is of course the needed allocation.
I would not be against removing the wait_and_ prefix:
|void push(const Element&);|
|void push(Element&&);|
|void pop(Element&);|
|Element pop();|
*Observers*
Instead of using is_empty/is_full/is_closed it would be better to use
empty/full/closed, following current C++ standard naming usage.
A size() function would make it possible to observe the evolution of
the queue size from outside.
*Bounded queues*
For queues with a known capacity I would suggest the addition of a
capacity function.
*Unbounded queues*
For unbounded queues the wait_and_push operation name is less
meaningful, as there is no wait.
Best regards,
Vicente
--
---
You received this message because you are subscribed to the Google Groups "ISO C++ Standard - Future Proposals" group.
To unsubscribe from this group and stop receiving emails from it, send an email to std-proposals+unsubscribe@isocpp.org.
To post to this group, send email to std-proposals@isocpp.org.
Visit this group at http://groups.google.com/a/isocpp.org/group/std-proposals/?hl=en.
.
Author: Klaim - Joël Lamotte <mjklaim@gmail.com>
Date: Sat, 23 Mar 2013 17:47:30 +0100
On Sat, Mar 23, 2013 at 4:55 PM, Vicente J. Botet Escriba <
vicente.botet@wanadoo.fr> wrote:
> *Closed queues*
> While closing a queue is a user action, the threads that see this state
> need to take an exceptional action, and so I wonder if it wouldn't be
> better to signal this state using an exception.
>
>
I agree on the signaling, I disagree with the exception. Closing a queue is
not like making the queue an invalid object.
If some kind of signal library is accepted, it would be better to use it.
Or at least use a way to register a callback for this instead of throwing
an exception.
The callback would then be called when trying to pop the closed queue.
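A hedged sketch of that callback idea (nothing like on_closed exists in N3533; toy_queue and on_closed are invented names for illustration):

```cpp
#include <deque>
#include <functional>
#include <utility>

template <class Element>
class toy_queue {
    std::deque<Element> items_;
    bool closed_ = false;
    std::function<void()> on_closed_;
public:
    void on_closed(std::function<void()> cb) { on_closed_ = std::move(cb); }
    void close() { closed_ = true; }
    void push(const Element& e) { items_.push_back(e); }
    bool try_pop(Element& out) {
        if (items_.empty()) {
            if (closed_ && on_closed_)
                on_closed_();  // notify the registered callback, no throw
            return false;
        }
        out = items_.front();
        items_.pop_front();
        return true;
    }
};
```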
Joel Lamotte
.
Author: "Vicente J. Botet Escriba" <vicente.botet@wanadoo.fr>
Date: Sat, 23 Mar 2013 18:40:57 +0100
On 23/03/13 at 17:47, Klaim - Joël Lamotte wrote:
>
>
>
> On Sat, Mar 23, 2013 at 4:55 PM, Vicente J. Botet Escriba
> <vicente.botet@wanadoo.fr> wrote:
>
> *Closed queues*
> While closing a queue is a user action, the threads that see this
> state need to take an exceptional action, and so I wonder if it
> wouldn't be better to signal this state using an exception.
>
>
> I agree on the signaling, I disagree with the exception. Closing a
> queue is not like making the queue an invalid object.
I agree. Note however that the current proposal includes functions as
|void queue::push(const Element&);|
|void queue::push(Element&&);|
Push the |Element| onto the queue.
|Element queue::value_pop();|
Pop an |Element| from the queue. The element will be moved out of
the queue in preference to being copied.
which could notify a closed queue only by using an exception. I have not
yet identified any major argument for mixing both approaches.
> If some kind of signal library is accepted, it would be better to use
> it. Or at least use a way to register a callback for this instead of
> throwing an exception.
> The callback would then be called when trying to pop the closed queue.
I have not analyzed the signal or callback approach in much depth. For
the time being I don't see a need for this kind of notification, as the
user of the queue becomes aware of the closed state as soon as it uses
the queue. Anyway, we don't have a signal library, and the callback
approach would be cumbersome and would make the implementation more
complex than needed.
Best,
Vicente
.
Author: Oliver Kowalke <oliver.kowalke@gmail.com>
Date: Sat, 23 Mar 2013 19:19:00 +0100
2013/3/23 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
>
> Element queue::value_pop();
>
> Pop an Element from the queue. The element will be moved out of the queue
> in preference to being copied.
> which could notify a closed queue only using an exception. I have not
> identified yet any major argument for mixing both approaches.
>
Maybe return std::optional<>: std::optional<R> queue::value_pop(), where
an empty optional would indicate a closed queue.
Or bool queue::value_pop(T&)?
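A sketch of this suggestion, written with C++17 std::optional for illustration (at the time of this thread, optional was itself still a proposal; toy_queue is an invented name, and for brevity this toy returns immediately instead of waiting when the queue is empty but open):

```cpp
#include <deque>
#include <optional>
#include <utility>

template <class Element>
class toy_queue {
    std::deque<Element> items_;
    bool closed_ = false;
public:
    void push(Element e) { items_.push_back(std::move(e)); }
    void close() { closed_ = true; }
    std::optional<Element> value_pop() {
        if (items_.empty())
            return std::nullopt;  // empty optional: closed (or, here, empty)
        Element e = std::move(items_.front());
        items_.pop_front();
        return e;
    }
};
```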
.
Author: "Vicente J. Botet Escriba" <vicente.botet@wanadoo.fr>
Date: Sat, 23 Mar 2013 22:49:59 +0100
On 23/03/13 at 19:19, Oliver Kowalke wrote:
> 2013/3/23 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
>
> |Element queue::value_pop();|
>
> Pop an |Element| from the queue. The element will be moved out
> of the queue in preference to being copied.
>
> which could notify a closed queue only using an exception. I have
> not identified yet any major argument for mixing both approaches.
>
> maybe returning std::optional<>: std::optional< R > queue::value_pop()
> -> an invalid optional would indicate a closed queue
The problem with optional is that it is unable to ensure a no-throw
move. This is why the operation should be disabled; otherwise the user
risks losing enqueued elements, which IMO is not acceptable.
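A sketch of how value_pop could be constrained to nothrow-movable element types, using C++11 SFINAE (toy_queue is an invented name; this is one possible spelling of the constraint, not the proposal's):

```cpp
#include <deque>
#include <type_traits>
#include <utility>

template <class Element>
class toy_queue {
    std::deque<Element> items_;
public:
    void push(Element e) { items_.push_back(std::move(e)); }
    // Only participates in overload resolution when moving cannot throw,
    // so no element can be lost between removal and return.
    template <class E = Element,
              class = typename std::enable_if<
                  std::is_nothrow_move_constructible<E>::value>::type>
    E value_pop() {
        E e = std::move(items_.front());
        items_.pop_front();
        return e;
    }
};
```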
Vicente
.
Author: Lawrence Crowl <crowl@googlers.com>
Date: Sun, 24 Mar 2013 17:49:34 -0700
On 3/23/13, Vicente J. Botet Escriba <vicente.botet@wanadoo.fr> wrote:
> I would like to share some thought about this proposal.
>
> *Closed queues*
> While closing a queue is a user action, the threads that see this state
> need to take an exceptional action, and so I wonder if it wouldn't be
> better to signal this state using an exception.
That is possible.
>
> *Non-waiting operations*
> This allows to change the queue_op_status returned by the proposed
> operations by a bool returning if the operation succeeds or not.
>
> |bool try_push(const Element&);|
> |bool try_push(Element&&);|
> |bool try_pop(Element&);|
I do not understand this comment.
>
> *Non-blocking operations*
>
> The case for queue_op_status::busy could be conveyed by the bool without
> too much lost of information for the user.
There is a significant pragmatic difference between a status of
empty and a status of busy. A busy status will likely be resolved
quickly. An empty status may be resolved only after an extended
time. The difference may affect what one does in the meantime.
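The distinction drawn above can be sketched as a consumer that spins on busy but backs off on empty (toy_queue, nonblocking_pop, and drain_one are invented names for illustration, not part of N3533):

```cpp
#include <chrono>
#include <deque>
#include <thread>

enum class queue_op_status { success, empty, full, closed, busy };

struct toy_queue {
    std::deque<int> items;
    bool closed = false;
    queue_op_status nonblocking_pop(int& out) {
        if (!items.empty()) {
            out = items.front();
            items.pop_front();
            return queue_op_status::success;
        }
        return closed ? queue_op_status::closed : queue_op_status::empty;
    }
};

bool drain_one(toy_queue& q, int& out) {
    for (;;) {
        switch (q.nonblocking_pop(out)) {
        case queue_op_status::success:
            return true;
        case queue_op_status::busy:
            continue;  // contention resolves quickly: just retry
        case queue_op_status::empty:
            // may stay empty for a long time: back off before retrying
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
            continue;
        default:  // closed
            return false;
        }
    }
}
```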
>
> Concerning the name of the nonblocking operations we could use a
> no_block_tag
>
> bool try_push(no_block_tag, const |Element|& elem);
> bool try_pop(no_block_tag, |Element|& elem);
>
> or use it as a suffix
>
> bool try_push_no_block(const |Element|& elem);
> bool try_pop_no_block(|Element|& elem);
Both of these seem unnecessarily more verbose to me.
>
> *Blocking operations*
>
> I suggest to rename the wait_ope as wait_and_ope
>
> |void wait_and_push(const Element&);|
> |void wait_and_push(Element&&);|
> |void wait_and_pop(Element&);|||
>
> There will be no need for push operation.
>
> value_pop should not be available if the Element type is not throw movable.
The distinction between, e.g., value_pop and wait_pop is there to
support different patterns of use. If you do not expect to close
the queue, then wait_pop induces a burden on the programmer to
check it. On the other hand, if you do expect to close the queue,
then value_pop induces exception handling overhead.
>
> |Element queue::value_pop();|
>
> I wonder if in order to have a uniform naming the operation shouldn't be
> renamed as wait_and_pop.
There is a naming issue exposed by this paper, which will affect
many of the interfaces we create for other types as well.
>
> For types that can throw while moving the return type could be a
> shared_ptr. The drawback is of course the needed allocation.
Are you suggesting that the programmer do that, or that the library
force it with sfinae?
>
>
> I would be not against removing the wait_and_ prefix
>
> |void push(const Element&);|
> |void push(Element&&);|
> |void pop(Element&);|
> |Element pop();|
One concern was confusion with existing names that do something
different.
>
> *Observers*
>
> Instead of using is_empty/is_full/is_closed it would be better to use
> empty/full/closed to follow the current C++ standard naming usage.
Such naming is possible, but there are problems with the current
usage. In particular, empty is both an action and an attribute,
and so the effect of calling empty is unclear. On the other
hand, is_empty is unambiguously an attribute test. (Consider the
corresponding action 'be_empty'.)
> A size() function would make it possible to observe from outside the
> evolution of the queue size.
Yes, but we imagine high-performance queues for which the number
of elements in the queue is unknowable. I would rather not exclude
those implementations.
> *Bounded queues*
>
> For queues with a known capacity I would suggest the addition of a
> capacity function.
That make sense on a per-queue basis, but I don't think it makes
sense as part of the concept.
> *Unbounded queues*
>
> For unbounded queues the wait_and_push operation name is less
> meaningful as there is no wait.
True, and then some of the operations would be identical.
--
Lawrence Crowl
.
Author: Lawrence Crowl <crowl@googlers.com>
Date: Sun, 24 Mar 2013 17:54:57 -0700
On 3/23/13, Vicente J. Botet Escriba <vicente.botet@wanadoo.fr> wrote:
> On 23/03/13 at 19:19, Oliver Kowalke wrote:
>> 2013/3/23 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
>>
>> |Element queue::value_pop();|
>>
>> Pop an |Element| from the queue. The element will be moved out
>> of the queue in preference to being copied.
>>
>> which could notify a closed queue only using an exception. I have
>> not identified yet any major argument for mixing both approaches.
>>
>> maybe returning std::optional<>: std::optional< R > queue::value_pop()
>> -> an invalid optional would indicate a closed queue
>
> The problem with optional is that it is unable to ensure a no-throw move.
> This is why the operation should be disabled; otherwise the user risks
> losing enqueued elements, which IMO is not acceptable.
There was discussion on this topic at the last meeting. The idea
is to return an object that carries either the status or the value.
I think such an approach has merit, but as no concrete proposal
appeared before mailing deadline, we had to go with the existing
multi-member approach.
With a status/value proposal in hand, we could rework our proposal.
We would probably rework several other proposals as well.
Even so, there are still at least three different pop functions,
one that waits when the queue is empty, one that blocks when the
queue is non-empty but another operation is in progress, and one
that will not block.
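The "status or value" return object mentioned above could be sketched as follows, in the spirit of what later became std::expected (pop_result is an invented name, simplified here to require a default-constructible Element):

```cpp
#include <utility>

enum class queue_op_status { success, empty, closed, busy };

template <class Element>
class pop_result {
    queue_op_status status_;
    Element value_{};
public:
    pop_result(queue_op_status s) : status_(s) {}
    pop_result(Element v)
        : status_(queue_op_status::success), value_(std::move(v)) {}
    // True when the pop succeeded and value() is meaningful.
    explicit operator bool() const {
        return status_ == queue_op_status::success;
    }
    queue_op_status status() const { return status_; }
    const Element& value() const { return value_; }
};
```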
--
Lawrence Crowl
.
Author: "Vicente J. Botet Escriba" <vicente.botet@wanadoo.fr>
Date: Tue, 26 Mar 2013 00:57:50 +0100
On 25/03/13 at 01:49, Lawrence Crowl wrote:
> On 3/23/13, Vicente J. Botet Escriba <vicente.botet@wanadoo.fr> wrote:
>> I would like to share some thought about this proposal.
>>
>> *Closed queues*
>> While closing a queue is a user action, the threads that see this state
>> need to take an exceptional action, and so I wonder if it wouldn't be
>> better to signal this state using an exception.
> That is possible.
I suspect that I missed the difference between push and wait_push.
>
>> *Non-waiting operations*
>> This allows to change the queue_op_status returned by the proposed
>> operations by a bool returning if the operation succeeds or not.
>>
>> |bool try_push(const Element&);|
>> |bool try_push(Element&&);|
>> |bool try_pop(Element&);|
> I do not understand this comment.
The idea is that, in the same way push/value_pop throw an exception when
the queue is closed, why not have try_push/try_pop operations that throw
under the same conditions?
>
>> *Non-blocking operations*
>>
>> The case for queue_op_status::busy could be conveyed by the bool without
>> too much lost of information for the user.
> There is a significant pragmatic difference between a status of
> empty and a status of busy. A busy status will likely be resolved
> quickly. An empty status may be resolved only after an extended
> time. The difference may affect what one does in the meantime.
Maybe you are right. I do not have enough experience with this kind of
refinement of the conditions and usage of non-blocking operations.
>
>> *Blocking operations*
>>
>> I suggest to rename the wait_ope as wait_and_ope
>>
>> |void wait_and_push(const Element&);|
>> |void wait_and_push(Element&&);|
>> |void wait_and_pop(Element&);|
>>
>> There will be no need for push operation.
>>
>> value_pop should not be available if the Element type is not nothrow movable.
> The distinction between, e.g., value_pop and wait_pop is there to
> support different patterns of use. If you do not expect to close
> the queue, then wait_pop induces a burden on the programmer to
> check it. On the other hand, if you do expect to close the queue,
> then value_pop induces exception handling overhead.
Sorry, I missed the important difference.
The use case I have in mind is one in which the queue is closed only in
exceptional situations. Would value_pop induce an overhead in this
case? In which cases would it be interesting to close a queue other
than in exceptional situations?
In case the system has another mechanism to signal the end of
communication, have you considered whether concurrent queues that
cannot be closed could mean some performance improvement?
>> For types that can throw while moving the return type could be a
>> shared_ptr. The drawback is of course the needed allocation.
> Are you suggesting that the programmer do that, or that the library
> force it with sfinae?
That the library force it.
>
>> *Observers*
>>
>> Instead of using is_empty/is_full/is_closed it would be better to use
>> empty/full/closed to follow the current C++ standard naming usage*.**
> Such naming is possible, but there are problems with the current
> usage. In particular, empty is both an action and an attribute,
> and so the effect of calling empty is unclear. On the other
> hand, is_empty is unambiguously an attribute test. (Consider the
> corresponding action 'be_empty'.)
But there is no empty modifier. Never mind, this is not too important.
Thanks for clearly pointing out the distinction I missed.
Vicente
--
---
You received this message because you are subscribed to the Google Groups "ISO C++ Standard - Future Proposals" group.
To unsubscribe from this group and stop receiving emails from it, send an email to std-proposals+unsubscribe@isocpp.org.
To post to this group, send email to std-proposals@isocpp.org.
Visit this group at http://groups.google.com/a/isocpp.org/group/std-proposals/?hl=en.
.
Author: Olaf van der Spek <olafvdspek@gmail.com>
Date: Tue, 26 Mar 2013 05:07:01 -0700 (PDT)
Raw View
What's the rationale for non-blocking operations? Are locking issues really
a big problem?
The terms blocking and waiting have kind of the same meaning.
Are better terms available?
.
Author: Ville Voutilainen <ville.voutilainen@gmail.com>
Date: Tue, 26 Mar 2013 14:14:58 +0200
Raw View
On 26 March 2013 14:07, Olaf van der Spek <olafvdspek@gmail.com> wrote:
> What's the rational for non-blocking operations? Are locking issues really a
> big problem?
Yes, locking issues are really a big problem for efficient
concurrency. If you want
more details, read Anthony Williams's book C++ Concurrency in Action.
> The terms blocking and waiting have kind of the same meaning. Are better
> terms available?
Not really, and the paper already explains the difference.
.
Author: Olaf van der Spek <olafvdspek@gmail.com>
Date: Tue, 26 Mar 2013 13:22:17 +0100
Raw View
On Tue, Mar 26, 2013 at 1:14 PM, Ville Voutilainen
<ville.voutilainen@gmail.com> wrote:
> On 26 March 2013 14:07, Olaf van der Spek <olafvdspek@gmail.com> wrote:
>> What's the rational for non-blocking operations? Are locking issues really a
>> big problem?
>
> Yes, locking issues are really a big problem for efficient
> concurrency. If you want
> more details, read Anthony Williams's book C++ Concurrency in Action.
I know locking is a big issue in general, I was wondering specifically
about the queue operations.
I expected the non-blocking behavior to be unnecessary as the push and
pop operations should be sufficiently fast (if not atomic).
>> The terms blocking and waiting have kind of the same meaning. Are better
>> terms available?
>
> Not really, and the paper already explains the difference.
I (now) understand the difference, but in networking the term non-blocking
refers to not blocking on data / buffers being (un)available, while
here it refers to not (blocking on) syncing. I think this
inconsistency is unfortunate.
--
Olaf
.
Author: Ville Voutilainen <ville.voutilainen@gmail.com>
Date: Tue, 26 Mar 2013 14:36:25 +0200
Raw View
On 26 March 2013 14:22, Olaf van der Spek <olafvdspek@gmail.com> wrote:
> I know locking is a big issue in general, I was wondering specifically
> about the queue operations.
> I expected the non-blocking behavior to be unnecessary as the push and
> pop operations should be sufficiently fast (if not atomic).
That's not the case for a queue that isn't lock-free.
> I (now) understand the difference, but in networking the term blocking
> refers to not blocking on data / buffers being (un)available, while
> here it refers to not (blocking on) syncing. I think this
> inconsistency is unfortunate.
Blocking in this paper seems consistent with such concepts. The additional
concept is non-waiting, which may block for sync but will not wait for
data. Non-blocking
seems very much consistent with what you refer to in networking, since it won't
block even for sync.
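The distinction Ville draws can be sketched with a mutex-based queue: a non-waiting try_pop still acquires the lock (and so may block briefly for sync) but never waits for data, while a non-blocking variant bails out with busy if it cannot even get the lock. The names and status values here are illustrative, not the proposal's:

```cpp
#include <deque>
#include <mutex>
#include <utility>

enum class pop_status { success, empty, busy };

// Illustrative sketch contrasting the two weaker pop operations.
template <class T>
class demo_queue {
    std::mutex mtx_;
    std::deque<T> items_;
public:
    void push(T v) {
        std::lock_guard<std::mutex> lk(mtx_);
        items_.push_back(std::move(v));
    }
    // Non-waiting: acquires the lock (may block for sync),
    // but never waits for data to arrive.
    pop_status try_pop(T& out) {
        std::lock_guard<std::mutex> lk(mtx_);
        if (items_.empty()) return pop_status::empty;
        out = std::move(items_.front());
        items_.pop_front();
        return pop_status::success;
    }
    // Non-blocking: will not block even for the lock;
    // a 'busy' result is likely to clear quickly.
    pop_status nonblocking_pop(T& out) {
        std::unique_lock<std::mutex> lk(mtx_, std::try_to_lock);
        if (!lk.owns_lock()) return pop_status::busy;
        if (items_.empty()) return pop_status::empty;
        out = std::move(items_.front());
        items_.pop_front();
        return pop_status::success;
    }
};
```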
.
Author: Olaf van der Spek <olafvdspek@gmail.com>
Date: Tue, 26 Mar 2013 16:37:16 +0100
Raw View
On Tue, Mar 26, 2013 at 1:36 PM, Ville Voutilainen
<ville.voutilainen@gmail.com> wrote:
> On 26 March 2013 14:22, Olaf van der Spek <olafvdspek@gmail.com> wrote:
>> I know locking is a big issue in general, I was wondering specifically
>> about the queue operations.
>> I expected the non-blocking behavior to be unnecessary as the push and
>> pop operations should be sufficiently fast (if not atomic).
>
> That's not the case for a queue that isn't lock-free.
If syncing is so expensive, shouldn't there be push operations taking
multiple elements?
>> I (now) understand the difference, but in networking the term blocking
>> refers to not blocking on data / buffers being (un)available, while
>> here it refers to not (blocking on) syncing. I think this
>> inconsistency is unfortunate.
>
> Blocking in this paper seems consistent with such concepts. The additional
> concept is non-waiting, which may block for sync but will not wait for
> data. Non-blocking
> seems very much consistent with what you refer to in networking, since it won't
> block even for sync.
Hmm, I'd say not waiting for locks is the new concept here, with
non-waiting being equivalent to non-blocking for networking.
--
Olaf
.
Author: Ville Voutilainen <ville.voutilainen@gmail.com>
Date: Tue, 26 Mar 2013 17:54:54 +0200
Raw View
On 26 March 2013 17:37, Olaf van der Spek <olafvdspek@gmail.com> wrote:
> If syncing is so expensive, shouldn't there be push operations taking
> multiple elements?
In theory, maybe. Perhaps it's not a use case common enough to have.
>> Blocking in this paper seems consistent with such concepts. The additional
>> concept is non-waiting, which may block for sync but will not wait for
>> data. Non-blocking
>> seems very much consistent with what you refer to in networking, since it won't
>> block even for sync.
> Hmm, I'd say not waiting for locks is the new concept here, with
> non-waiting being equivalent to non-blocking for networking.
I fail to agree with that view, since the non-waiting functions can
block on a lock
for arbitrary amounts of time. I don't see that as non-blocking, and
I'm surprised
if someone does.
.
Author: Lawrence Crowl <crowl@googlers.com>
Date: Tue, 26 Mar 2013 14:34:02 -0700
Raw View
On 3/25/13, Vicente J. Botet Escriba <vicente.botet@wanadoo.fr> wrote:
> Le 25/03/13 01:49, Lawrence Crowl a écrit :
> > On 3/23/13, Vicente J. Botet Escriba <vicente.botet@wanadoo.fr> wrote:
> > > *Non-waiting operations*
> > > This would allow changing the queue_op_status returned by the
> > > proposed operations to a bool indicating whether the operation
> > > succeeds or not.
> > >
> > > |bool try_push(const Element&);|
> > > |bool try_push(Element&&);|
> > > |bool try_pop(Element&);|
> >
> > I do not understand this comment.
>
> The idea is that, in the same way push/value_pop throw an exception
> when the queue is closed, why not have try_push/try_pop operations
> that throw under the same conditions.
I understand now. The reason was that checking a bool return
value is most of the way to checking an enum value. Which means
the benefit of having such a function is lower than with, e.g.,
push versus wait_push. I felt that the extra value was not worth
making the interface larger. Others might choose differently.
> The use case I have in mind was one in which the queue is
> closed only in exceptional situations. Would value_pop induce an
> overhead in this case? In which cases would it be interesting
> to close a queue if not in exceptional situations?
Whenever you have a finite data stream, you need to signal the end of
that stream. For files, we use EOF. The normal queue operations do
not have that support. For concurrency, using an external mechanism
complicates the code. So, we build the end-of-data support into
the queue.
> In case the system has another mechanism to signal an end of
> communication, have you considered whether concurrent queues
> that cannot be closed could mean some performance improvement?
There would be some improvement. The queue may get smaller by one
byte, depending on alignment issues. That byte no longer needs to
be initialized when constructing the queue. The most significant
costs, though, are one additional test of that bool when popping
from an empty queue and when pushing to a queue. That cost is
small in comparison to the cost of the mutex.
> > > For types that can throw while moving, the return type could be
> > > a shared_ptr. The drawback is of course the needed allocation.
> >
> > Are you suggesting that the programmer do that, or that the
> > library force it with sfinae?
>
> That the library force it.
It occurs to me that if the copy/move may throw, how would you
reliably get the value into a shared_ptr? I do not think we can
legislate a fix here, particularly as the issue also occurs when
copying from the parameter into a queue slot.
As it stands, exceptions here at most lose the one element.
The queue itself is exception safe.
--
Lawrence Crowl
.
Author: Lawrence Crowl <crowl@googlers.com>
Date: Tue, 26 Mar 2013 14:38:18 -0700
Raw View
On 3/26/13, Ville Voutilainen <ville.voutilainen@gmail.com> wrote:
> On 26 March 2013 14:22, Olaf van der Spek <olafvdspek@gmail.com> wrote:
> > I know locking is a big issue in general, I was wondering
> > specifically about the queue operations. I expected the
> > non-blocking behavior to be unnecessary as the push and pop
> > operations should be sufficiently fast (if not atomic).
>
> That's not the case for a queue that isn't lock-free.
>
> > I (now) understand the difference, but in networking the
> > term blocking refers to not blocking on data / buffers being
> > (un)available, while here it refers to not (blocking on)
> > syncing. I think this inconsistency is unfortunate.
>
> Blocking in this paper seems consistent with such concepts. The
> additional concept is non-waiting, which may block for sync but
> will not wait for data. Non-blocking seems very much consistent
> with what you refer to in networking, since it won't block even
> for sync.
I think that the main issue here is that we should carefully
consider what terminology we use for our concurrent libraries.
In particular, the try_ operations have different synchronization
effects for concurrent queues (both in this proposal and in TBB)
than they do for mutexes. Maybe we should be consistent.
--
Lawrence Crowl
.
Author: Tony V E <tvaneerd@gmail.com>
Date: Wed, 27 Mar 2013 12:51:04 -0400
Raw View
On Tue, Mar 26, 2013 at 5:34 PM, Lawrence Crowl <crowl@googlers.com> wrote:
>
>> In case the system has other mechanism to signal an end of
>> communication, have you considered if having concurrent queues
>> that can not be closed, could mean some performance improvement?
>
> There would be some improvement. The queue may get smaller by one
> byte, depending on alignment issues. That byte no longer needs to
> be initialized when constructing the queue. The most significant
> costs, though, are one additional test of that bool when popping
> from an empty queue and when pushing to a queue. That cost is
> small in comparison to the cost of the mutex.
>
Sorry if I haven't been following the discussion closely enough and
maybe misunderstand the context, but:
A lock-free queue that needs to check a 'closed' flag on push and/or
pop will then lose much of its scalability - you have made this flag,
likely an atomic, a bottleneck degrading all your other lock-free
efforts.
Unless I misunderstand...
Tony
.
Author: Jeffrey Yasskin <jyasskin@google.com>
Date: Wed, 27 Mar 2013 10:46:21 -0700
Raw View
On Wed, Mar 27, 2013 at 9:51 AM, Tony V E <tvaneerd@gmail.com> wrote:
> On Tue, Mar 26, 2013 at 5:34 PM, Lawrence Crowl <crowl@googlers.com> wrote:
>>
>>> In case the system has other mechanism to signal an end of
>>> communication, have you considered if having concurrent queues
>>> that can not be closed, could mean some performance improvement?
>>
>> There would be some improvement. The queue may get smaller by one
>> byte, depending on alignment issues. That byte no longer needs to
>> be initialized when constructing the queue. The most significant
>> costs, though, are one additional test of that bool when popping
>> from an empty queue and when pushing to a queue. That cost is
>> small in comparison to the cost of the mutex.
>>
>
> Sorry if I haven't been following the discussion closely enough and
> maybe misunderstand the context, but:
>
> A lock-free queue that needs to check a 'closed' flag on push and/or
> pop will then lose much of its scalability - you have made this flag,
> likely an atomic, a bottleneck degrading all your other lock-free
> efforts.
Could you elaborate? A word that changes exactly once can live in each
processor's cache and not be a bottleneck at all, given a processor
with branch prediction. Being atomic doesn't change that on common
processors.
The 'closed' flag _does_ cost a whole extra cache line of memory in
order to be efficient, since (unless I'm forgetting something) the
rest of the queue's member data does change, and that cost could be
something to worry about.
Jeffrey
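Jeffrey's layout point can be sketched with alignment annotations: keep the once-written flag on a different cache line from the frequently-mutated indices, so readers of the flag do not contend with producers and consumers. The 64-byte line size below is a common guess, not a portable constant:

```cpp
#include <atomic>
#include <cstddef>

// Sketch: isolate the cold 'closed' flag from the hot queue state.
// 64 bytes is an assumed cache-line size, not guaranteed everywhere.
struct queue_state {
    alignas(64) std::atomic<bool> closed{false};        // cold: written once
    alignas(64) std::atomic<std::size_t> head{0};       // hot: changes per pop
    alignas(64) std::atomic<std::size_t> tail{0};       // hot: changes per push
};
```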
.
Author: Tony V E <tvaneerd@gmail.com>
Date: Wed, 27 Mar 2013 14:38:27 -0400
Raw View
On Wed, Mar 27, 2013 at 1:46 PM, Jeffrey Yasskin <jyasskin@google.com> wrote:
> On Wed, Mar 27, 2013 at 9:51 AM, Tony V E <tvaneerd@gmail.com> wrote:
>> On Tue, Mar 26, 2013 at 5:34 PM, Lawrence Crowl <crowl@googlers.com> wrote:
>>>
>>>> In case the system has other mechanism to signal an end of
>>>> communication, have you considered if having concurrent queues
>>>> that can not be closed, could mean some performance improvement?
>>>
>>> There would be some improvement. The queue may get smaller by one
>>> byte, depending on alignment issues. That byte no longer needs to
>>> be initialized when constructing the queue. The most significant
>>> costs, though, are one additional test of that bool when popping
>>> from an empty queue and when pushing to a queue. That cost is
>>> small in comparison to the cost of the mutex.
>>>
>>
>> Sorry if I haven't been following the discussion closely enough and
>> maybe misunderstand the context, but:
>>
>> A lock-free queue that needs to check a 'closed' flag on push and/or
>> pop will then lose much of its scalability - you have made this flag,
>> likely an atomic, a bottleneck degrading all your other lock-free
>> efforts.
>
> Could you elaborate? A word that changes exactly once can live in each
> processor's cache and not be a bottleneck at all, given a processor
> with branch prediction. Being atomic doesn't change that on common
> processors.
>
> The 'closed' flag _does_ cost a whole extra cache line of memory in
> order to be efficient, since (unless I'm forgetting something) the
> rest of the queue's member data does change, and that cost could be
> something to worry about.
>
> Jeffrey
>
> --
I'm concerned about when the flag needs to be checked - on every
push/pop? And in-cache is not necessarily that quick. An atomic
read/write can be 100x slower than non-atomic due to the way it stalls
read/write queues and whatnot.
So two other thoughts:
- I may still be misunderstanding the context - maybe it is not on
every push/pull, etc.
- if it is just part of the spec, an implementation isn't forced to
make it a separate word - it could be possibly done as part of the
stream in some cases.
One example to consider, and maybe this example wouldn't meet the
requirements of the spec for various reasons, but is a "lowest case"
example:
a fixed length queue (using circular buffer) of ints *where the ints
are NOT references to other data (ie indexes) but the data itself* (ie
maybe keystrokes is a good example)
with one reader, one writer
with 0 being an invalid data value
...doesn't even need memory ordering to be correct - it can use
relaxed atomics. (If the ints reference other data, *then* you need
happens-before guarantees)
(Rough outline: readers 'see' either 0 or value, so know if data is
ready - on their processor - or not. Readers write back 0 after
reading - mark the index as free. Writers see value (that they wrote)
or 0 (eventually - relaxed atomics) as to whether the spot in the
queue is ready or not. Reader and Writer have separate pointers,
never modify the other's pointer, thus no atomics there.)
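That outline might look like the following as code - a sketch under the stated assumptions (single producer, single consumer, nonzero int payloads), not a general-purpose queue:

```cpp
#include <atomic>
#include <cstddef>

// Sketch of the queue described above: fixed-size ring of ints where 0
// means "slot free", one reader and one writer, relaxed atomics only.
// This works only because the ints are the data themselves; if they
// referenced other memory, acquire/release ordering would be needed.
template <std::size_t N>
class spsc_int_ring {
    std::atomic<int> slots_[N];
    std::size_t write_ = 0;  // writer-private index, never touched by reader
    std::size_t read_ = 0;   // reader-private index, never touched by writer
public:
    spsc_int_ring() {
        for (auto& s : slots_) s.store(0, std::memory_order_relaxed);
    }
    bool try_push(int v) {  // v must be nonzero
        if (slots_[write_].load(std::memory_order_relaxed) != 0)
            return false;   // slot not yet freed by the reader
        slots_[write_].store(v, std::memory_order_relaxed);
        write_ = (write_ + 1) % N;
        return true;
    }
    bool try_pop(int& out) {
        int v = slots_[read_].load(std::memory_order_relaxed);
        if (v == 0) return false;  // nothing ready (yet) at this slot
        out = v;
        slots_[read_].store(0, std::memory_order_relaxed);  // mark slot free
        read_ = (read_ + 1) % N;
        return true;
    }
};
```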
So any constraints beyond that will harm the performance of the queue.
Of course it is a specialized queue, so you can always write your own
instead of using a std one.
But it is an example of probably the lowest overhead queue, and from
there you build up features while bringing down performance (which is
quite possibly a reasonable tradeoff). So with that starting queue in
mind, I think 'closed' adds significant overhead.
For more complex queues, the relative overhead obviously lessens, but
in general each new atomic read/write that is required is a measurable
performance degradation on a lock-free queue. Once you get up to a
small number of atomics, you are often better off just wrapping a
std::queue with a mutex.
Tony
.
Author: Jeffrey Yasskin <jyasskin@google.com>
Date: Wed, 27 Mar 2013 11:44:47 -0700
Raw View
On Mar 27, 2013 11:38 AM, "Tony V E" <tvaneerd@gmail.com> wrote:
>
> On Wed, Mar 27, 2013 at 1:46 PM, Jeffrey Yasskin <jyasskin@google.com>
wrote:
> > On Wed, Mar 27, 2013 at 9:51 AM, Tony V E <tvaneerd@gmail.com> wrote:
> >> On Tue, Mar 26, 2013 at 5:34 PM, Lawrence Crowl <crowl@googlers.com>
wrote:
> >>>
> >>>> In case the system has other mechanism to signal an end of
> >>>> communication, have you considered if having concurrent queues
> >>>> that can not be closed, could mean some performance improvement?
> >>>
> >>> There would be some improvement. The queue may get smaller by one
> >>> byte, depending on alignment issues. That byte no longer needs to
> >>> be initialized when constructing the queue. The most significant
> >>> costs, though, are one additional test of that bool when popping
> >>> from an empty queue and when pushing to a queue. That cost is
> >>> small in comparison to the cost of the mutex.
> >>>
> >>
> >> Sorry if I haven't been following the discussion closely enough and
> >> maybe misunderstand the context, but:
> >>
> >> A lock-free queue that needs to check a 'closed' flag on push and/or
> >> pop will then lose much of its scalability - you have made this flag,
> >> likely an atomic, a bottleneck degrading all your other lock-free
> >> efforts.
> >
> > Could you elaborate? A word that changes exactly once can live in each
> > processor's cache and not be a bottleneck at all, given a processor
> > with branch prediction. Being atomic doesn't change that on common
> > processors.
> >
> > The 'closed' flag _does_ cost a whole extra cache line of memory in
> > order to be efficient, since (unless I'm forgetting something) the
> > rest of the queue's member data does change, and that cost could be
> > something to worry about.
> >
> > Jeffrey
> >
> > --
>
>
> I'm concerned about when the flag needs to be checked - on every
> push/pull?
I believe yes.
> And in the cache is not necessarily that quick. An atomic
> read/write can be 100x slower than non-atomic due to the way it stalls
> read/write queues and what not.
Go measure:
if (m_closed.load(std::memory_order_relaxed)) std::atomic_thread_fence(std::memory_order_acquire);
I think you don't know what you're talking about.
> So two other thoughts:
> - I may still be misunderstanding the context - maybe it is not on
> every push/pull, etc.
> - if it is just part of the spec, an implementation isn't forced to
> make it a separate word - it could be possibly done as part of the
> stream in some cases.
>
>
> One example to consider, and maybe this example wouldn't meet the
> requirements of the spec for various reasons, but is a "lowest case"
> example:
>
> a fixed length queue (using circular buffer) of ints *where the ints
> are NOT references to other data (ie indexes) but the data itself* (ie
> maybe keystrokes is a good example)
> with one reader, one writer
> with 0 being an invalid data value
>
> ...doesn't even need memory ordering to be correct - it can use
> relaxed atomics. (If the ints reference other data, *then* you need
> happens-before guarantees)
>
> (Rough outline: readers 'see' either 0 or value, so know if data is
> ready - on their processor - or not. Readers write back 0 after
> reading - mark the index as free. Writers see value (that they wrote)
> or 0 (eventually - relaxed atomics) as to whether the spot in the
> queue is ready or not. Reader and Writer have separate pointers,
> never modify the other's pointer, thus no atomics there.)
>
>
> So any constraints beyond that will harm the performance of the queue.
> Of course it is a specialized queue, so you can always write your own
> instead of using a std one.
>
> But it is an example of probably the lowest overhead queue, and from
> there you build up features while bringing down performance (which is
> quite possibly a reasonable tradeoff). So with that starting queue in
> mind, I think 'closed' adds significant overhead.
>
> For more complex queues, the relative overhead obviously lessens, but
> in general each new atomic read/write that is required is a measurable
> performance degradation on a lock-free queue. Once you get up to a
> small number of atomics, you are often better off just wrapping a
> std::queue with a mutex.
>
> Tony
>
--089e01538b7ca6c62a04d8ec6e7f
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable
<p dir=3D"ltr">On Mar 27, 2013 11:38 AM, "Tony V E" <<a href=
=3D"mailto:tvaneerd@gmail.com">tvaneerd@gmail.com</a>> wrote:<br>
><br>
> On Wed, Mar 27, 2013 at 1:46 PM, Jeffrey Yasskin <<a href=3D"mailto=
:jyasskin@google.com">jyasskin@google.com</a>> wrote:<br>
> > On Wed, Mar 27, 2013 at 9:51 AM, Tony V E <<a href=3D"mailto:t=
vaneerd@gmail.com">tvaneerd@gmail.com</a>> wrote:<br>
> >> On Tue, Mar 26, 2013 at 5:34 PM, Lawrence Crowl <<a href=
=3D"mailto:crowl@googlers.com">crowl@googlers.com</a>> wrote:<br>
> >>><br>
> >>>> In case the system has other mechanism to signal an e=
nd of<br>
> >>>> communication, have you considered if having concurre=
nt queues<br>
> >>>> that can not be closed, could mean some performance i=
mprovement?<br>
> >>><br>
> >>> There would be some improvement. =A0The queue may get sma=
ller by one<br>
> >>> byte, depending on alignment issues. =A0That byte no long=
er needs to<br>
> >>> be initialized when constructing the queue. =A0The most s=
ignificant<br>
> >>> costs, though, are one additional test of that bool when =
popping<br>
> >>> from an empty queue and when pushing to a queue. =A0That =
cost is<br>
> >>> small in comparison to the cost of the mutex.<br>
> >>><br>
> >><br>
> >> Sorry if I haven't been following the discussion closely =
enough and<br>
> >> maybe misunderstand the context, but:<br>
> >><br>
> >> A lock-free queue that needs to check a 'closed' flag=
on push and/or<br>
> >> pop will then lose much of its scalability - you have made th=
is flag,<br>
> >> likely an atomic, a bottleneck degrading all your other lock-=
free<br>
> >> efforts.<br>
> ><br>
> > Could you elaborate? A word that changes exactly once can live in=
each<br>
> > processor's cache and not be a bottleneck at all, given a pro=
cessor<br>
> > with branch prediction. Being atomic doesn't change that on c=
ommon<br>
> > processors.<br>
> ><br>
> > The 'closed' flag _does_ cost a whole extra cache line of=
memory in<br>
> > order to be efficient, since (unless I'm forgetting something=
) the<br>
> > rest of the queue's member data does change, and that cost co=
uld be<br>
> > something to worry about.<br>
> ><br>
> > Jeffrey<br>
> ><br>
> > --<br>
> I'm concerned about when the flag needs to be checked - on every
> push/pull?
I believe yes.
> And in the cache is not necessarily that quick. An atomic
> read/write can be 100x slower than non-atomic due to the way it stalls
> read/write queues and what not.
Go measure:
if (m_closed.load(relaxed)) fence (acquire);
I think you don't know what you're talking about.
> So two other thoughts:
> - I may still be misunderstanding the context - maybe it is not on
> every push/pull, etc.
> - if it is just part of the spec, an implementation isn't forced to
> make it a separate word - it could be possibly done as part of the
> stream in some cases.
>
> One example to consider, and maybe this example wouldn't meet the
> requirements of the spec for various reasons, but is a "lowest case"
> example:
>
> a fixed length queue (using circular buffer) of ints *where the ints
> are NOT references to other data (ie indexes) but the data itself* (ie
> maybe keystrokes is a good example)
> with one reader, one writer
> with 0 being an invalid data value
>
> ...doesn't even need memory ordering to be correct - it can use
> relaxed atomics. (If the ints reference other data, *then* you need
> happens-before guarantees)
>
> (Rough outline: readers 'see' either 0 or value, so know if data is
> ready - on their processor - or not. Readers write back 0 after
> reading - mark the index as free. Writers see value (that they wrote)
> or 0 (eventually - relaxed atomics) as to whether the spot in the
> queue is ready or not. Reader and Writer have separate pointers,
> never modify the other's pointer, thus no atomics there.)
>
> So any constraints beyond that will harm the performance of the queue.
> Of course it is a specialized queue, so you can always write your own
> instead of using a std one.
>
> But it is an example of probably the lowest overhead queue, and from
> there you build up features while bringing down performance (which is
> quite possibly a reasonable tradeoff). So with that starting queue in
> mind, I think 'closed' adds significant overhead.
>
> For more complex queues, the relative overhead obviously lessens, but
> in general each new atomic read/write that is required is a measurable
> performance degradation on a lock-free queue. Once you get up to a
> small number of atomics, you are often better off just wrapping a
> std::queue with a mutex.
>
> Tony
--
---
You received this message because you are subscribed to the Google Groups "ISO C++ Standard - Future Proposals" group.
To unsubscribe from this group and stop receiving emails from it, send an email to std-proposals+unsubscribe@isocpp.org.
To post to this group, send email to std-proposals@isocpp.org.
Visit this group at http://groups.google.com/a/isocpp.org/group/std-proposals/?hl=en.
.
Author: Tony V E <tvaneerd@gmail.com>
Date: Wed, 27 Mar 2013 15:53:45 -0400
Raw View
On Wed, Mar 27, 2013 at 2:44 PM, Jeffrey Yasskin <jyasskin@google.com> wrote:
> On Mar 27, 2013 11:38 AM, "Tony V E" <tvaneerd@gmail.com> wrote:
>> And in the cache is not necessarily that quick. An atomic
>> read/write can be 100x slower than non-atomic due to the way it stalls
>> read/write queues and what not.
>
> Go measure:
>
> if (m_closed.load(relaxed)) fence (acquire);
>
> I think you don't know what you're talking about.
>
Shh. Don't tell anyone. :-) I previously did lots of lock-free
programming, but it is no longer my "day job". I still give talks at
BoostCon about lockfree programming - but I always disavow being an
expert when giving talks.
It has been a long time since I've measured these things. But I have
measured CAS vs raw reads. And once upon a time I had researched what
numbers others had found (I can try to find references). The 100x
number is one I've seen quite a few times. It is really register
vs memory or CPU vs memory - ie the reason we have caches. Once upon
a time the CPU and memory ran at the same speed. The CPU has become
faster when memory hasn't.
I wasn't thinking of relaxed read followed by fence. That makes sense
- it's like the 'test and test and set' trick. That would definitely
lower the cost.
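Spelling that idiom out (an illustrative sketch; the names m_closed and is_closed are invented here, not from N3533): in C++11 terms it is a relaxed load on the fast path, with the acquire fence paid only in the rare case where the flag is actually seen set - the 'test and test and set' shape mentioned above.

```cpp
#include <atomic>

std::atomic<bool> m_closed{false};  // illustrative name

bool is_closed() {
    // Fast path: a relaxed load, roughly as cheap as a plain read
    // once the line is in this processor's cache.
    if (m_closed.load(std::memory_order_relaxed)) {
        // Rare path: pay for ordering only after the flag is seen set,
        // so writes made before the corresponding release become
        // visible here.
        std::atomic_thread_fence(std::memory_order_acquire);
        return true;
    }
    return false;
}
```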
I just instinctively worry about each additional atomic (non-relaxed)
operation. I found for something like a lock-free allocator, the
number of CAS's was the main performance factor.
Tony
.
Author: Tony V E <tvaneerd@gmail.com>
Date: Wed, 27 Mar 2013 15:58:32 -0400
Raw View
On Wed, Mar 27, 2013 at 3:53 PM, Tony V E <tvaneerd@gmail.com> wrote:
> On Wed, Mar 27, 2013 at 2:44 PM, Jeffrey Yasskin <jyasskin@google.com> wrote:
>> On Mar 27, 2013 11:38 AM, "Tony V E" <tvaneerd@gmail.com> wrote:
>
>>> And in the cache is not necessarily that quick. An atomic
>>> read/write can be 100x slower than non-atomic due to the way it stalls
>>> read/write queues and what not.
>>
>> Go measure:
>>
>> if (m_closed.load(relaxed)) fence (acquire);
>>
>> I think you don't know what you're talking about.
>>
>
> Shh. Don't tell anyone. :-) I previously did lots of lock-free
> programming, but it is no longer my "day job". I still give talks at
> BoostCon about lockfree programming - but I always disavow being an
> expert when giving talks.
>
> It has been a long time since I've measured these things. But I have
> measured CAS vs raw reads. And once upon a time I had researched what
> numbers others had found (I can try to find references). The 100x
> number is one I've seen quite a few times. It is really register
> vs memory or CPU vs memory - ie the reason we have caches. Once upon
> a time the CPU and memory ran at the same speed. The CPU has become
> faster when memory hasn't.
.... hasn't at the same rate as the CPU. Obviously memory has become
faster. There is just a wide chasm.
>
> I wasn't thinking of relaxed read followed by fence. That makes sense
> - it's like the 'test and test and set' trick. That would definitely
> lower the cost.
>
> I just instinctively worry about each additional atomic (non-relaxed)
> operation. I found for something like a lock-free allocator, the
> number of CAS's was the main performance factor.
>
> Tony
.
Author: Lawrence Crowl <crowl@googlers.com>
Date: Wed, 27 Mar 2013 13:54:57 -0700
Raw View
On 3/27/13, Tony V E <tvaneerd@gmail.com> wrote:
> On Mar 27, 2013 Jeffrey Yasskin <jyasskin@google.com> wrote:
> > On Mar 27, 2013 Tony V E <tvaneerd@gmail.com> wrote:
> > > On Mar 26, 2013 Lawrence Crowl <crowl@googlers.com> wrote:
> > > > > In case the system has other mechanism to signal an end
> > > > > of communication, have you considered if having concurrent
> > > > > queues that can not be closed, could mean some performance
> > > > > improvement?
> > > >
> > > > There would be some improvement. The queue may get smaller
> > > > by one byte, depending on alignment issues. That byte no
> > > > longer needs to be initialized when constructing the queue.
> > > > The most significant costs, though, are one additional test
> > > > of that bool when popping from an empty queue and when
> > > > pushing to a queue. That cost is small in comparison to
> > > > the cost of the mutex.
> > >
> > > Sorry if I haven't been following the discussion closely
> > > enough and maybe misunderstand the context, but:
> > >
> > > A lock-free queue that needs to check a 'closed' flag on push
> > > and/or pop will then lose much of its scalability - you have
> > > made this flag, likely an atomic, a bottleneck degrading all
> > > your other lock-free efforts.
> >
> > Could you elaborate? A word that changes exactly once can live
> > in each processor's cache and not be a bottleneck at all, given
> > a processor with branch prediction. Being atomic doesn't change
> > that on common processors.
> >
> > The 'closed' flag _does_ cost a whole extra cache line of
> > memory in order to be efficient, since (unless I'm forgetting
> > something) the rest of the queue's member data does change,
> > and that cost could be something to worry about.
>
> I'm concerned about when the flag needs to be checked - on every
> push/pull?
On every push and on pops that find the queue empty.
> And in the cache is not necessarily that quick. An atomic
> read/write can be 100x slower than non-atomic due to the way it
> stalls read/write queues and what not.
For a mutex implementation, it is a simple byte read within the
critical section. For others, it's a relaxed read, which is about as
fast as you're going to get and still communicate between processors.
> So two other thoughts:
> - I may still be misunderstanding the context - maybe it is not
> on every push/pull, etc.
Every push, not every pull/pop.
> - if it is just part of the spec, an implementation isn't forced
> to make it a separate word - it could be possibly done as part
> of the stream in some cases.
I do not understand that comment.
> One example to consider, and maybe this example wouldn't meet the
> requirements of the spec for various reasons, but is a "lowest
> case" example:
>
> a fixed length queue (using circular buffer) of ints *where the
> ints are NOT references to other data (ie indexes) but the data
> itself* (ie maybe keystrokes is a good example)
> with one reader, one writer
> with 0 being an invalid data value
>
> ...doesn't even need memory ordering to be correct - it can use
> relaxed atomics. (If the ints reference other data, *then* you
> need happens-before guarantees)
>
> (Rough outline: readers 'see' either 0 or value, so know if data is
> ready - on their processor - or not. Readers write back 0 after
> reading - mark the index as free. Writers see value (that they
> wrote) or 0 (eventually - relaxed atomics) as to whether the spot
> in the queue is ready or not. Reader and Writer have separate
> pointers, never modify the other's pointer, thus no atomics there.)
They cannot use the ready data if they only use relaxed atomics.
There must be some memory synchronization and relaxed atomics do
not provide it.
The paper requires synchronization from the thread that does a push
to the thread that pops _that_ _value_. I think this requirement
is pretty minimal for most uses. I'm not sure I know of any queues
of general utility that have weaker requirements.
> So any constraints beyond that will harm the performance of the
> queue. Of course it is a specialized queue, so you can always
> write your own instead of using a std one.
Well, part of the purpose here is to make libraries available for
which someone has already done the heavy work necessary to get a
concurrent library correct and performant.
> But it is an example of probably the lowest overhead queue, and
> from there you build up features while bringing down performance
> (which is quite possibly a reasonable tradeoff). So with that
> starting queue in mind, I think 'closed' adds significant overhead.
I won't dispute that there is overhead, I just don't think it is
all that high, and certainly not high in comparison to out-of-band
techniques.
> For more complex queues, the relative overhead obviously lessens,
> but in general each new atomic read/write that is required
> is a measurable performance degradation on a lock-free queue.
> Once you get up to a small number of atomics, you are often better
> off just wrapping a std::queue with a mutex.
One of the reasons I want a concept that works for both is so that
programmers can shift implementations.
In any event, except for scalability concerns, the biggest advantage
I see to lock-free structures is that they are less vulnerable to
sick threads.
--
Lawrence Crowl
.
Author: Tony V E <tvaneerd@gmail.com>
Date: Wed, 27 Mar 2013 17:44:06 -0400
Raw View
On Wed, Mar 27, 2013 at 4:54 PM, Lawrence Crowl <crowl@googlers.com> wrote:
> On 3/27/13, Tony V E <tvaneerd@gmail.com> wrote:
>> > >
>> > > Sorry if I haven't been following the discussion closely
>> > > enough and maybe misunderstand the context, but:
>> > >
>> > > A lock-free queue that needs to check a 'closed' flag on push
>> > > and/or pop will then lose much of its scalability - you have
>> > > made this flag, likely an atomic, a bottleneck degrading all
>> > > your other lock-free efforts.
>> >
>> > Could you elaborate? A word that changes exactly once can live
>> > in each processor's cache and not be a bottleneck at all, given
>> > a processor with branch prediction. Being atomic doesn't change
>> > that on common processors.
>> >
>> > The 'closed' flag _does_ cost a whole extra cache line of
>> > memory in order to be efficient, since (unless I'm forgetting
>> > something) the rest of the queue's member data does change,
>> > and that cost could be something to worry about.
>>
>> I'm concerned about when the flag needs to be checked - on every
>> push/pull?
>
> On every push and on pops that find the queue empty.
>
So a close() while things are still in the queue obviously means empty
the queue first, then deal with the close.
OK, that means checking for closed only happens when you weren't doing
anything anyhow (at least on the pull side).
Is that the same for push? If push only checks when empty, then a
pulling thread that can't quite keep up can never close the queue?
ie push always sees at least one item in the queue, so adds another
(doesn't check closed). Pull maybe sets closed, but still keeps
processing whatever is there (even new stuff) because it can never
quite get to empty for pull to see that it is closed. Or....?
>
>> - if it is just part of the spec, an implementation isn't forced
>> to make it a separate word - it could be possibly done as part
>> of the stream in some cases.
>
> I do not understand that comment.
>
Just thinking out loud. Might not make sense (sorry). But a queue
that can encode "closed" as part of the data (as otherwise invalid
data, ie 0 or EOF) could maybe only look at the data, not a separate
closed flag. At least on one side - ie push closing and pull seeing
closed. But doesn't work going the other way. (And only looking when
empty gives a similar optimization anyhow.)
>> One example to consider, and maybe this example wouldn't meet the
>> requirements of the spec for various reasons, but is a "lowest
>> case" example:
>>
>> a fixed length queue (using circular buffer) of ints *where the
>> ints are NOT references to other data (ie indexes) but the data
>> itself* (ie maybe keystrokes is a good example)
>> with one reader, one writer
>> with 0 being an invalid data value
>>
>> ...doesn't even need memory ordering to be correct - it can use
>> relaxed atomics. (If the ints reference other data, *then* you
>> need happens-before guarantees)
>>
>> (Rough outline: readers 'see' either 0 or value, so know if data is
>> ready - on their processor - or not. Readers write back 0 after
>> reading - mark the index as free. Writers see value (that they
>> wrote) or 0 (eventually - relaxed atomics) as to whether the spot
>> in the queue is ready or not. Reader and Writer have separate
>> pointers, never modify the other's pointer, thus no atomics there.)
>
> They cannot use the ready data if they only use relaxed atomics.
> There must be some memory synchronization and relaxed atomics do
> not provide it.
>
You only need memory synchronization if there is *other* memory (data)
to synchronize with besides the relaxed atomics in the queue (assuming
queue is fixed array, not a linked list, where the atomics would be
the list pointers).
ie a queue of ints that make up keystrokes. There may be no other
dependencies to the keystroke ints. They don't reference some other
struct of data.
If the program has implications like: *if* I see an X key, then I can
assume something else happened - then there is a dependency requiring
synchronization.
But if I only require this int means process this int, then I don't
need synchronization - synchronization is for synchronizing with
*other* memory operations. If the int I read contains all the
information I need, then I can use relaxed.
This is obviously a very limited example, and most people using it
would eventually forget what can be assumed and what can't, so I agree
that ensuring 'normal' memory order guarantees is what the spec should
enforce.
> The paper requires synchronization from the thread that does a push
> to the thread that pops _that_ _value_. I think this requirement
> is pretty minimal for most uses. I'm not sure I know of any queues
> of general utility that have weaker requirements.
>
Yeah, it might not be purely minimal, but agreed "for most uses".
Like I said, a queue that only requires the
atomic-int-of-data-and-nothing-more wouldn't need the synchronization,
but it is not very general utility.
>> So any constraints beyond that will harm the performance of the
>> queue. Of course it is a specialized queue, so you can always
>> write your own instead of using a std one.
>
> Well, part of the purpose here is to make libraries available for
> which someone has already done the heavy work necessary to get a
> concurrent library correct and performant.
>
yes, "write your own" typically isn't pleasant. :-) My worry with all
lock-free stuff is that many lock-free coders _will_ still do it by
hand. They moved to lock-free in order to get that last bit of
performance. Any generic solution still leaves a bit of performance
on the table. It is one reason why I think we should have a number of
lock-free structures - with the various tradeoffs - ie SPSC vs MPMC,
fixed length vs dynamic, etc etc. There are too many combinations
however. Can we make something that satisfies 80% of the 1% that
decided to build their own? Who are we targeting? Why not just use
std::queue with a mutex wrapping it? (That's what I recommend until
someone can prove they need otherwise.) I guess we are looking for
that sweet spot that satisfies most people looking for more
performance, but not all. And the ones that decide to do it
themselves will always get more performance, and always more bugs :-)
>> But it is an example of probably the lowest overhead queue, and
>> from there you build up features while bringing down performance
>> (which is quite possibly a reasonable tradeoff). So with that
>> starting queue in mind, I think 'closed' adds significant overhead.
>
> I won't dispute that there is overhead, I just don't think it is
> all that high, and certainly not high in comparison to out-of-band
> techniques.
>
OK.
>> For more complex queues, the relative overhead obviously lessens,
>> but in general each new atomic read/write that is required
>> is a measurable performance degradation on a lock-free queue.
>> Once you get up to a small number of atomics, you are often better
>> off just wrapping a std::queue with a mutex.
>
> One of the reasons I want a concept that works for both is so that
> programmers can shift implementations.
>
> In any event, except for scalability concerns, the biggest advantage
> I see to lock-free structures is that they are less vulnerable to
> sick threads.
>
agreed!
I think of it as a 'spectrum of locking-size':
On one end we have single-threaded or one big lock around your whole
program - dead-lock free!
Move along the spectrum by breaking the code into separate smaller
locks - more performance, but chance of deadlock increases.
Continue to make lock regions smaller and smaller until it becomes
atomic and lockfree - other end of the spectrum and dead-lock free
again.
Unfortunately most code lives in that middle region. :-(
> --
> Lawrence Crowl
>
Tony
.
Author: Tony V E <tvaneerd@gmail.com>
Date: Wed, 27 Mar 2013 17:53:41 -0400
Raw View
On Wed, Mar 27, 2013 at 5:44 PM, Tony V E <tvaneerd@gmail.com> wrote:
>>
>>> - if it is just part of the spec, an implementation isn't forced
>>> to make it a separate word - it could be possibly done as part
>>> of the stream in some cases.
>>
>> I do not understand that comment.
>>
>
> Just thinking out loud. Might not make sense (sorry). But a queue
> that can encode "closed" as part of the data (as otherwise invalid
> data, ie 0 or EOF) could maybe only look at the data, not a separate
> closed flag. At least on one side - ie push closing and pull seeing
> closed. But doesn't work going the other way. (And only looking when
> empty gives a similar optimization anyhow.)
>
And just to be pedantic, a fixed array queue of ints where both 0 and
-1 are invalid values could use the array itself to communicate
closure. Readers typically write 0 into each slot after consuming it.
Writers use 0 to see the empty slot. Either can write -1 into slots
to indicate closed. Eventually reader will see -1 on reading, knows
that the queue is closed. Writer can see -1 in the slot it is about
to write, know that either it or the reader at some point marked the
queue as closed.
I'll stop talking about that queue now.
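For concreteness, a minimal single-threaded sketch of that queue (all names invented here; as discussed elsewhere in the thread, the relaxed orderings are only defensible when the int itself is the entire message, and close() here assumes the queue is not full):

```cpp
#include <atomic>
#include <cstddef>

// Fixed-array SPSC int queue: a slot holds 0 when empty, a nonzero
// payload when full, and the sentinel -1 once the queue is closed.
template <std::size_t N>
class spsc_int_ring {
    std::atomic<int> slots_[N] = {};   // zero-initialized: all empty
    std::size_t head_ = 0;             // touched only by the writer
    std::size_t tail_ = 0;             // touched only by the reader
public:
    // Writer: v must be neither 0 nor -1. Fails if the slot is still
    // occupied, or if it already holds the closed sentinel.
    bool try_push(int v) {
        if (slots_[head_].load(std::memory_order_relaxed) != 0)
            return false;
        slots_[head_].store(v, std::memory_order_relaxed);
        head_ = (head_ + 1) % N;
        return true;
    }
    // Reader: fails if the slot is empty; sets closed on the sentinel.
    bool try_pop(int& out, bool& closed) {
        int cur = slots_[tail_].load(std::memory_order_relaxed);
        if (cur == 0) return false;                     // nothing ready
        if (cur == -1) { closed = true; return false; } // queue closed
        out = cur;
        slots_[tail_].store(0, std::memory_order_relaxed); // free slot
        tail_ = (tail_ + 1) % N;
        return true;
    }
    // Writer-side close: plant the sentinel in the next slot
    // (assumes that slot is currently empty).
    void close() { slots_[head_].store(-1, std::memory_order_relaxed); }
};
```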
Tony
.
Author: Lawrence Crowl <crowl@googlers.com>
Date: Wed, 27 Mar 2013 18:03:55 -0700
Raw View
On 3/27/13, Tony V E <tvaneerd@gmail.com> wrote:
> On Mar 27, 2013 Lawrence Crowl <crowl@googlers.com> wrote:
> > On 3/27/13, Tony V E <tvaneerd@gmail.com> wrote:
> > > > > Sorry if I haven't been following the discussion closely
> > > > > enough and maybe misunderstand the context, but:
> > > > >
> > > > > A lock-free queue that needs to check a 'closed' flag on push
> > > > > and/or pop will then lose much of its scalability - you have
> > > > > made this flag, likely an atomic, a bottleneck degrading all
> > > > > your other lock-free efforts.
> > > >
> > > > Could you elaborate? A word that changes exactly once can live
> > > > in each processor's cache and not be a bottleneck at all, given
> > > > a processor with branch prediction. Being atomic doesn't change
> > > > that on common processors.
> > > >
> > > > The 'closed' flag _does_ cost a whole extra cache line of
> > > > memory in order to be efficient, since (unless I'm forgetting
> > > > something) the rest of the queue's member data does change,
> > > > and that cost could be something to worry about.
> > >
> > > I'm concerned about when the flag needs to be checked - on every
> > > push/pull?
> >
> > On every push and on pops that find the queue empty.
>
> So a close() while things are still in the queue obviously means
> empty the queue first, then deal with the close. OK, that means
> checking for closed only happens when you weren't doing anything
> anyhow (at least on the pull side).
>
> Is that the same for push? If push only checks when empty, then a
> pulling thread that can't quite keep up can never close the queue?
> ie push always sees at least one item in the queue, so adds another
> (doesn't check closed). Pull maybe sets closed, but still keeps
> processing whatever is there (even new stuff) because it can never
> quite get to empty for pull to see that it is closed. Or....?
Every push must check for closed. When a queue is closed, you
cannot add new elements but you can remove existing elements.
> > > - if it is just part of the spec, an implementation isn't
> > > forced to make it a separate word - it could be possibly done
> > > as part of the stream in some cases.
> >
> > I do not understand that comment.
>
> Just thinking out loud. Might not make sense (sorry). But a
> queue that can encode "closed" as part of the data (as otherwise
> invalid data, ie 0 or EOF) could maybe only look at the data, not
> a separate closed flag. At least on one side - ie push closing
> and pull seeing closed. But doesn't work going the other way.
> (And only looking when empty gives a similar optimization anyhow.)
Ah, I understand. I have been resisting such approaches because they
do not play well with templates. For an arbitrary type T, you would
need something in its value space to carry sideband information.
That extra information excludes those values from being used for
regular purposes.
That approach has its place, but I don't think its place is in
the standard library.
> > > One example to consider, and maybe this example wouldn't meet
> > > the requirements of the spec for various reasons, but is a
> > > "lowest case" example:
> > >
> > > a fixed length queue (using circular buffer) of ints *where
> > > the ints are NOT references to other data (ie indexes) but
> > > the data itself* (ie maybe keystrokes is a good example)
> > > with one reader, one writer with 0 being an invalid data value
> > >
> > > ...doesn't even need memory ordering to be correct - it can
> > > use relaxed atomics. (If the ints reference other data, *then*
> > > you need happens-before guarantees)
> > >
> > > (Rough outline: readers 'see' either 0 or value, so know if
> > > data is ready - on their processor - or not. Readers write
> > > back 0 after reading - mark the index as free. Writers see
> > > value (that they wrote) or 0 (eventually - relaxed atomics) as
> > > to whether the spot in the queue is ready or not. Reader and
> > > Writer have separate pointers, never modify the other's
> > > pointer, thus no atomics there.)
> >
> > They cannot use the ready data if they only use relaxed atomics.
> > There must be some memory synchronization and relaxed atomics
> > do not provide it.
>
> You only need memory synchronization if there is *other* memory
> (data) to synchronize with besides the relaxed atomics in the
> queue (assuming queue is fixed array, not a linked list, where
> the atomics would be the list pointers). ie a queue of ints that
> make up keystrokes. There may be no other dependencies to the
> keystroke ints. They don't reference some other struct of data.
Yes, that has no need for memory synchronization.
> If the program has implications like: *if* I see an X key,
> then I can assume something else happened - then there is a
> dependency requiring synchronization. But if I only require this
> int means process this int, then I don't need synchronization
> - synchronization is for synchronizing with *other* memory
> operations. If the int I read contains all the information I need,
> then I can use relaxed.
Agreed. My concern is not that such situations exist, but whether
or not we should complicate the standard to support them.
> This is obviously a very limited example, and most people using
> it would eventually forget what can be assumed and what can't,
> so I agree that ensuring 'normal' memory order guarantees is what
> the spec should enforce.
>
> > The paper requires synchronization from the thread that does
> > a push to the thread that pops _that_ _value_. I think this
> > requirement is pretty minimal for most uses. I'm not sure I know
> > of any queues of general utility that have weaker requirements.
>
> Yeah, it might not be purely minimal, but agreed "for
> most uses". Like I said, a queue that only requires the
> atomic-int-of-data-and-nothing-more wouldn't need the
> synchronization, but it is not very general utility.
>
> > > So any constraints beyond that will harm the performance of
> > > the queue. Of course it is a specialized queue, so you can
> > > always write your own instead of using a std one.
> >
> > Well, part of the purpose here is to make libraries available
> > for which someone has already done the heavy work necessary to
> > get a concurrent library correct and performant.
>
> yes, "write your own" typically isn't pleasant. :-) My worry
> with all lock-free stuff is that many lock-free coders _will_
> still do it by hand. They moved to lock-free in order to get
> that last bit of performance. Any generic solution still leaves
> a bit of performance on the table.
The C++ memory model is as it is so that programmers can go after
that last bit of performance. We want people that need it to be
able to get it. On the other hand, the standard library needs
to be _generally_ useful. It cannot reasonably handle extreme
performance needs.
> It is one reason why I think we should have a number of lock-free
> structures - with the various tradeoffs - ie SPSC vs MPMC, fixed
> length vs dynamic, etc etc. There are too many combinations
> however. Can we make something that satisfies 80% of the 1%
> that decided to build their own?
I hope so.
> Who are we targeting?
Probably more than just one kind of programmer.
> Why not just use std::queue with a mutex wrapping it? (That's what
> I recommend until someone can prove they need otherwise.)
The problem with mutex wrapping a queue is what you do about empty
or full queues. If you want to wait, you will need a couple of
condition variables, and suddenly the wrapper starts looking less
generic.
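As a sketch of that wrapper (illustrative names only, not the N3533 interface): a bounded std::queue guarded by one mutex, the two condition variables just mentioned, and a closed flag checked under the same lock - which is why the closed check costs so little relative to the mutex.

```cpp
#include <condition_variable>
#include <mutex>
#include <optional>
#include <queue>

template <class T>
class locked_queue {
    std::mutex m_;
    std::condition_variable not_empty_, not_full_;
    std::queue<T> q_;
    std::size_t cap_;
    bool closed_ = false;  // the one extra bool under discussion
public:
    explicit locked_queue(std::size_t cap) : cap_(cap) {}

    // Returns false if the queue was closed before the push completed.
    bool push(T v) {
        std::unique_lock<std::mutex> lk(m_);
        not_full_.wait(lk, [&] { return closed_ || q_.size() < cap_; });
        if (closed_) return false;          // every push checks closed
        q_.push(std::move(v));
        not_empty_.notify_one();
        return true;
    }

    // Empty optional means: closed and fully drained. Existing
    // elements can still be removed after close().
    std::optional<T> pop() {
        std::unique_lock<std::mutex> lk(m_);
        not_empty_.wait(lk, [&] { return closed_ || !q_.empty(); });
        if (q_.empty()) return std::nullopt;
        T v = std::move(q_.front());
        q_.pop();
        not_full_.notify_one();
        return v;
    }

    void close() {
        std::lock_guard<std::mutex> lk(m_);
        closed_ = true;
        not_empty_.notify_all();  // wake waiters so they observe closed
        not_full_.notify_all();
    }
};
```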
> I guess we are looking for that sweet spot that satisfies most
> people looking for more performance, but not all. And the ones
> that decide to do it themselves will always get more performance,
> and always more bugs :-)
Agreed.
> > > But it is an example of probably the lowest overhead
> > > queue, and from there you build up features while bringing
> > > down performance (which is quite possibly a reasonable
> > > tradeoff). So with that starting queue in mind, I think
> > > 'closed' adds significant overhead.
> >
> > I won't dispute that there is overhead, I just don't think
> > it is all that high, and certainly not high in comparison to
> > out-of-band techniques.
>
> OK.
>
> > > For more complex queues, the relative overhead obviously
> > > lessens, but in general each new atomic read/write that is
> > > required is a measurable performance degradation on a lock-free
> > > queue. Once you get up to a small number of atomics, you
> > > are often better off just wrapping a std::queue with a mutex.
> >
> > One of the reasons I want a concept that works for both is so
> > that programmers can shift implementations.
> >
> > In any event, except for scalability concerns, the biggest
> > advantage I see to lock-free structures is that they are less
> > vulnerable to sick threads.
>
> agreed!
>
> I think of it as a 'spectrum of locking-size': On one end we
> have single-threaded or one big lock around your whole program -
> dead-lock free! Move along the spectrum by breaking the code
> into separate smaller locks - more performance, but chance of
> deadlock increases. Continue to make lock regions smaller and
> smaller until it becomes atomic and lockfree - other end of the
> spectrum and dead-lock free again.
>
> Unfortunately most code lives in that middle region. :-(
The curse of the multicore age.
--
Lawrence Crowl
.