Topic: MPI and the standard library


Author: VinceRev <vince.rev@gmail.com>
Date: Tue, 25 Jun 2013 06:29:52 -0700 (PDT)

Hello everyone.

I'm opening this thread just to ask about the current status of MPI (Message
Passing Interface) with respect to the standardization process.
Since C++11 we have had a very convenient standard threading library for
machines with shared memory.
In C++17, we may have a standard networking library.
So a next step in the standardization of parallelism would be to have a
library for heterogeneous architectures with distributed memory.
On supercomputers, this is currently done with MPI libraries, which have
existed for quite a long time now. The problems are well known, the
communication schemes are well tested and mastered, and Boost already
proposes an interface to MPI.

So my question is: has MPI already been considered by the committee for
standardization?
If yes, why were the related proposals rejected?
If no, do you think that would be a good long-term project, and why?

Thanks.

--
Vincent Reverdy
PhD Student @ Laboratory Universe and Theories
Cosmology and General Relativity Group
Observatory of Paris-Meudon, France



Author: Ville Voutilainen <ville.voutilainen@gmail.com>
Date: Tue, 25 Jun 2013 16:47:13 +0300

On 25 June 2013 16:29, VinceRev <vince.rev@gmail.com> wrote:

> [...]
>
> So my question is: has MPI already been considered by the committee for
> standardization?

I don't recall seeing any recent papers/discussion about it. I can't say
much about MPI's feasibility; I have no experience with it.



Author: Chris Jefferson <chris@bubblescope.net>
Date: Tue, 25 Jun 2013 15:19:32 +0100
On 25/06/13 14:29, VinceRev wrote:
> [...]

What benefits would merging MPI into the C++ standard give? As far as I
am aware (I am a limited user of MPI), MPI has its own standardisation
group and aims to support multiple languages, and there are multiple
implementations of MPI available.

Do the MPI developers themselves want MPI pulled into the C++ standard?

Chris



Author: Lawrence Crowl <crowl@googlers.com>
Date: Tue, 25 Jun 2013 18:31:33 -0700
On 6/25/13, Chris Jefferson <chris@bubblescope.net> wrote:
> On 25/06/13 14:29, VinceRev wrote:
> > [...]
> >
> > So my question is: has MPI already been considered by the
> > committee for standardization?

Not to my knowledge.

> > If yes, why were the related proposals rejected?

N/A

> > If no, do you think that would be a good long-term project, and
> > why?
>
> What benefits would merging MPI into the C++ standard give?
> As far as I am aware (I am a limited user of MPI), MPI has its own
> standardisation group and aims to support multiple languages,
> and there are multiple implementations of MPI available.

MPI is pretty low level.  The advantage of C++ would be to handle the
mapping from higher-level types on to the MPI standard as it exists.
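
For instance, describing even a simple aggregate to the existing MPI C
API has to be done by hand; a C++ layer could generate this boilerplate
from the type itself. A minimal sketch against the real MPI API (the
"particle" type is made up for illustration):

#include <cstddef>
#include <mpi.h>

struct particle {
    double position[3];
    int    id;
};

// Build and commit an MPI datatype describing `particle`, member by
// member. This is exactly the kind of code a C++ mapping could absorb.
MPI_Datatype make_particle_type() {
    int          lengths[2]       = {3, 1};
    MPI_Aint     displacements[2] = {offsetof(particle, position),
                                     offsetof(particle, id)};
    MPI_Datatype types[2]         = {MPI_DOUBLE, MPI_INT};

    MPI_Datatype result;
    MPI_Type_create_struct(2, lengths, displacements, types, &result);
    MPI_Type_commit(&result);
    return result;
}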

> Do the MPI developers themselves want MPI pulled into the C++
> standard?

I doubt they would want to relinquish MPI, but they may well like some
help simplifying its use in C++.

However, before we go that route, we should generalize the problem and
see if we might want some newer technology. For example, see
http://www.gpi-site.com/gpi2/.

--
Lawrence Crowl



Author: me@ryanlewis.net
Date: Sun, 19 Oct 2014 11:32:55 -0700 (PDT)

Hi,

This is something I am interested in as well. I am all for generalizing the
problem of message passing and trying to develop a library around it.
Anyone have suggestions on how to start?

On Tuesday, June 25, 2013 6:29:52 AM UTC-7, Vincent Reverdy wrote:
> [...]



Author: me@ryanlewis.net
Date: Sun, 19 Oct 2014 17:55:04 -0700 (PDT)

Hi,

I don't think GPI2 presents anything "newer".

I think the concepts behind distributed-memory programming make sense,
e.g. send/receive along with collective communications.

However, the way they are implemented can be standardized, so that

template <typename T>
std::future<...> send( std::communicator& comm, std::size_t process_id,
                       const T& t );

template <typename T>
std::future<...> receive( std::communicator& comm, std::size_t process_id,
                          T& t );

can be implemented correctly.

In particular, we need networking to be standardized so that we can have a
communicator abstraction and the ability to send bits and bytes.
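
A minimal sketch of what that abstraction might look like (a hypothetical
interface; neither std::communicator nor any of these members exists
today):

#include <cstddef>
#include <future>

namespace stdx {

// A process group plus the ability to ship raw bytes between members.
class communicator {
public:
    std::size_t size() const;  // number of processes in the group
    std::size_t rank() const;  // rank of the calling process

    // Asynchronously send/receive n raw bytes to/from process_id.
    std::future<void> send_bytes(std::size_t process_id,
                                 const void* data, std::size_t n);
    std::future<void> receive_bytes(std::size_t process_id,
                                    void* data, std::size_t n);
};

} // namespace stdx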

However, as a larger complication, consider the following two possibilities
for the type T:

1) T = int
2) T = std::vector<int>

In situation 1, the use of &t and sizeof(T) is enough to send the content
of type T.
In situation 2, however, this is not true: sizeof(T) is a small constant
(for example 12 or 24 bytes, depending on the implementation), while
v.size() * sizeof(int) can be much, much larger.

This suggests that the implementation of send(...) should be overloaded
depending on the type of T.

For example, imagine the existence of a trait:

namespace std {
template <typename T>
struct contains_pointer;
}

which derives from std::true_type if the type T contains a pointer and
from std::false_type otherwise.

With such a trait, an abstraction is plausible.

In particular, if std::contains_pointer<T> is false, then we can correctly
implement send by identifying the bits in the range [&t, &t + sizeof(T))
as the content of a message.
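
A sketch of that dispatch, using std::is_trivially_copyable as a
conservative stand-in for contains_pointer (which cannot be written
portably today without reflection), and a hypothetical send_bytes
primitive as in the communicator sketch above:

#include <cstddef>
#include <future>
#include <type_traits>
#include <vector>

// Hypothetical low-level primitive: ship n raw bytes to process_id.
std::future<void> send_bytes(std::size_t process_id,
                             const void* data, std::size_t n);

// Pointer-free case: the object representation is the message.
template <typename T>
typename std::enable_if<std::is_trivially_copyable<T>::value,
                        std::future<void> >::type
send(std::size_t process_id, const T& t) {
    return send_bytes(process_id, &t, sizeof(T));
}

// std::vector of a pointer-free element: ship the element range
// [v.data(), v.data() + v.size()), not the vector object itself.
// (A real protocol would also have to transmit v.size().)
template <typename T>
typename std::enable_if<std::is_trivially_copyable<T>::value,
                        std::future<void> >::type
send(std::size_t process_id, const std::vector<T>& v) {
    return send_bytes(process_id, v.data(), v.size() * sizeof(T));
}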

However, an open question remains as to what to do when it is true.

In particular, compare the following types:

typedef std::pair<int*, char*> Pair;
Pair p;

vs.

typedef std::vector<int> Vector;
Vector v;

Both types contain an int*, but there is some ambiguity in both what to
send and how to send it.

In particular, my personal dogma says that a correct algorithm would send
the range [&p, &p + sizeof(Pair)) for the pair, and something equivalent
to [v.data(), v.data() + v.size()) for the vector.

I find some ambiguity because it's not clear to me that send() should
rely on .data() or .size() existing (e.g. consider the pair above).

Ultimately, the problem is that a pointer can point either to a single
instance of an element or to an array of such elements, and the language
does not have a standard way to identify the length of such an array.

However, what is clear to me is that compile-time introspection is
necessary for implementing things like contains_pointer, and therefore
send() and receive().


I've started a preliminary document here: https://github.com/rhl-/mpi,
although I haven't added these thoughts yet.



On Tuesday, June 25, 2013 6:31:33 PM UTC-7, Lawrence Crowl wrote:
> [...]



Author: Jesse Perla <jesseperla@gmail.com>
Date: Sun, 19 Oct 2014 21:35:13 -0700 (PDT)

Take a look at http://www.boost.org/doc/libs/1_56_0/doc/html/mpi.html to
get a sense of what a (relatively) thin layer looks like.

Yes, compile-time reflection is essential here, but I think it is much
more than just contains_pointer, etc.  Take a look at the "User-defined
data types" macros there for serialization to see how much ugly macro
nastiness is necessary.  I don't think it makes sense to talk about
standardizing this until (1) reflection is there, and (2) experience with
a library utilizing the feature has been collected.
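
For reference, a minimal sketch of what that hookup looks like in
Boost.MPI today, via Boost.Serialization (the particle type and its
members are made up for illustration):

#include <boost/mpi.hpp>
#include <boost/serialization/string.hpp>
#include <string>

// Not a plain MPI datatype: std::string owns heap memory.
struct particle {
    double x, y, z;
    std::string label;

    // Boost.MPI falls back on Boost.Serialization, so every member has
    // to be enumerated by hand (or via the macros mentioned above).
    template <class Archive>
    void serialize(Archive& ar, const unsigned int /*version*/) {
        ar & x;
        ar & y;
        ar & z;
        ar & label;
    }
};

// Pointer-free types can instead be marked as plain MPI datatypes,
// bypassing serialization entirely:
//   BOOST_IS_MPI_DATATYPE(pod_type)

Static reflection would let the library derive serialize() itself.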

But, I imagine a paper for the reflection group summarizing static
reflection requirements for MPI serialization could be very valuable for
them.



On Sunday, October 19, 2014 5:55:04 PM UTC-7, m...@ryanlewis.net wrote:
> [...]



Author: me@ryanlewis.net
Date: Sun, 19 Oct 2014 22:03:17 -0700 (PDT)

Hi,

I agree, settling the requirements for networking and reflection for MPI
would be valuable to ensure that those standardization attempts at least
encompass this use case.

I am aware of Boost.MPI. My contains_pointer is an attempt to generalize
the way is_mpi_datatype is used in Boost.MPI.

I find it interesting that the idea of send and receive is very similar to
the idea of copy construction. In fact, modulo the issue of the network, it
is exactly what we are trying to achieve.

On Sunday, October 19, 2014 9:35:13 PM UTC-7, Jesse Perla wrote:
> [...]
