Topic: Virtually_Destructible<>
Author: unoriginal_username@yahoo.com (Le Chaud Lapin)
Date: Thu, 17 Jun 2004 17:15:52 +0000 (UTC)
nagle@animats.com (John Nagle) wrote in message news:<K6_zc.23$SC7.2@newssvr25.news.prodigy.com>...
> What you're describing is a dynamic marshalling system.
> DCOM, CORBA, and Java RMI support such things. Should C++
> have language support for it?
I threw in the part about dynamically putting arguments on the stack
to drive home the point that this particular use of DLL's is,
fundamentally, a run-time mechanism and cannot be regarded as a form
of delayed linking.
Maybe we should open up a thread on your question? I think this is a
very important topic in the context of C++, just as traditional I/O
mechanisms are not part of the language proper but cannot be regarded
as something entirely separate.
I should warn you though. I have never been a fan of universal object
models. I like my rigidity ;)
-Chaud Lapin-
---
[ comp.std.c++ is moderated. To submit articles, try just posting with ]
[ your news-reader. If that fails, use mailto:std-c++@ncar.ucar.edu ]
[ --- Please see the FAQ before posting. --- ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html ]
Author: nagle@animats.com (John Nagle)
Date: Sun, 20 Jun 2004 05:37:01 +0000 (UTC)
Le Chaud Lapin wrote:
> nagle@animats.com (John Nagle) wrote in message news:<K6_zc.23$SC7.2@newssvr25.news.prodigy.com>...
>
>>What you're describing is a dynamic marshalling system.
>>DCOM, CORBA, and Java RMI support such things. Should C++
>>have language support for it?
>
>
> I threw in the part about dynamically putting arguments on the stack
> to drive home the point that this particular use of DLL's is,
> fundamentally, a run-time mechanism and cannot be regarded as a form
> of delayed linking.
>
> Maybe we should open up a thread on your question? I think this is a
> very important topic in the context of C++, just as traditional I/O
> mechanisms are not part of the language proper but cannot be regarded
> as something entirely separate.
There's an argument that C++ should have enough introspection
support to allow generation of marshalling code. The idea would
be to allow support of interoperation with DCOM, CORBA, RMI etc.
using templates where you put in an object and you get out a
serialization function for it.
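As a rough, hedged sketch (invented names, not taken from any actual
proposal): the programmer hand-writes a describe() hook standing in for
the missing introspection support, and a single template generates the
marshalling code from it.

#include <iostream>

struct Point { int x; int y; };

// Would be generated from introspection data; hand-written here.
template <typename Visitor>
void describe(const Point& p, Visitor& v) { v(p.x); v(p.y); }

// Writes each described field to a stream.
struct StreamWriter {
    std::ostream& os;
    explicit StreamWriter(std::ostream& s) : os(s) {}
    template <typename Field> void operator()(const Field& f) { os << f << ' '; }
};

// Generic serialization function: put in an object, get out its marshalled form.
template <typename T>
void marshal(std::ostream& os, const T& obj) {
    StreamWriter w(os);
    describe(obj, w);
}

int main() {
    Point p = { 3, 4 };
    marshal(std::cout, p);   // writes "3 4 "
    return 0;
}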
This would have been a really good feature to add around
1996 or so. But today, it's too late. C++ has missed the window.
Java, DCOM, and .NET have their own systems for doing this, and
they're not going to convert.
John Nagle
Animats
Author: bouncer@dev.null (Wil Evers)
Date: Mon, 14 Jun 2004 19:36:15 +0000 (UTC)
In article <VA9aOIhRLeyAFwpH@robinton.demon.co.uk>, Francis Glassborow
wrote:
> In article <cabvb3$h2u$1@news.cistron.nl>, Wil Evers <bouncer@dev.null>
> writes
>>But what if we can achieve portability across systems without breaking
>>currently conforming code, and with the added benefit of portability
>>between dynamically and statically linked programs?
>
> Fine, so who is going to show us how to achieve that.
Perhaps you should have a look at existing practice. Implementations that
conform to the linkage rules in the current standard while supporting both
static and deferred linking not only exist, but have been in use for years,
and they're portable to all target platforms with a sufficiently equipped
dynamic linker. I should add that I'm not sure if the Windows DLL loader would
qualify as such, but at the very least, the existence of prior art on other
platforms shows that it can be done.
> What is really
> frustrating is that some people spend time saying they do not like a
> discussion paper. Fine but how are we supposed to address a subject if
> not by discussion. The paper in question was asked for by the Evolution
> work group of WG21 exactly because very few people understand the
> different models currently in use.
I'm probably not in any position to answer that, but speaking for myself, as
an outsider, and for whatever it's worth: I don't believe there is anything
wrong with asking for such a paper, and while I may not like everything it
proposes, it certainly helps in facilitating this discussion.
> So unix has this mechanism that effectively ignores the idea of dynamic
> libraries
I've heard this before, but to me, this is far from obvious. What do you
mean by ignoring the idea of dynamic libraries?
> and simply uses delayed linking. That has some dangers as well
> as some advantages. When I statically link and deliver an application I
> know what libraries it was linked with and can take (indeed should take)
> responsibility for the results. However if it is dynamically linked in
> the way that unix does I would be very foolhardy to take responsibility
> for the consequences because I have no way to ensure that extern linked
> names I use mean what I intended them to do.
I'd say that's part of the fundamental weakness in any use of dynamic
linking. Since it's impossible to predict the future, we can only hope
that the libraries that are brought in when the program is run will behave
as expected. It is up to library maintainers to make sure that they do.
- Wil
--
Wil Evers, DOOSYS R&D, Utrecht, Holland
[Wil underscore Evers at doosys dot com]
Author: unoriginal_username@yahoo.com (Le Chaud Lapin)
Date: Tue, 15 Jun 2004 18:12:44 +0000 (UTC)
francis@robinton.demon.co.uk (Francis Glassborow) wrote in message news:<VA9aOIhRLeyAFwpH@robinton.demon.co.uk>...
[snipped]
> So unix has this mechanism that effectively ignores the idea of dynamic
> libraries and simply uses delayed linking. That has some dangers as well
> as some advantages. When I statically link and deliver an application I
> know what libraries it was linked with and can take (indeed should take)
> responsibility for the results. However if it is dynamically linked in
> the way that unix does I would be very foolhardy to take responsibility
> for the consequences because I have no way to ensure that extern linked
> names I use mean what I intended them to do.
>
> Note that the last paragraph does not make unix wrong but it does mean
> that there is room even in a unix environment for something that is more
> than delayed static linking. I would be very happy if OSs such as the
> Windows family supported delayed 'static' linking. However I would also
> be very happy if unix also supported a Windows like DLL based system. I
> would be even happier if both would also support something like SOM (for
> those that can remember that concept from IBM)
I have been waiting for someone to expound upon a critical point
implied above:
One must be careful in defining exactly what is meant by "dynamic
linking."
Consider the following Windows "program" script that asks the user to:
1. Enter the path of a DLL
2. Enter the name of a function within the DLL
3. Enter the arguments that the function takes (assume they are all
scalars)
The "program" would then:
1. Call LoadLibrary() on the path to the DLL to load it
2. Call GetProcAddress() to get a pointer to the function in the DLL
3. Push the entered arguments onto the stack and invoke the function.
4. Call FreeLibrary() on the DLL
The user could then enter the path of an entirely new DLL,
or...replace the current DLL with a new DLL just downloaded from the
Internet, and try the same function with same arguments but different
results, all while the "program" remains in memory.
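For concreteness, here is a minimal sketch of that scenario, with the
function name and its signature hard-coded instead of typed in by the
user (pushing arbitrary user-supplied arguments is exactly the part that
plain C++ cannot express portably):

#include <windows.h>
#include <iostream>
#include <string>

int main()
{
    std::string path;
    std::cout << "Path of DLL: ";
    std::getline(std::cin, path);

    HMODULE dll = LoadLibraryA(path.c_str());       // 1. load the DLL
    if (dll == 0) { std::cerr << "load failed\n"; return 1; }

    typedef int (*func_t)(int);                     // assumed signature
    func_t f = reinterpret_cast<func_t>(GetProcAddress(dll, "some_function"));
    if (f != 0)
        std::cout << "result: " << f(42) << '\n';   // 2./3. look up and call
    FreeLibrary(dll);                               // 4. unload the DLL
    return 0;
}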
Thinking about the example above, two questions immediately arise:
1. Is the program the blob of code that remains in memory all by
itself with no DLL's? After all, it is entirely conceivable that no
DLL's ever become part of the "program".
2. When a DLL is loaded, does it become part of the "program?" After
all, a DLL shares the virtual address space of the module that loaded
it, a module which, by the way, may be only one of many in a chain 20
levels deep. Do all 20 DLL's become part of the "program"?
Therefore, I think we should be very specific about what we mean when
speaking of things like the one-definition rule in the context of DLLs.
As James Kanze has implied in several of his posts, if we are to
retain this peculiar feature of DLLs, then there is automatically the
implicit assumption that there is a certain behavior we expect and
would prefer that the compiler/linker not interfere with.
-Chaud Lapin-
Author: llewelly <llewelly.at@xmission.dot.com>
Date: Tue, 15 Jun 2004 23:34:21 +0000 (UTC)
kanze@gabi-soft.fr writes:
> Wil Evers <bouncer@dev.null> wrote in message
> news:<ca4e8g$rlc$1@news.cistron.nl>...
> > In article <d6652001.0406040046.15e2d2da@posting.google.com>,
> > kanze@gabi-soft.fr wrote:
>
> > > Wil Evers <bouncer@dev.null> wrote in message
> > > news:<c9i1ei$vqn$1@news.cistron.nl>...
>
> > > > In article <d6652001.0405280506.37f30ecc@posting.google.com>,
> > > > kanze@gabi-soft.fr wrote:
>
> > > > > There's nothing wrong with having (or wanting) such a feature,
> > > > > but I don't see the relevance with regards to standardization.
>
> > > > IMHO, the relevance is this: (1) Will the standard allow me to
> > > > use the same source code without having to specify which linking
> > > > model is used?
>
> > > The "linking model", as you seem to think of it, is an
> > > implementation issue. The standard currently supports what you seem
> > > to be asking for; supports, in the sense of allowing it as a legal
> > > implementation technique, but not requiring it. If I understand
> > > you correctly, you want dynamic linking to work exactly like static
> > > linking, except to take place later. When the link takes place (as
> > > long as it takes place before the execution of the program, or acts
> > > as if it did) is an implementation issue, and not governed by the
> > > standard.
>
> > I agree. I never thought of the programs and libraries I routinely
> > build for Linux as non-compliant just because they use dynamic
> > linking.
>
> As long as it is the implicit dynamic linking, and you don't invoke
> dlopen explicitly.
From n1496, the paper to which I referred:
# This paper currently only addresses deferred linking; loadable
# libraries are important, however, and their omission is
# temporary.
I originally took this to mean that the paper was not about code
which invoked dlopen explicitly.
>
> 2.1/9 strongly suggests, if not requires, that linking be finished
> before program execution starts. As a programmer, however, program
> execution only starts when the start-up routine reachs your code; an
> implemention which only saved the source code, and then did a compile,
> link and go when you invoked your code, would be fully compliant.
>
> I'll admit that I don't see much use for such dynamic linking, but it is
> currently conforming, if the implementation decides to offer it.
>
> > > The only reason I can see why the standard might want to address
> > > the issue is to offer something different.
>
> > That is what appears to be happening in N1496; please see below.
>
> That's not the impression I got from a quick reading. The Unix
> implementation of deferred linking (only doing the final parts of the
> linking at program start-up) is currently conforming, and I don't see
> anything in N1496 which would change that.
>
> Personally, of course, I don't consider this "dynamic" linking, but
> "differed" linking; it is, as far as my program is concerned, a static
> link, which simply happens to take place later and in a different
> environment than the traditional static link.
>
> > > > (2): Will I be able to use that code to build DLLs/shared
> > > > libraries on all major platforms?
>
> > > On those that support it. That's the current situation.
>
> > > If I understand you correctly, what you want is a particular
> > > implementation option that is fully conformant with the current
> > > standard. Unix offers this option, Windows doesn't, so you want
> > > the standard to somehow "require" it. The problem is, that isn't
> > > the role of the standard. The standard specifies what is a
> > > conforming program. It doesn't specify how you invoke the
> > > compiler, linker, etc., to get from your sources to the executable.
>
> > But that will change when the suggestions in N1496 are adopted by the
> > standard, because it places additional requirements on currently
> > conforming code, if that code is used to build dynamic libraries.
>
> N1496 only speaks about a totally new feature. It doesn't make any
> currently conforming code non conforming.
Again, see the text I just quoted.
> If you don't like the new
> feature, don't use it. There's nothing in N1496 which would make the
> current deferred linking used in Unix non-conformant.
'deferred linking' is really the heart of the trouble; AFAICT, n1496
is *only* about 'deferred linking'. What is 'deferred linking'? I
thought I knew, but now it seems to me that you and I and Wil
Evers and Pete Becker have 4 different ideas of what the paper
actually says. Since Pete wrote the paper, I've been reading and
re-reading his posts, trying to figure out what he meant - but, I
am sorry to say, I still do not understand them.
I had thought that unix shared objects were a kind of deferred
linking. But the more I read it, the more it seems that the only
reasonable interpretation of n1496's notion of 'deferred linking'
is wholly independent of unix shared objects, and could coexist
with unix shared objects, if that were desirable. If this is so,
I withdraw my previous objections, and only say that I found the
paper confusing.
> And no serious vendor will stop supporting it -- backward compatibility
> is something serious vendors take seriously.
>
> > This is not a problem for Windows programmers, because they've never
> > been able to use strictly conforming code to build DLLs.
>
> That's because DLL's are designed to address a different problem. They
> have only a vague relationship with Unix's shared objects.
>
> > However, it *is* a problem for Unix programmers, because they will
> > have to change currently conforming code to meet these additional
> > requirements - at least if the code, and the way it is used, is to
> > remain conforming.
>
> If the code is conforming today, it will be conforming tomorrow.
>
> > From that point on, we can no longer use the same code to build both
> > static and shared libraries.
>
> That's an OS decision, not one for the standard's committee. The
> standard talks about conformity of a single program. Whether parts of
> the code of that program are, or can be, physically shared with another
> program is beyond the scope of the standard.
>
> For the moment, the only thing I see in N1496 is a proposal to introduce
> a new type of linkage, and even that is rather vague. I would certainly
> hope that some new linkage wouldn't mysteriously appear in my current
> programs without my doing something.
>
> > I could probably live with these code changes if I had a tool (say,
> > the build-time linker) that would tell me about any missing
> > imports/exports, so I could add the required declarations. However,
> > some of the semantic changes in N1496, especially when it comes to
> > implicit template instantiations, are silent: the program will still
> > link, but it will behave differently. Which brings me to my final
> > complaint: I can only fix this by using explicit template
> > instantiation and assign each such instantiation to a specific shared
> > library. To put it mildly: for a large application that heavily uses
> > templates, that's *a lot* of work.
>
> I still don't see the problem. But then, I'll admit that about the only
> use I've found for deferred linking is to reduce program robustness, by
> introducing additional unknowns in the linking process.
With the most common result that the runtime linker emits error
messages when one attempts to run the program.
I don't much like the common unix uses of shared libraries. But
people do expect them to keep working.
> And I don't see anything in the proposal which would render the current
> deferred linking in Unix non-conforming.
The trouble is, the paper is explicitly about 'deferred linking'. It
seems the only reasonable interpretation is that the paper's
definition of 'deferred linking' is quite different from what a
naive unix programmer like me expects.
Author: nagle@animats.com (John Nagle)
Date: Wed, 16 Jun 2004 16:23:50 +0000 (UTC)
What you're describing is a dynamic marshalling system.
DCOM, CORBA, and Java RMI support such things. Should C++
have language support for it?
John Nagle
Animats
Le Chaud Lapin wrote:
> francis@robinton.demon.co.uk (Francis Glassborow) wrote in message news:<VA9aOIhRLeyAFwpH@robinton.demon.co.uk>...
> [snipped]
>
>
>>So unix has this mechanism that effectively ignores the idea of dynamic
>>libraries and simply uses delayed linking. That has some dangers as well
>>as some advantages. When I statically link and deliver an application I
>>know what libraries it was linked with and can take (indeed should take)
>>responsibility for the results. However if it is dynamically linked in
>>the way that unix does I would be very foolhardy to take responsibility
>>for the consequences because I have no way to ensure that extern linked
>>names I use mean what I intended them to do.
>>
>>Note that the last paragraph does not make unix wrong but it does mean
>>that there is room even in a unix environment for something that is more
>>than delayed static linking. I would be very happy if OSs such as the
>>Windows family supported delayed 'static' linking. However I would also
>>be very happy if unix also supported a Windows like DLL based system. I
>>would be even happier if both would also support something like SOM (for
>>those that can remember that concept from IBM)
>
>
> I have been waiting for someone to expound upon a critical point
> implied above:
>
> One must be careful in defining exactly what is meant by "dynamic
> linking."
>
> Consider the following Windows "program" script that asks the user to:
>
> 1. Enter the path of a DLL
> 2. Enter the name of a function within the DLL
> 3. Enter the arguments that the function takes (assume they are all
> scalars)
>
> The "program" would then:
>
> 1. Call LoadLibrary() on the path to the DLL to load it
> 2. Call GetProcAddress() to get a pointer to the function in the DLL
> 3. Push the entered arguments onto the stack and invoke the function.
> 4. Call FreeLibrary() on the DLL
>
> The user could then enter the path of an entirely new DLL,
> or...replace the current DLL with a new DLL just downloaded from the
> Internet, and try the same function with same arguments but different
> results, all while the "program" remains in memory.
>
> Thinking about the example above, two questions immediately arise:
>
> 1. Is the program the blob of code that remains in memory all by
> itself with no DLL's? After all, it is entirely conceivable that no
> DLL's ever become part of the "program".
>
> 2. When a DLL is loaded, does it become part of the "program?" After
> all, a DLL shares the virtual address space of the module that loaded
> it, a module which, by the way, may be only one of many in a chain 20
> levels deep. Do all 20 DLL's become part of the "program"?
>
> Therefore, I think we should be very specific about what we mean when
> speaking of things like the one-definition rule in the context of DLLs.
> As James Kanze has implied in several of his posts, if we are to
> retain this peculiar feature of DLLs, then there is automatically the
> implicit assumption that there is a certain behavior we expect and
> would prefer that the compiler/linker not interfere with.
>
> -Chaud Lapin-
Author: kanze@gabi-soft.fr
Date: Wed, 16 Jun 2004 21:02:48 +0000 (UTC)
llewelly <llewelly.at@xmission.dot.com> wrote in message
news:<86659sbyks.fsf@Zorthluthik.local.bar>...
> kanze@gabi-soft.fr writes:
> > Wil Evers <bouncer@dev.null> wrote in message
> > news:<ca4e8g$rlc$1@news.cistron.nl>...
> > > In article <d6652001.0406040046.15e2d2da@posting.google.com>,
> > > kanze@gabi-soft.fr wrote:
> > > > Wil Evers <bouncer@dev.null> wrote in message
> > > > news:<c9i1ei$vqn$1@news.cistron.nl>...
> > > > > In article
> > > > > <d6652001.0405280506.37f30ecc@posting.google.com>,
> > > > > kanze@gabi-soft.fr wrote:
> > > > > > There's nothing wrong with having (or wanting) such a
> > > > > > feature, but I don't see the relevance with regards to
> > > > > > standardization.
> > > > > IMHO, the relevance is this: (1) Will the standard allow me
> > > > > to use the same source code without having to specify which
> > > > > linking model is used?
> > > > The "linking model", as you seem to think of it, is an
> > > > implementation issue. The standard currently supports what you
> > > > seem to be asking for; supports, in the sense of allowing it
> > > > as a legal implementation technique, but not requiring it. If
> > > > I understand you correctly, you want dynamic linking to work
> > > > exactly like static linking, except to take place later. When
> > > > the link takes place (as long as it takes place before the
> > > > execution of the program, or acts as if it did) is an
> > > > implementation issue, and not governed by the standard.
> > > I agree. I never thought of the programs and libraries I
> > > routinely build for Linux as non-compliant just because they use
> > > dynamic linking.
> > As long as it is the implicit dynamic linking, and you don't invoke
> > dlopen explicitly.
> From n1496, the paper to which I referred:
> # This paper currently only addresses deferred linking; loadable
> # libraries are important, however, and their omission is
> # temporary.
> I originally took this to mean that the paper was not about code
> which invoked dlopen explicitly.
Funny, I would have interpreted it as exactly the opposite. If all of
the libraries and/or objects are linked before the program (my code)
starts, then there is, from a standards point of view, nothing new. By
'deferred', I understand something that takes place later than it
"should"; linking should normally be finished before my code starts; if
linking is deferred, my code starts before the modules are linked.
Whether I have to explicitly ask for the link or not is a separate
question. If linking is not deferred, there is no way I can be required
to ask for it explicitly in my code. Rather obviously. If linking is deferred,
however, both possibilities exist; I can have to call some system
routine explicitly, to link, or the link can take place implicitly, say
when I access a non-resolved external.
In practice, I don't know of any system which supports deferred
implicit linking in C++. But that isn't saying much; about the only
system I know well is Unix, and even there, I've not done much with
dynamic linking (and always explicit dynamic linking -- calling dlopen).
I have used dynamic objects (or shared objects, in Unix terminology),
but that use has always been almost perfectly transparent. It is, in
fact, that use which leads me to say that it is already covered by
the standard.
[...]
> > > But that will change when the suggestions in N1496 are adopted by
> > > the standard, because it places additional requirements on
> > > currently conforming code, if that code is used to build dynamic
> > > libraries.
> > N1496 only speaks about a totally new feature. It doesn't make any
> > currently conforming code non conforming.
> Again, see the text I just quoted.
Which says that for the present, the paper is only concerned with a new
feature, deferred linking.
> > If you don't like the new feature, don't use it. There's nothing in
> > N1496 which would make the current deferred linking used in Unix
> > non-conformant.
> 'deferred linking' is really the heart of the trouble; AFAICT, n1496
> is *only* about 'deferred linking'. What is 'deferred linking'? I
> thought I knew, but now it seems to me that you and I and Wil
> Evers and Pete Becker have 4 different ideas of what the paper
> actually says. Since Pete wrote the paper, I've been reading and
> re-reading his posts, trying to figure out what he meant - but, I
> am sorry to say, I still do not understand them.
The "paper" is far from a polished proposal. It is a working paper.
It's possible, in fact, it's even probable, that there are
contradictions in it; its goal is to present the current status of
thinking on the issue, and there are probably contradictions in the
current thinking.
Also, although Pete Becker is the author, I think that the intent of
such papers in the standards committee is to give more or less a summary
of what the working committee is thinking. Thus, the views expressed in
the paper aren't necessarily those of Pete Becker, personally, but
represent a summary of several people's possibly conflicting views.
Given the status of the paper, I think far too much is being made of
it. The passage you quote indicates very clearly that there is more to
come, so the fact that some feature someone happens to want isn't
present doesn't really mean much.
> I had thought that unix shared objects were a kind of deferred
> linking.
Everything depends on the context in which one places oneself. From a
traditional Unix point of view, in which linking takes place in a
command called "ld", they are deferred. From the standard's point of
view, all one can say about linking is that it takes place before
execution of your code. From that point of view, they are not deferred.
Put in another way, you don't have to change a word in the standard to
support the Unix implicit linking of shared objects. The Unix people
did have to change a lot of text in their description of "ld", however,
and more generally their descriptions of what happens on process
start-up.
> But the more I read it, the more it seems that the only
> reasonable interpretation of n1496's notion of 'deferred linking'
> is wholly independent of unix shared objects, and could coexist
> with unix shared objects, if that were desirable. If this is so,
> I withdraw my previous objections, and only say that I found the
> paper confusing.
Interesting. My initial attitude was that implicit dynamic linking, at
least as it is currently implemented in Unix, is already standards
conforming, and that there is no need to modify the standard to support
it. For that reason, I was actively looking for
[..]
> > I still don't see the problem. But then, I'll admit that about the
> > only use I've found for deferred linking is to reduce program
> > robustness, by introducing additional unknowns in the linking process.
> With the most common result that the runtime linker emits error
> messages when one attempts to run the program.
I think that all of the errors emitted by the linker (run-time or
otherwise) concern cases of what the standard considers undefined
behavior. The standard has already copped out:-).
More to the point, of course, even when the standard requires
diagnostics, it doesn't say when they have to occur.
From a practical point of view, of course, it pretty much means that you
can only use such techniques in exceptional cases. Robust programs are
linked statically, unless you actually need the possibility to link
different things according to the execution environment.
> I don't much like the common unix uses of shared libraries. But
> people do expect them to keep working.
They have their uses. They are also extremely overused, IMHO, and that
overuse can drive users up the wall.
> > And I don't see anything in the proposal which would render the
> > current deferred linking in Unix non-conforming.
> The trouble is, the paper is explicitly about 'deferred linking'. It
> seems the only reasonable interpretation is that the paper's
> definition of 'deferred linking' is quite different from what a
> naive unix programmer like me expects.
Most of what is written in the standard is expressed quite differently
from what a naive programmer expects:-). I found the paper relatively
readable, certainly more readable than what will eventually make it into
the standard. But I also approached it from a standards point of view,
not a Unix one. And I asked myself: what problems is it attempting to
solve? I've done similar things in Unix, for example, loading different
widget libraries according to an environment variable or a command line
option, so my GUI looked Athena-like, or Motif-like. The paper seems to
fit right in with this use.
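A rough sketch of that kind of use, with invented library and symbol
names and minimal error handling (POSIX dlopen; link with -ldl on Linux):

#include <dlfcn.h>
#include <cstdlib>
#include <cstring>
#include <iostream>

int main()
{
    // Pick the widget library from an environment variable.
    const char* choice = std::getenv("GUI_TOOLKIT");
    const char* libname = (choice != 0 && std::strcmp(choice, "motif") == 0)
                              ? "libgui_motif.so"
                              : "libgui_athena.so";

    void* handle = dlopen(libname, RTLD_NOW);
    if (handle == 0) { std::cerr << dlerror() << '\n'; return 1; }

    // Both libraries are assumed to export a function with this signature.
    typedef void (*init_fn)();
    init_fn init = reinterpret_cast<init_fn>(dlsym(handle, "gui_init"));
    if (init != 0)
        init();

    dlclose(handle);
    return 0;
}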
--
James Kanze GABI Software
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Author: bouncer@dev.null (Wil Evers)
Date: Sun, 13 Jun 2004 13:25:09 +0000 (UTC)
In article <d6652001.0406100200.63999264@posting.google.com>,
kanze@gabi-soft.fr wrote:
> Wil Evers <bouncer@dev.null> wrote in message
> news:<ca4e8g$rlc$1@news.cistron.nl>...
>
> > In article <d6652001.0406040046.15e2d2da@posting.google.com>,
> > kanze@gabi-soft.fr wrote:
>
> > > The "linking model", as you seem to think of it, is an
> > > implementation issue. The standard currently supports what you seem
> > > to be asking for; supports, in the sense of allowing it as a legal
> > > implementation technique, but not requiring it. If I understand
> > > you correctly, you want dynamic linking to work exactly like static
> > > linking, except to take place later. When the link takes place (as
> > > long as it takes place before the execution of the program, or acts
> > > as if it did) is an implementation issue, and not governed by the
> > > standard.
>
> > I agree. I never thought of the programs and libraries I routinely
> > build for Linux as non-compliant just because they use dynamic
> > linking.
>
> As long as it is the implicit dynamic linking, and you don't invoke
> dlopen explicitly.
>
> §2.1/9 strongly suggests, if not requires, that linking be finished
> before program execution starts. As a programmer, however, program
> execution only starts when the start-up routine reaches your code; an
> implementation which only saved the source code, and then did a compile,
> link and go when you invoked your code, would be fully compliant.
>
> I'll admit that I don't see much use for such dynamic linking, but it is
> currently conforming, if the implementation decides to offer it.
Dynamic linking has many advantages over static linking; the ability to use
dlopen()/LoadLibrary() is one of them, but there are other important benefits:
for a set of related applications, dynamic linking reduces the overall
memory footprint, saves disk space, improves application startup time,
lowers deployment costs, and allows us to fix implementation bugs in
libraries without the need to rebuild all depending executables.
On my Linux system, dynamic linking is pretty much the norm; the use of
static libraries is exceptional, and mostly (but not always) a symptom of
API instability. That said, for the vast majority of source packages,
generating a static library or executable is simply a matter of changing a
single build-time flag. I therefore suspect that most uses of dynamic
linking are instances of the deferred linking scenario, as opposed to the
scenario supported by dlopen().
Whatever we may think about that, at the very least, it shows that deferred
linking is an important use case, and that we should be careful not to
break it.
> > > The only reason I can see why the standard might want to address
> > > the issue is to offer something different.
>
> > That is what appears to be happening in N1496; please see below.
>
> That's not the impression I got from a quick reading. The Unix
> implementation of deferred linking (only doing the final parts of the
> linking at program start-up) is currently conforming, and I don't see
> anything in N1496 which would change that.
Are you sure? N1496 specifically says that it only addresses the deferred
linking case, and what it proposes is certainly different from 'the current
Unix implementation'. (By the way, we should be careful with that name;
there are many different Unixes around, which is why facilities such as
libtool exist.)
[snip]
> N1496 only speaks about a totally new feature. It doesn't make any
> currently conforming code non conforming. If you don't like the new
> feature, don't use it. There's nothing in N1496 which would make the
> current deferred linking used in Unix non-conformant.
I guess I still don't get it. What is this new feature N1496 speaks about?
It can't be deferred linking, because the current standard doesn't
differentiate between static linking and deferred linking.
[snip]
> For the moment, the only thing I see in N1496 is a proposal to introduce
> a new type of linkage, and even that is rather vague. I would certainly
> hope that some new linkage wouldn't mysteriously appear in my current
> programs without my doing something.
The trouble is that N1496 doesn't just propose to introduce a new type of
linkage; in passing, it also proposes to change the semantics of external
linkage, which is an existing type of linkage. And it is this change that
will, sometimes mysteriously, break existing code.
- Wil
--
Wil Evers, DOOSYS R&D, Utrecht, Holland
[Wil underscore Evers at doosys dot com]
Author: petebecker@acm.org (Pete Becker)
Date: Sun, 13 Jun 2004 19:11:47 +0000 (UTC)
Wil Evers wrote:
>
> The trouble is that N1496 doesn't just propose to introduce a new type of
> linkage; in passing, it also proposes to change the semantics of external
> linkage, which is an existing type of linkage. And it is this change that
> will, sometimes mysteriously, break existing code.
>
The trouble is that you keep stating conclusions without explaining your
reasoning. The intention is that what currently is defined as a program
will continue to be defined as a program, with the same meaning. It
would consist of a single linkage unit, so the rules about external
linkage and multiple linkage units simply do not apply. What difference
do you think would be present in a program consisting of a single
linkage unit?
--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)
Author: bouncer@dev.null (Wil Evers)
Date: Mon, 14 Jun 2004 16:15:24 +0000 (UTC)
In article <40CC7CCF.780CAD71@acm.org>, Pete Becker wrote:
> Wil Evers wrote:
>>
>> The trouble is that N1496 doesn't just propose to introduce a new type of
>> linkage; in passing, it also proposes to change the semantics of external
>> linkage, which is an existing type of linkage. And it is this change
>> that will, sometimes mysteriously, break existing code.
>
> The trouble is that you keep stating conclusions without explaining your
> reasoning. The intention is that what currently is defined as a program
> will continue to be defined as a program, with the same meaning. It
> would consist of a single linkage unit, so the rules about external
> linkage and multiple linkage units simply do not apply. What difference
> do you think would be present in a program consisting of a single
> linkage unit?
None; I agree that the change in external linkage semantics proposed in
N1496 only applies to programs consisting of multiple linkage units. The
issue is how that relates to currently conforming code, because the current
standard does not define what a linkage unit is, and it therefore does not
constrain the semantics of external linkage to a single linkage unit.
Problems would occur when code that relies on the current external linkage
semantics ends up in a program consisting of multiple linkage units.
Consider the following example:
template <typename T>
T& get_default_instance()
{
    static T instance;
    return instance;
}
According to the current standard, for a single type C, all calls to
get_default_instance<C>() will return a reference to the same object. In
contrast, under the rules proposed in N1496, the reference returned would
depend on the linkage unit from which the call is made.
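To make that concrete, here is a small single-file sketch of the
guarantee being relied on; in the scenario under discussion, the two
accessor functions would live in different linkage units rather than
side by side:

#include <cassert>

template <typename T>
T& get_default_instance()
{
    static T instance;
    return instance;
}

struct Config { int verbosity; };

// In a real program these would sit in different translation units - and,
// in the N1496 scenario, in different linkage units.
void set_verbosity(int v) { get_default_instance<Config>().verbosity = v; }
int  current_verbosity()  { return get_default_instance<Config>().verbosity; }

int main()
{
    set_verbosity(3);
    // Holds under the current standard: every call refers to the single
    // static object of the one instantiation. Under the proposed rules,
    // calls made from different linkage units could see different objects.
    assert(current_verbosity() == 3);
    return 0;
}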
This is a silent change: the code remains legal, but has a different
meaning. Furthermore, there is nothing the implementor of this template,
who has no control over the linkage model, can do about this: N1496 does
not propose support for shared templates.
- Wil
--
Wil Evers, DOOSYS R&D, Utrecht, Holland
[Wil underscore Evers at doosys dot com]
Author: kanze@gabi-soft.fr
Date: Fri, 11 Jun 2004 15:10:54 +0000 (UTC)
Wil Evers <bouncer@dev.null> wrote in message
news:<ca4e8g$rlc$1@news.cistron.nl>...
> In article <d6652001.0406040046.15e2d2da@posting.google.com>,
> kanze@gabi-soft.fr wrote:
> > Wil Evers <bouncer@dev.null> wrote in message
> > news:<c9i1ei$vqn$1@news.cistron.nl>...
> > > In article <d6652001.0405280506.37f30ecc@posting.google.com>,
> > > kanze@gabi-soft.fr wrote:
> > > > There's nothing wrong with having (or wanting) such a feature,
> > > > but I don't see the relevance with regards to standardization.
> > > IMHO, the relevance is this: (1) Will the standard allow me to
> > > use the same source code without having to specify which linking
> > > model is used?
> > The "linking model", as you seem to think of it, is an
> > implementation issue. The standard currently supports what you seem
> > to be asking for; supports, in the sense of allowing it as a legal
> > implementation technique, but not requiring it. If I understand
> > you correctly, you want dynamic linking to work exactly like static
> > linking, except to take place later. When the link takes place (as
> > long as it takes place before the execution of the program, or acts
> > as if it did) is an implementation issue, and not governed by the
> > standard.
> I agree. I never thought of the programs and libraries I routinely
> build for Linux as non-compliant just because they use dynamic
> linking.
As long as it is the implicit dynamic linking, and you don't invoke
dlopen explicitly.
2.1/9 strongly suggests, if not requires, that linking be finished
before program execution starts. As a programmer, however, program
execution only starts when the start-up routine reaches your code; an
implementation which only saved the source code, and then did a compile,
link and go when you invoked your code, would be fully compliant.
I'll admit that I don't see much use for such dynamic linking, but it is
currently conforming, if the implementation decides to offer it.
> > The only reason I can see why the standard might want to address
> > the issue is to offer something different.
> That is what appears to be happening in N1496; please see below.
That's not the impression I got from a quick reading. The Unix
implementation of deferred linking (only doing the final parts of the
linking at program start-up) is currently conforming, and I don't see
anything in N1496 which would change that.
Personally, of course, I don't consider this "dynamic" linking, but
"differed" linking; it is, as far as my program is concerned, a static
link, which simply happens to take place later and in a different
environment than the traditional static link.
> > > (2): Will I be able to use that code to build DLLs/shared
> > > libraries on all major platforms?
> > On those that support it. That's the current situation.
> > If I understand you correctly, what you want is a particular
> > implementation option that is fully conformant with the current
> > standard. Unix offers this option, Windows doesn't, so you want
> > the standard to somehow "require" it. The problem is, that isn't
> > the role of the standard. The standard specifies what is a
> > conforming program. It doesn't specify how you invoke the
> > compiler, linker, etc., to get from your sources to the executable.
> But that will change when the suggestions in N1496 are adopted by the
> standard, because it places additional requirements on currently
> conforming code, if that code is used to build dynamic libraries.
N1496 only speaks about a totally new feature. It doesn't make any
currently conforming code non conforming. If you don't like the new
feature, don't use it. There's nothing in N1496 which would make the
current deferred linking used in Unix non-conformant. And no serious
vendor will stop supporting it -- backward compatibility is something
serious vendors take seriously.
> This is not a problem for Windows programmers, because they've never
> been able to use strictly conforming code to build DLLs.
That's because DLL's are designed to address a different problem. They
have only a vague relationship with Unix's shared objects.
> However, it *is* a problem for Unix programmers, because they will
> have to change currently conforming code to meet these additional
> requirements - at least if the code, and the way it is used, is to
> remain conforming.
If the code is conforming today, it will be conforming tomorrow.
> From that point on, we can no longer use the same code to build both
> static and shared libraries.
That's an OS decision, not one for the standard's committee. The
standard talks about conformity of a single program. Whether parts of
the code of that program are, or can be, physically shared with another
program is beyond the scope of the standard.
For the moment, the only thing I see in N1496 is a proposal to introduce
a new type of linkage, and even that is rather vague. I would certainly
hope that some new linkage wouldn't mysteriously appear in my current
programs without my doing something.
> I could probably live with these code changes if I had a tool (say,
> the build-time linker) that would tell me about any missing
> imports/exports, so I could add the required declarations. However,
> some of the semantic changes in N1496, especially when it comes to
> implicit template instantiations, are silent: the program will still
> link, but it will behave differently. Which brings me to my final
> complaint: I can only fix this by using explicit template
> instantiation and assign each such instantiation to a specific shared
> library. To put it mildly: for a large application that heavily uses
> templates, that's *a lot* of work.
I still don't see the problem. But then, I'll admit that about the only
use I've found for deferred linking is to reduce program robustness, by
introducing additional unknowns in the linking process.
And I don't see anything in the proposal which would render the current
deferred linking in Unix non-conforming.
--
James Kanze GABI Software
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Author: bouncer@dev.null (Wil Evers)
Date: Fri, 11 Jun 2004 16:09:44 +0000 (UTC)
In article <40C79DB2.58D007FD@acm.org>, Pete Becker wrote:
> Wil Evers wrote:
>>
>> However, it *is* a
>> problem for Unix programmers, because they will have to change currently
>> conforming code to meet these additional requirements - at least if the
>> code, and the way it is used, is to remain conforming. From that point
>> on, we can no longer use the same code to build both static and shared
>> libraries.
>
> Nonsense. As James pointed out, Unix-style dynamic linking conforms to
> the C++ standard -- as you say, it's because Unix dynamic linking
> imitates static linking. That won't change.
Nonsense? If a recipe for writing code for dynamic libraries is to be
included in the standard, it should aim at broad acceptance; that's what
standards are for. Over the last few weeks, some people, including me,
have given specific reasons why the model described in N1496 might not be
as broadly accepted as one would hope - reasons based on firmly established
practice in a not completely insignificant part of the C++ community.
What I'm worried about is the way these concerns are being addressed. From
a standardization perspective, telling developers to disregard the
suggested future standard model and use some platform-specific backward
compatibility mode instead is not the answer; it's just a polite way to
ignore these concerns.
> What 1496 aims at is being
> able to write portable code that uses dynamic libraries. That requires
> some thought about how the ODR applies, etc. If you want to write code
> that uses dynamic libraries portably, yes, you'll need to change your
> coding style. What you get for that is something that you don't have
> today: portability across systems.
But what if we can achieve portability across systems without breaking
currently conforming code, and with the added benefit of portability
between dynamically and statically linked programs?
- Wil
--
Wil Evers, DOOSYS R&D, Utrecht, Holland
[Wil underscore Evers at doosys dot com]
Author: francis@robinton.demon.co.uk (Francis Glassborow)
Date: Fri, 11 Jun 2004 17:32:03 +0000 (UTC)
In article <cabvb3$h2u$1@news.cistron.nl>, Wil Evers <bouncer@dev.null>
writes
>But what if we can achieve portability across systems without breaking
>currently conforming code, and with the added benefit of portability
>between dynamically and statically linked programs?
Fine, so who is going to show us how to achieve that? What is really
frustrating is that some people spend time saying they do not like a
discussion paper. Fine, but how are we supposed to address a subject if
not by discussion? The paper in question was asked for by the Evolution
work group of WG21 exactly because very few people understand the
different models currently in use.
So unix has this mechanism that effectively ignores the idea of dynamic
libraries and simply uses delayed linking. That has some dangers as well
as some advantages. When I statically link and deliver an application I
know what libraries it was linked with and can take (indeed should take)
responsibility for the results. However if it is dynamically linked in
the way that unix does I would be very foolhardy to take responsibility
for the consequences because I have no way to ensure that extern linked
names I use mean what I intended them to do.
Note that the last paragraph does not make unix wrong but it does mean
that there is room even in a unix environment for something that is more
than delayed static linking. I would be very happy if OSs such as the
Windows family supported delayed 'static' linking. However I would also
be very happy if unix also supported a Windows like DLL based system. I
would be even happier if both would also support something like SOM (for
those that can remember that concept from IBM)
Now could we get on with the technical discussion so that we can move
forward?
--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects
Author: Wil Evers <bouncer@dev.null>
Date: Wed, 9 Jun 2004 14:52:33 +0000 (UTC)
In article <d6652001.0406040046.15e2d2da@posting.google.com>,
kanze@gabi-soft.fr wrote:
> Wil Evers <bouncer@dev.null> wrote in message
> news:<c9i1ei$vqn$1@news.cistron.nl>...
> >
> > In article <d6652001.0405280506.37f30ecc@posting.google.com>,
> > kanze@gabi-soft.fr wrote:
> > >
> > > There's nothing wrong with having (or wanting) such a feature, but I
> > > don't see the relevance with regards to standardization.
> >
> > IMHO, the relevance is this: (1) Will the standard allow me to use the
> > same source code without having to specify which linking model is
> > used?
>
> The "linking model", as you seem to think of it, is an implementation
> issue. The standard currently supports what you seem to be asking for;
> supports, in the sense of allowing it as a legal implementation
> technique, but not requiring it. If I understand you correctly, you
> want dynamic linking to work exactly like static linking, except to take
> place later. When the link takes place (as long as it takes place
> before the execution of the program, or acts as if it did) is an
> implementation issue, and not governed by the standard.
I agree. I never thought of the programs and libraries I routinely build
for Linux as non-compliant just because they use dynamic linking.
> The only reason I can see why the standard might want to address the
> issue is to offer something different.
That is what appears to be happening in N1496; please see below.
> > (2): Will I be able to use that code to build DLLs/shared libraries on
> > all major platforms?
>
> On those that support it. That's the current situation.
>
> If I understand you correctly, what you want is a particular
> implementation option that is fully conformant with the current
> standard. Unix offers this option, Windows doesn't, so you want the
> standard to somehow "require" it. The problem is, that isn't the role
> of the standard. The standard specifies what is a conforming program.
> It doesn't specify how you invoke the compiler, linker, etc., to get
> from your sources to the executable.
But that will change when the suggestions in N1496 are adopted by the
standard, because it places additional requirements on currently conforming
code, if that code is used to build dynamic libraries.
This is not a problem for Windows programmers, because they've never been
able to use strictly conforming code to build DLLs. However, it *is* a
problem for Unix programmers, because they will have to change currently
conforming code to meet these additional requirements - at least if the
code, and the way it is used, is to remain conforming. From that point on,
we can no longer use the same code to build both static and shared
libraries.
I could probably live with these code changes if I had a tool (say, the
build-time linker) that would tell me about any missing imports/exports, so
I could add the required declarations. However, some of the semantic
changes in N1496, especially when it comes to implicit template
instantiations, are silent: the program will still link, but it will behave
differently. Which brings me to my final complaint: I can only fix this by
using explicit template instantiation and assign each such instantiation to
a specific shared library. To put it mildly: for a large application that
heavily uses templates, that's *a lot* of work.
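For illustration only, a hypothetical single-file reduction of that
workaround; in a real build the explicit instantiation would live in one
source file of the shared library chosen to own it, and the import/export
annotations implied by N1496 are omitted because their syntax is still open:

template <typename T>
T& get_default_instance()
{
    static T instance;
    return instance;
}

struct Config { int verbosity; };

// Explicit instantiation: forces this translation unit (and hence the
// library it is compiled into) to own the instantiation and its static object.
template Config& get_default_instance<Config>();

// Other linkage units would then suppress their own implicit instantiations,
// e.g. (as a later-standard or vendor-extension declaration):
//     extern template Config& get_default_instance<Config>();

int main()
{
    get_default_instance<Config>().verbosity = 1;
    return 0;
}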
[ followups set to comp.std.c++ ]
- Wil
--
Wil Evers, DOOSYS R&D, Utrecht, Holland
[Wil underscore Evers at doosys dot com]
Author: petebecker@acm.org (Pete Becker)
Date: Thu, 10 Jun 2004 17:42:38 +0000 (UTC)
Wil Evers wrote:
>
> However, it *is* a
> problem for Unix programmers, because they will have to change currently
> conforming code to meet these additional requirements - at least if the
> code, and the way it is used, is to remain conforming. From that point on,
> we can no longer use the same code to build both static and shared
> libraries.
Nonsense. As James pointed out, Unix-style dynamic linking conforms to
the C++ standard -- as you say, it's because Unix dynamic linking
imitates static linking. That won't change. What 1496 aims at is being
able to write portable code that uses dynamic libraries. That requires
some thought about how the ODR applies, etc. If you want to write code
that uses dynamic libraries portably, yes, you'll need to change your
coding style. What you get for that is something that you don't have
today: portability across systems.
--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)