Topic: Lazy allocation and conformance


Author: kanze@gabi-soft.de
Date: 2000/11/11
fjh@cs.mu.OZ.AU (Fergus Henderson) writes:

|>  Valentin Bonnard <Valentin.Bonnard@free.fr> writes:

|>  >Fergus Henderson wrote:

|>  >> My interpretation of the C++ standard is that this behaviour does
|>  >> not contravene the C++ standard, since each conforming
|>  >> implementations is only required to execute programs "within its
|>  >> resource limits".

|>  >Specific wording overrides general wording, so the mention of what
|>  >to do in a particular case overrides the general rule that
|>  >implementations execute programs "within [their] resource limits".

|>  Default logic is a reasonable way to interpret everyday
|>  conversation, but standards should be written using classical logic,
|>  not default logic.  That is, if specific wording and general wording
|>  conflict, the conclusion should just be that the standard is
|>  self-contradictory.  General rules that have specific exceptions
|>  should be described as such, e.g. using phrases such as "except as
|>  mentioned elsewhere".

Agreed.  Which means we have a definite defect in the standard, since
there are at least two places where specific rules specify what is to
happen in the case of a program exceeding its resource limits:
malloc/operator new, and ostream::flush/close (although the requirements
are a lot vaguer in the second case).

|>  >Why would the return value of malloc, in case no memory is
|>  >available, be specified otherwise?

|>  Perhaps because the C and C++ committees have been very slow to
|>  recognize the idea of "recommended practice".

Or they overlooked the exceptions when formulating the resource-limits
escape clause.  (If I understand correctly what you mean by "recommended
practice", it is something like casting a pointer to an integer, where
the standard has words to the effect that the results are undefined, but
the intent is that they be unsurprising to someone familiar with the
machine architecture.)

--
James Kanze                               mailto:kanze@gabi-soft.de
Conseils en informatique orientée objet/
                   Beratung in objektorientierter Datenverarbeitung
Ziegelhüttenweg 17a, 60598 Frankfurt, Germany Tel. +49(069)63198627

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]
[ Note that the FAQ URL has changed!  Please update your bookmarks.     ]






Author: kanze@gabi-soft.de
Date: Sun, 12 Nov 2000 03:02:06 GMT
Bernd Strieder <strieder@student.uni-kl.de> writes:

|>  A page of code must be in RAM to be executed, room has to be made,
|>  whenever RAM is full.  What should an OS do in the case, when all
|>  swap space is used, all RAM pages are used? No RAM page can be
|>  swapped out. And now just one page of some code of some process has
|>  to be loaded into some RAM page to continue execution of this
|>  process. It must fail. If for example all other processes are
|>  waiting on some work by that process, then waiting is not an option
|>  for the OS, since it results in a deadlock. The only thing any OS
|>  can do here is resigning or revoking some memory by killing some
|>  process.

That's an interesting scenario I hadn't thought of.  On Unix systems
derived from System V (but not on Berkeley Unixes), the total "swap
space" is the sum of the space in memory and on disk.  If all of the
swap pages on disk are occupied, and all of the swap pages in memory are
occupied by RAM pages, then you have a problem.

I suspect that most OS's just ignore the problem, since it can't occur
in practice.  Some of the pages in memory will always be executable code
(ROM pages), with the read-only executable file as backing store, and
can simply be dropped.  Nevertheless, I'm willing to bet that if you
create a configuration where almost all of the pages are RAM pages, some
violent thrashing is going to start occurring.

Curiously, the C/C++ standard does allow undefined behavior in this
case.  It allows it, I think, in all cases of resource limits being
exceeded, except in the few special cases where it does provide an
explicit mechanism for reporting the error.  A conforming system must
correctly report a lack of resources for insufficient heap memory or
insufficient file space when writing.  (It's interesting that the people
saying that the resource limit clause allows lazy commit aren't also
saying that a program doesn't have to report write errors due to file
system full.  Conceptually, it is exactly the same problem.)
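
As a minimal illustration (the file name output.dat is just a
placeholder of my choosing), a program observes both kinds of report
like this:

    #include <cstdlib>
    #include <fstream>
    #include <iostream>

    int main()
    {
        // Heap exhaustion: malloc reports it by returning a null pointer;
        // operator new reports it by throwing std::bad_alloc.
        void* p = std::malloc( 10 * 1024 * 1024 );
        if ( p == NULL ) {
            std::cerr << "malloc failed\n";
            return EXIT_FAILURE;
        }
        std::free( p );

        // File system full: the stream goes into a failed state, which
        // the program can test after writing and flushing.
        std::ofstream out( "output.dat" );
        out << "some data";
        out.flush();
        if ( ! out ) {
            std::cerr << "write error\n";
            return EXIT_FAILURE;
        }
        return EXIT_SUCCESS;
    }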

--
James Kanze                               mailto:kanze@gabi-soft.de
Conseils en informatique orientée objet/
                   Beratung in objektorientierter Datenverarbeitung
Ziegelhüttenweg 17a, 60598 Frankfurt, Germany Tel. +49(069)63198627






Author: fjh@cs.mu.OZ.AU (Fergus Henderson)
Date: 2000/11/12
kanze@gabi-soft.de writes:

>(It's interesting that the people
>saying that the resource limit clause allows lazy commit aren't also
>saying that a program doesn't have to report write errors due to file
>system full.

Did you ask?

>Conceptually, it is exactly the same problem.)

Yes, I agree.  And so my conclusion is the same.  I think the current
C++ standard can reasonably be interpreted as not requiring
implementations to handle such situations.

Note that most implementations buffer I/O requests.  And fflush() is
not the same as the Unix fsync().  In most implementations, fflush()
sends the data to the OS, but does not wait until it actually hits the
disk.  This conforms with the C standard.  (I think the same is true
of flush() in C++ iostreams, but I didn't check it.) So even after
writing some data and flushing the stream, there is no guarantee that
the operation has actually completed.  If the fflush() returns before
the operation has completed, it can't report all failures.
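
To make the distinction concrete, here is a minimal sketch (assuming
POSIX, since fsync() and fileno() are not part of ISO C or C++) of what
it takes to get the data at least as far as the local disk:

    #include <stdio.h>      /* fflush(); on POSIX also declares fileno() */
    #include <unistd.h>     /* fsync() -- POSIX, not ISO C or C++ */

    /* Flush the stdio buffer to the OS, then ask the OS to force the data
       out to the device.  Returns 0 on success, -1 on a reported failure.
       Even this cannot catch an error that a remote file server only
       discovers later. */
    int flush_to_disk(FILE* fp)
    {
        if (fflush(fp) != 0)            /* ISO C: hand the data to the OS */
            return -1;
        if (fsync(fileno(fp)) != 0)     /* POSIX: wait until it hits disk */
            return -1;
        return 0;
    }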

In the case of network file systems, it might be impossible for the
local OS to ascertain whether the file system will be full when the
write request finally reaches the remote system, since even if there
is space available at the time the request is sent, some other
client's write request might use up that space in the mean time.
The only way to check such things and to return appropriate return
values would be to force all I/O, or at least all flushes, to be
completely synchronous.  This could have a very significant impact on
performance.  So there might be legitimate reasons for an implementation
to not report errors in such cases.  Of course for applications where
proper error handling is important, you should not use such a file
system in such a mode.  But there are many applications where it is
adequate for the error to be reported directly to the user via some
other channel, rather than being reported to the program via function
return values.  I'm not convinced that the C++ standard was intended
to prohibit such implementations.

--
Fergus Henderson <fjh@cs.mu.oz.au>  |  "I have always known that the pursuit
                                    |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.







Author: fjh@cs.mu.OZ.AU (Fergus Henderson)
Date: 2000/11/12
kanze@gabi-soft.de writes:

>Bernd Strieder <strieder@student.uni-kl.de> writes:
>
>|>  A page of code must be in RAM to be executed, room has to be made,
>|>  whenever RAM is full.  What should an OS do in the case, when all
>|>  swap space is used, all RAM pages are used?
>
>That's an interesting scenario I hadn't thought of. [...]
>I suspect that most OS's just ignore the problem since it can't occur in
>practice.

Another interesting scenario that you may not have thought of is when
you have a C++ implementation that uses a just-in-time compiler.
In order to execute some code for the first time, it may need to
allocate space for the machine code that the JIT will generate.

If this scenario seems far-fetched, bear in mind that Microsoft's
latest C++ compiler -- the one for their ".NET" system -- provides
exactly this model.  (However, to tell the full story, I must point
out that currently they don't do so for standard C++ code -- to use
this model, you need to make use of some Microsoft extensions.)

>Curiously, the C/C++ standard does allow undefined behavior in this
>case.  It allows it, I think, in all cases of resource limits being
>exceeded.  Except in the few special cases where it does provide an
>explicit mechanism for reporting the error.

In that case, what is the benefit of adopting the more restrictive
interpretation that you have taken, which requires implementations to
report errors in those cases, rather than the looser interpretation
which leaves such things to quality of implementation?

--
Fergus Henderson <fjh@cs.mu.oz.au>  |  "I have always known that the pursuit
                                    |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.







Author: James.Kanze@dresdner-bank.com
Date: 2000/11/13
In article <8uluvd$ovs$1@mulga.cs.mu.OZ.AU>,
  fjh@cs.mu.OZ.AU (Fergus Henderson) wrote:
> kanze@gabi-soft.de writes:

> >(It's interesting that the people saying that the resource limit
> >clause allows lazy commit aren't also saying that a program doesn't
> >have to report write errors due to file system full.

> Did you ask?

> >Conceptually, it is exactly the same problem.)

> Yes, I agree.  And so my conclusion is the same.  I think the
> current C++ standard can reasonably be interpreted as not requiring
> implementations to handle such situations.

I understand your reasoning as to why they shouldn't be required to,
but I think you should consider the implications carefully.

First, formally, it means that any use of operator new or any write to
a file is potentially undefined behavior, according to the standard.
(Of course, so is any function call, since there is no disputing that
stack overflow is undefined behavior.)

In practice, it means that you cannot write typical Unix utilities for
use in shell scripts and makefiles in C++.  Lazy commit isn't
typically the problem, since it either results in the desired
behavior, or a core dump.  Not signalling write errors, however, will
typically make the semantics of such scripts undefined.

And from experience, the results will *NOT* be acceptable to users.
Things like:

     filter < xxx > /tmp/$$ && mv /tmp/$$ xxx

are all too common in shell scripts.
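
For illustration, a minimal sketch of the discipline such a filter
needs -- check the output stream before exiting, so that the && above
actually means something (the filtering itself is elided):

    #include <cstdlib>
    #include <iostream>
    #include <string>

    int main()
    {
        std::string line;
        while ( std::getline( std::cin, line ) ) {
            std::cout << line << '\n';      // real filtering elided
        }
        std::cout.flush();
        // If any write failed (e.g. file system full), exit with a
        // failure status so that the mv never replaces xxx with a
        // truncated file.
        return std::cout ? EXIT_SUCCESS : EXIT_FAILURE;
    }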

--
James Kanze                               mailto:kanze@gabi-soft.de
Conseils en informatique orientée objet/
                   Beratung in objektorientierter Datenverarbeitung
Ziegelhüttenweg 17a, 60598 Frankfurt, Germany Tel. +49(069)63198627









Author: nmm1@cus.cam.ac.uk (Nick Maclaren)
Date: 2000/11/13
In article <8uluvd$ovs$1@mulga.cs.mu.OZ.AU>,
Fergus Henderson <fjh@cs.mu.OZ.AU> wrote:
>kanze@gabi-soft.de writes:
>
>>(It's interesting that the people
>>saying that the resource limit clause allows lazy commit aren't also
>>saying that a program doesn't have to report write errors due to file
>>system full.
>
>Did you ask?
>
>>Conceptually, it is exactly the same problem.)
>
>Yes, I agree.  And so my conclusion is the same.  I think the current
>C++ standard can reasonably be interpreted as not requiring
>implementations to handle such situations.

I don't know about C++, but I think that it is true for most
modern languages.

>Note that most implementations buffer I/O requests.  And fflush() is
>not the same as the Unix fsync().  In most implementations, fflush()
>sends the data to the OS, but does not wait until it actually hits the
>disk.  This conforms with the C standard.  (I think the same is true
>of flush() in C++ iostreams, but I didn't check it.) So even after
>writing some data and flushing the stream, there is no guarantee that
>the operation has actually completed.  If the fflush() returns before
>the operation has completed, it can't report all failures.

We could go through the whole horrible history of flush, fflush,
fsync and sync, but it wouldn't get us far.  In my experience,
fflush calls flush/fsync in some implementations but not all, and
some abominable systems do not synchronise on close, EVEN if
both fflush and fsync have been called just beforehand :-(

>In the case of network file systems, it might be impossible for the
>local OS to ascertain whether the file system will be full when the
>write request finally reaches the remote system, since even if there
>is space available at the time the request is sent, some other
>client's write request might use up that space in the mean time.

Agreed.

>The only way to check such things and to return appropriate return
>values would be to force all I/O, or at least all flushes, to be
>completely synchronous.  This could have a very significant impact on
>performance.  So there might be legitimate reasons for an implementation
>to not report errors in such cases.  Of course for applications where
>proper error handling is important, you should not use such a file
>system in such a mode.  But there are many applications where it is
>adequate for the error to be reported directly to the user via some
>other channel, rather than being reported to the program via function
>return values.  I'm not convinced that the C++ standard was intended
>to prohibit such implementations.

I strongly disagree with "the ONLY way" - there are several good
solutions that do not have the problems you mention.  But, as far
as I know, no Unix has them.  And they wouldn't do any good in
this context until ALL Unices involved in the file transfer path
have them.  Other than that, I agree.

And discussions about how this could be got right, but not starting
from here, are definitely off-group ....


Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QG, England.
Email:  nmm1@cam.ac.uk
Tel.:  +44 1223 334761    Fax:  +44 1223 334679







Author: Bernd Strieder <strieder@student.uni-kl.de>
Date: Fri, 10 Nov 2000 00:12:22 GMT
Christian Bau wrote:
>
> In article <3A06F100.9A1757B6@student.uni-kl.de>, Bernd Strieder
> <strieder@student.uni-kl.de> wrote:
>
> > What about a system that has to revoke some memory malloc'ed earlier for
> > a process of higher priority? Oh yes, C++ apps always have highest
> > priority ;-)
>
> This is complete nonsense. If an operating system frees memory malloced by
> some application while that application is running the operating system
> designers should be shot.

Freeing by the OS means killing, isn't that clear? I was also joking
about the strong self-assurance of the C++ community regarding what can
be expected from the OSes that C++ apps run on, and tried to make that
really clear with the little smiley. "Priority" was meant, in both
uses, as a very abstract measure of importance of some kind, not UNIX
process priority or anything else.

>
> On most OS'es I write for, the development system lets me specify a
> guaranteed amount of stack space, and that amount is safe to use. Anything
> else I would consider crap.

One app alone cannot produce the problem we are talking about, but many
together can. One of those apps will be the first to see the problem,
and perhaps it is even the one with the lowest total resource
consumption, also programmed conformantly in C++.

>
> > Similar things might happen to memory for the executable. There might be
> > no room left for the next page of code. This produces a fatal runtime
> > error, IOW a crash. It is in common not possible to predict the needed
> > number of pages for code, perhaps there are some pessimistic
> > approximations.
>
> On most OS'es I write for, there is always room for the next page of the
> code, simply because executable files are added to the swap space. If an
> application can crash for the reason you give, the OS is crap.

A page of code must be in RAM to be executed, room has to be made,
whenever RAM is full.
What should an OS do in the case, when all swap space is used, all RAM
pages are used? No RAM page can be swapped out. And now just one page of
some code of some process has to be loaded into some RAM page to
continue execution of this process. It must fail. If for example all
other processes are waiting on some work by that process, then waiting
is not an option for the OS, since it results in a deadlock. The only
thing any OS can do here is resigning or revoking some memory by killing
some process.

So given that an OS is willing to give away all its resources, which is
what it ideally should be able to do, it can be made to kill a process
or itself. If it does not give away all its resources, then there must
be situations where it refuses some allocations. If this happens at a
call to malloc, then we are lucky; if it happens at the time when stack
space is enlarged, or when a page of code has to be loaded into RAM,
then there is a problem. It might be difficult to construct an example
that makes the problem happen, but there is a finite amount of virtual
memory and it can be reached, unless the OS prevents it under nearly
all circumstances by refusing to allocate some memory much earlier.

After all, there is no soft way to handle the case where the stack
cannot be enlarged, or where execution cannot proceed for lack of RAM.
Just write an app that uses a considerable, but allowed, amount of
stack. Start instances of this app until your virtual memory is filled.
What can any OS do other than refuse to start instances when memory
seems to become scarce, or kill one process the moment it tries to get
more memory than the system is willing to give? I doubt that there is
an OS that refuses to start a possibly small app because there is not
enough room for the maximum stack that app might use.

The C++ standard should make as few assumptions as possible about the
environment, the OS, or anything like that; this is clear. The standard
can definitely not demand that, under any circumstances, a conformant
app, once it is executing, must never be killed. If that guarantee is
wanted, then a nice protocol between the OS and an app would have to be
invented, to predict actual usage, to resolve shortages, and probably
other things. The information needed to create the right strategy can
be provided more easily by the programmers and operators of special
machines, where the overall design from hardware to software is under
control, so there it is easier to have that safety. I doubt that C++
was written just for safe environments.

IMO the C++ standard should make clear that execution and automatic
variable space need resources too, while there is in general no easy
way to handle problems with allocating those resources. Since the
resources needed might be the same as those needed for dynamically
allocated memory, the problems may extend there as well, which is
completely platform-dependent. Such platforms might, under rare
conditions of extremely scarce resources, show the behaviour of
crashing a conformant app on accessing successfully allocated (via
operator new or malloc) memory. Without platform-specific support it is
generally not certain that a conformant C++ app never crashes (shows
undefined behaviour) in situations of extremely scarce memory
resources. The run-time environment may refuse to start a conformant
app if resources are scarce.

A wording like this does not impose unfulfillable requirements on C++
implementors; it makes clear that there are a lot of platform-specific
issues with resource allocation, and that the C++ standard currently
cannot provide error handling for all problems induced by the run-time
environment. The C++ implementors have the choice to make the best of
the platform they program for, and C++ users are warned to take care
about the platform if they have special safety requirements. And
finally it is made clear that there is no problem if there are enough
resources.

>
> You get this completely wrong. If the OS and compiler cannot guarantee
> that a conforming app runs without crashing, then the OS and compiler
> combination are not conforming, simple as that.
>

Being conforming under that definition is not possible without
platform-specific runtime support that the majority of platforms do not
give. Conforming should be defined in a way that requires no uncommon
platform-specific guarantees.

BTW, should it be called conforming if the OS refused to start an
otherwise conforming C++ app because it estimated that there were not
enough resources, although there actually would have been enough? Is it
possible to estimate a priori the amount of resources a C++ app needs?
Is there any programming language clearly better than C++ in this
respect?

Bernd Strieder






Author: fjh@cs.mu.OZ.AU (Fergus Henderson)
Date: 2000/11/10
Valentin Bonnard <Valentin.Bonnard@free.fr> writes:

>Fergus Henderson wrote:
>
>> My interpretation of the C++ standard is that this behaviour does not
>> contravene the C++ standard, since each conforming implementations is
>> only required to execute programs "within its resource limits".
>
>Specific wording overrides general wording, so the mention of what to do
>in a particular case overrides the general rule that implementations
>execute programs "within [their] resource limits".

Default logic is a reasonable way to interpret everyday conversation,
but standards should be written using classical logic, not default
logic.  That is, if specific wording and general wording conflict, the
conclusion should just be that the standard is self-contradictory.
General rules that have specific exceptions should be described as
such, e.g. using phrases such as "except as mentioned elsewhere".

>Why would the return value of malloc, in case no memory is available,
>be specified otherwise?

Perhaps because the C and C++ committees have been very slow to
recognize the idea of "recommended practice".

--
Fergus Henderson <fjh@cs.mu.oz.au>  |  "I have always known that the pursuit
                                    |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.







Author: James.Kanze@dresdner-bank.com
Date: 2000/11/07
In article <3A013608.10B2651@student.uni-kl.de>,
  Bernd Strieder <strieder@student.uni-kl.de> wrote:
> James.Kanze@dresdner-bank.com wrote:

> > In article <39FDC669.491D9F33@student.uni-kl.de>,
> >   Bernd Strieder <strieder@student.uni-kl.de> wrote:
> > > kanze@gabi-soft.de wrote:

> > > > Bernd Strieder <strieder@student.uni-kl.de> writes:
> [...app deleted]
> > I'm not quite sure what you are testing here.  You never verify
> > the results of malloc, so obviously, you cannot detect if malloc
> > fails. I

> I was sure that my system won't fail on that 20MB, so I didn't
> test. My new throws AFAIK, so I would have noticed.

But the question concerned how malloc reacts when it fails.  Looking
at it in an environment where it cannot fail doesn't tell us much.

> Since I have no easy access to the needed information within the
> app, I built some breaks into it, to be able to look for the process
> information with system tools. This is the only true information
> about actual memory usage I can get.

The question concerns malloc, and what happens when not enough memory
is available.  The only way to test this is by ensuring that not
enough memory will be available, and then using malloc and seeing what
happens.

I would recommend against running such tests when others are also
working on the machine.  This involves stress testing the systems, and
not all systems react well under stress.  (Solaris 2.2 thrashed so
badly that for all practical purposes, it hung.  For about 20
minutes.)

> > You are just roughly explaining virtual memory.  It has nothing to
> > do with lazy allocation.  Where the pages are allocated is
> > unimportant.  That they are allocated is important.

> So we are talking about different allocation strategies of virtual
> memory. We are talking about allocation of addresses and allocation
> of actual memory pages.

Well, I'm not sure what you are talking about now, because I don't
know what your tool actually shows: allocated address space (I don't
think so), committed pages, mapped pages, or...

> > Good programs show good locality -- they use only a few virtual
> > memory pages at a time.  Bad programs thrash.  But again, this has
> > nothing to do with whether malloc (or more correctly, under Unix,
> > sbrk) uses lazy allocation or not.

> If you look at many processes allocating memory and how they use
> that memory, then it becomes important.

Quite.  A bad program, even in user space, can make things less
pleasant for other users.  How less pleasant depends on the OS, but
Unix really doesn't shine in this respect -- it was developed in the
days when people co-operated.  Some Unixes are worse than others,
however.  Under current versions of Solaris, you can allocate all the
memory you want, and it doesn't seem to affect other processes.  (This
was already true 10 or 12 years ago with the Sequent Unixes.)

> [...]

> > > SunOS 5.7 (Solaris 2.7) does do lazy allocation, or at least can
> > > be configured to do so.

> > Extensive trials with the above program never once caused any
> > process to core dump.  In every case, Solaris returned NULL after
> > enough mallocs.

> When addresses run out, every OS will be able to return 0.

My test accessed one byte every 100 (so at least one per page) in the
allocated blocks.  That was the whole point of the test.  Either
malloc returned a null pointer (no lazy commit), or sooner or later a
process got a core dump (lazy commit).  When I ran the Solaris tests,
I used a configuration with only about 250 MB virtual memory, and I
set my ulimits to infinite; since the Sparc address space is 4 GB, I
could be sure that if malloc failed, it would be because the pages
weren't available.
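
Reconstructed for illustration, the test was essentially of this form
(the 1 MB block size is my choice here, not necessarily the original's);
don't run it on a machine other people are using:

    #include <stdio.h>
    #include <stdlib.h>

    /* Allocate blocks forever, touching at least one byte per page of
       each block.  With immediate commit, malloc eventually returns NULL
       and the loop ends cleanly; with lazy commit, some process gets
       killed (or dumps core) on one of the touches instead. */
    int main(void)
    {
        const size_t block = 1024 * 1024;
        for (;;) {
            char* p = (char*)malloc(block);
            if (p == NULL) {
                printf("malloc returned NULL -- no lazy commit seen\n");
                return 0;
            }
            for (size_t i = 0; i < block; i += 100)   /* >= 1 per page */
                p[i] = 1;
        }
    }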

> > > >     AIX:        Does not do lazy allocation by default.  Lazy
> > > >                 allocation can be turned on (on a process by
> > > >                 process basis) by setting a specific
> > > >                 environment variable.

> > > AIX behaves like Solaris. At least where I tested, a highly
> > > loaded server with 5000 student accounts, 6 processors and
> > > typically some 100 users concurrently. I'm sure that this
> > > machine is tuned to do its best at extreme load.

> > What version of AIX?  AIX 3 does do lazy malloc.  AIX 4 doesn't by
> > default, but individual users can request it.

> AIX4.3?

I'm not sure.  My test showed no lazy malloc on the AIX at my last
customer site, but I don't know the version.  I've not dared to try it
here, since other people are actively using the machine.

The IBM manpage for malloc contains a note to the effect that
"AIX Version 3 uses a delayed paging slot allocation technique..."  So
versions 3 and before used lazy commit.  Documented this way, I
would assume that versions 4 and up don't.

> [...]

> > My impression is that we are measuring different things.  Your
> > program doesn't seem to address the problem directly; you depend
> > on some external measurements to determine what is going on.  But
> > do you really know what is being measured?

> It is the actual information of the running process, at the times I
> made it waiting for input.

But what information?

> Addresses must be allocated at the time of the allocation call from
> the app, or it wouldn't be possible to react to page faults to
> allocate the pages.

Not at all.  The pages can be allocated on the sbrk (the system call
under Unix).  Or the number of pages available can simply be
decremented by the number requested, without allocating anything.  Or
the system can basically do nothing except note that it has logically
allocated the address space.

In the last two cases, the pages will be mapped as nonexistent, and a
page fault will occur on access.  At this point, the system examines
whether the faulting address is in the mapped address space; if so, it
attempts to allocate a page.  If the number of pages available has
already been decremented, the allocation cannot fail; if it hasn't,
there may be a problem.
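
As a toy model of the difference (a sketch, not any real kernel's
bookkeeping): immediate commit does the accounting when the address
space is granted, while lazy commit defers it to the page fault, where
the only recourse left is to kill something.

    // Pages of RAM + swap not yet spoken for.
    struct VM { long pages_free; };

    // Immediate commit: reserve backing store at sbrk/malloc time, so the
    // later page fault cannot fail, and the allocation call can say no.
    bool grant_immediate( VM& vm, long pages )
    {
        if ( vm.pages_free < pages )
            return false;               // malloc/operator new reports this
        vm.pages_free -= pages;
        return true;
    }

    // Lazy commit: grant the address space unconditionally...
    bool grant_lazy( VM&, long ) { return true; }

    // ...and check only when a page is actually touched.
    bool handle_fault( VM& vm )
    {
        if ( vm.pages_free == 0 )
            return false;               // nothing left but to kill someone
        --vm.pages_free;
        return true;
    }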

From what I understand of System V virtual memory, there is no hard
mapping.  The mapping will change dynamically, and a page in memory
will generally not have a fixed backing page on disk -- the page on
disk will only be allocated when the memory page is to be paged out.

> What we are discussing about is, how firmly the
> system assures that actual pages of virtual memory are provided for
> those addresses.  And here my impression is that some OS don't do
> the bookkeeping of all processes together to guarantee the memory
> for all together. This is for efficiency reasons.

I don't think efficiency comes into it.  There are cases where lazy
commit is the better strategy, and cases where immediate commit is.
Generally speaking, where lazy commit is best, fixed commit will also
work, although the throughput of the machine may be limited.  On the
other hand, for large categories of commercial applications (servers,
etc.), lazy commit is simply not acceptable.

> > Also, see my description of what I believe to be the actual
> > Solaris behavior.  It effectively doesn't allocate the pages until
> > use, but it commits at the request, so that the allocation cannot
> > fail.  I suspect that your measures on Solaris are only showing
> > actual allocation, and not commitment.

> That is probable. But if they are committed but not allocated, we
> might have the perverse situation that there are a 100MB of actual
> pages free, but an allocation of 10MB for a very short-running
> process would fail.  I'm sure that this is not wanted.

If the 100MB have already been spoken for, it is what is wanted, in
almost every case, in commercial software.  The fact that my server
hasn't yet accessed the allocated pages because it is waiting for a
response from the data base doesn't mean anyone else can use them.
When the response comes, those pages had better be there.

--
James Kanze                               mailto:kanze@gabi-soft.de
Conseils en informatique orientée objet/
                   Beratung in objektorientierter Datenverarbeitung
Ziegelhüttenweg 17a, 60598 Frankfurt, Germany Tel. +49(069)63198627









Author: James.Kanze@dresdner-bank.com
Date: 2000/11/07
In article <3A012EF0.22ECBE8B@student.uni-kl.de>,
  Bernd Strieder <strieder@student.uni-kl.de> wrote:
> Fergus Henderson wrote:

> > kanze@gabi-soft.de writes:

> > In particular, even if the "always overcommit" mode is disabled by
> > setting the /proc/sys/vm/overcommit_memory parameter to 0 (false),
> > if you have say 300M of free virtual memory, and you have the
> > following sequence of actions

> >         process 1: allocate 200M
> >         process 2: allocate another 200M
> >         process 1: touch the allocated 200M
> >         process 2: touch the other allocated 200M

> > then both allocations will succeed, and the situation will be
> > resolved by killing one of the processes when it tries to touch
> > the memory.

> This is just the behaviour I expect from the AIX and Solaris boxes I
> have access to. Pages of virtual memory are allocated to a process
> when they are accessed the first time.

It may be the behavior you expect, but it is not the behavior that I
have actually seen.  In the case of Solaris, my tests were extensive
enough to ensure that this was the case.  In the case of AIX, my tests
were much more cursory, but the AIX documentation would seem to
indicate that this is not the case by default.

> > My interpretation of the C++ standard is that this behaviour does
> > not contravene the C++ standard, since each conforming
> > implementations is only required to execute programs "within its
> > resource limits".

> This is a wise thing, I hope it is in the standard. Still haven't
> found it.

It's been there since the C standard.  The interpretation above,
however, is only Fergus'.  In general, it is true that a system can
reject a program without cause, because of resource limits.  (The
corollary is, of course, that there is no such thing as a program
without undefined behavior.)  In the case of malloc/operator new,
however, the standard specifies what the behavior should be in case
the resource is not there, so the otherwise undefined behavior is in
this specific case defined.

--
James Kanze                               mailto:kanze@gabi-soft.de
Conseils en informatique orientée objet/
                   Beratung in objektorientierter Datenverarbeitung
Ziegelhüttenweg 17a, 60598 Frankfurt, Germany Tel. +49(069)63198627









Author: James.Kanze@dresdner-bank.com
Date: 2000/11/07
In article <3A06F100.9A1757B6@student.uni-kl.de>,
  Bernd Strieder <strieder@student.uni-kl.de> wrote:
> Valentin Bonnard wrote:

> > Fergus Henderson wrote:

> > > My interpretation of the C++ standard is that this behaviour
> > > does not contravene the C++ standard, since each conforming
> > > implementations is only required to execute programs "within its
> > > resource limits".

> > Specific wording overrides general wording, so the mention of what
> > to do in a particular case overrides the general rule that
> > implementations execute programs "within [their] resource
> > limits". Why would the return value of malloc in case no memory is
> > available would be specified otherwise ?

> What about a system that has to revoke some memory malloc'ed earlier
> for a process of higher priority? Oh yes, C++ apps always have
> highest priority ;-)

What about a system that terminates processes without notice on power
failure?

Generally speaking, the standard ignores what is outside of its
domain.  See how the C or the C++ standard goes about describing
files, for example.  The standard specifies a specific behavior for a
given program (supposing no undefined behavior).  If that program
doesn't present the given behavior, e.g. because it terminates
prematurely, then the implementation is not conforming.  Except that
the standard has an escape clause with regards to system resources.
More generally, of course, all implementations impose constraints in
order to be fully conforming.  Specific options, etc.  And not
executing a kill instruction on the process, presumably, although
frankly, I've never seen this one documented :-).

The C++ standard describes programs executing in an imaginary world,
without hardware failures, etc.  The C++ standard cannot, and does not
attempt to, forbid alpha particles from modifying the memory,
cleaning ladies from pulling the plug at the wrong moment, or even
systems which intentionally try to make the language useless.  (A
compiler which generated a five minute empty loop after each C++
instruction would be perfectly conforming.  And totally useless.)  The
C++ standard specifies a contract between the vendor and the user, but
I doubt that it is a contract that you could argue in court.  And it
is not the only thing that influences the compiler implementation.  No
compiler actually generates five minute empty loops after each C++
instruction, not because the standard forbids it, but because no
customer would buy it if it did.

Given this, it is really academic whether terminating a program
because it accessed memory the system said it could is allowed under
the clause of insufficient resources or not.  I happen to agree with
Valentin on this, but it really doesn't matter.  Because if the market
insists on lazy commit, the compilers/systems will do it, even if the
standard forbids it (except perhaps with a very special option for
conformance tests, like most compilers do with trigraphs).  And if the
market doesn't want it, compilers/systems won't do it, even if the
standard allows it.

At present, we have the situation where:

Windows NT:     The system offers the choice, via two different system
                calls.  The C/C++ libraries use the immediate commit.

Solaris:        No support for lazy commit, at least with regards to
                sbrk (and thus malloc and operator new).

AIX:            Immediate commit by default for sbrk.  Can be
                overridden by means of an environment variable.

HP/UX:          Immediate commit by default for sbrk (probably -- my
                tests were far from exhaustive).

Linux:          Lazy commit by default, possibly always in certain
                cases?

In all cases, these indications must be taken with a grain of salt.
As Fergus said, it is impossible by testing to exclude the possibility
that a system does lazy commit in certain circumstances.  All one can
say is that it is not the general behavior for a large number of
systems, and that in at least one case, this is the result of customer
pressure to suppress it.

> Since not all cases of lacking virtual memory can be treated as nice
> as failing mallocs, it does not matter that just malloc can react as
> it is presumed to do. E.g. failing to enlarge the stack produces a
> crash on all systems with virtual memory I know. AFAIK there is no
> C++ exception thrown, if the stack frame for a function just called
> cannot be allocated. There is in general no way to reserve stack
> space, but perhaps there are implementation defined ways.

This is a serious problem in certain cases.  None of the Unix systems
I've worked on reclaim stack space, and since stack usage typically
reaches its peak fairly quickly, this is usually not a problem.  In at
least one application, we explicitly recursed on start-up enough to
ensure sufficient stack space.
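
The start-up trick looked roughly like this (a sketch; the depth and
frame size are arbitrary, and it relies on the system never reclaiming
stack pages once they have been touched):

    // Touch a block of stack at start-up, so the pages are committed
    // before the application starts its real work.
    void reserve_stack( int frames )
    {
        volatile char page[ 4096 ];
        page[ 0 ] = 0;                  // force the page to be committed
        if ( frames > 1 ) {
            reserve_stack( frames - 1 );
            page[ 0 ] = 1;              // not a tail call: keep the frame
        }
    }

    int main()
    {
        reserve_stack( 512 );           // roughly 2 MB of stack
        // ... real application starts here ...
    }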

For critical applications, C/C++ vendors will offer extensions to
handle this, because the customers will require it.  For most
commercial applications, the risk of running out of stack space is
several orders of magnitude less than the risk of running out of heap,
and the customers are willing to ignore it.

> Similar things might happen to memory for the executable. There
> might be no room left for the next page of code. This produces a
> fatal runtime error, IOW a crash. It is in common not possible to
> predict the needed number of pages for code, perhaps there are some
> pessimistic approximations.

I'm not sure I understand this.  Are you talking about DLL's?  In
critical applications, it is standard practice to statically link
everything, in order to avoid this.  Or simply to load all DLL's at
the start.  Generally, however, the problem doesn't occur with the
executable, since the virtual memory pages are already there, in the
executable file.  No disk pages are needed.

> The worst thing with these problems is, that they happen in
> situations where hardly anything can be done, extreme lack of free
> memory. Any kind of handling of these problems probably needs some
> extra memory for e.g.  loading the handler's code, or constructing
> an object to be thrown.

Any kind of handling of these problems needs, first and foremost, that
they be reported.

> Too easily a double fault is the result. The
> biggest problem is, that the problems happen out of application's
> control. There are often no other defined protocols between OS and
> the app than e.g. sending signals, which means crashing in
> general. I'm sure you can make every current OS with virtual memory
> shooting a fully conformant C++ app, by constructing situations,
> where the system has no other choices than resigning or producing
> those asynchronous events with fatal results for running processes.

I'm not sure that you can do it because of memory, provided the
process has enough stack space, and handles lack of memory gracefully.
Obviously, it's not easy to guarantee either of these conditions, but
then, if writing robust application programs were easy, I wouldn't be
paid so much :-).  They can be met, and they regularly are in many
well-written application programs.

Of course, most servers also need sockets, file locks, etc., etc.  And
I've yet to see an OS without a single bug.  So I don't doubt that you
are right, technically speaking.  In practice, OS's such as Solaris
are stable enough that you can write some really robust applications
which run on them.  And if they are not, there are industrial strength
OS's designed with just that in mind -- Sun also commercializes
Chorus, for example, for telephone systems and the like.

Systems don't have to crash.  Some systems simply cannot be allowed
to, whatever happens, and for many, a crash costs real money.

> IMO it is out of scope, to warrant in any general PL standard, that
> a conformant app will never crash.

It is, however, appropriate to define the language so that the
programmer has a fighting chance of being able to write applications
which don't crash.  (On most systems I've worked on, there have been
contractual penalties if the system crashed.  IMHO, that should be the
rule -- if a user mode program crashes a system, the system provider
should have to pay the user something for each crash.)

> In any OS for two or more
> independent processes there are asynchronous events, that would be
> too hard to be treated within such kind of standard. Resolving just
> the case with malloc/new is cosmetic, since there are untreatable
> cases too similar to that.

The untreated cases aren't necessarily in the realm of the C++
standard.  And generally speaking, they occur orders of magnitude less
frequently than running out of heap space.

--
James Kanze                               mailto:kanze@gabi-soft.de
Conseils en informatique orientée objet/
                   Beratung in objektorientierter Datenverarbeitung
Ziegelhüttenweg 17a, 60598 Frankfurt, Germany Tel. +49(069)63198627









Author: christian.bau@isltd.insignia.com (Christian Bau)
Date: 2000/11/07
In article <3A06F100.9A1757B6@student.uni-kl.de>, Bernd Strieder
<strieder@student.uni-kl.de> wrote:

> What about a system that has to revoke some memory malloc'ed earlier for
> a process of higher priority? Oh yes, C++ apps always have highest
> priority ;-)

This is complete nonsense. If an operating system frees memory malloced by
some application while that application is running the operating system
designers should be shot.

> Since not all cases of lacking virtual memory can be treated as nice as
> failing mallocs, it does not matter that just malloc can react as it is
> presumed to do. E.g. failing to enlarge the stack produces a crash on
> all systems with virtual memory I know. AFAIK there is no C++ exception
> thrown, if the stack frame for a function just called cannot be
> allocated. There is in general no way to reserve stack space, but
> perhaps there are implementation defined ways.

On most OS'es I write for, the development system lets me specify a
guaranteed amount of stack space, and that amount is safe to use. Anything
else I would consider crap.

> Similar things might happen to memory for the executable. There might be
> no room left for the next page of code. This produces a fatal runtime
> error, IOW a crash. It is in common not possible to predict the needed
> number of pages for code, perhaps there are some pessimistic
> approximations.

On most OS'es I write for, there is always room for the next page of the
code, simply because executable files are added to the swap space. If an
application can crash for the reason you give, the OS is crap.

> IMO it is out of scope, to warrant in any general PL standard, that a
> conformant app will never crash. In any OS for two or more independent
> processes there are asynchronous events, that would be too hard to be
> treated within such kind of standard.

You get this completely wrong. If the OS and compiler cannot guarantee
that a conforming app runs without crashing, then the OS and compiler
combination are not conforming, simple as that.

> Resolving just the case with
> malloc/new is cosmetic, since there are untreatable cases too similar to
> that.







Author: christian.bau@isltd.insignia.com (Christian Bau)
Date: 2000/11/08
In article <8u913m$d07$1@nnrp1.deja.com>, James.Kanze@dresdner-bank.com wrote:

> I don't think efficiency comes into it.  There are cases where lazy
> commit is the better strategy, and cases where immediate commit is.
> Generally speaking, where lazy commit is best, fixed commit will also
> work, although the throughput of the machine may be limited.  On the
> other hand, for large categories of commercial applications (servers,
> etc.), lazy commit is simply not acceptable.

I think it would not be difficult for a C or C++ implementation to add two
new functions:

   void* lazy_malloc (size_t t);

returns a value just like malloc () would do, but reading or writing byte
#i invokes undefined behavior unless you first called

   int confirm_malloc (void* ptr, size_t start, size_t end);

where ptr is the non-null result of lazy_malloc and start <= i < end, and
confirm_malloc returned TRUE.

For systems without lazy commit the implementation is trivial; lazy_malloc
is the same as malloc and confirm_malloc always returns TRUE.
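
A sketch of that trivial mapping (using the proposed names from above,
which are not an existing API):

    #include <stdlib.h>

    #define TRUE 1

    void* lazy_malloc (size_t n)
    {
        return malloc (n);                  /* nothing lazy to do */
    }

    int confirm_malloc (void* ptr, size_t start, size_t end)
    {
        (void)ptr; (void)start; (void)end;  /* nothing to commit */
        return TRUE;
    }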







Author: Valentin Bonnard <Valentin.Bonnard@free.fr>
Date: Fri, 3 Nov 2000 16:38:36 GMT
Fergus Henderson wrote:

> My interpretation of the C++ standard is that this behaviour does not
> contravene the C++ standard, since each conforming implementations is
> only required to execute programs "within its resource limits".

Specific wording overrides general wording, so the mention of what to do
in a particular case overrides the general rule that implementations
execute programs "within [their] resource limits". Why would the return
value of malloc, in case no memory is available, be specified otherwise?

--

Valentin Bonnard






Author: Bernd Strieder <strieder@student.uni-kl.de>
Date: 2000/11/06
Valentin Bonnard wrote:
>
> Fergus Henderson wrote:
>
> > My interpretation of the C++ standard is that this behaviour does not
> > contravene the C++ standard, since each conforming implementations is
> > only required to execute programs "within its resource limits".
>
> Specific wording overrides general wording, so the mention of what to do
> in a particular case overrides the general rule that implementations
> execute programs "within [their] resource limits". Why would the return
> value of malloc, in case no memory is available, be specified otherwise?

What about a system that has to revoke some memory malloc'ed earlier for
a process of higher priority? Oh yes, C++ apps always have highest
priority ;-)

Since not all cases of lacking virtual memory can be treated as nice as
failing mallocs, it does not matter that just malloc can react as it is
presumed to do. E.g. failing to enlarge the stack produces a crash on
all systems with virtual memory I know. AFAIK there is no C++ exception
thrown, if the stack frame for a function just called cannot be
allocated. There is in general no way to reserve stack space, but
perhaps there are implementation defined ways.

Similar things might happen to memory for the executable. There might be
no room left for the next page of code. This produces a fatal runtime
error, IOW a crash. It is in common not possible to predict the needed
number of pages for code, perhaps there are some pessimistic
approximations.

The worst thing with these problems is, that they happen in situations
where hardly anything can be done, extreme lack of free memory. Any kind
of handling of these problems probably needs some extra memory for e.g.
loading the handler's code, or constructing an object to be thrown. Too
easily a double fault is the result. The biggest problem is, that the
problems happen out of application's control. There are often no other
defined protocols between OS and the app than e.g. sending signals,
which means crashing in general. I'm sure you can make every current OS
with virtual memory shooting a fully conformant C++ app, by constructing
situations, where the system has no other choices than resigning or
producing those asynchronous events with fatal results for running
processes.

IMO it is out of scope, to warrant in any general PL standard, that a
conformant app will never crash. In any OS for two or more independent
processes there are asynchronous events, that would be too hard to be
treated within such kind of standard. Resolving just the case with
malloc/new is cosmetic, since there are untreatable cases too similar to
that.

Bernd Strieder







Author: fjh@cs.mu.OZ.AU (Fergus Henderson)
Date: Thu, 2 Nov 2000 21:55:15 GMT
Bernd Strieder <strieder@student.uni-kl.de> writes:

>Fergus Henderson wrote:
>>
>> [On Linux]
>> In particular, even if the "always overcommit" mode is disabled by
>> setting the /proc/sys/vm/overcommit_memory parameter to 0 (false),
>> if you have say 300M of free virtual memory, and you have the
>> following sequence of actions
>>
>>         process 1: allocate 200M
>>         process 2: allocate another 200M
>>         process 1: touch the allocated 200M
>>         process 2: touch the other allocated 200M
>>
>> then both allocations will succeed, and the situation will be
>> resolved by killing one of the processes when it tries to touch
>> the memory.
>
>This is just the behaviour I expect from the AIX and Solaris boxes I
>have access to.

Did you test it?  My own simple tests on Solaris (SunOS 5.7) did not
exhibit that behaviour.  Instead the second allocation would fail.
As far as I can tell, Solaris seems to do lazy allocation for global
variables but eager allocation for malloc() and fork().
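
For illustration, a sketch of the kind of two-process allocate-then-touch
test meant here (my reconstruction; the 200M figure comes from the
scenario quoted above, and two instances are run by hand, letting both
allocate before either touches):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const size_t size = 200UL * 1024 * 1024;      /* 200M, as above */
        char* p = (char*)malloc(size);
        if (p == NULL) {
            printf("allocation refused -- eager commit\n");
            return 1;
        }
        puts("allocated; press return to touch the memory");
        getchar();                  /* let the other instance allocate too */
        for (size_t i = 0; i < size; i += 4096)
            p[i] = 1;               /* under lazy commit, one process may be
                                       killed somewhere in this loop */
        puts("touched all pages");
        return 0;
    }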

>> My interpretation of the C++ standard is that this behaviour does not
>> contravene the C++ standard, since each conforming implementations is
>> only required to execute programs "within its resource limits".
>
>This is a wise thing, I hope it is in the standard. Still haven't found
>it.

1.4 [intro.compliance] paragraph 2.

--
Fergus Henderson <fjh@cs.mu.oz.au>  |  "I have always known that the pursuit
                                    |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.






Author: Bernd Strieder <strieder@student.uni-kl.de>
Date: Thu, 2 Nov 2000 17:51:31 GMT
Fergus Henderson wrote:
>
> kanze@gabi-soft.de writes:
>
>
> In particular, even if the "always overcommit" mode is disabled by
> setting the /proc/sys/vm/overcommit_memory parameter to 0 (false),
> if you have say 300M of free virtual memory, and you have the
> following sequence of actions
>
>         process 1: allocate 200M
>         process 2: allocate another 200M
>         process 1: touch the allocated 200M
>         process 2: touch the other allocated 200M
>
> then both allocations will succeed, and the situation will be
> resolved by killing one of the processes when it tries to touch
> the memory.

This is just the behaviour I expect from the AIX and Solaris boxes I
have access to. Pages of virtual memory are allocated to a process when
they are accessed the first time.

>
> My interpretation of the C++ standard is that this behaviour does not
> contravene the C++ standard, since each conforming implementation is
> only required to execute programs "within its resource limits".

This is a wise thing, I hope it is in the standard. Still haven't found
it.

Bernd Strieder

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]
[ Note that the FAQ URL has changed!  Please update your bookmarks.     ]





Author: Bernd Strieder <strieder@student.uni-kl.de>
Date: Thu, 2 Nov 2000 17:58:13 GMT
Raw View
James.Kanze@dresdner-bank.com wrote:
>
> In article <39FDC669.491D9F33@student.uni-kl.de>,
>   Bernd Strieder <strieder@student.uni-kl.de> wrote:
> > kanze@gabi-soft.de wrote:
>
> > > Bernd Strieder <strieder@student.uni-kl.de> writes:
[...app deleted]
> I'm not quite sure what you are testing here.  You never verify the
> results of malloc, so obviously, you cannot detect if malloc fails.  I

I was sure that my system wouldn't fail on those 20MB, so I didn't test.
My operator new throws, AFAIK, so I would have noticed.

Since I have no easy access to the needed information within the app, I
built some pauses into it, so that I could look at the process
information with system tools. That is the only true information about
actual memory usage I can get.


>
> You are just roughly explaining virtual memory.  It has nothing to do
> with lazy allocation.  Where the pages are allocated is unimportant.
> That they are allocated is important.

So we are talking about two different allocation strategies for virtual
memory: allocation of addresses and allocation of actual memory pages.

>
> Good programs show good locality -- they use only a few virtual memory
> pages at a time.  Bad programs thrash.  But again, this has nothing to
> do with whether malloc (or more correctly, under Unix, sbrk) uses lazy
> allocation or not.

If you look at many processes allocating memory and how they use that
memory, then it becomes important.

[...]
>
> > SunOS 5.7 (Solaris 2.7) does do lazy allocation, or at least can be
> > configured to do so.
>
> Extensive trials with the above program never once caused any process
> to core dump.  In every case, Solaris returned NULL after enough
> mallocs.

When addresses run out, every OS will be able to return 0.

>
> > >     AIX:        Does not do lazy allocation by default.  Lazy allocation
> > >                 can be turned on (on a process by process basis) by
> > >                 setting a specific environment variable.
>
> > AIX behaves like Solaris. At least where I tested, a highly loaded
> > server with 5000 student accounts, 6 processors and typically some
> > 100 users concurrently. I'm sure that this machine is tuned to do
> > its best at extreme load.
>
> What version of AIX?  AIX 3 does do lazy malloc.  AIX 4 doesn't by
> default, but individual users can request it.

AIX4.3?

[...]
>
> My impression is that we are measuring different things.  Your
> program doesn't seem to address the problem directly; you depend on
> some external measurements to determine what is going on.  But do you
> really know what is being measured?

It is the actual information for the running process, at the points
where I made it wait for input.

Addresses must be allocated at the time of the allocation call from the
app, or it wouldn't be possible to react to page faults and allocate the
pages. What we are discussing is how firmly the system assures that
actual pages of virtual memory will be provided for those addresses. My
impression is that some OSs don't do the bookkeeping across all
processes together to guarantee the memory for all of them; this is for
efficiency reasons.

>
> Also, see my description of what I believe to be the actual Solaris
> behavior.  It effectively doesn't allocate the pages until use, but it
> commits at the request, so that the allocation cannot fail.  I suspect
> that your measures on Solaris are only showing actual allocation, and
> not commitment.

That is probable. But if pages are committed without being allocated, we
might have the perverse situation that 100MB of actual pages are free,
yet an allocation of 10MB for a very short-running process fails. I'm
sure that is not wanted.

Bernd Strieder

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]
[ Note that the FAQ URL has changed!  Please update your bookmarks.     ]





Author: James.Kanze@dresdner-bank.com
Date: Wed, 1 Nov 2000 16:45:49 GMT
Raw View
In article <39FDC669.491D9F33@student.uni-kl.de>,
  Bernd Strieder <strieder@student.uni-kl.de> wrote:
> kanze@gabi-soft.de wrote:

> > Bernd Strieder <strieder@student.uni-kl.de> writes:

> > |>  All major OS do lazy allocation these days or they lose.

> > I'm curious about your "all major OS."  A couple of years ago
> > (four or five), I was concerned about this problem; we were
> > writing relatively critical software, and this behavior was deemed
> > unacceptable.  As a result, I have tested a number of machines.
> > To date:

> The following quickly hacked code (not conformant and bad in respect
> of syntax and I/O, but OK for the compilers I have access to) shows
> some things when monitored at runtime:

> #include <iostream.h>

> int main(void)
> {
>   cout<<"Lazy allocation tester"<<endl;
>   char * c = new char[20000000];
>   *c='a';

>   for( int i = 0; i < 1000; ++i ) {
>     c[i*4096]='c';
>     if( i % 100 == 99 ) {
>       cout<<"Lookup memory usage"<<endl;
>       char buffer[80];
>       cin>>buffer;
>     }
>   }

> }

I'm not quite sure what you are testing here.  You never verify the
results of malloc, so obviously, you cannot detect if malloc fails.  I
used something more or less like the following (from memory -- the
actual code had more instrumentation, and output how far it got,
etc.):

    #include <stdio.h>
    #include <stdlib.h>

    #define PAGESIZE 1024
    #define BLOCKSIZE ( 1024 * 1024 )

    int
    main()
    {
        printf( "Starting...\n" ) ;
        char* p ;
        /* Keep allocating 1MB blocks and touching every page of each one.
           If malloc overcommits, one of the writes eventually kills the
           process; if it does not, malloc eventually returns NULL. */
        for ( p = malloc( BLOCKSIZE ) ; p != NULL ; p = malloc( BLOCKSIZE ) ) {
            for ( size_t i = 0 ; i < BLOCKSIZE ; i += PAGESIZE ) {
                p[ i ] = 'a' ;
            }
        }
        printf( "No lazy malloc\n" ) ;
        return 0 ;
    }

> If you start this application and monitor the amount of free memory
> in your system or the memory usage of the corresponding process,
> resp. , you will see this amount decreasing, resp. the usage of the
> process increasing, step by step, as the small app procedes. At
> least on SunOS 5.7 on two different Sun machines, AIX on RS/6000,
> and Linux 2.2 on i386 I have seen this effect. As one accesses a
> page of RAM it is removed from the set of free ones and allocated to
> your process, not earlier, and not at the time operator new is
> called.

How do you monitor the amount of free memory in your system?  What
are you actually measuring?  My program is self-contained, and answers
the simple question: does malloc overcommit or not.  If malloc
overcommits, at some point, one of the writes will cause a core dump.
If it doesn't, at some point, malloc will return NULL.

This program is only really valid when the total amount of memory
available is significantly less than the address space, since I
presume that on all systems, malloc will return NULL if it cannot
allocate the address space.

> The overall design of most current OS's relies on the feature, that
> pages of memory are lazily fetched from memory mapped files or swap
> space. Even compiled code and dynamic libraries are concerned. When
> I say all major OS, I can't prove that now, but I'm sure, because it
> is best practice in OS design, and anything else will produce an OS
> that cannot stand competition. The "Working Set Model" is the
> theoretical foundation for all of this. It is crucial that just the
> currently used part of virtual memory pages is in RAM. It is crucial
> that as few pages as possible are used at one time, and that
> allocation is deferred to as late as possible. Anything else and it
> becomes easy to construct cases where a system sucks badly.

You are just roughly explaining virtual memory.  It has nothing to do
with lazy allocation.  Where the pages are allocated is unimportant.
That they are allocated is important.

Good programs show good locality -- they use only a few virtual memory
pages at a time.  Bad programs thrash.  But again, this has nothing to
do with whether malloc (or more correctly, under Unix, sbrk) uses lazy
allocation or not.

> To give an example in a field I have seen the bits, since sources
> are available: Memory allocation on Linux via malloc is usually
> implemented in terms of anonymous memory mappings (/dev/zero). The code
> of the executable and dynamic libraries are memory mapped. If RAM is
> full then pages are swapped out. If RAM and swap space, which
> together give the total amount of virtual memory, are filled, then
> our initial problem becomes visible. The system will fail in finding
> as well as making room in RAM for a mapped, but not allocated page
> of memory. Looking at it closely we can see that this might happen
> due to memory accesses to the heap of an app, due to accessing code
> in the executable or dynamic libraries, or as others have pointed
> out due to stack accesses.

I'm not familiar enough with Linux to comment.  The "classical" Unix
approach is for malloc to call sbrk to obtain memory from the
system.  Supposing that the address space is available, the system has
two choices: it can mark the address space as reserved but unmapped,
and allocate a page on the first fault, or it can allocate all of the
pages immediately.  (Actually, I think most systems, including Solaris,
follow a compromise strategy.  They keep track of the total number of
pages requested, and return an error if a request would cause this
number to exceed the total number of pages available, but they only
actually map a page on its first fault.)
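
An editorial aside, not part of the original post: the same distinction
shows up at the mmap() level on Linux-like systems.  A plain anonymous
mapping is charged against the commit accounting (when strict accounting
is enabled) at the time of the request, while physical pages are still
assigned only on first touch; MAP_NORESERVE skips the accounting
entirely, which is the lazy-commit case.  A minimal sketch, assuming a
POSIX/Linux environment:

    #include <sys/mman.h>
    #include <cstddef>
    #include <cstdio>

    int main()
    {
        const std::size_t size = 64 * 1024 * 1024 ;     // 64 MB

        // Default anonymous mapping: checked against commit limits now
        // (with strict accounting), pages materialized on first write.
        void* committed = mmap( 0, size, PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0 ) ;

        // MAP_NORESERVE: no commit accounting; a later touch may fail
        // (signal or OOM kill) if memory has run out in the meantime.
        void* reserved  = mmap( 0, size, PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE,
                                -1, 0 ) ;

        if ( committed == MAP_FAILED || reserved == MAP_FAILED ) {
            std::printf( "mmap refused the request up front\n" ) ;
            return 1 ;
        }

        // Touch one byte per page of each mapping.
        for ( std::size_t i = 0 ; i < size ; i += 4096 ) {
            static_cast< char* >( committed )[ i ] = 'a' ;
            static_cast< char* >( reserved  )[ i ] = 'a' ;
        }
        std::printf( "touched all pages\n" ) ;
        return 0 ;
    }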

> Is it conformant that C++ apps might crash due to failing to execute
> the next line of code? This is a closely related problem to the
> problems with lazy allocation of heap memory. If one is treated in
> the standard, then the others must be, correspondingly. The OS's are
> usually designed under the assumption of unlimited virtual memory
> with some emergency code to maintain robustness. This should be
> reflected somehow in the standard.

> After all the dilemma remains, the current standard seems to give
> guarantees it cannot provide. The OS's that could imaginably give us
> sufficient guarantees, will try hard to not do so, since major
> attainments of the past 20 years would suffer badly. As I pointed
> out in my first posting in this thread, performance, security and
> robustness of the system are concerned.

> >     Solaris:    Does not do lazy allocation, although earlier versions
> >                 (through about 2.3 or 2.4) started thrashing like crazy
> >                 when the limit was reached.

> SunOS 5.7 (Solaris 2.7) does do lazy allocation, or at least can be
> configured to do so.

Extensive trials with the above program never once caused any process
to core dump.  In every case, Solaris returned NULL after enough
mallocs.


> >     AIX:        Does not do lazy allocation by default.  Lazy allocation
> >                 can be turned on (on a process by process basis) by
> >                 setting a specific environment variable.

> AIX behaves like Solaris. At least where I tested, a highly loaded
> server with 5000 student accounts, 6 processors and typically some
> 100 users concurrently. I'm sure that this machine is tuned to do
> its best at extreme load.

What version of AIX?  AIX 3 does do lazy malloc.  AIX 4 doesn't by
default, but individual users can request it.

> >     HP-UX:      Does not do lazy allocation.

> >     Linux:      From hearsay: the kernel can be configured in both
> >                 modes.  How it is normally configured by default is not
> >                 too clear, and probably depends on the distributor.

> >     Windows NT: In my tests, suspends processes when memory gets tight,
> >                 and brings up a pop-up box suggesting that the user kill
> >                 a few applications.  Presumably, it does something
> >                 different in processes started from a non-interactive
> >                 environment.  Regretfully, the behavior that I observed
> >                 doesn't allow me to say whether it uses lazy allocation
> >                 or not.

> NT cannot behave differently in essential points. It asks the
> operator to solve the problem instead of trying to get along
> itself. As you noticed, this is not wise for non-interactive
> sessions. The behaviour of NT gives an indication that our problem
> is not a problem of C++ but of the system the C++ virtual machine
> runs on. The C++ standard is not to give guarantees about the system
> it runs on. The OS and the operator rule everything on a system,
> finally. The best to be defined there is undefined behaviour.

Whatever else happened, I was never able to get NT to kill the process
on its own.  If I clicked enough on continue, it eventually returned
NULL.

> > I would consider that any group which excludes Solaris, AIX, HP-UX
> > and probably Linux and Windows NT should not be qualified as "all
> > major OS".

> I fear that at least one of us missed the point here. Perhaps there
> is a misunderstanding about the terms? I regarded lazy allocation as
> the effect that the actual memory usage of a process grows during
> accessing the memory, and not at the point where the process issued
> the allocation command. This is the behaviour I have seen on
> Solaris, AIX, and Linux.  They are the major *nixen. NT cannot
> deviate from best practices, it must behave the same way.

My impression is that we are measuring different things.  Your
program doesn't seem to address the problem directly; you depend on
some external measurements to determine what is going on.  But do you
really know what is being measured?

Also, see my description of what I believe to be the actual Solaris
behavior.  It effectively doesn't allocate the pages until use, but it
commits at the request, so that the allocation cannot fail.  I suspect
that your measures on Solaris are only showing actual allocation, and
not commitment.

--
James Kanze                               mailto:kanze@gabi-soft.de
Conseils en informatique orientée objet/
                   Beratung in objektorientierter Datenverarbeitung
Ziegelhüttenweg 17a, 60598 Frankfurt, Germany Tel. +49(069)63198627



---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]
[ Note that the FAQ URL has changed!  Please update your bookmarks.     ]





Author: Bill Wade <wrwade@swbell.net>
Date: Tue, 31 Oct 2000 17:56:41 GMT
Raw View
"Bernd Strieder" <strieder@student.uni-kl.de> wrote

> I fear that at least one of us missed the point here. Perhaps there is a
> misunderstanding about the terms? I regarded lazy allocation as the
> effect that the actual memory usage of a process grows during accessing
> the memory, and not at the point where the process issued the allocation
> command.

A reasonable definition, but not the one that JK was using.

JK is using "lazy allocation" to describe systems that will over-commit
virtual memory.  On such a system, the OS may allocate more memory than
is available even when virtual (disk) space is included.

In my limited tests, HP-UX and SunOS, using the vendors' compilers, don't
over-commit.  gcc on Linux will over-commit.  NT will fail to allocate a
block which grossly over-commits.  The NT behavior for borderline cases is a
bit fuzzy (blocking or failing), but does not appear to over-commit.


---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]
[ Note that the FAQ URL has changed!  Please update your bookmarks.     ]





Author: Bernd Strieder <strieder@student.uni-kl.de>
Date: Tue, 31 Oct 2000 18:18:27 GMT
Raw View
kanze@gabi-soft.de wrote:
>
> Bernd Strieder <strieder@student.uni-kl.de> writes:
>
> But that's what all the programmers I know of do.  We allocate what we
> need, and no more.

It has become common to use pool allocators to improve the speed of
memory allocation. They often allocate quite large pools at startup,
without immediate need. Another example is std::vector, which grows
exponentially: at some point you may have allocated an amount of memory
of which almost half is unused. The lazy strategy allows the system to
minimize the number of pages your process actually owns.
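
An editorial illustration of that point, not from the original post:
capacity() versus size() makes the unused slack visible, and reserve()
is the explicit allocate-now-use-later case.

    #include <iostream>
    #include <vector>

    int main()
    {
        // Exponential growth: capacity can exceed size by nearly a factor
        // of two, so much of the allocated block may never be touched.
        std::vector< char > v ;
        std::vector< char >::size_type last = 0 ;
        for ( int i = 0 ; i < 10000000 ; ++i ) {
            v.push_back( 'x' ) ;
            if ( v.capacity() != last ) {
                last = v.capacity() ;
                std::cout << "size " << v.size()
                          << "  capacity " << last << '\n' ;
            }
        }

        // reserve(): address space (and, eagerly or lazily, pages) for
        // 20 MB, none of which is used yet.
        std::vector< char > w ;
        w.reserve( 20000000 ) ;
        std::cout << "reserved " << w.capacity()
                  << ", used " << w.size() << '\n' ;
        return 0 ;
    }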

It seems that you work in a kind of ideal environment. Your applications
run on dedicated hardware, all participants are known, and the whole
system is designed for its purpose. There is a chance to estimate and
control the amount of resources needed at any given time, and the
machine can be tuned accordingly. You can introduce fair strategies for
allocating scarce resources between your apps, without having to fear an
overeater that steals everything it can get its hands on. That presumes
well-educated programmers in a well-behaved environment, and neither is
found everywhere.

>
> |>  In contrary imagine the
> |>  situation where operator new or malloc() truely allocate pages of
> |>  memory.
>
> You mean like Solaris, AIX and HP-UX?  Not very hard to imagine, since
> I've actually worked on all three.

The Solaris and AIX machines I see do allocate memory pages lazily.

>
> |>  Then an application programmer could easily write a loop
> |>  allocating all the memory in the system, without leaving the system
> |>  any chance to identify and drop the responsible process(es).
>
> Well, the above three (and most others) have ulimits -- the sysadmin
> can restrict the amount of memory for a given user's processes.  And if
> I wanted to hang a system in this manner, I'd start allocating disk, not
> memory.  Memory is freed whenever my process stops, where as disk...

To use ulimit sensibly, the operator/user has to know the exact sizes of
the processes, which in general is not possible. And what does it help
to set the worst-case memory usage for every single process, when the
problem is their usage taken together?
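
An editorial aside, not from the original post: the limit can also be
set by the process itself, so that a runaway allocation turns into
bad_alloc instead of driving the whole machine out of memory.  A minimal
sketch, assuming a POSIX system that supports RLIMIT_AS; the 64MB figure
is arbitrary:

    #include <sys/resource.h>
    #include <iostream>
    #include <new>

    int main()
    {
        // Cap this process's address space at 64 MB (soft and hard limit).
        rlimit lim ;
        lim.rlim_cur = 64 * 1024 * 1024 ;
        lim.rlim_max = 64 * 1024 * 1024 ;
        if ( setrlimit( RLIMIT_AS, &lim ) != 0 ) {
            std::cerr << "setrlimit failed\n" ;
            return 1 ;
        }

        try {
            char* p = new char[ 128 * 1024 * 1024 ] ;   // exceeds the limit
            p[ 0 ] = 'a' ;
            delete[] p ;
        }
        catch ( std::bad_alloc& ) {
            std::cout << "allocation refused up front, as desired\n" ;
        }
        return 0 ;
    }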

>
> But you talk as if the application programmer was the enemy.  This may
> be true in cases like a University, where you allow all students
> access.  But I can assure you that on telephone switches and bank
> information servers, all of the programs running have been written or at
> least configured by people interested in the good operation of the
> system.  The problem you describe simply doesn't exist in industry, or
> at least, it is very exceptional.  (There is always the chance of a
> disgruntled employee, or some such.  But generally, an insider already
> has so many ways to mess up the system -- introducing subtle bugs in the
> code he is working on, etc., that just running out of memory isn't worth
> considering.)
>
> I'm not sure what is the security risk.  An application that allocates
> memory, or any other resources, it doesn't need is not appreciated.  But
> that is precisely because allocated resources *are* committed.

Trying to allocate is not evil, but committing can exhaust memory, and
that is the point where other processes are affected.

>
> |>  Compare that with the situation of one person providing a resource
> |>  to a crowd of others. The provider asks round robin: "how much do
> |>  you need?"  "20" "Oh sorry, there are no 20 left" or "Oh yes, here
> |>  they are".
>
> |>  Another organisation would be: "how much do you need" "20" "Look over
> |>  there the pile, maybe it contains 20. Fetch them as you need, but if you
> |>  come over to fetch one when it's empty, you will be lost."
>
> |>  The second choice is more wise, since it makes the crowd trying hard
> |>  to avoid the fatal outage, or they get used to being lost at some
> |>  times.  There is nothing better the provider can do, if it is not
> |>  sure, that no one fetches more then actually needed.
>
> I don't understand your logic.  It is the first which encourages more
> responsible memory use.  If I know that I will get thrown out any time I
> cannot get the memory I ask for, I will try and ask for as little as
> possible, to avoid getting thrown out unnecessarily.

The first option gives the person who got the answer "No, nothing left"
the option to wait, since she is not thrown out, and that immediately
raises the deadlock problem; I forgot to make that clear. If the
provider could rely on some items being returned in that case, there
would be no problem at all, but it is difficult to specify a protocol
between the participants that guarantees it. The easiest and therefore
most robust protocol is to pick one participant and tell her "give
everything back now". Just waiting must be prevented.

>
> Any number of things.  Most of the time, we run on an uninterruptable
> power supply, so the problem is generally irrelevant.  (Note that
> terminating a multithreaded process at an arbitrary time can, in many
> cases, cause data corruption on the disk.  It is a definite no-no on
> servers.)

What happens if your uninterruptible power supply fails? There is no
hardware that cannot fail. In the end, the only thing you can do about
hardware failures is backups, or my database lecture was a lie. The
probability can be reduced by redundant hardware, but not to zero. Fatal
errors in DB engines are not common, but even then the last resort is a
backup.

Bernd Strieder

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]
[ Note that the FAQ URL has changed!  Please update your bookmarks.     ]





Author: Pierre Baillargeon <pb@artquest.net>
Date: Tue, 31 Oct 2000 18:57:20 GMT
Raw View
kanze@gabi-soft.de wrote:
>
>     Windows NT: In my tests, suspends processes when memory gets tight,
>                 and brings up a pop-up box suggesting that the user kill
>                 a few applications.  Presumably, it does something
>                 different in processes started from a non-interactive
>                 environment.  Regretfully, the behavior that I observed
>                 doesn't allow me to say whether it uses lazy allocation
>                 or not.

I would just like to retract what I said earlier about NT swap space on
compressed drives. I knew the problem first-hand with memory-mapped
files, and the information in the MSDN knowledge base indicated that the
same sub-system is used for swap and memory-mapped files. But the OS
specifically checks for this and uncompresses any compressed swap file
before use, so I was wrong. On the other hand, what happens if the
decompression runs out of disk space is not specified in the
documentation.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]
[ Note that the FAQ URL has changed!  Please update your bookmarks.     ]





Author: fjh@cs.mu.OZ.AU (Fergus Henderson)
Date: Wed, 1 Nov 2000 16:43:29 GMT
Raw View
kanze@gabi-soft.de writes:

>Bernd Strieder <strieder@student.uni-kl.de> writes:
>
>|>  All major OS do lazy allocation these days or they lose.
>
>I'm curious about your "all major OS."  A couple of years ago (four or
>five), I was concerned about this problem; we were writing relatively
>critical software, and this behavior was deemed unacceptable.  As a
>result, I have tested a number of machines.

Are you sure your testing was sufficient?  Testing can only ever
confirm the presence of lazy allocation; no amount of testing can
guarantee that an OS never does lazy allocation.

>    Linux:      From hearsay: the kernel can be configured in both
>                modes.  How it is normally configured by default is not
>                too clear, and probably depends on the distributor.

For Linux kernel 2.2.13, the "always overcommit" mode is disabled by
default, except on certain ARM processors with little memory.  Of
course a particular Linux distribution might override the kernel's
default, but the ones that I happen to have access to now (SuSE and
Debian) don't.
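
An editorial aside, not from the original post: the setting can be
inspected (and, as root, changed) through that /proc entry; a minimal,
Linux-specific sketch of reading it from a program:

    #include <fstream>
    #include <iostream>

    int main()
    {
        // 0 = the kernel's default checking, 1 = always overcommit
        // (later kernels add 2 = strict accounting).
        std::ifstream f( "/proc/sys/vm/overcommit_memory" ) ;
        int mode ;
        if ( f >> mode )
            std::cout << "overcommit_memory = " << mode << '\n' ;
        else
            std::cout << "no such parameter on this system\n" ;
        return 0 ;
    }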

But from reading the source code, and from running some tests of my
own, I've found that in fact (at least for kernel 2.2.13) Linux's two
modes are "always overcommit" and "sometimes overcommit"; there is no
"never overcommit" mode.

In particular, even if the "always overcommit" mode is disabled by
setting the /proc/sys/vm/overcommit_memory parameter to 0 (false),
if you have say 300M of free virtual memory, and you have the
following sequence of actions

 process 1: allocate 200M
 process 2: allocate another 200M
 process 1: touch the allocated 200M
 process 2: touch the other allocated 200M

then both allocations will succeed, and the situation will be
resolved by killing one of the processes when it tries to touch
the memory.

My interpretation of the C++ standard is that this behaviour does not
contravene the C++ standard, since each conforming implementation is
only required to execute programs "within its resource limits".

--
Fergus Henderson <fjh@cs.mu.oz.au>  |  "I have always known that the pursuit
                                    |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]
[ Note that the FAQ URL has changed!  Please update your bookmarks.     ]





Author: James.Kanze@dresdner-bank.com
Date: Mon, 30 Oct 2000 20:15:29 GMT
Raw View
In article <srHJ5.76$6w5.616@burlma1-snr2>,
  Barry Margolin <barmar@genuity.net> wrote:
> In article <hn0evsof3re4fm2sblph0ngg69espt3qai@4ax.com>,
> Herb Sutter  <hsutter@peerdirect.com> wrote:
> >Certain operating systems check memory allocations for success, not
> >at allocation time, but on first use. Consider the following code,
> >running on such an OS:

> This has been discussed ad nauseum in comp.std.c regarding lazy
> malloc(), and I put forth that the issues are no different for C++'s
> operator new.  I suggest you search deja.com for those threads, and
> you'll find the answers to your questions (in particular, which
> implementations behave which way).

First, there are actually two separate issues, be it malloc or
operator new.  One is the lazy commit issue: the OS allocates the
address space, but the actual pages are only allocated on first access
or first write.  The other is that the malloc/operator new (and in
C++, the code generated for the new-expression) doesn't check
for overflow.  Since most malloc (and operator new functions) add a
little bit for internal bookkeeping, a *very* large allocation may
result in an actual request for a very small bit of memory.

The second problem is just a case of sloppy programming.  The first is
generally misguided but intentional behavior.

I've gone back over most of the recent discussion in comp.std.c (which
I haven't followed regularly for some years now).  First: I was
apparently mistaken in my belief that there was a formal decision on
this in the C committee.  The people active in comp.std.c follow this
much closer than I do, and if there had been such a decision, they
would have certainly mentioned it.

There seems to be a general consensus among the experts in the group
that the lazy commit violates the spirit of the standard, and no
consensus as to whether it violates the letter.  Although I didn't see
the point mentioned, I suspect that the overflow problem is banned by
the words requiring the returned pointers to point to non-overlapping
objects; if the program requests a very big object, and receives a
pointer to just a very little bit of memory, most implementations will
later return a pointer which would point into the very big object.

The other point is the apparent disagreement as to which systems
actually use lazy commit.  There are one or two who claim that all
popular systems use it.  Others have confirmed my statements that
neither Solaris nor HP-UX use it.  Everyone seems to claim that
Windows NT uses it, although my tests seemed to indicate that this was
not the case.  And both AIX and Linux have documented ways of
disabling or enabling it.  All of which tends to make me think that
the people claiming it is universal haven't really verified their
claims.

With regard to my tests: the tests run on Solaris 2.4 (and SunOS
4.1.something) were very extensive -- they were part of an evaluation
for a customer, for a critical system where lazy commit would have
been a killer criterion.  Since then, I've generally run a quick
version of the test on every machine I had access to.  To date, I've
never seen an occurrence of lazy commit, although my Windows NT did
suspend the process with a dialog box, which I presume is not the
behavior for non-interactive systems.



---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]
[ Note that the FAQ URL has changed!  Please update your bookmarks.     ]





Author: Bernd Strieder <strieder@student.uni-kl.de>
Date: Mon, 30 Oct 2000 20:51:32 GMT
Raw View
kanze@gabi-soft.de wrote:
>
> Bernd Strieder <strieder@student.uni-kl.de> writes:
>
> |>  All major OS do lazy allocation these days or they lose.
>
> I'm curious about your "all major OS."  A couple of years ago (four or
> five), I was concerned about this problem; we were writing relatively
> critical software, and this behavior was deemed unacceptable.  As a
> result, I have tested a number of machines.  To date:

The following quickly hacked code (no error checking to speak of, but OK
for the compilers I have access to) shows some things when monitored at
runtime:

#include <iostream>

int main()
{
  std::cout << "Lazy allocation tester" << std::endl;
  char * c = new char[20000000];   // ~20MB, deliberately not checked
  *c = 'a';

  // Touch one byte per 4K page; pause every 100 pages so that the
  // process's memory usage can be inspected with system tools.
  for( int i = 0; i < 1000; ++i ) {
    c[i*4096] = 'c';
    if( i % 100 == 99 ) {
      std::cout << "Lookup memory usage" << std::endl;
      char buffer[80];
      std::cin >> buffer;
    }
  }
}

If you start this application and monitor the amount of free memory in
your system, or the memory usage of the corresponding process, you will
see that amount decreasing, or the process's usage increasing, step by
step as the small app proceeds. At least on SunOS 5.7 on two different
Sun machines, on AIX on an RS/6000, and on Linux 2.2 on i386 I have seen
this effect. As one accesses a page of RAM it is removed from the set of
free ones and allocated to your process, not earlier, and not at the
time operator new is called.

The overall design of most current OSs relies on the feature that pages
of memory are lazily fetched from memory-mapped files or swap space.
Even compiled code and dynamic libraries are concerned. When I say all
major OSs, I can't prove that now, but I'm sure of it, because it is
best practice in OS design, and anything else will produce an OS that
cannot stand up to the competition. The "Working Set Model" is the
theoretical foundation for all of this. It is crucial that just the
currently used part of the virtual memory pages is in RAM. It is crucial
that as few pages as possible are used at one time, and that allocation
is deferred as late as possible. Anything else and it becomes easy to
construct cases where a system sucks badly.

To give an example in a field where I have seen the bits, since the
sources are available: memory allocation on Linux via malloc is usually
implemented in terms of anonymous memory mappings (/dev/zero).
The code of the executable and the dynamic libraries is memory mapped.
If RAM is full, pages are swapped out. If RAM and swap space, which
together give the total amount of virtual memory, are filled, then our
initial problem becomes visible: the system will fail in finding, as
well as in making, room in RAM for a mapped but not yet allocated page
of memory. Looking at it closely we can see that this might happen due
to memory accesses to the heap of an app, due to accessing code in the
executable or dynamic libraries, or, as others have pointed out, due to
stack accesses.

Is it conformant that C++ apps might crash due to failing to execute the
next line of code? This is a closely related problem to the problems
with lazy allocation of heap memory. If one is treated in the standard,
then the others must be, correspondingly. The OS's are usually designed
under the assumption of unlimited virtual memory with some emergency
code to maintain robustness. This should be reflected somehow in the
standard.

After all this, the dilemma remains: the current standard seems to give
guarantees it cannot provide. The OSs that could conceivably give us
sufficient guarantees will try hard not to do so, since major
attainments of the past 20 years would suffer badly. As I pointed out in
my first posting in this thread, the performance, security and
robustness of the system are at stake.

>
>     Solaris:    Does not do lazy allocation, although earlier versions
>                 (through about 2.3 or 2.4) started thrashing like crazy
>                 when the limit was reached.

SunOS 5.7 (Solaris 2.7) does do lazy allocation, or at least can be
configured to do so.

>
>     AIX:        Does not do lazy allocation by default.  Lazy allocation
>                 can be turned on (on a process by process basis) by
>                 setting a specific environment variable.

AIX behaves like Solaris. At least where I tested, a highly loaded
server with 5000 student accounts, 6 processors and typically some 100
users concurrently. I'm sure that this machine is tuned to do its best
at extreme load.

>
>     HP-UX:      Does not do lazy allocation.
>
>     Linux:      From hearsay: the kernel can be configured in both
>                 modes.  How it is normally configured by default is not
>                 too clear, and probably depends on the distributor.
>
>     Windows NT: In my tests, suspends processes when memory gets tight,
>                 and brings up a pop-up box suggesting that the user kill
>                 a few applications.  Presumably, it does something
>                 different in processes started from a non-interactive
>                 environment.  Regretfully, the behavior that I observed
>                 doesn't allow me to say whether it uses lazy allocation
>                 or not.

NT cannot behave differently in the essential points. It asks the
operator to solve the problem instead of trying to cope by itself. As
you noticed, this is not wise for non-interactive sessions. The
behaviour of NT is an indication that our problem is not a problem of
C++ but of the system the C++ virtual machine runs on. The C++ standard
cannot give guarantees about the system it runs on; in the end, the OS
and the operator rule everything on a system. The best that can be
specified there is undefined behaviour.

>
> I would consider that any group which excludes Solaris, AIX, HP-UX and
> probably Linux and Windows NT should not be qualified as "all major OS".

I fear that at least one of us missed the point here. Perhaps there is a
misunderstanding about the terms? I regarded lazy allocation as the
effect that the actual memory usage of a process grows as the memory is
accessed, and not at the point where the process issued the allocation
call. This is the behaviour I have seen on Solaris, AIX, and Linux.
They are the major *nixen. NT cannot deviate from best practice; it
must behave the same way.

Bernd Strieder

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]
[ Note that the FAQ URL has changed!  Please update your bookmarks.     ]





Author: "Marco Dalla Gasperina" <marcodg@home.com>
Date: Tue, 31 Oct 2000 16:46:29 GMT
Raw View
<James.Kanze@dresdner-bank.com> wrote in message
news:8tk9s7$pk1$1@nnrp1.deja.com...
> Everyone seems to claim that Windows NT uses it, although my tests seemed
> to indicate that this was not the case.

Windows NT allows either behaviour, on an allocation-by-allocation basis,
at the discretion of the programmer.  The API function VirtualAlloc()
can be used for lazy or eager (non-lazy) commit.

The standard C/C++ library functions use commit on allocation.  The
stack space is committed lazily.  The OS is smart enough to trap the
page fault on a stack allocation and commit a new page (or pages)
up to the maximum specified when creating the thread.
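
An editorial sketch of the reserve/commit split described above,
assuming the Win32 API; the sizes are arbitrary:

    #include <windows.h>
    #include <cstdio>

    int main()
    {
        const SIZE_T size = 100 * 1024 * 1024 ;     // 100 MB of addresses

        // Reserve address space only: nothing is charged against the
        // pagefile yet, and the pages cannot be touched.
        void* base = VirtualAlloc( 0, size, MEM_RESERVE, PAGE_NOACCESS ) ;
        if ( base == 0 ) {
            std::printf( "reserve failed\n" ) ;
            return 1 ;
        }

        // Commit the first 64 KB: this is where "commit on allocation"
        // happens, and where the call fails if the pagefile cannot back it.
        void* page = VirtualAlloc( base, 64 * 1024, MEM_COMMIT,
                                   PAGE_READWRITE ) ;
        if ( page == 0 ) {
            std::printf( "commit failed\n" ) ;
            return 1 ;
        }
        static_cast< char* >( page )[ 0 ] = 'a' ;   // safe: committed

        VirtualFree( base, 0, MEM_RELEASE ) ;
        return 0 ;
    }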

marco



---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]
[ Note that the FAQ URL has changed!  Please update your bookmarks.     ]





Author: Team-Rocket@gmx.net (Niklas Matthies)
Date: Mon, 30 Oct 2000 19:30:59 GMT
Raw View
On Wed, 25 Oct 2000 18:09:48 GMT, Herb Sutter <hsutter@peerdirect.com> wrote:
> Certain operating systems check memory allocations for success, not at
> allocation time, but on first use. Consider the following code, running
> on such an OS:
>
>   char* p = new char[100000000];
>
>   // ... millions of instructions later ...
>
>   p[99999999] = 'a';
>
> Assume that the request for memory cannot be fulfilled. Questions:
>
> a) Nonconformance: If a C++ implementation's ::operator new[], or its
> generated code for the new-expression, naively wraps the OS allocation
> function, will there be no exception from the new-expression but an
> exception from the expression "p[99999999]"? If so, that's
> nonconforming, right?

There were lengthy discussions about overcommitment (that's what it's
called) on comp.std.c some time ago, since there is basically the same
issue with malloc(). Looking at that discussion might provide some
useful insight. With regard to C, the views ranged between:

(1) An implementation based on an overcommitting operating system is as
conforming as an implementation running on an operating system that
allows users to kill running programs. This is because it's the user's
choice to use an operating system with overcommitment activated. Whether
an implementation takes special steps to prevent overcommitment is a
QoI issue, not a conformance issue.

(2) The standard specifies that malloc() allocates "space", which it
doesn't further define. An implementation which interprets this to mean
to only allocate address space (as opposed to physical storage space) is
hence conforming. (This probably doesn't apply to C++ because it uses
the wording "storage", not "space".)

(3) A malloc() invocation must allocate storage space, which must be
fully accessible afterwards. This is what common sense dictates.
When a program fails or misbehaves due to accesses to overcommitted
memory, then the implementation, for the duration of that program
execution, is not conforming.

(4) A strictly conforming program cannot use malloc(), anyway. This is
because the use of malloc() matches the definition of undefined behavior
and/or unspecified behavior in the C standard. Therefore, after an
invocation of malloc(), anything can happen. (This doesn't apply to C++
either, because it has a different conformance model.)

Hence, for C++, one might choose between (1) and (3).

-- Niklas

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]
[ Note that the FAQ URL has changed!  Please update your bookmarks.     ]





Author: "John Hickin" <hickin@nortelnetworks.com>
Date: Mon, 30 Oct 2000 19:41:19 GMT
Raw View
Herb Sutter wrote:
>
> Certain operating systems check memory allocations for success, not at
> allocation time, but on first use. Consider the following code, running
> on such an OS:
>
>   char* p = new char[100000000];
>
>   // ... millions of instructions later ...
>
>   p[99999999] = 'a';
>
> Assume that the request for memory cannot be fulfilled. Questions:
>
> a) Nonconformance: If a C++ implementation's ::operator new[], or its
> generated code for the new-expression, naively wraps the OS allocation
> function, will there be no exception from the new-expression but an
> exception from the expression "p[99999999]"? If so, that's
> nonconforming, right?

I would tend to agree (but see below). Note that depending on how you
define _exception_ there might be no exception at all. For example, I
define exception as C++ exception and observe the result of assigning 'a'
through p as a raised Unix signal (SIGSEGV or SIGBUS). This is not an
exception the way I defined it and, in fact, it may not be possible to
turn it into one.

>
> b) Conformance: To be conforming, does a C++ implementation have to have
> an ::operator new[], or generate code for the new-expression, that calls
> the native allocation function and then immediately attempts to access
> its first and last objects? Or does it have to test at least one object

No, because the first and last pages of the allocated virtual memory
might have been committed while internal pages may only have been
reserved.

> per page? Clearly the answer will be platform-specific, but I'm curious
> what an implementer has to do to get it right.

It would seem that new would have to have a way to cause the memory to
be committed to the page file before returning to the caller. And since
new was intended to be implementable using malloc() [or at least I think
that this was the intent], it would seem that a version of malloc needs
to be made available which allows the commit as well.
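
One user-level approximation, offered as an editorial sketch rather than
anything a vendor ships or the standard prescribes: replace operator new
with a version that touches one byte per page before returning.  On an
overcommitting system the failure is then still a signal or a kill
rather than bad_alloc, but at least it happens inside the allocation
instead of at some arbitrary later access.  The 4096-byte page size is
an assumed value, and exception specifications are omitted for brevity.

    #include <cstddef>
    #include <cstdlib>
    #include <new>

    static const std::size_t kPageSize = 4096 ;     // assumed, not queried

    void* operator new( std::size_t n )
    {
        void* p = std::malloc( n != 0 ? n : 1 ) ;
        if ( p == 0 )
            throw std::bad_alloc() ;
        // Touch one byte per page (and the last byte) so that any lazily
        // committed pages are forced into existence here and now.
        char* c = static_cast< char* >( p ) ;
        for ( std::size_t i = 0 ; i < n ; i += kPageSize )
            c[ i ] = 0 ;
        if ( n != 0 )
            c[ n - 1 ] = 0 ;
        return p ;
    }

    void operator delete( void* p ) throw()
    {
        std::free( p ) ;
    }

    int main()
    {
        char* p = new char[ 20000000 ] ;    // either fully committed, or
        delete[] p ;                        // the process dies right here
        return 0 ;
    }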

>
> c) Which known implementations fall under category (a), and which under
> category (b)?


I would guess that both Linux and some versions of AIX (those that can
generate SIGDANGER) fall into (a).

One thing that the standard does not address, and which might influence
the discussion, is what happens if just trying to commit a new page of
stack causes a similar problem? AFAIK both MT Un*x and WinNT do not
commit the entire stack and it is possible to generate a nonrecoverable
page fault even though the stack isn't completely used up.

Another point: the standard doesn't talk about virtual memory and so the
fact that new (or creation of a stack) causes, on some OS, a bunch of
virtual addresses to be mapped and some subset thereof to be committed
doesn't really matter. One considers that new succeeded. Does the
standard say anywhere that storage returned by new may be accessed
without other problems? I can twist my interpretation of section
3.7.3.1/2 because the meaning of the words _if it is successful_ doesn't
seem to be cast in stone.

Again: much of the time we will use classes with well defined
initialization code (all data members initialized through writes to main
memory). In these cases the constructor will fail so an object won't be
created. If we can arrange a way to fail the constructor by translating
the access violation into a synchronous exception we may be in good
enough shape. This may, however, be difficult to arrange on some OS, but
perhaps not impossible.

I think that I'll change my mind and cast my vote that (a) isn't really
nonconformance.


Regards, John.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]
[ Note that the FAQ URL has changed!  Please update your bookmarks.     ]





Author: kanze@gabi-soft.de
Date: Mon, 30 Oct 2000 19:55:27 GMT
Raw View
Herb Sutter <hsutter@peerdirect.com> writes:

|>  Certain operating systems check memory allocations for success, not at
|>  allocation time, but on first use. Consider the following code, running
|>  on such an OS:

|>    char* p = new char[100000000];

|>    // ... millions of instructions later ...

|>    p[99999999] = 'a';

|>  Assume that the request for memory cannot be fulfilled.

Assuming that 100000000 fits in a size_t, if the request cannot be
fulfilled, the new must throw bad_alloc.  Any other behavior is
non-conforming.  Two comments are in order, though:

1. I've used systems where 100000000 wouldn't fit in a size_t.  Since
   size_t is required to be an unsigned type, this will allocate the
   value of 100000000 modulo ~static_cast< size_t >( 0 )+1.  Since the
   same modulo will apply to the index operator in the assignment, the
   assignment is probably perfectly legal.  On the other hand, I
   seriously doubt that the program will work as expected.

2. There are legitimate uses for systems using lazy commit on
   allocation.  This is not standard conforming.  (I think that there
   was even a request for interpretation to the C committee which
   clarified this with regards to malloc.)  On the other hand, at least
   two systems (AIX and Linux) support it as an extension.  In the case
   of AIX, the extension is off by default.  In the case of Linux, it
   varies.  (In the case of Linux, I am basing my comments on what I
   have been told, rather than first hand experience.)

   If a system uses lazy commit, I would expect some sort of system
   defined signal on the assignment.  This is purely implementation
   defined, however, since this behavior is *NOT* conform, and is only
   offered as an implementation defined extension.

|>  Questions:

|>  a) Nonconformance: If a C++ implementation's ::operator new[], or its
|>  generated code for the new-expression, naively wraps the OS allocation
|>  function, will there be no exception from the new-expression but an
|>  exception from the expression "p[99999999]"? If so, that's
|>  nonconforming, right?

I don't know what you mean by "naively wraps the OS allocation
function...".  The two compilers I have handy both generate a direct
call to operator new, with 100000000 as parameter.  And frankly, I can't
see why a compiler would do anything else; it might add a few bytes, but
I can't see where that would effectively change anything.

I do know of problems when allocating large arrays of int or double.
For example, if I try to allocate 1073741824 ints on my machine, the new
returns a valid pointer, both with g++ and Sun CC.  Trying to access the
memory generally results in a core dump, however.

This is a common compiler error.  The compiler naively multiplies the
number of elements by sizeof(int) (here, 4), and requests the result.
Of course, on my 32 bit machine, 1073741824*4 == 0.  According to the
standard, of course, I should get a bad_alloc exception.  There is no
overflow in my code; I asked for 1073741824 int's.  The system couldn't
give them to me.  (There is no way the system could ever give them to
me, given my hardware constraints.)  The standard specifies bad_alloc in
such cases.
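
The check itself is tiny; an editorial sketch of what the generated code
(or a helper it calls) could do before handing the byte count to the
allocation function.  The function name is made up, not taken from any
compiler:

    #include <cstddef>
    #include <new>

    // Refuse any element count whose byte size would wrap around size_t,
    // so that 1073741824 ints on a 32-bit system gives bad_alloc instead
    // of a (successful) request for 0 bytes.  Returns raw storage, which
    // is fine for a sketch with POD element types.
    template< typename T >
    T* checked_array_new( std::size_t count )
    {
        const std::size_t maxCount =
            static_cast< std::size_t >( -1 ) / sizeof( T ) ;
        if ( count > maxCount )
            throw std::bad_alloc() ;
        return static_cast< T* >( ::operator new( count * sizeof( T ) ) ) ;
    }

    // usage:  int* p = checked_array_new< int >( 1073741824 ) ;  // throws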

The problem becomes more complicated if I have something like:

    void* operator new( size_t , int poolId ) ;

    int* p = new ( 0 ) int[ 1073741824 ] ;

According to the standard, the implementation must signal an error in
this case.  But what?  Throw bad_alloc?  Return a null pointer?  The
implementation has no way of knowing how the error is to be reported.

Dave Abrahams has pointed out that there is a note in 5.3.4/13 to the
effect that "unless an allocation function is declared with an empty
exception specification, it indicates failure to allocate storage by
throwing a bad_alloc exception, ..."  Since the compiler can see the
exception specification, it can thus know whether to throw bad_alloc or
to return null.  IMHO, there are two problems with this:
  - notes aren't normative, and
  - what about cases like new( voidPtr )int[ ... ], which by definition
    never fail?

|>  b) Conformance: To be conforming, does a C++ implementation have to have
|>  an ::operator new[], or generate code for the new-expression, that calls
|>  the native allocation function and then immediately attempts to access
|>  its first and last objects? Or does it have to test at least one object
|>  per page? Clearly the answer will be platform-specific, but I'm curious
|>  what an implementer has to do to get it right.

An implementor has to either throw bad_alloc, or return a pointer which
can be accessed for all elements.  For once, the standard is extremely
clear.  (And I wish I knew the exact history of C with regards to malloc
and this problem.  I'm almost sure that there was a formal question
asked, and a formal response to the effect that the C standard meant
what it said.)

What does that mean if the OS overcommits, and you cannot turn it off?
It means that if you want a conforming C++ (or C) implementation, you
have to fix this error in the OS.

|>  c) Which known implementations fall under category (a),

Supposing that by "naively wrapping" you meant the problem with
allocating a large array of ints that I described above, all of them.  At
least, I don't know of any that get it right.

This isn't the first time.  The ARM explicitly said that static
variables were destructed in the reverse order of construction, but no
implementation got it right, until the committee explicitly voted that
the standard meant what it said, and not something else.  Frankly, I
don't expect too
many compiler implementors to get heated up about this either, until it
is made clear to them that the current situation is unacceptable.

|>  and which under
|>  category (b)?

Since falling under category (b) means that you don't have a C++
implementation, the obvious answer is that no C++ implementations fall
under (b):-).  Seriously, the two I know of only fall under (b) as an
option.  I don't know the exact situation in Linux as to when the option
is on and when it is off, but with AIX, the option is off by default,
and is only activated if the user sets a specific environment variable.
Presumably, if the user knows enough to set the variable, he knows
enough to understand the consequences.  As I said, there are situations
where such behavior is not only acceptable, but even preferable.

--
James Kanze                               mailto:kanze@gabi-soft.de
Conseils en informatique orientée objet/
                   Beratung in objektorientierter Datenverarbeitung
Ziegelhüttenweg 17a, 60598 Frankfurt, Germany Tel. +49(069)63198627

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]
[ Note that the FAQ URL has changed!  Please update your bookmarks.     ]





Author: kanze@gabi-soft.de
Date: Mon, 30 Oct 2000 20:03:08 GMT
Raw View
Bernd Strieder <strieder@student.uni-kl.de> writes:

|>  > In summary, there are at least four basic options to handle memory
|>  > allocations, of which two are without any doubt conformant with
|>  > the C++ standard:

|>  > - commit on allocation

|>  Can be sort of simulated by the implementation. The problem is, when
|>  this feature becomes broadly available, every application programmer
|>  will use it, since it prevents from crashing.

Precisely.  This feature is standard in most modern OS's, and it *IS*
used.  The most insistent example is AIX, where the feature was
practically imposed on IBM by customers who needed it.

People develop code in a variety of environments, for a variety of
applications.  I notice, for example, that you have a University email
address.  I can very much understand that something like lazy commit
would be an advantage on a machine on which students program, or on
machines running large experimental simulations.  There are doubtlessly
other cases as well.

But I write programs which run telephone switching systems, or serve up
information to bank tellers.  On such systems, every application
programmer *does* use commit on allocation -- these are the customers
who pushed for the change at IBM.  We allocate memory on an as needed
basis, and when we get a valid pointer, we expect it to work.  We are
exceedingly unhappy about not being able to control stack overflow as
well, although realistically, unless recursive algorithms are involved
(and they generally aren't), this is not a real source of errors.

Outside of Universities or research institutions, I think that these are
probably the majority of the applications.  And for better or worse,
most of them are written by people who have never heard of lazy commit.
Which means that not only must lazy commit be optional, it must be off
by default.  (As you say, it is an optimization.  It makes some systems
run faster, but it *never* improves correctness.  Like all optimization
measures, it should only be undertaken when necessary, and not
automatically.)

|>  But this defeats the
|>  optimization resulting from lazy allocation totally. Overall
|>  performance of systems would be lower than nowadays, until
|>  programmers are educated to allocate and immediately use that amount
|>  of memory they are just now about to use.

But that's what all the programmers I know of do.  We allocate what we
need, and no more.

|>  In contrary imagine the
|>  situation where operator new or malloc() truely allocate pages of
|>  memory.

You mean like Solaris, AIX and HP-UX?  Not very hard to imagine, since
I've actually worked on all three.

|>  Then an application programmer could easily write a loop
|>  allocating all the memory in the system, without leaving the system
|>  any chance to identify and drop the responsible process(es).

Well, the above three (and most others) have ulimits -- the sysadmin
can restrict the amount of memory for a given user's processes.  And if
I wanted to hang a system in this manner, I'd start allocating disk, not
memory.  Memory is freed whenever my process stops, where as disk...

But you talk as if the application programmer was the enemy.  This may
be true in cases like a University, where you allow all students
access.  But I can assure you that on telephone switches and bank
information servers, all of the programs running have been written or at
least configured by people interested in the good operation of the
system.  The problem you describe simply doesn't exist in industry, or
at least, it is very exceptional.  (There is always the chance of a
disgruntled employee, or some such.  But generally, an insider already
has so many ways to mess up the system -- introducing subtle bugs in the
code he is working on, etc. -- that just running out of memory isn't
worth considering.)

|>  This is
|>  deemed a security risk nowadays. An application that allocates
|>  memory is looked at as evil: it tries to steal memory to prevent
|>  others from getting it. We know that we aren't bad guys, but the OS?
|>  An OS is not just a library we call, in fact it rules.

I'm not sure what the security risk is.  An application that allocates
memory, or any other resources, that it doesn't need is not appreciated.
But that is precisely because allocated resources *are* committed.

|>  Compare that with the situation of one person providing a resource
|>  to a crowd of others. The provider asks round robin: "how much do
|>  you need?"  "20" "Oh sorry, there are no 20 left" or "Oh yes, here
|>  they are".

|>  Another organisation would be: "how much do you need" "20" "Look over
|>  there at the pile, maybe it contains 20. Fetch them as you need, but
|>  if you come over to fetch one when it's empty, you will be lost."

|>  The second choice is wiser, since it makes the crowd try hard
|>  to avoid the fatal outage, or they get used to being lost at
|>  times.  There is nothing better the provider can do, if it is not
|>  sure that no one fetches more than actually needed.

I don't understand your logic.  It is the first which encourages more
responsible memory use.  If I know that I will get thrown out any time I
cannot get the memory I ask for, I will try and ask for as little as
possible, to avoid getting thrown out unnecessarily.

|>  > - wait until memory gets freed

|>  No option for fair scheduling.

Agreed.

|>  > The third and fourth might not be conformant, and even if they are,
|>  > I'd anyway use them as a last resort only, unless explicitly
|>  > specified for the given process:

|>  > - kill the process
|>  > - kill a random process

|>  > (Both are the same, as far as conformance is concerned: Program
|>  > termination at a random point is not any more conformant just
|>  > because it happens only when accessing a new page)

|>  Killing induced by the OS is a kind of failure of the system the
|>  implementation of C++ runs on. I don't think that a C++ standard
|>  could contain optimistic claims about those failures. Every system
|>  has possible points of failure. Every application that needs to
|>  overcome this problem has to take measures at the application design
|>  level. What about e.g. power loss?

Any number of things.  Most of the time, we run on an uninterruptible
power supply, so the problem is generally irrelevant.  (Note that
terminating a multithreaded process at an arbitrary time can, in many
cases, cause data corruption on the disk.  It is a definite no-no on
servers.)

Finally, there is an option not mentioned.  Windows NT, up to a certain
point, will try and allocate space on disk, and use that to increase the
virtual memory space.  (Beyond that limit, it pops up a dialog box
requesting the user to terminate some processes.  Not a bad solution for
a desktop machine, but rather useless in an unmanned server.)

    [...]
|>  What I could imagine is to differentiate the real implementations of
|>  C++ into two categories in the standard. One, where operator new
|>  works as expected, and another one, where operator new almost
|>  never throws, but where the system can fail due to memory outages,
|>  which is overcome by dropping the process. The second category is
|>  the most widespread one these days and, to say it clearly, it is not
|>  secure for any process, but it tends to be very good natured, gives
|>  high performance and tries to make it easy to get performance.

I can see very good use for this second category, but although you say
it is the most widespread one these days, I know of no OS which
implements it by default in its current version.  (I know of it from
older versions of AIX.)

It is also unusable for most applications.

|>  My last option is leaving the standard as it is in this respect, but
|>  to have a comment somewhere about reality to warn implementors and
|>  users.

I also see no need to modify the standard in this regard: as it stands,
it definitely forbids lazy commit, and nothing forbids implementors
from offering lazy commit as a non-standard extension where appropriate.

-- 
James Kanze                               mailto:kanze@gabi-soft.de
Conseils en informatique orientée objet/
                   Beratung in objektorientierter Datenverarbeitung
Ziegelhüttenweg 17a, 60598 Frankfurt, Germany Tel. +49(069)63198627

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]
[ Note that the FAQ URL has changed!  Please update your bookmarks.     ]





Author: Herb Sutter <hsutter@peerdirect.com>
Date: 2000/10/25
Raw View
Certain operating systems check memory allocations for success, not at
allocation time, but on first use. Consider the following code, running
on such an OS:

  char* p = new char[100000000];

  // ... millions of instructions later ...

  p[99999999] = 'a';

Assume that the request for memory cannot be fulfilled. Questions:

a) Nonconformance: If a C++ implementation's ::operator new[], or its
generated code for the new-expression, naively wraps the OS allocation
function, will there be no exception from the new-expression but an
exception from the expression "p[99999999]"? If so, that's
nonconforming, right?

b) Conformance: To be conforming, does a C++ implementation have to have
an ::operator new[], or generate code for the new-expression, that calls
the native allocation function and then immediately attempts to access
its first and last objects? Or does it have to test at least one object
per page? Clearly the answer will be platform-specific, but I'm curious
what an implementer has to do to get it right.

c) Which known implementations fall under category (a), and which under
category (b)?

Herb

---
Herb Sutter (mailto:hsutter@peerdirect.com)

CTO, PeerDirect Inc. (http://www.peerdirect.com)
Contributing Editor, C/C++ Users Journal (http://www.cuj.com)

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]
[ Note that the FAQ URL has changed!  Please update your bookmarks.     ]






Author: Barry Margolin <barmar@genuity.net>
Date: 2000/10/25
Raw View
In article <hn0evsof3re4fm2sblph0ngg69espt3qai@4ax.com>,
Herb Sutter  <hsutter@peerdirect.com> wrote:
>Certain operating systems check memory allocations for success, not at
>allocation time, but on first use. Consider the following code, running
>on such an OS:

This has been discussed ad nauseam in comp.std.c regarding lazy malloc(),
and I put forth that the issues are no different for C++'s operator new.  I
suggest you search deja.com for those threads, and you'll find the answers
to your questions (in particular, which implementations behave which way).

--
Barry Margolin, barmar@genuity.net
Genuity, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]
[ Note that the FAQ URL has changed!  Please update your bookmarks.     ]






Author: joerg.barfurth@attglobal.net (Joerg Barfurth)
Date: 2000/10/25
Raw View
Herb Sutter <hsutter@peerdirect.com> wrote:

> Certain operating systems check memory allocations for success, not at
> allocation time, but on first use. Consider the following code, running
> on such an OS:
>
>   char* p = new char[100000000];
>
>   // ... millions of instructions later ...
>
>   p[99999999] = 'a';
>
> Assume that the request for memory cannot be fulfilled. Questions:
>
> a) Nonconformance: If a C++ implementation's ::operator new[], or its
> generated code for the new-expression, naively wraps the OS allocation
> function, will there be no exception from the new-expression but an
> exception from the expression "p[99999999]"? If so, that's
> nonconforming, right?

IMHO it depends on the exact meaning you attach to 'allocation'. One
might argue that something (at least a huge chunk of the address space)
has been allocated to you. So not throwing an exception may or may not
be nonconforming - I'm not at all sure.

At least that OS seems to be content with a loose definition of the
meaning of 'allocation' ;-)

OTOH it would certainly be non-conforming if there was an exception (in
the C++ sense) from the expression "p[99999999]" (assuming 'p' is a
builtin pointer).

But an OS/a C++-implementation would probably signal problems to
'physicalize' a virtual memory page not by 'throw'ing an exception, but
rather by using signals (as in <csignal> - afair these occur under
implementation-defined circumstances) or other means.

BTW: Would other (undefined) behavior be conforming here under a label
of "implementation limits exceeded"?

> b) Conformance: ...

Of course, if a) isn't non-conforming after all, this means you can't
protect your application reliably from blowing the limits off the free
store - and you can't keep it running stably in a portable way as soon
as signals enter into play. Hm..

> c) Which known implementations fall under category (a), and which under
> category (b)?

Still, there seem to be reasons in favor of lazy allocation (why else
would such OSs choose it as their default mode?). So I wonder whether it
would be C++-like to force implementers to go to great lengths in order
to get around it. After all, the design of C and C++ always seemed to
aim at supporting platforms 'as they are'.

Just my 0.02 euro-cent, Jörg
-- 
Jörg Barfurth                         joerg.barfurth@attglobal.net
-------------- using std::disclaimer; -----------------------------
Download:     StarOffice 5.2 at       http://www.sun.com/staroffice
Participate:  OpenOffice now at       http://www.OpenOffice.org

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]
[ Note that the FAQ URL has changed!  Please update your bookmarks.     ]






Author: Bernd Strieder <strieder@student.uni-kl.de>
Date: Thu, 26 Oct 2000 14:36:30 GMT
Raw View
Herb Sutter wrote:
>
> Certain operating systems check memory allocations for success, not at
> allocation time, but on first use. Consider the following code, running
> on such an OS:
>
>   char* p = new char[100000000];
>
>   // ... millions of instructions later ...
>
>   p[99999999] = 'a';
>
> Assume that the request for memory cannot be fulfilled. Questions:
>
> a) Nonconformance: If a C++ implementation's ::operator new[], or its

>
> b) Conformance: To be conforming, does a C++ implementation have to have

>
> c) Which known implementations fall under category (a), and which under
> category (b)?
>

I asked a similar question in a newsgroup of the OS crowd. Together with
some background my answer is as follows:

An OS is tuned to make the best of its job, scheduling lots of
processes, sharing scarce resources, being fair. At this level it
doesn't matter what a certain program has to be conformant to. The OS
rules, and processes must be killed if the OS decides to do so, whether
the sources of the process are conformant or not.

A common assumption nowadays is unlimited memory. To be fair, the
currently available amount of memory is not made public. Processes
should allocate what they need and not make this dependent on the
currently available amount (which is in permanent flux), since this
would make it difficult to be fair to as many as possible. The result
is that the OS pretends memory is unlimited.

Another common point of optimization, which has big effects, is reusing
initially unused memory regions of processes for other processes. This
is simulated by separating allocation of addresses from allocation of
memory pages. Page allocation is done lazily. This strategy maximizes
the amount of memory available to the system. It has become common to
rely on that feature. Too many applications allocate e.g. large internal
heaps without any guarantee of using them. Whenever lazy allocation
fails, the system has not enough memory for all the processes currently
running, and at least one of them must be killed.

All major OS do lazy allocation these days or they lose. So your option
a) nonconformance is very common.

I have seen suggestions similar to b) for how a C++ implementation could
avoid this problem on *nix platforms: have operator new access all the
pages it allocates, and catch the signal SIGSEGV during that time. There
are ways to continue by throwing the corresponding exception. This has
many implications, interactions and some hacks; I doubt we will see it.
Another quick fix is giving advice to the users that correct behaviour
of the C++ app is subject to sufficient memory. That one applies to all
apps in all languages.
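
A sketch of that hack, under strong assumptions: it is neither
conforming nor portable, and it only helps where a failed lazy commit is
actually delivered to the faulting process as SIGSEGV -- on many systems
the kernel simply kills some process instead and no handler ever runs:

  #include <setjmp.h>     // sigsetjmp/siglongjmp (POSIX)
  #include <signal.h>     // sigaction (POSIX)
  #include <unistd.h>     // sysconf
  #include <cstdlib>
  #include <new>

  static sigjmp_buf probe_env;

  extern "C" void probe_handler(int)
  {
      siglongjmp(probe_env, 1);           // abandon the probe loop
  }

  void* new_with_probe(std::size_t size)
  {
      void* p = std::malloc(size ? size : 1);
      if (p == 0)
          throw std::bad_alloc();

      struct sigaction sa, old_sa;
      sa.sa_handler = probe_handler;
      sigemptyset(&sa.sa_mask);
      sa.sa_flags = 0;
      sigaction(SIGSEGV, &sa, &old_sa);

      bool failed = false;
      if (sigsetjmp(probe_env, 1) == 0) {
          std::size_t page =
              static_cast<std::size_t>(sysconf(_SC_PAGESIZE));
          volatile char* q = static_cast<char*>(p);
          for (std::size_t i = 0; i < size; i += page)
              q[i] = 0;                   // first touch forces a real page
      } else {
          failed = true;                  // a touch faulted: commit failed
      }
      sigaction(SIGSEGV, &old_sa, 0);     // restore the previous handler

      if (failed) {
          std::free(p);
          throw std::bad_alloc();
      }
      return p;
  }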

The essence is: What could a C++ standard do about almost all real
systems possibly showing undefined behaviour (standardese meaning)
anytime anywhere? The OS crowd knows that their sloppy memory handling
is evil for the robustness of individual applications, but almost all
need it for simplicity, performance and robustness of their OS. There
is an obvious and hard tradeoff.

I'm sure there is reluctance to include too much about the environment
of a running C++ app in the standard. I don't even know if there is
anything about it. What should go into the standard? Perhaps some
pessimistic stuff that corresponds to the current real world: "A
conformant implementation is allowed to show undefined behaviour on any
first-time memory accesses at runtime, if the runtime environment gives
the implementation no chance to handle its possible errors due to these
accesses." Hard time for implementors.

Bernd Strieder

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]
[ Note that the FAQ URL has changed!  Please update your bookmarks.     ]





Author: Christopher Eltschka <celtschk@physik.tu-muenchen.de>
Date: Thu, 26 Oct 2000 18:13:42 GMT
Raw View
Bernd Strieder wrote:

[...]

> Another common point of optimization, which has big effects, is reusing
> initially unused memory regions of processes for other processes. This
> is simulated by separating allocation of addresses from allocation of
> memory pages. Page allocation is done lazily. This strategy maximizes
> the amount of memory available to the system. It has become common to
> rely on that feature. Too many applications allocate e.g. large internal
> heaps without any guarantee of using them. Whenever lazy allocation
> fails, the system has not enough memory for all the processes currently
> running, and at least one of them must be killed.

I don't see that. The process could also be stopped until some
other process frees the memory page (f.ex. by being terminated
the normal way).
Note that waiting for a (possibly long) period is quite conformant.

An OS could also use mixed strategies (like waiting for a certain
time, then terminating, or even making the strategy a property of
the process, maybe with a priority system)

Of course, the system would have to check for the danger of a
"memory deadlock", or for especially important processes which
may not wait. For the latter, a per-process "commit on allocation"
option could work.

In summary, there are at least four basic options to handle memory
allocations, of which two are without any doubt conformant with
the C++ standard:

- commit on allocation
- wait until memory gets freed

The third and fourth might not be conformant, and even if they are, I'd
anyway use them as a last resort only, unless explicitly specified
for the given process:

- kill the process
- kill a random process

(Both are the same, as far as conformance is concerned: Program
termination at a random point is not any more conformant just
because it happens only when accessing a new page)

However, given that calloc clears the memory (and therefore
has to commit it), I think that at least the first option
(commit on allocation) can be achieved from standard C++:
Just replace operator new with a version which calls calloc
to allocate memory.
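
A minimal sketch of that replacement, assuming calloc really writes the
zeros (and so touches every page) rather than handing back copy-on-write
zero pages -- an assumption a sufficiently clever OS can defeat, as a
later post points out:

  #include <cstdlib>
  #include <new>

  void* operator new(std::size_t size)
  {
      // Zero-filling forces every byte to be written; allocate at
      // least one byte so zero-sized requests still get a unique pointer.
      void* p = std::calloc(size ? size : 1, 1);
      if (p == 0)
          throw std::bad_alloc();
      return p;
  }

  void operator delete(void* p) throw()
  {
      std::free(p);
  }

  // The default operator new[] forwards to operator new, so array
  // allocations are covered as well.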

[...]

> The essence is: What could a C++ standard do about almost all real
> systems possibly showing undefined behaviour (standardese meaning)
> anytime anywhere? The OS crowd knows that their sloppy memory handling
> is evil for robustness of singular applications but almost all need it
> for simplicity, performance and robustness of their OS. There is an
> obvious and hard tradeoff.

Is there any OS which uses the waiting strategy (possibly
as part of a mixed strategy)? IMHO waiting as default, and
only killing in an emergency (esp. "memory deadlock"), would make
a more robust strategy. And waiting is (in contrast to
killing) without doubt conformant with the C++ standard
(and probably most - if not all - other language standards).

[...]

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]
[ Note that the FAQ URL has changed!  Please update your bookmarks.     ]





Author: Dennis Yelle <dennis51@jps.net>
Date: Thu, 26 Oct 2000 20:14:39 GMT
Raw View
Christopher Eltschka wrote:
>
> Bernd Strieder wrote:
>
> [...]
>
> > Another common point of optimization, which has big effects, is reusing
> > initially unused memory regions of processes for other processes. This
> > is simulated by separating allocation of addresses from allocation of
> > memory pages. Page allocation is done lazily. This strategy maximizes
> > the amount of memory available to the system. It has become common to
> > rely on that feature. Too many applications allocate e.g. large internal
> > heaps without any guarantee of using them. Whenever lazy allocation
> > fails, the system has not enough memory for all the processes currently
> > running, and at least one of them must be killed.
>
> I don't see that. The process could also be stopped until some
> other process frees the memory page (f.ex. by being terminated
> the normal way).
> Note that waiting for a (possibly long) period is quite conformant.

I don't think waiting is a good solution.
As a practical matter, waiting for a long time,
where "long time" is longer than 2 or 3 seconds, can be just as
bad as, or even worse than, termination.

At least with termination, the programmer has a chance of finding
out what went wrong and fixing it at a higher level.
Arbitrary long delays make even this impossible.

Dennis Yelle
--
I am a computer programmer and I am looking for a job.
There is a link to my resume here:
http://table.jps.net/~vert/

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]
[ Note that the FAQ URL has changed!  Please update your bookmarks.     ]





Author: "Balog Pal \(mh\)" <pasa@lib.hu>
Date: Fri, 27 Oct 2000 13:29:06 GMT
Raw View
Christopher Eltschka wrote in message
<39F87165.E678EF88@physik.tu-muenchen.de>...

>I don't see that. The process could also be stopped until some
>other process frees the memory page (f.ex. by being terminated
>the normal way).
>Note that waiting for a (possibly long) period is quite conformant.
>
>An OS could also use mixed strategies (like waiting for a certain
>time, then terminating, or even making the strategy a property of
>the process, maybe with a priority system)
>
>Of course, the system would have to check for the danger of a
>"memory deadlock", or for especially important processes which
>may not wait. For the latter, a per-process "commit on allocation"
>option could work.


Yep, a likely strategy may be to put those processes offline, removing all
their paged memory to the page file. It can be a working strategy under two
assumptions: 1) the system processes always fit in physical memory, leaving
a little room, and 2) you have reserve in the page file equal to the physical
memory. Certainly that can be tuned toward some target values.
In the problematic case the user can be asked to resolve the memory conflicts
by allowing some processes to run, and close. If remotely killing some
processes means disaster (who likes a corrupt database?) while temporarily
going out of service does not, this can be a valuable strategy.

>In summary, there are at least four basic options to handle memory
>allocations, of which two are without any doubt conformant with
>the C++ standard:
>
>- commit on allocation

IMHO that is still the cleanest way: the system should have distinct API
calls to get committed and lazy-allocated memory, so that everyone can ask
the proper pool. C++ new should then use the former, but the user can call
the other API (certainly wrapped in some class for portability reasons) to
get the latter.
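
A hypothetical sketch of that shape; committed_alloc(), lazy_alloc() and
pool_free() are invented stand-ins for whatever distinct calls a system
might offer, not real APIs:

  #include <cstdlib>
  #include <new>

  // Stand-ins for the two hypothetical OS calls (invented names).  Here both
  // are backed by malloc so the sketch is self-contained; on a real system
  // the first would commit up front, the second would defer to first touch.
  void* committed_alloc(std::size_t n) { return std::malloc(n); }
  void* lazy_alloc(std::size_t n)      { return std::malloc(n); }
  void  pool_free(void* p)             { std::free(p); }

  // Portability wrapper around the lazy pool, as suggested above.
  class LazyBuffer {
  public:
      explicit LazyBuffer(std::size_t n) : p_(lazy_alloc(n)) {
          if (p_ == 0) throw std::bad_alloc();
      }
      ~LazyBuffer() { pool_free(p_); }
      void* get() const { return p_; }
  private:
      void* p_;
      LazyBuffer(const LazyBuffer&);             // not copyable (C++98 style)
      LazyBuffer& operator=(const LazyBuffer&);
  };

  // operator new draws from the committed pool.
  void* operator new(std::size_t n)
  {
      void* p = committed_alloc(n ? n : 1);
      if (p == 0) throw std::bad_alloc();
      return p;
  }

  void operator delete(void* p) throw() { pool_free(p); }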

>- wait until memory gets freed


>The third and fourth might not be conformant, and even if they are, I'd
>anyway use them as a last resort only, unless explicitly specified
>for the given process:
>
>- kill the process
>- kill a random process


On interactively used systems those can be tied to confirmation, so at least
a human could be blamed. ;-)

>However, given that calloc clears the memory (and therefore
>has to commit it), I think that at least the first option
>(commit on allocation) can be achieved from standard C++:
>Just replace operator new with a version which calls calloc
>to allocate memory.


Some really smart systems could easily ignore the trick. When the system is
about to drop pages to the page file it can examine the content. If it is
zero-filled the system can decide to just forget (uncommit) the page, then
rebuild it when accessed the next time.  (Or apply compression, etc. The
whole paging is transparent to the user level, so the question is not really
which pages are where and in what form at a given moment, but how much
memory the system shows to its clients. If it is conservative and presents
only the amount it can present with arbitrary content at any time, no
problem can occur later. If it is tuned for other assumptions, it will
behave much better overall, but with the possible problem on occasion.)

At least two things tend to force the latter strategy: stack allocation and
code/resource segments. Every process and thread started generally obtains a
huge stack, a megabyte or more, then maybe uses just a few kilobytes of it.
Also, code in modules is expected to be read-only by the system, so if
another instance asks for the same, it just gets a shared copy, and modify
attempts activate copy-on-write. If all those things got their full memory
precommitted we could run considerably fewer programs on the same
physical+pagefile configuration. Certainly there may be other things.

Paul




======================================= MODERATOR'S COMMENT:
 Please do not allow any further discussion to drift into pure OS design choices.  Posts regarding the interaction between the C++ Standard and OS design/implementation/standards are, of course, very welcome.


---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]
[ Note that the FAQ URL has changed!  Please update your bookmarks.     ]





Author: Pierre Baillargeon <pb@artquest.net>
Date: Fri, 27 Oct 2000 13:29:58 GMT
Raw View
Christopher Eltschka wrote:
>
> The third and fourth might not be conformant, and even if they are, I'd
> anyway use them as a last resort only, unless explicitly specified
> for the given process:
>
> - kill the process
> - kill a random process
>

If they are not conformant, does it mean any OS with a kill command (or
"end task" in the Task Manager under Windows NT) is non-conformant? What
is the difference between an OS-triggered kill and a user-triggered one?
Note: the user killing the process may be different from the one who
started it.

While my opinion has no weight in this matter, I don't think any event
that is outside of the language should affect conformance. Otherwise,
where do you stop?

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]
[ Note that the FAQ URL has changed!  Please update your bookmarks.     ]





Author: Bernd Strieder <strieder@student.uni-kl.de>
Date: Fri, 27 Oct 2000 17:09:48 GMT
Raw View
Christopher Eltschka wrote:
>
> Bernd Strieder wrote:
>
> [...]
>
> > [...]
> > heaps without any guarantee of using them. Whenever lazy allocation fails,
> > the system has not enough memory for all the processes currently
> > running, and at least one of them must be killed.
>
> I don't see that. The process could also be stopped until some
> other process frees the memory page (f.ex. by being terminated
> the normal way).
> Note that waiting for a (possibly long) period is quite conformant.

Waiting is not an option; it would not at all result in a fair schedule.
Without freeing some memory in that situation, a deadlock is the
immediate result if all processes try to allocate even a bit. Freeing
memory induced by the OS means dooming at least one process.

>
> An OS could also use mixed strategies (like waiting for a certain
> time, then terminating, or even making the strategy a property of
> the process, maybe with a priority system)

Scheduling is difficult enough already.

>
> Of course, the system would have to check for the danger of a
> "memory deadlock", or for especially important processes which
> may not wait. For the latter, a per-process "commit on allocation"
> option could work.

The deadlock problem would make scheduling more difficult, while it is
time critical already.

>
> In summary, there are at least four basic options to handle memory
> allocations, of which two are without any doubt conformant with
> the C++ standard:
>
> - commit on allocation

Can be sort of simulated by the implementation. The problem is, when
this feature becomes broadly available, every application programmer
will use it, since it prevents the program from crashing. But this
defeats the optimization resulting from lazy allocation totally. Overall
performance of systems would be lower than nowadays, until programmers
are educated to allocate and immediately use that amount of memory they
are just now about to use. By contrast, imagine the situation where
operator new or malloc() truly allocate pages of memory. Then an
application programmer could easily write a loop allocating all the
memory in the system, without leaving the system any chance to identify
and drop the responsible process(es). This is deemed a security risk
nowadays. An application that allocates memory is looked at as evil: it
tries to steal memory to prevent others from getting it. We know that we
aren't bad guys, but the OS? An OS is not just a library we call, in
fact it rules.

Compare that with the situation of one person providing a resource to a
crowd of others. The provider asks round robin: "how much do you need?"
"20" "Oh sorry, there are no 20 left" or "Oh yes, here they are".

Another organisation would be: "how much do you need" "20" "Look over
there at the pile, maybe it contains 20. Fetch them as you need, but if you
come over to fetch one when it's empty, you will be lost."

The second choice is wiser, since it makes the crowd try hard to
avoid the fatal outage, or they get used to being lost at times.
There is nothing better the provider can do, if it is not sure that no
one fetches more than actually needed.


> - wait until memory gets freed

No option for fair scheduling.

>
> The third and fourth might not be conformant, and even if they are, I'd
> anyway use them as a last resort only, unless explicitly specified
> for the given process:
>
> - kill the process
> - kill a random process
>
> (Both are the same, as far as conformance is concerned: Program
> termination at a random point is not any more conformant just
> because it happens only when accessing a new page)

Killing induced by the OS is a kind of failure of the system the
implementation of C++ runs on. I don't think that a C++ standard could
contain optimistic claims about those failures. Every system has
possible points of failure. Every application that needs to overcome
this problem has to take measures at the application design level. What
about e.g. power loss? Why do e.g. database systems have to do their own
journalling? Is a failing runtime environment not conformant? If the
standard contained implementable assurances about crashing, then C++
would be the choice of the whole DB crowd forever.

>
> However, given that calloc clears the memory (and therefore
> has to commit it), I think that at least the first option
> (commit on allocation) can be achieved from standard C++:
> Just replace operator new with a version which calls calloc
> to allocate memory.

This moves the point of lazy allocation failure to the calloc call. And
BTW calloc() does too much, accessing one byte in every memory page
suffices.
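
For illustration, the lighter-weight probe is just a loop that writes
one byte per page (4096 here is an assumed page size; a real
implementation would query the system):

  #include <cstddef>

  const std::size_t kPageSize = 4096;        // assumed page size

  void touch_pages(char* p, std::size_t n)   // force commit without zeroing
  {
      volatile char* q = p;
      for (std::size_t i = 0; i < n; i += kPageSize)
          q[i] = 0;                          // first touch commits the page
      if (n != 0)
          q[n - 1] = 0;                      // make sure the last page is hit
  }
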
>
> [...]
>
> Is there any OS which uses the waiting strategy (possibly
> as part of a mixed strategy)? IMHO waiting as default, and
> only killing in an emergency (esp. "memory deadlock"), would make

Mixing just adds to the complexity of scheduling since the deadlock
really happens.

The whole problem is a little bit theoretical, since in practical
environments we have virtual memory, i.e. RAM backed by disk.
Whenever the virtual memory comes close to running out, the systems slow
to a crawl. They often start to do so a lot earlier. This situation,
with contention by many applications, leads to overall running times
bigger than running the applications sequentially. It is out of scope
to do something about that problem in the standard. A well-designed
application might have mechanisms built in to cope with overall system
failures and high contention with many other apps, to give appropriate
advice to the user to help optimize her run-time environment.

What I could imagine is to differentiate the real implementations of C++
into two categories in the standard. One, where operator new works as
expected, and another one, where operator new almost never throws, but
where the system can fail due to memory outages, which is overcome by
dropping the process. The second category is the most widespread one
these days and, to say it clearly, it is not secure for any process, but
it tends to be very good natured, gives high performance and tries to
make it easy to get performance.

My last option is leaving the standard as it is in this respect, but to
have a comment somewhere about reality to warn implementors and users.

Bernd Strieder

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]
[ Note that the FAQ URL has changed!  Please update your bookmarks.     ]





Author: kanze@gabi-soft.de
Date: Sun, 29 Oct 2000 16:35:06 GMT
Raw View
Bernd Strieder <strieder@student.uni-kl.de> writes:

|>  All major OS do lazy allocation these days or they lose.

I'm curious about your "all major OS."  A couple of years ago (four or
five), I was concerned about this problem; we were writing relatively
critical software, and this behavior was deemed unacceptable.  As a
result, I have tested a number of machines.  To date:

    Solaris:    Does not do lazy allocation, although earlier versions
                (through about 2.3 or 2.4) started thrashing like crazy
                when the limit was reached.

    AIX:        Does not do lazy allocation by default.  Lazy allocation
                can be turned on (on a process by process basis) by
                setting a specific environment variable.

                It is interesting to remark that the AIX originally did
                do lazy allocation, systematically, and that the current
                strategy was adopted under customer presure -- too many
                customers considered lazy allocation simply an error.

    HP-UX:      Does not do lazy allocation.

    Linux:      From hearsay: the kernel can be configured in both
                modes.  How it is normally configured by default is not
                too clear, and probably depends on the distributor.

    Windows NT: In my tests, suspends processes when memory gets tight,
                and brings up a pop-up box suggesting that the user kill
                a few applications.  Presumably, it does something
                different for processes started from a non-interactive
                environment.  Regrettably, the behavior that I observed
                doesn't allow me to say whether it uses lazy allocation
                or not.

I would consider that any group which excludes Solaris, AIX, HP-UX and
probably Linux and Windows NT should not be qualified as "all major OS".

-- 
James Kanze                               mailto:kanze@gabi-soft.de
Conseils en informatique orientée objet/
                   Beratung in objektorientierter Datenverarbeitung
Ziegelhüttenweg 17a, 60598 Frankfurt, Germany Tel. +49(069)63198627

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]
[ Note that the FAQ URL has changed!  Please update your bookmarks.     ]





Author: kanze@gabi-soft.de
Date: Sun, 29 Oct 2000 16:36:00 GMT
Raw View
Pierre Baillargeon <pb@artquest.net> writes:

|>  Christopher Eltschka wrote:

|>  > The third and fourth might not be conformant, and even if they are,
|>  > I'd anyway use them as a last resort only, unless explicitly
|>  > specified for the given process:

|>  > - kill the process
|>  > - kill a random process

|>  If they are not conformant, does it mean any OS with a kill command
|>  (or "end task" in the Task Manager under Windows NT) is
|>  non-conformant? What is the difference between an OS-triggered kill
|>  and a user-triggered one?  Note: the user killing the process may be
|>  different from the one who started it.

Is an implementation non-conformant because I don't get the correct
results when someone pulls the power plug?  If this is the case, there
are a lot of non-conformant implementations around.

Is the implementation conformant when the machine is turned off?

My answer to this one is no.  If it were, my coffee machine would also
be a conformant C++ compiler -- just not at the moment.

When someone pulls the power plug, or uses a system command to abort the
process, the implementation is also (temporarily) non-conformant.

In all cases, however, we are talking about an event external to the
program.  What happens in the case of stack overflow is more
interesting?  Although I can't seem to find it right now, there is a
clause somewhere concerning resource limits, which lets the
implementation off the hook.  This *could* be interpreted to also let
the implementation off the hook in the case of lazy commit.  Except that
in this case, the standard has specifically said what should happen in
the case of not enough resources.  And the particular has precedence
over the general.

|>  While my opinion has no weight in this matter, I don't think any
|>  event that is outside of the language should affect
|>  conformance. Otherwise, where do you stop?

Good question.  I can't find anywhere in either the C or C++ standard
where it says anything about not having to compile a program when the
machine is turned off:-).  Although common sense normally has no role in
interpreting a standard, I'll make an exception this time; I'm not
sending in a bug report to Sun because Sun CC fails to compile my legal
program in this case.

Which is fine for the cases where the standard says nothing, like stack
overflow.  The problem is that if the standard imposes a certain
behavior, then an implementation must provide that behavior to be
conforming.  And the standard imposes a certain behavior when there is not
enough memory to fulfill a new request.

-- 
James Kanze                               mailto:kanze@gabi-soft.de
Conseils en informatique orientée objet/
                   Beratung in objektorientierter Datenverarbeitung
Ziegelhüttenweg 17a, 60598 Frankfurt, Germany Tel. +49(069)63198627

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]
[ Note that the FAQ URL has changed!  Please update your bookmarks.     ]





Author: Pierre Baillargeon <pb@artquest.net>
Date: Mon, 30 Oct 2000 14:46:21 GMT
Raw View
kanze@gabi-soft.de wrote:
>
> Pierre Baillargeon <pb@artquest.net> writes:
>
> |>  While my opinion has no weight in this matter, I don't think any
> |>  events that is outside of the language should affect
> |>  conformance. Otherwise, where do you stop?
>
> Good question.  I can't find anywhere in either the C or C++ standard
> where it says anything about not having to compile a program when the
> machine is turned off:-).  Although common sense normally has no role in
> interpreting a standard, I'll make an exception this time; I'm not
> sending in a bug report to Sun because Sun CC fails to compile my legal
> program in this case.
>
> Which is fine for the cases where the standard says nothing, like stack
> overflow.  The problem is that if the standard imposes a certain
> behavior, then an implementation must provide that behavior to be
> conforming.  And the standard imposes a certain behavior when there is not
> enough memory to fulfill a new request.

(Warning: philosophical drivel follows, you probably don't want to read
this)

Any standard faces the difficult problem of specifying reactions to
actions, which means predicting the future. So any reaction is truly
based on probability, and thus only requires the "best effort" of an
implementer. Most of the time, people won't pull the plug. Most of the
time, the OS won't kill the process due to over-commit. Also, the
standard describes the result of programs using language restricted to
the content of the standard (boy, did I ever feel like stating the
obvious). There is no mention of virtual memory, and equally none of
over-committing. So, IMO, when talking about "enough memory", it has to
rely on the underlying platform's notion of it.

Note that NT, with its swap placed on a compressed disk, has the same
problem, so this non-conformance would be fairly widespread. Then again,
this OS-provider's compiler is not even on this planet as far as memory
allocation conformance goes.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]
[ Note that the FAQ URL has changed!  Please update your bookmarks.     ]