Topic: Multiple inheritance and delete


Author: Robert Andrew Ryan <rr2b+@andrew.cmu.edu>
Date: Mon, 15 Aug 1994 01:37:41 -0400
Excerpts from netnews.comp.std.c++: 14-Aug-94 Re: Multiple inheritance
an.. Peter Kron@corona.com (1222)

> The important compatibility issue is that standard C is still
> accepted by the compiler.

This is not exactly true.  It is true that a large subset of C is
accepted in C++.  The exceptions primarily relate to the stricter type
checking (e.g. no assignment of ints to enums without an explicit
cast, or is that illegal in C too? :-), and also that void * can't be
assigned to foo *.  Another sticking point is that some C++ compilers
implement strict scoping of types (i.e. a struct foo declared inside a
class A is made visible only as A::foo, not as plain foo, which is
noted as an anachronism in the ARM as I recall).
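
A few compilable illustrations of those differences (the names are
invented for the example; the commented-out lines are the ones a C
compiler accepts but a C++ compiler rejects):

enum Color { red, green };
struct foo { int n; };

void demo(void* vp)
{
    // Color c = 1;        // error in C++: no implicit int -> enum conversion
    Color c = Color(1);    // OK with an explicit conversion

    // foo* p = vp;        // error in C++: void* does not convert to foo*
    foo* p = (foo*)vp;     // OK in both C and C++ with the cast

    p->n = c;              // enum -> int still converts implicitly
}

class A { public: struct bar { int x; }; };

A::bar b1;                 // OK: under strict scoping the nested name is A::bar
// bar b2;                 // rejected: plain bar is not visible outside A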

-Rob




Author: ball@Eng.Sun.COM (Mike Ball)
Date: 15 Aug 1994 21:21:56 GMT
In article 3096@corona.com, pkron@corona.com (Peter Kron) writes:
>
> It's the classic tradeoff of generality/safety for efficiency. Most
> non-trivial uses of C++ will suffer some loss of efficiency over
> equivalent code in C, and not primarily due to virtual function
> tables. The important compatibility issue is that standard C is still
> accepted by the compiler. If timing critical routines need the
> efficiency of C, they should probably be written in C.

Why do you want to eliminate such useful constructs as handle classes?  These
are typically non-polymorphic, and making them polymorphic could make a very large
difference in their speed (10X in some cases).  You should not assume that
your preferred style is the only good style.
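
For concreteness, here is a minimal sketch of such a handle class (the
names and details are invented).  Every member is a short forwarding or
bookkeeping function that a compiler can inline; make them virtual and
every use pays for a vtable dispatch instead:

#include <cstddef>
#include <cstring>

class String {
    struct Body {                        // shared, reference-counted body
        explicit Body(const char* s)
            : refs(1), text(new char[std::strlen(s) + 1])
            { std::strcpy(text, s); }
        ~Body() { delete [] text; }
        int   refs;
        char* text;
    };
    Body* body;
public:
    String(const char* s) : body(new Body(s)) {}
    String(const String& o) : body(o.body) { ++body->refs; }
    ~String() { if (--body->refs == 0) delete body; }
    String& operator=(const String& o) {
        ++o.body->refs;                  // safe even for self-assignment
        if (--body->refs == 0) delete body;
        body = o.body;
        return *this;
    }
    std::size_t length() const { return std::strlen(body->text); }
};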

> >  ...
> >  Many people wish "struct" in C++ just meant exactly what
> >  it did in C. But it doesnt (and Bjarne says that decision
> >  was deliberate)
>
> Historical, deliberate, or whatever--if many people feel the
> definition is incorrect and/or detrimental, the committee should
> consider changing it.

But a lot of people feel that the definition is correct and very advantageous.

Please try to understand ALL sides of the argument before passing judgement.

Mike Ball
SunSoft Developer Products.





Author: pkron@corona.com (Peter Kron)
Date: Sun, 14 Aug 1994 11:15:41 PDT
From: maxtal@physics.su.OZ.AU (John Max Skaller)
>  In article <1994Aug02.045000.2253@corona.com> pkron@corona.com writes:
>  !The point here is whether non-polymorphic classes really add
>  !anything to the language--my position being that they create more
>  !potential for error than anything else.
>  You would be right, were C++ not an extension of
>  C, and were not efficiency one of the issues
>  which makes C compatibility important.

It's the classic tradeoff of generality/safety for efficiency. Most
non-trivial uses of C++ will suffer some loss of efficiency over
equivalent code in C, and not primarily due to virtual function
tables. The important compatibility issue is that standard C is still
accepted by the compiler. If timing critical routines need the
efficiency of C, they should probably be written in C.

>  ...
>  Many people wish "struct" in C++ just meant exactly what
>  it did in C. But it doesnt (and Bjarne says that decision
>  was deliberate)

Historical, deliberate, or whatever--if many people feel the
definition is incorrect and/or detrimental, the committee should
consider changing it.
---
NeXTMail:peter.kron@corona.com
Corona Design, Inc.
P.O. Box 51022
Seattle, WA 98115-1022




Author: jjb@watson.ibm.com (John Barton)
Date: Fri, 5 Aug 1994 21:23:30 GMT
In article <1994Aug05.130445.352@corona.com>, pkron@corona.com (Peter Kron) writes:
|> From: Philip@storcomp.demon.co.uk (Philip Hugh Hunt)
|> >  One example follows:
|> >
|> >  class Point {
|> >  public:
|> >     int x;
|> >     int y;
|> >     Point(int xx =0, int yy =0) { x=xx; y=yy; };
|> >     Point operator+(Point p) {return Point(x+p.x, y+p.y);};
|> >  };
|>
|> class ConstrainedPoint : Point
|>     {
|> public:
|>     int limit;
|>     Point operator+(Point p)
|>         { return Point(x+p.x < limit ? x+p.x : limit, y+p.y); }
|>     };
|>
|> If this class is used in a collection of Point, it is probably going
|> to cause some subtle bugs.

  Nope, ain't so:
int main()
   {
   Point* pa[2];  // Collection of Point-s
   pa[0] = new Point();
   pa[1] = new ConstrainedPoint();  // ERROR: Point is a private base,
                                    // so ConstrainedPoint* won't convert
   return 0;
   }

|> It's probably guaranteed to happen in the
|> maintenance of any software using Point. Since this use--which seems
|> reasonable enough--wasn't anticipated by the designer of Point, the
|> reuse of Point is limited.

  Nope, ain't so.  The designer of Point did anticipate the use
of Point as a Point.  ConstrainedPoint used Point as a base class,
inheriting state and behavior, which it (wisely) chose to
encapsulate.  ConstrainedPoint, being the class in closest contact
with the re-use of Point, chose to disallow the use of ConstrainedPoint*
as a Point*.  The person who wrote ConstrainedPoint understood
the nature of Point: it's not a class with virtual functions to be
overridden, it's a concrete class with members to be called and
state to be aggregated.

  C++ is a hybrid of dynamic and static polymorphism.  It's not
a purebred.  Neither is reality.

--
John.

John J. Barton        jjb@watson.ibm.com            (914)784-6645
H1-C13 IBM Watson Research Center P.O. Box 704 Hawthorne NY 10598




Author: fjh@munta.cs.mu.OZ.AU (Fergus Henderson)
Date: Sun, 7 Aug 1994 01:13:15 GMT
pkron@corona.com (Peter Kron) writes:

>class ConstrainedPoint : Point
>    {
>public:
>    int limit;
>    Point operator+(Point p)
>        { return Point(x+p.x < limit ? x+p.x : limit, y+p.y); }
>    };
>
>If this class is used in a collection of Point, it is probably going
>to cause some subtle bugs.

Yes.  And if Point::operator+() had been declared virtual, it would
*still* cause some bugs - probably even more subtle.

>Since this use--which seems reasonable enough--

No, I disagree, it seems quite unreasonable.  It will break all my code
written using `Point', since my code assumes that `Point::operator+()'
is an associative operator.

--
Fergus Henderson - fjh@munta.cs.mu.oz.au




Author: maxtal@physics.su.OZ.AU (John Max Skaller)
Date: Sun, 7 Aug 1994 20:47:17 GMT
In article <1994Aug02.045000.2253@corona.com> pkron@corona.com writes:
>From: maxtal@physics.su.OZ.AU (John Max Skaller)
>        (would change default of member functions to "virtual" break
>   reams of code?...)
>>   Yes. In C++ there are two distinct kinds of class:
>>
>>   a) polymorphic
>>   b) non-polymorphic
>>
>>  and a whole lot of the language like RTTI works
>>  differently for polymorphic classes.
>The language doesn't make any syntactic distinction--it's an
>implementation issue based on the semantics of "virtual".

 The Working Paper explicitly defines the term
"polymorphic type", and the semantics of polymorphic types
are distinct from those of non-polymorphic types.

>The point here is whether non-polymorphic classes really add
>anything to the language--my position being that they create more
>potential for error than anything else.

 You would be right, were C++ not an extension of C,
and were not efficiency one of the issues which makes
C compatibility important.
>
>already broken that compatibility. To some extent, adding any
>functions at all has broken it, since the member functions are not
>accessible from C and the headers won't parse.

 That is true. Many people wish "struct" in C++ just
meant exactly what it did in C. But it doesn't (and Bjarne
says that decision was deliberate).

--
        JOHN (MAX) SKALLER,         INTERNET:maxtal@suphys.physics.su.oz.au
 Maxtal Pty Ltd,
        81A Glebe Point Rd, GLEBE   Mem: SA IT/9/22,SC22/WG21
        NSW 2037, AUSTRALIA     Phone: 61-2-566-2189




Author: olaf@cwi.nl (Olaf Weber)
Date: Mon, 8 Aug 1994 08:21:06 GMT
In article <1994Aug04.051817.592@corona.com>, pkron@corona.com (Peter Kron) writes:

> From: olaf@cwi.nl (Olaf Weber)

>> As for having `virtual' rather than `nonvirtual', I still
>> feel that having to request the more expensive mechanism
>> is the right thing.

> But isn't the less expensive mechanism more prone to errors and
> greater development expense in the long term? In most cases the
> runtime expense is negligible. I would prefer that the compiler opt
> for safety by default and require explicit action to follow a more
> fragile path.

There are basically two cases here: a class was made to be derived
from, or it was not.  In the first case, the designer will (read
should) have made some careful decisions about which members are
virtual and which aren't.  The default shouldn't matter in this case.

With respect to the second case, you seem to overestimate the ease with
which a class can be reused by derivation from it.  Regarding reuse,
Bjarne Stroustrup wrote (2e, 11.4.1, page 383):
 [A] component is not re-usable until someone has "re-used" it.
This point is made forcefully by M. Carroll, in "Invasive Inheritance"
(C++ Report, 4(8): 34-42), and again by Scott Meyers, in "Code Reuse,
Concrete Classes and Inheritance" (C++ Report, 6(6): 46-48).

For example, for a small class that isn't derived from, call by value
may be more efficient than call by reference.  If support functions
assume that call by value can be used, and that instances can be
copied freely, any attempt to use a derived class as an argument will
likely end in disaster.  That non-virtual is the default for member
functions is only a small aspect of this problem.
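
A small sketch of that disaster, with invented names.  The support
function below copies its argument by value, which is perfectly
reasonable for the small class it was written for; pass it an object of
a class somebody later derived and everything the derivation added is
silently sliced away:

#include <iostream>

struct Point2 {
    int x, y;
    Point2(int xx = 0, int yy = 0) : x(xx), y(yy) {}
};

struct Point3 : Point2 {                   // later reuse by derivation
    int z;
    Point3(int xx, int yy, int zz) : Point2(xx, yy), z(zz) {}
};

Point2 shifted(Point2 p)                   // assumes copying is harmless
{
    p.x += 1;
    return p;                              // only the Point2 part survives
}

int main()
{
    Point3 p(1, 2, 3);
    Point2 q = shifted(p);                 // p's z never makes it in or out
    std::cout << q.x << ' ' << q.y << '\n';  // prints "2 2"
    return 0;
}

Declaring the member functions virtual would not rescue this code: the
slice happens in the copy, before any call is made.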

Crossposted to comp.lang.c++, and followups set to that group only.

-- Olaf Weber




Author: maxtal@physics.su.OZ.AU (John Max Skaller)
Date: Mon, 1 Aug 1994 18:16:39 GMT
In article <1994Jul27.022108.252@corona.com> pkron@corona.com writes:
>From: maxtal@physics.su.OZ.AU (John Max Skaller)

 (change default of member functions to "virtual").
>
>>  It would also break REAMS of C++ code, for no good reason
>>  other than to change a default to your liking.  Even if
>>  your liking is shared by many, that is not enough to zap
>>  almost all C++ code in existence.
>
>Would it?

 Yes. In C++ there are two distinct kinds of class:

 a) polymorphic
 b) non-polymorphic

and a whole lot of the language like RTTI works differently
for polymorphic classes. Changing the default makes
simple C structs with a few member functions in them
polymorphic when they used to be non-polymorphic.

Indeed, ALL C structs would become polymorphic because their
destructors would be implicitly virtual -- which would
destroy the ability to do aliasing based on knowledge of layout.

The default is "non-virtual" because it has to be for
compatibility with C.
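
A sketch of the kind of layout-dependent code that would break (the
example is invented, but the pattern is common in code that talks to C
libraries or to on-disk formats):

#include <cstdio>
#include <cstring>

struct Packet {            // same layout as the C struct it mirrors
    int  type;
    int  length;
    char payload[8];
};

int main()
{
    Packet out;
    out.type = 1;
    out.length = 6;
    std::strcpy(out.payload, "hello");

    char buffer[sizeof(Packet)];
    std::memcpy(buffer, &out, sizeof out);   // ship the raw bytes somewhere

    Packet in;
    std::memcpy(&in, buffer, sizeof in);     // and reconstitute them later
    std::printf("%d %d %s\n", in.type, in.length, in.payload);
    return 0;
}

Give Packet an implicitly virtual destructor and it grows a vptr; its
size and member offsets no longer match the C struct, and the memcpy
round trip quietly stops meaning anything.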

--
        JOHN (MAX) SKALLER,         INTERNET:maxtal@suphys.physics.su.oz.au
 Maxtal Pty Ltd,
        81A Glebe Point Rd, GLEBE   Mem: SA IT/9/22,SC22/WG21
        NSW 2037, AUSTRALIA     Phone: 61-2-566-2189




Author: pkron@corona.com (Peter Kron)
Date: Mon, 1 Aug 1994 21:50:00 PDT
From: maxtal@physics.su.OZ.AU (John Max Skaller)
        (would change default of member functions to "virtual" break
   reams of code?...)
>   Yes. In C++ there are two distinct kinds of class:
>
>   a) polymorphic
>   b) non-polymorphic
>
>  and a whole lot of the language like RTTI works
>  differently for polymorphic classes.
The language doesn't make any syntactic distinction--it's an
implementation issue based on the semantics of "virtual". The point
here is whether non-polymorphic classes really add anything to the
language--my position being that they create more potential for error
than anything else.

Concrete examples to the contrary would be illuminating here.

RTTI is part of the standard under discussion. How it behaves has yet
to be finalized.

>  Changing the default makes simple C structs with a few
>  member functions in them polymorphic when they used to
>  be non-polymorphic.
>
>  Indeed, ALL C structs would become polymorphic because
>  their destructors would be implicitly virtual -- which
>  would destroy the ability to do aliasing based on
>  knowledge of layout.
>
>  The default is "non-virtual" because it has to be for
>  compatibility withe C.

It was certainly my intention that structs should remain compatible
with C--and adding a vtable pointer to all of them would defeat
that. However, adding any virtual functions to a struct has
already broken that compatibility. To some extent, adding any
functions at all has broken it, since the member functions are not
accessible from C and the headers won't parse.

The ARM simply defines a struct as a class whose members are public
by default.  But lots of C++ code uses struct purely as a C
portability concept, has no need of virtual functions, and would not
break if virtuals were not allowed within structs.  Non-polymorphic
structs (i.e., ones that override inherited, non-virtual member
functions) are just as error-prone as non-polymorphic classes.

If the implementation of structs were left alone, as opposed to
classes--or if the use of virtuals in structs were even prohibited--it's
questionable that reams of vulnerable code would exist.
---
NeXTMail:peter.kron@corona.com
Corona Design, Inc.
P.O. Box 51022
Seattle, WA 98115-1022




Author: olaf@cwi.nl (Olaf Weber)
Date: Wed, 3 Aug 1994 11:23:46 GMT
In article <1994Aug02.045000.2253@corona.com>, pkron@corona.com (Peter Kron) writes:

> The point here is whether non-polymorphic classes really add
> anything to the language--my position being that they create more
> potential for error than anything else.

There are two issues here: (1) should all classes be polymorphic with
respect to RTTI and destructor calls, and (2) should all member
functions of a class be virtual.

With respect to (1): there are ways of implementing this with little
run-time cost for people who don't use it.

The run-time cost incurred is related to finding the destructor to be
called in the absence of a virtual table: a single word per heap
object can be used to store the address of the destructor.  The time
overhead should vanish in the cost associated with the allocation
itself.  Whether the space overhead is bearable is a different matter.
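
A rough sketch of that scheme, purely for illustration: tracked_new and
tracked_delete are invented names, alignment of the payload is glossed
over, and the multiple-inheritance pointer adjustment discussed
elsewhere in this thread is ignored (the caller must pass the address
of the complete object).

#include <cstdlib>
#include <new>

typedef void (*Dtor)(void*);

struct Header { Dtor dtor; };            // the single extra word per object

template <class T>
void call_dtor(void* p) { static_cast<T*>(p)->~T(); }

template <class T>
T* tracked_new()
{
    void* raw = std::malloc(sizeof(Header) + sizeof(T));
    if (!raw) throw std::bad_alloc();
    Header* h = static_cast<Header*>(raw);
    h->dtor = &call_dtor<T>;             // record the most derived destructor
    return new (h + 1) T;                // construct T just past the header
}

void tracked_delete(void* obj)
{
    if (!obj) return;
    Header* h = static_cast<Header*>(obj) - 1;
    h->dtor(obj);                        // run the recorded destructor
    std::free(h);
}

With this in place, tracked_delete(p) runs the right destructor even
though it knows nothing about p's type, at the cost of one word per
heap object.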

The consequences of "RTTI for all" for executable size are perhaps
less pleasant, as some information needs to be available on the
lay-out of stack frames.  However, the same information could be
useful for exception handling, and perhaps also for garbage collection
or persistent objects, so the larger executable size might still be
considered affordable.

For (2) I think the answer should be no.  Use of call-by-reference in
combination with separate compilation means that many optimization
opportunities depend on knowing that no virtual call needs to be made.

As for having `virtual' rather than `nonvirtual', I still feel that
having to request the more expensive mechanism is the right thing.

> Concrete examples to the contrary would be illuminating here.

In the Standard Template Library, you'll find code like this:

    struct empty { };

    inline bool operator != (empty const &, empty const &)
        { return true; }
    inline bool operator < (empty const &, empty const &)
        { return false; }

    template <class Arg, class Result>
    struct unary_function : public empty {
        typedef Arg argument_type;
        typedef Result result_type;
    };

    template <class T>
    struct negate : public unary_function<T, T> {
        T operator () (T const & x) const
            { return -x; }
    };

    template <class T>
    struct logical_not : public unary_function<T, bool> {
        bool operator () (T const & x) const
            { return !x; }
    };

Here derivation is used to provide basic functionality to a large
number of classes without having to write it out for each of them.
It would be undesirable (for performance reasons) to let the member
functions be virtual in this example.

It could be argued that this is an abuse of inheritance, and that some
other mechanism should be used for this.  Macro hackery could probably
do the job, but the C++ preprocessor isn't the best tool.  [Insert ad
for the ARC++ macro facilities here].  Nor am I convinced that doing
this with macros is a superior technique.

> Non-polymorphic structs (ie, which override inherited, non-virtual
> member functions) are just as error prone as non-polymorphic
> classes.

This is an interesting definition of non-polymorphic structs.  If I
understand it correctly, the structs in the example above are not
non-polymorphic by that measure.

Now, does anybody actually _know_ how much of a problem C++'s ability
to override non-virtual member functions is?  I suspect that code that
uses that particular construct is very rare indeed.  If so, and if my
interpretation of what Peter Kron means by non-polymorphic is correct,
then it would seem that we are arguing a non-issue here.

-- Olaf Weber




Author: jjb@watson.ibm.com (John Barton)
Date: Wed, 3 Aug 1994 17:21:23 GMT
In article <CtyIBt.4E7@cwi.nl>, olaf@cwi.nl (Olaf Weber) writes:
|> In article <1994Aug02.045000.2253@corona.com>, pkron@corona.com
|> (Peter Kron) writes:
[ stuff deleted]
|>
|> > Non-polymorphic structs (ie, which override inherited, non-virtual
|> > member functions) are just as error prone as non-polymorphic
|> > classes.
|>
|> This is an interesting definition of non-polymorphic structs.  If I
|> understand it correctly, the structs in the example above are not
|> non-polymorphic by that measure.
|>
|> Now, does anybody actually _know_ how much of a problem C++'s ability
|> to override non-virtual member functions is?  I suspect that code that
|> uses that particular construct is very rare indeed.  If so, and if my
|> interpretation of what Peter Kron means by non-polymorphic is correct,
|> then it would seem that we are arguing a non-issue here.
|>

You are certainly arguing a non-issue: there is no such thing as an
override of a non-virtual member function.  Names declared in derived
classes hide names declared in base classes in the same way that names
declared in inner blocks hide names declared in outer blocks.
Overriding only occurs for virtual functions.

The concept that everything in C++ should be "polymorphic", meaning in
this case the narrow virtual-function-dispatch version of
polymorphism, is silly.  The only reason to pay for the overhead of
building vtables, storing vtable pointers, initializing vtable
pointers, calling through vtables, and managing memory through
pointers as vtables require is if your application needs to build
collections of similar but not identical objects (e.g. collections of
triangles, circles and squares) or if you want to use pure abstract
base classes as an object-oriented API mechanism.  There are many
applications that use static polymorphism via templates.  Here
identical member function names express commonality, and if you want to
combine this with inheritance to do data-structure reuse, name hiding
is exactly what you need.
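
A small sketch of that combination, with invented names.  The template
below requires only that its argument type have an area() member; no
virtual dispatch is involved, and a derived class that declares its own
area() hides the base version rather than overriding it, which is
exactly the behavior wanted:

#include <iostream>

struct Square {
    double side;
    double area() const { return side * side; }
};

struct Box : Square {                    // reuses Square's data and code
    double height;
    double area() const                  // hides, does not override, Square::area
        { return 2 * Square::area() + 4 * side * height; }
};

template <class Shape>
double total_area(const Shape* shapes, int n)
{
    double sum = 0;
    for (int i = 0; i < n; ++i)
        sum += shapes[i].area();         // resolved at compile time, inlinable
    return sum;
}

int main()
{
    Square s[2] = { {1.0}, {2.0} };
    std::cout << total_area(s, 2) << '\n';   // 1 + 4 = 5

    Box b;
    b.side = 3.0;
    b.height = 2.0;
    std::cout << total_area(&b, 1) << '\n';  // uses Box::area, prints 42
    return 0;
}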

--
John.

John J. Barton        jjb@watson.ibm.com            (914)784-6645
H1-C13 IBM Watson Research Center P.O. Box 704 Hawthorne NY 10598




Author: pkron@corona.com (Peter Kron)
Date: Wed, 3 Aug 1994 22:18:17 PDT
From: olaf@cwi.nl (Olaf Weber)
>  There are two issues here: (1) should all classes be
>  polymorphic with respect to RTTI and destructor calls,
>  and (2) should all member functions of a class be virtual.
>
>  With repect to (1): there are ways of implementing this
>  with little run-time costs for people who don't use it.
>
>  ... <qualified agreement>
>
>  For (2) I think the answer should be no.  Use of
>  call-by-reference in combination with separate compilation
>  means that many optimization opportunities depend on
>  knowing that no virtual call needs to be made.
>
>  As for having `virtual' rather than `nonvirtual', I still
>  feel that having to request the more expensive mechanism
>  is the right thing.

But isn't the less expensive mechanism more prone to errors and
greater development expense in the long term? In most cases the
runtime expense is negligible. I would prefer that the compiler opt
for safety by default and require explicit action to follow a more
fragile path.
---
NeXTMail:peter.kron@corona.com
Corona Design, Inc.
P.O. Box 51022
Seattle, WA 98115-1022




Author: jason@cygnus.com (Jason Merrill)
Date: Thu, 4 Aug 1994 01:05:34 GMT
>>>>> Olaf Weber <olaf@cwi.nl> writes:

> There are two issues here: (1) should all classes be polymorphic with
> respect to RTTI and destructor calls, and (2) should all member
> functions of a class be virtual.

> With repect to (1): there are ways of implementing this with little
> run-time costs for people who don't use it.

But would doing so be useful?  If people want their classes to be
polymorphic with respect to destructor calls, they should make their
destructors virtual.  That's what virtual is for.

Jason




Author: Philip@storcomp.demon.co.uk (Philip Hugh Hunt)
Date: Thu, 4 Aug 1994 18:14:04 +0000
In article <1994Aug02.045000.2253@corona.com>
           pkron@corona.com "Peter Kron" writes:
> The language doesn't make any syntactic distinction--it's an
> implementation issue based on the semantics of "virtual". The point
> here is whether non-polymorphic classes really add anything to the
> language--my position being that they create more potential for error
> than anything else.

Perhaps they do create potential for error if used by programmers who
don't know C++ well. They should either learn the language properly
or stick to Pascal or Visual Basic :-).

If people don't want to use non-polymorphic classes, they don't have to.

"People's ways of thinking and working are so diverse that an attempt to
force a single style would do more harm than good" - Stroustrup.

>
> Concrete examples to the contrary would be illuminating here.

One example follows:

class Point {
public:
   int x;
   int y;
   Point(int xx =0, int yy =0) { x=xx; y=yy; };
   Point operator+(Point p) {return Point(x+p.x, y+p.y);};
};

If this class were forced to be polymorphic, it would be less efficient.
The loss of efficiency would probably be unacceptable if this class were
used in, e.g., a windowing system.
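
The cost in question can be made concrete with a small sketch.  The
numbers are implementation-dependent, but the polymorphic version
typically grows by one vtable pointer per object, which adds up quickly
when a window system stores large arrays of points:

#include <iostream>

struct PlainPoint {                      // no virtual functions, no vptr
    int x, y;
};

struct PolyPoint {                       // the "forced polymorphic" version
    int x, y;
    virtual ~PolyPoint() {}              // drags a vptr into every object
};

int main()
{
    std::cout << "plain: "        << sizeof(PlainPoint)
              << "  polymorphic: " << sizeof(PolyPoint) << '\n';
    // A typical 32-bit result would be 8 versus 12 bytes; on a 64-bit
    // system, more like 8 versus 16.
    return 0;
}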

--
Phil Hunt




Author: olaf@cwi.nl (Olaf Weber)
Date: Fri, 5 Aug 1994 08:00:50 GMT
In article <JASON.94Aug3180534@deneb.cygnus.com>, jason@cygnus.com (Jason Merrill) writes:

>>>>>> Olaf Weber <olaf@cwi.nl> writes:
>> There are two issues here: (1) should all classes be polymorphic with
>> respect to RTTI and destructor calls, and (2) should all member
>> functions of a class be virtual.

>> With repect to (1): there are ways of implementing this with little
>> run-time costs for people who don't use it.

> But would doing so be useful?  If people want their classes to be
> polymorphic with respect to destructor calls, they should make their
> destructors virtual.  That's what virtual is for.

I certainly agree with this sentiment when applied to normal member
functions.  Steve Clamage pointed out some good reasons why you might
want member functions to be non-virtual in a class that is ostensibly
meant to be derived from (it has a virtual destructor).
[comp.lang.c++ article <31p84a$k27@engnews2.Eng.Sun.COM>, "Re: virtual
iostream methods? why not?"]

Destructors are special however, and a guarantee that the correct
destructor will be called for an object is worth something, but it has
to be balanced against the costs.

One way of accomplishing safe destruction would be a requirement that
a class can only be derived from if it has a virtual destructor.
However, the code excerpt from the Standard Template Library shows
some very reasonable use of derivation from classes without virtual
destructors.  There would be a space overhead for all objects, even if
they live on the stack only, which is not acceptable.

A problem with more clever methods is that rolling your own memory
management for a class would become more complex, as users of the
class will assume that deletion through a pointer to it will work,
even if the object is of a derived class.

In all, I don't think it is worth the extra hassle.  Of course, with
garbage collection most of the infrastructure required will be there
in any case, and I'd like to see it offered (as an extension) in that
case.

-- Olaf Weber




Author: pkron@corona.com (Peter Kron)
Date: Fri, 5 Aug 1994 06:04:45 PDT
From: Philip@storcomp.demon.co.uk (Philip Hugh Hunt)
>  One example follows:
>
>  class Point {
>  public:
>     int x;
>     int y;
>     Point(int xx =0, int yy =0) { x=xx; y=yy; };
>     Point operator+(Point p) {return Point(x+p.x, y+p.y);};
>  };

class ConstrainedPoint : Point
    {
public:
    int limit;
    Point operator+(Point p)
        { return Point(x+p.x < limit ? x+p.x : limit, y+p.y); }
    };

If this class is used in a collection of Point, it is probably going
to cause some subtle bugs. It's probably guaranteed to happen in the
maintenance of any software using Point. Since this use--which seems
reasonable enough--wasn't anticipated by the designer of Point, the
reuse of Point is limited. Why should this be the default?

>  If this class was forced to be polymophic, it would be
>  less efficient.  The loss of efficiency would probably
>  be unacceptable if this class was used in eg a windowing
>  system.

I could certainly produce a benchmark supporting your claim. I could
also show that using a simple C struct would improve performance
further. However, in a real system, I'd be more concerned about the
total efficiency. If a polygon of 1000 points is translated using
operator+, which will contribute more to the overall cost:
member-function calls or rerendering the polygon following translation?
---
NeXTMail:peter.kron@corona.com
Corona Design, Inc.
P.O. Box 51022
Seattle, WA 98115-1022




Author: ball@Eng.Sun.COM (Mike Ball)
Date: 5 Aug 1994 16:07:57 GMT
In article 352@corona.com, pkron@corona.com (Peter Kron) writes:
> If this class is used in a collection of Point, it is probably going
> to cause some subtle bugs. It's probably guaranteed to happen in the
> maintenance of any software using Point. Since this use--which seems
> reasonable enough--wasn't anticipated by the designer of Point, the
> reuse of Point is limited. Why should this be the default?

The designer of "Point" may well have stated that the class was NOT to be
derived from, and the deriver did so at his or her own risk.  There
are many cases where you don't want to derive from a class.  Handle classes
are an obvious case in point, some kinds of numeric classes are another.

One of the reasons for the success of C++ is  that it allows this freedom
of choice.  Given that it must allow such freedom, the only question is
"What should be the default?"

There are still lots of C programmers moving to C++, and many of those complain
about "all that stuff the compiler puts in there".  In fact, there are a lot
more of them than there are OO programmers moving from some other language to
C++.

Though it's arguable, it sure looks to me like minimizing changes from C was
the right decision.

Mike Ball
SunSoft






Author: pkron@corona.com (Peter Kron)
Date: Tue, 26 Jul 1994 19:21:08 PDT
From: maxtal@physics.su.OZ.AU (John Max Skaller)
>  In article <1994Jul20.044817.698@corona.com>,
>  Peter Kron <pkron@corona.com> wrote:
>>  If polymorphism is not desired, it would be much more
>>  appropriate to use a different member name rather than overriding
>>  memberFunction. As I suggested, a keyword could be defined to provide
>>  non-polymorphic overrides, but that would be the exception rather than
>>  the rule.

>  Yes it could but the default is the other way around.
>  Big deal. Its just a default.

The committee is standardizing the language. It's a good
time to consider whether defaults chosen for whatever
historical reasons are achieving the intended goals, and to
change them if they aren't.

>>  I'm suggesting changes that I believe would reduce a lot of
>>  difficulty, based on some recent threads.

>  It would also break REAMS of C++ code, for no good reason
>  other than to change a default to your liking.  Even if
>  your liking is shared by many, that is not enough to zap
>  almost all C++ code in existence.

Would it? It would break code that depends on non-polymorphic
overrides. Non-virtual functions which are not overridden, or which are
accessed only by the overriding classes, would still function properly,
though through the virtual mechanism rather than the linker. These cases
probably cover most code. Code that depends on non-polymorphic
overrides is probably destined to break anyway, for the very reasons
I've discussed in proposing the change. Better that those breaks be
detected by the compiler than at run time.

Function prototypes broke lots of code. They caused a lot of grief to
a lot of people but it was worth it. You have to consider the reams
of code-to-be that will suffer if the default causes problems.
---
NeXTMail:peter.kron@corona.com
Corona Design, Inc.
P.O. Box 51022
Seattle, WA 98115-1022




Author: rjl@f111.iassf.easams.com.au (Rohan LENARD)
Date: 26 Jul 1994 07:41:04 +1000
In article <JASON.94Jul24193114@deneb.cygnus.com>,
Jason Merrill <jason@cygnus.com> wrote:
>>>>>> Rohan LENARD <rjl@f111.iassf.easams.com.au> writes:
>
>> Your structs A & B are aggregates and thus do *not* have destructors, however
>> C is not an aggregate (since it has base classes).  It implicitly has a
>> destructor, so your code meets the highlighted text.
>
>Where is it written that all non-aggregates have destructors?  I don't
>think that is accurate.
>

You could be right here.  According to the ARM 12.4 (pg 277) -
  "If a base or a member has a destructor and no destructor is declared
   for its derived class a default destructor is generated"

This obviously matches the original example, suggesting that I'm wrong.

However, I find many parts of this chapter a tad ambiguous, since
 "When invoked by the _delete_ operator, memory is freed by the destructor
  for the most derived class (%12.6.2) of the object using an
  _operator_delete() (%5.3.4)" (pg 278).

Maybe I'm reading too much into it, but the language of section 12 is
completely reversed from elsewhere, where the ARM talks about the delete
operator calling the destructor (pg 64, Expressions), and to me it implies
that no destructor means no deletion (of course I know this is silly :-).


Regards,

 Rohan


--
----------------------------------------------------------------------------
rjl@iassf.easams.com.au | All quotes can be attributed to my automated quote
Rohan Lenard            | writing tool.  Yours for just $19.95; and if you
+61-2-367-4555          | call now you'll get a free set of steak knives ...




Author: immel@chord.centerline.com (Mark Immel)
Date: 26 Jul 1994 11:19:46 GMT
>>> Your structs A & B are aggregates and thus do *not* have destructors, however
>>> C is not an aggregate (since it has base classes).  It implicitly has a
>>> destructor, so your code meets the highlighted text.
>>
>>Where is it written that all non-aggregates have destructors?  I don't
>>think that is accurate.
>>

>You could be right here.  According to the ARM 12.4 (pg 277) -
>  "If a base or a member has a destructor and no destructor is declared
>   for its derived class a default destructor is generated"

Notice that this language is quite different from the parallel statement
for constructors (WP 12.1 p. 4):

  "If no constructor has been declared for class X, a default constructor
   is implicitly declared."

The constructor is declared, but :

  "The definition for an implicitly-declared default constructor is generated
   only if the constructor is called."

We could read the statement about destructors, then, to imply that the
destructor is declared and defined if the base classes have destructors, etc.,
or infer that it should have said "declared" where it said "generated", or (and
this is the solution I would favor) parallel the constructor language
exactly:

  "If no destructor has been declared for class X, a default destructor
   is implicitly declared."

Of course, if none of the base classes or members need destructors, the compiler
can optimize away any calls (explicit or implicit) to the function.  But
if I can say ~int, I sure want to be able to say ~X for any class X!

-- Mark Immel
   immel@centerline.com







Author: jason@cygnus.com (Jason Merrill)
Date: Wed, 27 Jul 1994 05:41:58 GMT
>>>>> Mark Immel <immel@chord.centerline.com> writes:

> We could read the statement about destructors, then, to imply that the
> destructor is declared and defined if the base classes have destructors,
> etc.., or infer that it should have said declared where it said
> generated

I prefer this interpretation.

> or (and this is the solution I would favor), parallel the
> constructor language exactly:

>   "If no destructor has been declared for class X, a default destructor
>    is implicitly declared."

> Of course, if none of the base classes, member need destructors the compiler
> can optimize away any calls (explicit or implicit) to the function.  But,
> if I can say ~int, I sure want to be able to say ~X for X any class!

You can.  12.6 says "Using the [explicit destructor call] notation for a
type that does not have a destructor has no effect."
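
In compilable form that rule looks like the sketch below (not from the
original posts).  For a built-in type the explicit call has to be
written through a type name such as a typedef, and it does nothing,
which is exactly what lets generic code rely on it:

typedef int I;

template <class T>
void destroy(T* p)
{
    p->~T();         // works whether or not T has a real destructor
}

int main()
{
    I n = 0;
    n.~I();          // pseudo-destructor call on a built-in type: no effect

    int m = 0;
    destroy(&m);     // the generic form that makes the rule worth having
    return 0;
}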

The reason we have to have copy constructors implicitly declared for all
classes is so that overload resolution will work.  Destructors cannot be
overloaded, so this is not a concern.

I'd prefer to leave it as "this class has a destructor" or "this class does
not have a destructor" rather than "this class has a complex destructor" or
"this class has a trivial compiler-generated destructor" like we have to
deal with for constructors (a complex constructor is either one that is
user-defined or one that calls other complex constructors).

Jason




Author: rfg@netcom.com (Ronald F. Guilmette)
Date: Wed, 27 Jul 1994 06:51:50 GMT
In article <IMMEL.94Jul18092230@chord.centerline.com> immel@chord.centerline.com (Mark Immel) writes:
>
>C++ gurus and standard committee --
>
>I asked about the following code a few days ago.  The problem is that the
>call to delete b calls free with a bad address; this is a particularly nasty
>silent failure -- often the program will crash much later.  Many of you
>commented that giving B a virtual destructor will fix the problem; that's
>true.  However, the WP 5.3.5 (26 May, 1994 version) implies that my code
>is legal:
>
>
> struct A {};
> struct B {};
> struct C: A, B {};
>
> void foo()
> {
>   B* b = new C;
>   delete b;
> }
>
>5.3.5 paragraph 2:
>
>  "The value of the operand of delete must be a pointer to a non-array
>   object created by a new-expression without a new-placement specification,
>   or a pointer to a subobject representing a base class of such an object."
>
>5.3.5 paragraph 3:
>
>  "If the static type of the operand is different from its dynamic type *AND
>   THE CLASS OF THE COMPLETE OBJECT HAS A DESTRUCTOR*, the static type must
>   have a virtual destructor, or the result is undefined." (emphasis mine)

I hope that I'm not the only one who notices the sloppy wording here.

In the above example, the static type of the operand of the `delete'
operator is type B*.  (One assumes that that is its dynamic type also.)

5.3.5p3 needs to be fixed to talk about the ``pointed at'' type of the
operand, rather than the type of the operand itself.  (Note that the
operand type for a `delete' must be a pointer type, and pointer types
NEVER have virtual destructors.  It is even arguable whether pointer
types have destructors at all!)

--

-- Ron Guilmette, Sunnyvale, CA ---------- RG Consulting -------------------
---- domain addr: rfg@netcom.com ----------- Purveyors of Compiler Test ----
---- uucp addr: ...!uunet!netcom!rfg ------- Suites and Bullet-Proof Shoes -




Author: scalio@hogpf.ho.att.com (-J.SCALIO)
Date: Tue, 19 Jul 1994 14:56:06 GMT
In article <306h9v$7f1@fsgi01.fnal.gov>,
David Sachs <b91926@fsgi01.fnal.gov> wrote:
>immel@chord.centerline.com (Mark Immel) writes: ...
>>
>>  Suppose I have the following (which breaks with every compiler I try) :
>>
>> struct A {};
>> struct B {};
>> struct C: A, B {};
>>
>> void foo()
>> {
>>   B* b = new C;
>>   delete b;
>> }
>>
>>  It breaks in the sense that delete calls free with a bad address.  This
>>  is a silent failure that can be detected only with a debugging version
>>  of malloc/free or the like.  But much later, you might find out your
>>  heap is corrupted...
>>  This code *SHOULD* be legal according to WP 5.3.5 p 2:
>>  [clipped...]

>
>The code would work properly if struct B is declared to have
>a virtual destructor. e.g. struct B { virtual ~B(){}};
>
>I would really like the C++ standard to REQIRE this, but even
>with such a requirement, the error would probably be undetectable.

This requirement would force every library designer to explicitly declare
the destructors of their classes as virtual.  It also forces library
designers to explicitly declare *ALL* destructors, since the
compiler-generated destructor will most certainly *NOT* add the virtual
keyword.

The requirement and the necessary action will increase the size of many
classes that otherwise would not declare anything virtual.

The standard needs to address the concerns of library designers and
users regarding this issue.  Library designers should not be responsible
in advance for deciding whether anyone should/will derive from their
classes.





Author: pete@genghis.interbase.borland.com (Pete Becker)
Date: Tue, 19 Jul 1994 16:51:03 GMT
In article <Ct705J.Dq@nntpa.cb.att.com>,
-J.SCALIO <scalio@hogpf.ho.att.com> wrote:
>In article <306h9v$7f1@fsgi01.fnal.gov>,
>David Sachs <b91926@fsgi01.fnal.gov> wrote:
>>immel@chord.centerline.com (Mark Immel) writes: ...
>>>
>>>  Suppose I have the following (which breaks with every compiler I try) :
>>>
>>> struct A {};
>>> struct B {};
>>> struct C: A, B {};
>>>
>>> void foo()
>>> {
>>>   B* b = new C;
>>>   delete b;
>>> }
>>>
>>>  It breaks in the sense that delete calls free with a bad address.  This
>>>  is a silent failure that can be detected only with a debugging version
>>>  of malloc/free or the like.  But much later, you might find out your
>>>  heap is corrupted...
>>>  This code *SHOULD* be legal according to WP 5.3.5 p 2:
>>>  [clipped...]
>
>>
>>The code would work properly if struct B is declared to have
>>a virtual destructor. e.g. struct B { virtual ~B(){}};
>>
>>I would really like the C++ standard to REQIRE this, but even
>>with such a requirement, the error would probably be undetectable.
>
>This requirement will require every library designer to explicitly declare
>the destructors of their classes as virtual.  It also forces the library
>designers to explicitly declare *ALL* destructors, since the compiler
>generated destructor will most certainly *NOT* add the virtual keyword.
>

 If your design anticipates that your class will be used polymorphically
you must provide a virtual destructor. If your design does not anticipate that
your class will be used polymorphically you should not provide a virtual
destructor. Users who misuse your class will find that their code does not
work as they expect it to, so be sure to document how your class is intended
to be used.
 -- Pete






Author: pkron@corona.com (Peter Kron)
Date: Tue, 19 Jul 1994 21:48:17 PDT
From: immel@chord.centerline.com (Mark Immel)
>  I asked about the following code a few days ago.  The
>  problem is that the call to delete b calls free with a
>  bad address; this is a particularly nasty silent failure
>  -- often the program will crash much later.  Many of you
>  commented that giving B a virtual destructor will fix
>  the problem; that's true.

This problem is particularly nasty, but is typical of a larger class
of problems due to semantics of "virtual". For example, the following:

class Base
 {public: void memberFunction();};
class Derived : public Base
 {public: void memberFunction();};

void function()
 {
 Derived object;
 Base *pointer=&object;
 object.memberFunction();    // calls Derived::memberFunction
 pointer->memberFunction();  // calls Base::memberFunction, even though
                             // the object is really a Derived
 }

Clearly this is not desirable behavior, but is a side effect of trying
to minimize storage for vtables and pointers to them. The concept of
polymorphism should imply that *objects* determine how they respond to
member functions, not callers. But here the reverse is the case.

The problem can be avoided by using virtual everywhere, but this
places the burden on the programmer when it should be on the language.
I would like to see the standard address this issue by making the
default for all member functions be virtual to avoid this general
problem.

Such a change would mean the "typical" C++ program increases in size
due to larger virtual tables and slightly more frequent incidence of
vtable pointers in instances. But it seems justified to me, in that
the more critical issues facing software are those of preventing
subtle errors, not reducing memory by a few hundred kilobytes.
Certainly a new keyword could be introduced to force explicit linkage
for primitive classes, like Rectangle, in which virtual tables may be
clearly unnecessary. Like inline functions, the option to streamline
would exist but the burden of responsibility would shift to the class
designer to ensure that no inconsistency exists first.

Some code that depends on constructs like the example above would
certainly break. That is unfortunate, but it is certainly arguable
that such dependencies are poor design in the first place. In the long
run, the result would be a safer language.
---
NeXTMail:peter.kron@corona.com
Corona Design, Inc.
P.O. Box 51022
Seattle, WA 98115-1022




Author: pete@genghis.interbase.borland.com (Pete Becker)
Date: Wed, 20 Jul 1994 16:04:03 GMT
In article <1994Jul20.044817.698@corona.com>,
Peter Kron <pkron@corona.com> wrote:
>
>This problem is particularly nasty, but is typical of a larger class
>of problems due to semantics of "virtual". For example, the following:
>
>class Base
> {public: void memberFunction();};
>class Derived : public Base
> {public: void memberFunction();};
>
>void function()
> {
> Derived object;
> Base *pointer=&object;
> object.memberFunction(); //Derived::method
> pointer->memberFunction(); //Base::method
> }
>
>Clearly this is not desirable behavior, but is a side effect of trying
>to minimize storage for vtables and pointers to them. The concept of
>polymorphism should imply that *objects* determine how they respond to
>member functions, not callers. But here the reverse is the case.
>

 Clearly, the design decision for this pair of classes was that they
are not polymorphic. That is not a language flaw, but a deliberate decision
on the part of the designer of these classes. If you want the call to
memberFunction() to be virtual, design your classes appropriately. Please don't
impose your design criteria on the rest of the world. "There are more things
in heaven and Earth, Horatio, than are dreamt of in your philosophy."
 -- Pete




Author: pkron@corona.com (Peter Kron)
Date: Wed, 20 Jul 1994 19:19:36 PDT
From: pete@genghis.interbase.borland.com (Pete Becker)
>  In article <1994Jul20.044817.698@corona.com>,
>  Peter Kron <pkron@corona.com> wrote:
>>Clearly this is not desirable behavior, but is a side effect of trying
>>to minimize storage for vtables and pointers to them. The concept of
>>polymorphism should imply that *objects* determine how they respond to
>>member functions, not callers.
>
>  Clearly, the design decision for this pair of
>  classes was that they are not polymorphic. That
>  is not a language flaw, but a deliberate decision on the
>  part of the designer of these classes. If you want the
>  call to memberFunction() to be virtual, design your
>  classes appropriately.

Polymorphism is central to using C++ as an OOPL rather than just as
a better C. If polymorphism is not desired, it would be much more
appropriate to use a different member name rather than overriding
memberFunction. As I suggested, a keyword could be defined to provide
non-polymorphic overrides, but that would be the exception rather than
the rule.

>  Please don't impose your design criteria on the rest of
>  the world.

Hmmm. It would seem the language is imposing the criteria, not me.
I'm suggesting changes that I believe would reduce a lot of
difficulty, based on some recent threads.

Do we now get "Love it or leave it" C++ flag decals with our
compilers?
---
NeXTMail:peter.kron@corona.com
Corona Design, Inc.
P.O. Box 51022
Seattle, WA 98115-1022




Author: pete@genghis.interbase.borland.com (Pete Becker)
Date: Thu, 21 Jul 1994 18:14:34 GMT
In article <1994Jul21.021936.314@corona.com>,
Peter Kron <pkron@corona.com> wrote:
>From: pete@genghis.interbase.borland.com (Pete Becker)
>>  In article <1994Jul20.044817.698@corona.com>,
>>  Peter Kron <pkron@corona.com> wrote:
>>>Clearly this is not desirable behavior, but is a side effect of trying
>>>to minimize storage for vtables and pointers to them. The concept of
>>>polymorphism should imply that *objects* determine how they respond to
>>>member functions, not callers.
>>
>>  Clearly, the design decision for this pair of
>>  classes was that they are not polymorphic. That
>>  is not a language flaw, but a deliberate decision on the
>>  part of the designer of these classes. If you want the
>>  call to memberFunction() to be virtual, design your
>>  classes appropriately.
>
>Polymorphism is central to using C++ as an OOPL rather than just as
>a better of C. If polymorphism is not desired, it would be much more
>appropriate to use a different member name rather than overriding
>memberFunction. As I suggested, a keyword could be defined to provide
>non-polymorphic overrides, but that would be the exception rather than
>the rule.
>

 I still don't understand the point. The designer of the base class
decided that it should not be treated polymorphically. This is not a decision
that the language imposed, it is a design decision made for this class. Had
the designer wanted it to be polymorphic, that function could have been made
virtual.
 The fact that someone tried to inherit from this class and "override"
a function that wasn't intended to be overridden is not a language issue. It
is a result of inadequate documentation, misunderstanding the language, or not
paying attention.
 -- Pete





Author: jason@cygnus.com (Jason Merrill)
Date: Mon, 25 Jul 1994 02:29:48 GMT
>>>>> Peter Kron <pkron@corona.com> writes:

> I would like to see the standard address this issue by making the
> default for all member functions be virtual to avoid this general
> problem.

g++ offers this option, with the -fall-virtual flag.

Jason




Author: jason@cygnus.com (Jason Merrill)
Date: Mon, 25 Jul 1994 02:31:13 GMT
>>>>> Rohan LENARD <rjl@f111.iassf.easams.com.au> writes:

> Your structs A & B are aggregates and thus do *not* have destructors, however
> C is not an aggregate (since it has base classes).  It implicitly has a
> destructor, so your code meets the highlighted text.

Where is it written that all non-aggregates have destructors?  I don't
think that is accurate.

Jason




Author: maxtal@physics.su.OZ.AU (John Max Skaller)
Date: Mon, 25 Jul 1994 15:26:05 GMT
In article <1994Jul21.021936.314@corona.com> pkron@corona.com writes:
>From: pete@genghis.interbase.borland.com (Pete Becker)
>>  In article <1994Jul20.044817.698@corona.com>,
>>  Peter Kron <pkron@corona.com> wrote:
>>>Clearly this is not desirable behavior, but is a side effect of trying
>>>to minimize storage for vtables and pointers to them. The concept of
>>>polymorphism should imply that *objects* determine how they respond to
>>>member functions, not callers.
>>
>>  Clearly, the design decision for this pair of
>>  classes was that they are not polymorphic. That
>>  is not a language flaw, but a deliberate decision on the
>>  part of the designer of these classes. If you want the
>>  call to memberFunction() to be virtual, design your
>>  classes appropriately.
>
>Polymorphism is central to using C++ as an OOPL rather than just as
>a better of C. If polymorphism is not desired, it would be much more
>appropriate to use a different member name rather than overriding
>memberFunction. As I suggested, a keyword could be defined to provide
>non-polymorphic overrides, but that would be the exception rather than
>the rule.

 Yes, it could, but the default is the other way around.
Big deal. It's just a default; it has a historical basis.
C++ has a history, you know. Read Stroustrup's Design & Evolution.

>>  Please don't impose your design criteria on the rest of
>>  the world.
>
>Hmmm. It would seem the language is imposing the criteria, not me.
>I'm suggesting changes that I believe would reduce a lot of
>difficulty, based on some recent threads.

 It would also break REAMS of C++ code, for no good
reason other than to change a default to your liking.
Even if your liking is shared by many, that is not enough
to zap almost all C++ code in existence.

 As it happens, I think the default is correct.
(See discussion here some time back on "private virtual" methods)

--
        JOHN (MAX) SKALLER,         INTERNET:maxtal@suphys.physics.su.oz.au
 Maxtal Pty Ltd,
        81A Glebe Point Rd, GLEBE   Mem: SA IT/9/22,SC22/WG21
        NSW 2037, AUSTRALIA     Phone: 61-2-566-2189




Author: immel@chord.centerline.com (Mark Immel)
Date: 15 Jul 1994 15:09:50 GMT
C++ gurus --

  Suppose I have the following (which breaks with every compiler I try) :

 struct A {};
 struct B {};
 struct C: A, B {};

 void foo()
 {
   B* b = new C;
   delete b;
 }

  It breaks in the sense that delete calls free with a bad address.  This
  is a silent failure that can be detected only with a debugging version
  of malloc/free or the like.  But much later, you might find out your
  heap is corrupted...
  This code *SHOULD* be legal according to WP 5.3.5 p 2:

  "The value of the operand of delete must be a pointer to a non-array
   object created by a new-expression without a new-placement specification,
   or a pointer to a subobject representing a base class of such an object."

  My question is this: is it the responsibility of operator delete() to
  do the right thing, or the responsibility of the compiler to change the
  value of b to whatever was returned from new C?
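
For what it's worth, the offset that produces the bad address can be
made visible with a slight variation of the example.  The structs are
given data members here, since with the empty structs above the layout
is less predictable; the output is implementation-dependent:

#include <iostream>

struct A { int a; };
struct B { int b; };
struct C : A, B { int c; };

int main()
{
    C* c = new C;
    B* b = c;                      // the implicit conversion adjusts the pointer
    std::cout << "operator new returned " << (void*)c << '\n'
              << "B subobject is at     " << (void*)b << '\n';
    delete c;                      // deleting through C* is always fine
    return 0;
}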

  You may email me at immel@centerline.com -- I'll follow up to the net
  if there's interest.

-- Mark Immel
   immel@centerline.com







Author: b91926@fsgi01.fnal.gov (David Sachs)
Date: 15 Jul 1994 12:33:51 -0500
immel@chord.centerline.com (Mark Immel) writes: ...

>  Suppose I have the following (which breaks with every compiler I try) :

> struct A {};
> struct B {};
> struct C: A, B {};

> void foo()
> {
>   B* b = new C;
>   delete b;
> }

>  It breaks in the sense that delete calls free with a bad address.  This
>  is a silent failure that can be detected only with a debugging version
>  of malloc/free or the like.  But much later, you might find out your
>  heap is corrupted...
>  This code *SHOULD* be legal according to WP 5.3.5 p 2:

>  "The value of the operand of delete must be a pointer to a non-array
>   object created by a new-expression without a new-placement specification,
>   or a pointer to a subobject representing a base class of such an object."

>  My question is this: is it the responsibility of operator delete() to
>  do the right thing, or the responsibility of the compiler to change the
>  value of b to whatever was returned from new C?

...

The code would work properly if struct B is declared to have
a virtual destructor. e.g. struct B { virtual ~B(){}};

I would really like the C++ standard to REQUIRE this, but even
with such a requirement, the error would probably be undetectable.




Author: immel@chord.centerline.com (Mark Immel)
Date: 18 Jul 1994 13:22:30 GMT
C++ gurus and standard committee --

I asked about the following code a few days ago.  The problem is that the
call to delete b calls free with a bad address; this is a particularly nasty
silent failure -- often the program will crash much later.  Many of you
commented that giving B a virtual destructor will fix the problem; that's
true.  However, the WP 5.3.5 (26 May, 1994 version) implies that my code
is legal:


 struct A {};
 struct B {};
 struct C: A, B {};

 void foo()
 {
   B* b = new C;
   delete b;
 }

5.3.5 paragraph 2:

  "The value of the operand of delete must be a pointer to a non-array
   object created by a new-expression without a new-placement specification,
   or a pointer to a subobject representing a base class of such an object."

5.3.5 paragraph 3:

  "If the static type of the operand is different from its dynamic type *AND
   THE CLASS OF THE COMPLETE OBJECT HAS A DESTRUCTOR*, the static type must
   have a virtual destructor, or the result is undefined." (emphasis mine)

Mr. Hartmut Kocher has been courteous enough to send me his thoughts on the
issue -- he would like to see the WP altered to remove the emphasized text,
I believe (if I have misstated his position, I apologize).  Then, B would
have to have a virtual destructor and all would be well.

Another possibility is:

  "If the static type of the operand is different from its dynamic type, and
   the class of the complete object has multiple inheritance anyplace, the
   static type must have a virtual destructor, or the result is undefined."

However, neither of these is easy to check statically (BTW, neither is the
current WP wording, and this is a nasty bug that I would like my compiler
to nail me for).  Compilers could generate warnings, most of which would
be spurious, about deleting classes without virtual destructors; we would
ignore them all and miss the few that were real trouble.  I think
there are two reasonable options:

1) Require a compiler diagnostic if multiple inheritance is used and at least
   one direct or indirect base class does not have a virtual destructor.

2) (Preferably) use the RTTI information mandated in the standard to convert
   the B* to a C* before it goes off to delete; the language now mandates
   that the type information be around anyhow.  This may be the intention in
   the WP; but it should say so explicitly so that compiler implementors
   realize this.
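
As a sketch of what option 2 amounts to: when the static type happens
to be polymorphic, the same adjustment can already be written in source
code as dynamic_cast<void*>, which yields the address of the complete
object.  B is given a virtual destructor below only so that
dynamic_cast is permitted; option 2 would have the compiler do the
equivalent lookup from its internal type information even when the
class has no virtual functions:

#include <iostream>

struct A { virtual ~A() {} };
struct B { virtual ~B() {} };
struct C : A, B {};

int main()
{
    B* b = new C;
    void* complete = dynamic_cast<void*>(b);   // address of the complete object
    std::cout << "B* = " << (void*)b
              << "  complete object = " << complete << '\n';
    delete b;       // well-defined here, because B has a virtual destructor
    return 0;
}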

-- Mark Immel
   immel@centerline.com

(You may email me if convenient -- I will summarize)