Topic: For Your Amusement: My Favorite C++ Bug


Author: rfg@netcom.com (Ronald F. Guilmette)
Date: Sun, 17 Oct 1993 16:58:28 GMT
In article <1993Oct04.163321.1042@microsoft.com> jimad@microsoft.com (Jim Adcock) writes:
>|Right Jim.  I almost forgot that users just assume that the cost of *their*
>|C++ compiler(s) can be amortized over those millions and millions of Crays
>|sitting on everybody's desks. :-)
>
>The development costs and development amortization are essentially the same
>whether 100,000 people share one $100,000,000 Cray, or each of the 100,000
>people have their own $1000 PC.  People should be free to choose the
>solution that works best for them.  If for some reason quality software is more
>difficult for user's of the Cray to obtain, then that argues strongly against
>the design of the Cray...

That's what I like about reading this newsgroup.  People here have such a
great sense of humor. :-)

--

-- Ronald F. Guilmette ------------------------------------------------------
------ domain address: rfg@netcom.com ---------------------------------------
------ uucp address: ...!uunet!netcom.com!rfg -------------------------------




Author: danodom@matt.ksu.ksu.edu (Dan Odom)
Date: 17 Oct 1993 19:04:58 -0500
Somebody said:

>>The development costs and development amortization are essentially the same
>>whether 100,000 people share one $100,000,000 Cray, or each of the 100,000
>>people have their own $1000 PC.  People should be free to choose the
>>solution that works best for them.  If for some reason quality software is more
>>difficult for user's of the Cray to obtain, then that argues strongly against
>>the design of the Cray...

Sorry, but I can't help picking a nit when I see one :-)

Last I checked (6 months or so ago, so this is old), a Cray Y-MP/C90
1024 was only about $32,000,000, and as this is (was?) the flagship of
the Cray 'line', I doubt they have any $100,000,000 systems.  Thinking
Machines' CM-5 is several times faster than the C90, and it is priced
around $32,000,000.  This, combined with the fact that Cray is quickly
going bankrupt, keeps prices well under the $100M mark.

I don't have any experience with machines of that caliber, but I
imagine that 100,000 users would crash the thing.  That's a LOT of
users; more than live in the city where I am.  And if they're all
running gcc or LISP or emacs...

Quality software is a HELL of a lot easier to obtain, not to mention
cheaper (it's usually free), for Unix systems (which all of the Crays
I've seen are; there are exceptions) than for MS-DOS systems.
According to Business Week, there are more than 18,000 apps available
for Unix, with well under 10,000 (I think it may have been as low as
3,000...) apps available for MS-DOS.  Most Unix software tends to be
'hand-crafted', while most MS-DOS software is just assembly line crap
rushed to market before it was complete (cough, cough, Windows, cough,
cough).

I do, however, agree with the point that the quoted poster was making:
people should choose the solution that works best for them, not what
works best for their friend or what somebody's marketing department
_says_ will work.  Unfortunately, the only way to find out what works
is through experience, and you don't have much of that at first :-).

------>  Just another Unix geek.


--
Dan Odom
danodom@matt.ksu.ksu.edu -- Kansas State University, Manhattan, KS




Author: jimad@microsoft.com (Jim Adcock)
Date: 04 Oct 93 16:33:21 GMT
|Right Jim.  I almost forgot that users just assume that the cost of *their*
|C++ compiler(s) can be amortized over those millions and millions of Crays
|sitting on everybody's desks. :-)

The development costs and development amortization are essentially the same
whether 100,000 people share one $100,000,000 Cray, or each of the 100,000
people have their own $1000 PC.  People should be free to choose the
solution that works best for them.  If for some reason quality software is more
difficult for users of the Cray to obtain, then that argues strongly against
the design of the Cray.  Computers do not exist because they make pretty
metal to sit upon.  They exist to run users' software.

|(My point here being that not every compiler vendor has the luxury of being
|able to amortize the development costs over quite the same market breadth
|as, say for instance, a Microsoft can.)

We are seeing a market situation develop where few hardware makers can
get away with bundling software and hardware and selling the combination
at inflated prices.  Instead they license and/or offer more 'generic'
software such as un*x, gnu stuff, or NT.  True, this means that some
of the more bizarre hardware designs don't receive support from the
software community -- so what?  Are programmers to subsidize hardware
manufacturers' more bizarre efforts?  Yet that is what you propose when
you suggest we limit the language to what the hardware manufacturers
can easily write compilers for, rather than writing compilers based on
the needs and desires of the programming community.





Author: rfg@netcom.com (Ronald F. Guilmette)
Date: Wed, 29 Sep 1993 08:14:58 GMT
In article <1993Sep20.183133.25749@microsoft.com> jimad@microsoft.com (Jim Adcock) writes:
>In article <rfgCDHLB5.J3K@netcom.com> rfg@netcom.com (Ronald F. Guilmette) writes:
>>In article <1993Sep12.152401.29864@newshub.ariel.yorku.ca> cs931003@ariel.yorku.ca (ANETA COSTIN) writes:
>|> Yeah, but what I have always considered the beautiful thing
>|>about C and even more so C++ is that you dont HAVE to use the complex
>|>features. You can just pretend it's a limited language like Pascal and
>|>not use pointers,(etc.)...
>|
>|End users ALWAYS say this sort of thing.
>|
>|It seems to constantly elude their notice that regardless of whether or not
>|THEY use feature X or feature Y, somebody still has to implement these things,
>|and the effort involved in doing that is not free.
>
>Not free, but from the customer's point of view, pricing and availability
>of C/C++ compilers or other moderately successful languages are low enough
>the the customer [correctly] perceives that the costs of the implementor
>are amortized over so many user copies of the compiler that the implementation
>effort is for all practical purposes free.

Right Jim.  I almost forgot that users just assume that the cost of *their*
C++ compiler(s) can be amortized over those millions and millions of Crays
sitting on everybody's desks. :-)

(My point here being that not every compiler vendor has the luxury of being
able to amortize the development costs over quite the same market breadth
as, say for instance, a Microsoft can.)

--

-- Ronald F. Guilmette ------------------------------------------------------
------ domain address: rfg@netcom.com ---------------------------------------
------ uucp address: ...!uunet!netcom.com!rfg -------------------------------




Author: walker@twix.unx.sas.com (Doug Walker)
Date: Wed, 29 Sep 1993 15:23:36 GMT
In article <rfgCE3w8z.LAK@netcom.com>, rfg@netcom.com (Ronald F. Guilmette) writes:
|> >|It seems to constantly elude their notice that regardless of whether or not
|> >|THEY use feature X or feature Y, somebody still has to implement these things,
|> >|and the effort involved in doing that is not free.

More importantly, they may end up having to support code written
by somebody else that DID use the feature.  Unless you write all
your own code, eventually this kind of thing will happen to you.

|> (My point here being that not every compiler vendor has the luxury of being
|> able to amortize the development costs over quite the same market breadth
|> as, say for instance, a Microsoft can.)

Very true.  As a vendor in the relatively tiny Amiga marketplace, I
can attest to this fact.

--
  *****                    / walker@unx.sas.com
 *|_o_o|\\     Doug Walker<  BIX, Portal: djwalker
 *|. o.| ||                \ CompuServe: 71165,2274
  | o  |//
  ======
Any opinions expressed are mine, not those of SAS Institute, Inc.





Author: ark@alice.att.com (Andrew Koenig)
Date: 17 Sep 93 13:25:40 GMT
In article <27aqp8$l8@iskut.ucs.ubc.ca> fahller@elm.ppc.ubc.ca (Bjorn Fahller) writes:

> One exception. Local functions. The only way I know of, to translate a Pascal
> program where a local function recurses, is to bring all the variables of
> the "parent function" as parameters in the recursion. This is doable, but
> can be expensive, and it is ugly.

Take all the parent's variables that the local function uses and
make them into elements of a structure.  Pass a pointer to that
structure as a parameter to the recursion.  Still ugly, but not
particularly expensive.
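
For concreteness, here is a minimal sketch of the transformation Koenig
describes; the struct, function names, and the body of the recursion are
invented for illustration:

struct WalkState {              // the parent's locals, gathered into one struct
    int depth;
    int count;
};

static void walk(WalkState *s, int node)
{
    if (node <= 0)
        return;
    s->count++;                 // former "uplevel" accesses become s->member
    s->depth++;
    walk(s, node - 1);          // the recursion passes the single pointer
    s->depth--;
}

void parent()
{
    WalkState s;                // what used to be the parent's local variables
    s.depth = 0;
    s.count = 0;
    walk(&s, 10);
}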
--
    --Andrew Koenig
      ark@research.att.com




Author: progers@ajpo.sei.cmu.edu (Pat Rogers)
Date: Fri, 17 Sep 1993 16:34:35 EDT
In article <rfgCDHMDH.Kvp@netcom.com> rfg@netcom.com (Ronald F. Guilmette) writes:
>In article <1993Sep13.152835.11688@razor.genasys.com> dannya@razor.genasys.com (Danny Alberts) writes:
>>
>>Ada is fairly complex...
>
>>But its complexity need not be a hinderance...
>
>Once again, allow me to EMPHASIZE that the complexity of a language (any
>language) most definitely IS a hinderance... at least for the poor schmucks
>who get to try implement it (correctly, one hopes).
>

I very much agree, and would add that the users pay for it too.

Complexity, for the implementers, often translates into performance costs
for end users, since nasty interactions of features mean that unused
features are still "paid for" -- i.e., "distributed costs".  Having said that,
however, let me hasten to add that the performance of good Ada compilers
will stand against any language, including assembly.  This is in part
due to the considerable man-hours put into the optimizers, versus the
man-hours of an individual programmer.  Certainly this principle applies
to any language, not just Ada.

Some of the complexity people cite for Ada is more a matter of being
well-defined.  Some of it is just complexity...  We've not found it
to be a problem, though.

>--
>
>-- Ronald F. Guilmette ------------------------------------------------------
>------ domain address: rfg@netcom.com ---------------------------------------
>------ uucp address: ...!uunet!netcom.com!rfg -------------------------------


Pat Rogers
SBS Engineering





Author: cs931003@ariel.yorku.ca (ANETA COSTIN)
Date: Sat, 18 Sep 1993 02:51:44 GMT
In article <rfgCDHLB5.J3K@netcom.com> rfg@netcom.com (Ronald F. Guilmette) writes:
>In article <1993Sep12.152401.29864@newshub.ariel.yorku.ca> cs931003@ariel.yorku.ca (ANETA COSTIN) writes:
>>>The language is complex, but people seem to want it that way.  It seems
>>>that there is a kind of reverse-entropy at work in the design of popular
>>>procedural programming language in the last ten years or so.  Things get
>>>vastly MORE complex rather than less so.  (See Ada for another example.)
>>>
>>
>> Yeah, but what I have always considered the beautiful thing
>>about C and even more so C++ is that you dont HAVE to use the complex
>>features. You can just pretend it's a limited language like Pascal and
>>not use pointers,(etc.)...
>
>End users ALWAYS say this sort of thing.
>
>It seems to constantly elude their notice that regardless of whether or not
>THEY use feature X or feature Y, somebody still has to implement these things,
>and the effort involved in doing that is not free.
>

 What do you mean by END USERS? Programmers who write in the
language as opposed to compiler writers? I hate to be nasty, but I must
say that only a very small percentage of the programmers out there are
writing compilers and such... by this definition, 98% of all programmers
are END USERS.

 BTW, that percentage probably doesn't apply to the company on
this newsgroup, for obvious reasons (also so I don't get flamed!!)

 cs931003, aka Jonathan Shekter, from sunny Toronto, Canada





Author: fjh@munta.cs.mu.OZ.AU (Fergus James HENDERSON)
Date: Sun, 19 Sep 1993 15:51:46 GMT
rfg@netcom.com (Ronald F. Guilmette) writes:

>dannya@razor.genasys.com (Danny Alberts) writes:
>>
>>Ada is fairly complex...
>
>>But its complexity need not be a hinderance...
>
>Once again, allow me to EMPHASIZE that the complexity of a language (any
>language) most definitely IS a hinderance... at least for the poor schmucks
>who get to try implement it (correctly, one hopes).

Not to mention the poor schmucks who as a result have to use buggy, incomplete,
or inefficient implementations and tools.

--
Fergus Henderson                     fjh@munta.cs.mu.OZ.AU




Author: kanze@us-es.sel.de (James Kanze)
Date: 20 Sep 93 13:13:05
In article <rfgCDHLB5.J3K@netcom.com> rfg@netcom.com (Ronald F.
Guilmette) writes:

|> In article <1993Sep12.152401.29864@newshub.ariel.yorku.ca> cs931003@ariel.yorku.ca (ANETA COSTIN) writes:
|> >>The language is complex, but people seem to want it that way.  It seems
|> >>that there is a kind of reverse-entropy at work in the design of popular
|> >>procedural programming language in the last ten years or so.  Things get
|> >>vastly MORE complex rather than less so.  (See Ada for another example.)

|> > Yeah, but what I have always considered the beautiful thing
|> >about C and even more so C++ is that you dont HAVE to use the complex
|> >features. You can just pretend it's a limited language like Pascal and
|> >not use pointers,(etc.)...

|> End users ALWAYS say this sort of thing.

|> It seems to constantly elude their notice that regardless of whether or not
|> THEY use feature X or feature Y, somebody still has to implement these things,
|> and the effort involved in doing that is not free.

More significantly for the people not involved in compiler writing, if
you do accidentally use one of these features, Pascal will generate an
error message, whereas C or C++ will silently compile the program.
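
As one small, invented illustration: assignment being an ordinary expression
is exactly the sort of feature one can use by accident.  The slip below is a
compile-time error in Pascal (where assignment is spelled := and is not an
expression), but C and C++ accept it.

void example(int x, int y)
{
    // Meant to be "x == y".  Pascal rejects the equivalent mistake outright;
    // C and C++ compile it (many compilers merely warn), and the program
    // silently does the wrong thing at run time.
    if (x = y) {
        /* ... */
    }
}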

If you want to restrict yourself to the Pascal-like subset of C, why
not simply use Pascal?
--
James Kanze                             email: kanze@us-es.sel.de
GABI Software, Sarl., 8 rue du Faisan, F-67000 Strasbourg, France
Conseils en informatique industrielle --
                   -- Beratung in industrieller Datenverarbeitung




Author: davis@passy.ilog.fr (Harley Davis)
Date: 20 Sep 93 12:42:38 GMT
In article <rfgCDHLB5.J3K@netcom.com> rfg@netcom.com (Ronald F. Guilmette) writes:

   > Yeah, but what I have always considered the beautiful thing
   >about C and even more so C++ is that you dont HAVE to use the complex
   >features. You can just pretend it's a limited language like Pascal and
   >not use pointers,(etc.)...

   End users ALWAYS say this sort of thing.

   It seems to constantly elude their notice that regardless of
   whether or not THEY use feature X or feature Y, somebody still has
   to implement these things, and the effort involved in doing that is
   not free.

Not to mention the fact that even end users may have to use libraries
which use the complex features.  After all, one of the goals of OOP is
reusable libraries.

-- Harley Davis
--

------------------------------------------------------------------------------
nom: Harley Davis   ILOG S.A.
net: davis@ilog.fr   2 Avenue Gallie'ni, BP 85
tel: (33 1) 46 63 66 66   94253 Gentilly Cedex, France




Author: jimad@microsoft.com (Jim Adcock)
Date: 20 Sep 93 18:31:33 GMT
In article <rfgCDHLB5.J3K@netcom.com> rfg@netcom.com (Ronald F. Guilmette) writes:
>In article <1993Sep12.152401.29864@newshub.ariel.yorku.ca> cs931003@ariel.yorku.ca (ANETA COSTIN) writes:
|> Yeah, but what I have always considered the beautiful thing
|>about C and even more so C++ is that you dont HAVE to use the complex
|>features. You can just pretend it's a limited language like Pascal and
|>not use pointers,(etc.)...
|
|End users ALWAYS say this sort of thing.
|
|It seems to constantly elude their notice that regardless of whether or not
|THEY use feature X or feature Y, somebody still has to implement these things,
|and the effort involved in doing that is not free.

Not free, but from the customer's point of view, pricing and availability
of C/C++ compilers or other moderately successful languages are low enough
that the customer [correctly] perceives that the costs of the implementor
are amortized over so many user copies of the compiler that the implementation
effort is for all practical purposes free.  So the customer rightly asks,
"what features could be added to the language to make my job easier," not
"what features could be removed to make the implementor's life easier."
This is not to imply that adding more and more features makes the user's life
easier, just that focusing on the compiler implementor's point of view
is a mistake.





Author: dannya@razor.genasys.com (Danny Alberts)
Date: Mon, 13 Sep 1993 15:28:35 GMT
In article <rfgCD5DvG.IMK@netcom.com> rfg@netcom.com (Ronald F. Guilmette) writes:
>In article <1993Sep9.162740.28355@sei.cmu.edu> progers@ajpo.sei.cmu.edu (Pat Rogers) writes:
>>In article <rfgCD2yDs.363@netcom.com> rfg@netcom.com (Ronald F. Guilmette) writes:
>>>
>>>...  It seems
>>>that there is a kind of reverse-entropy at work in the design of popular
>>>procedural programming language in the last ten years or so.  Things get
>>>vastly MORE complex rather than less so.  (See Ada for another example.)
>>>
>>
>>Could you elaborate on the complexities of Ada, viz C++ ?
>
>What is there to elaborate?  They are both huge languages.
>
>--
>
>-- Ronald F. Guilmette ------------------------------------------------------
>------ domain address: rfg@netcom.com ---------------------------------------
>------ uucp address: ...!uunet!netcom.com!rfg -------------------------------

I think the point of the original question is not that they are both
very capable (huge), but what some of the complexities are.  I see many
comments about Ada in this newsgroup, but never any technical details.  So
here are a few:

Ada is fairly complex; after all, it was designed by a multi-national
committee with the goal of enabling software to be developed by independent
parties and integrated later, and also to reduce, or at least control,
the exponentially increasing costs and time required to maintain programs
(in the larger sense, systems).

But its complexity need not be a hindrance.  For example, there are many
common syntactical one-liners and concepts that can be implemented which
can save dozens/hundreds (yes, I could even say thousands) of lines from
being written and maintained.  For example, with Ada, one can
define a type, and if this type is derived from another type (a subtype), then
other tricks can be employed, such as range checking.  This can all be
specified in one Ada line; the compiler automatically generates the
necessary code to test ranges at each appropriate location, including
within other procedures.  Hundreds of lines are saved here, and maintenance
costs are severely reduced.
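
For comparison, a rough, hand-written C++ analogue of such a one-line Ada
declaration (for example, "subtype Percent is Integer range 0 .. 100"); the
names are invented, present-day C++ is assumed, and the checks an Ada
compiler generates automatically must be written out by hand here:

#include <stdexcept>

template <int Low, int High>
class Ranged {
    int value;
    static int check(int v) {
        if (v < Low || v > High)
            throw std::out_of_range("value outside declared range");
        return v;
    }
public:
    Ranged(int v) : value(check(v)) {}
    Ranged& operator=(int v) { value = check(v); return *this; }
    operator int() const { return value; }
};

typedef Ranged<0, 100> Percent;   // every construction or assignment is checked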

Like some other languages, Ada can handle overloading, encapsulation,
inheritance for packaged procedures (objects), etc.  It can also easily
handle variable parameter lists, pointers, and shared memory constructs
with other programs.  These features are all part of the language.

Is it complex?  Probably, but not as complex as one might think.  The benefit
is that, by the time the code compiles, more of
the logic bugs and probably all(?) of the syntactic errors have been
eliminated.  All interfaces to external procedures have been verified
(even if the external procedures have not been written yet; a mundane, yet
highly useful requirement of Ada).

I have been programming in a multitude of languages for close to 20 years,
even lower than assembly; I have also hacked on machine code.  So when
I got the opportunity to compare Ada with languages I have used and
am proficient in (PASCAL, C, LISP, FORTRAN IV, V, 77, ASSEMBLY, etc.),
I actually found Ada to be highly useful.  Although its learning curve is
longer than that of some languages, its payback is, IMHO, better quality
software and lower maintenance cost for that software, overall, compared
with any of the other languages I have dealt with.

These are the reasons the DOD helped to participate in its implementation,
and pushes it for its projects.  (The DOD is perhaps still the largest single
implementer of software, either directly or through contractual vehicles.)
I believe the reason Ada has not taken the civilian industry by storm is
that C, the fad language at the time Ada was being developed, was quickly
being learned by many, including those in school.  As is usually the case,
people, when hired and asked for suggestions regarding direction, etc.,
are more likely to recommend technology they are already familiar with.
Although the DOD began to require Ada within the last several years, about
the only people who learned it were under contract, of one sort or another,
to a DOD project.  Most of the Ada people I know are also proficient C
and other language programmers, and most, I believe, agree that Ada offers
far more, yet is a little more complex.

Perhaps, someone can comment on the following:

Many of my C++ friends have told me in the past that C++ was C's
answer to Ada.  From what I have seen of C++, I personally don't think
so.  It does appear to offer some very nice capabilities, including various
methods to implement OODs, but it also still appears to be very
"hackful".  This, in my opinion, makes it "too easy" to keep
producing breakable code, or code that has a higher maintenance cost
(i.e., others will most likely be maintaining it, and the learning curve is
higher).

I hope this helps in your other discussions, since I believe that some
actual information is better than negative insinuations.




Author: progers@ajpo.sei.cmu.edu (Pat Rogers)
Date: Mon, 13 Sep 1993 14:05:57 EDT
In article <1993Sep13.152835.11688@razor.genasys.com> dannya@razor.genasys.com (Danny Alberts) writes:
>
  [deletia]

>Ada is fairly complex, after all, it was designed by a multi-national
>committee with the goal of enabling software to be developed, by independent
>parties, to be integrated later, and also to reduce or at least control
>the exponentially increasing costs and time required to maintain programs
>(in the larger sense, systems).
>

Just a quick correction: Ada was not designed by a multi-national
committee, or a committee of any sort.  The _requirements_ for the language
were developed on an international scale -- perhaps that is what you're
thinking of.  In terms of design, there was a design competition, and one
team won.

Pat Rogers
SBS Engineering




Author: peju@alice.att.com (Peter Juhl)
Date: 13 Sep 93 18:10:08 GMT

dannya@razor.genasys.com (Danny Alberts @ Genasys II Inc., Fort Collins, CO) writes :

>I believe the reason Ada has not taken the civilian industry by storm is
>because C, the fad language at the time Ada was being developed, was quickly
>being learned by many, including those in school.

C cannot have become a fad when Ada was developed.  The Ada definition was
published and pretty much cast in stone in 1979.

The K&R first edition came out in 1978, and was the first widely available C
book.  In fact it was after 1985 that C really took off, when the PC market
discovered it.

--- peter (peju@research.att.com)




Author: osinski@hellgate.cs.nyu.edu (Ed Osinski)
Date: 13 Sep 1993 20:26:29 GMT
In article <1993Sep12.152401.29864@newshub.ariel.yorku.ca>, cs931003@ariel.yorku.ca (ANETA COSTIN) writes:
|> >The language is complex, but people seem to want it that way.  It seems
|> >that there is a kind of reverse-entropy at work in the design of popular
|> >procedural programming language in the last ten years or so.  Things get
|> >vastly MORE complex rather than less so.  (See Ada for another example.)
|> >
|>
|>  Yeah, but what I have always considered the beautiful thing
|> about C and even more so C++ is that you dont HAVE to use the complex
|> features.

Are there any languages that force you to use their complex features?  If I use
Pascal, I don't have to use pointers or sets; if I use Ada, I don't have to use
tasks and exceptions; if I use Lisp, no one is making me use continuations.  It
seems to me that this "beautiful thing" about C and C++ is shared by all
languages.  Maybe there exist languages for which this is not true, but I can't
think of any offhand.

|>  [ stuff deleted ]

|>  cs931003, aka Jonathan Shekter, from sunny Toronto, Canada
|>

--
---------------------------------------------------------------------
 Ed Osinski                  |
 Computer Science Department | "I hope life isn't a big joke,
 New York University         |  because I don't get it."
 E-mail:  osinski@cs.nyu.edu |                           Jack Handey
---------------------------------------------------------------------




Author: jln2@cec2.wustl.edu (Sammy D.)
Date: Tue, 14 Sep 1993 15:18:36 GMT
In article <26555@alice.att.com> peju@alice.att.com (Peter Juhl) writes:
>C can not have become a fad when Ada was developed. The Ada definition was
>published and pretty much cast in stone in 1979.
>
>K&R first edition came out in 1978, and was the first widely available C
>book. In fact is was after 1985 that C really took off, when the PC market
>discovered it.

Yes, but Ada is unusual in that the definition was published well
before a compiler was ever produced.  Dr. Dobb's Tiny-C did a lot to get
C rolling on CP/M machines while the Ada-ites were still struggling
with getting their proto-compilers to accept the grammar, much less
generate code.

P.S.  I love Ada.  I still have my copy of the LRM that SigLang sent to
all its members.  But no one I know codes in it.




Author: fahller@elm.ppc.ubc.ca (Bjorn Fahller)
Date: 16 Sep 1993 22:53:28 GMT
In article <1993Sep12.152401.29864@newshub.ariel.yorku.ca>, cs931003@ariel.yorku.ca (ANETA COSTIN) writes:
|>
|>  Yeah, but what I have always considered the beautiful thing
|> about C and even more so C++ is that you dont HAVE to use the complex
|> features. You can just pretend it's a limited language like Pascal and
|> not use pointers, bitwise ops, the complex expression parsing available
|> in C, etc. That is why it is so easy to write an automatic Pascal to C
|> translator; thwere is an exact analog in C to everything in Pascal, and
|> if you were trained in Pascal, you will not find C limiting. The reverse
|> is not true.

One exception: local functions.  The only way I know of to translate a Pascal
program where a local function recurses is to bring all the variables of
the "parent function" as parameters in the recursion. This is doable, but
can be expensive, and it is ugly.
   _
/Bjorn.
--
UBC Pulp & Paper Centre | 4549 W. 11th Avenue    | A Swede in
2385 East Mall          | Vancouver, B.C.        | temporary
Vancouver, B.C.         | Canada V6R 2M5         | exile.
Canada V6T 1Z4          |                        |
Tel: +1 604/822-8567    | Tel: +1 604/222-4952   |




Author: rfg@netcom.com (Ronald F. Guilmette)
Date: Fri, 17 Sep 1993 07:11:28 GMT
In article <1993Sep12.152401.29864@newshub.ariel.yorku.ca> cs931003@ariel.yorku.ca (ANETA COSTIN) writes:
>>The language is complex, but people seem to want it that way.  It seems
>>that there is a kind of reverse-entropy at work in the design of popular
>>procedural programming language in the last ten years or so.  Things get
>>vastly MORE complex rather than less so.  (See Ada for another example.)
>>
>
> Yeah, but what I have always considered the beautiful thing
>about C and even more so C++ is that you dont HAVE to use the complex
>features. You can just pretend it's a limited language like Pascal and
>not use pointers,(etc.)...

End users ALWAYS say this sort of thing.

It seems to constantly elude their notice that regardless of whether or not
THEY use feature X or feature Y, somebody still has to implement these things,
and the effort involved in doing that is not free.

--

-- Ronald F. Guilmette ------------------------------------------------------
------ domain address: rfg@netcom.com ---------------------------------------
------ uucp address: ...!uunet!netcom.com!rfg -------------------------------




Author: rfg@netcom.com (Ronald F. Guilmette)
Date: Fri, 17 Sep 1993 07:34:29 GMT
In article <1993Sep13.152835.11688@razor.genasys.com> dannya@razor.genasys.com (Danny Alberts) writes:
>
>Ada is fairly complex...

>But its complexity need not be a hinderance...

Once again, allow me to EMPHASIZE that the complexity of a language (any
language) most definitely IS a hindrance... at least for the poor schmucks
who get to try to implement it (correctly, one hopes).

--

-- Ronald F. Guilmette ------------------------------------------------------
------ domain address: rfg@netcom.com ---------------------------------------
------ uucp address: ...!uunet!netcom.com!rfg -------------------------------




Author: kanze@us-es.sel.de (James Kanze)
Date: 9 Sep 93 19:48:08
In article <rfgCD2z7H.3u1@netcom.com> rfg@netcom.com (Ronald F.
Guilmette) writes:

|> In article <PENA.93Sep1134343@daredevil.hut.fi> pena@niksula.hut.fi (Olli-Matti Penttinen) writes:
|> >
|> >This, of course, does not mean that the language is simple in this
|> >respect.  The inherent problem here is much due to the somewhat
|> >arbitrary desicions that have to be made to disambiguate calls.  There
|> >just is no one definition of a "best match".  I can come up with only
|> >one clear and easy-to-understand rule: if no exact match is found and
|> >more than one function could be called, the call is ambiguous.  I
|> >could even happily live with such a rule, but many others couldn't.

|> I also could live with exactly that sort of nice simple rule, and I
|> sincerly wish that X3J16 would simply trash all of this undue complexity
|> and adopt *exactly* the rule you have suggested.

One of the frustrations in defining a language standard is that you
cannot start until the language has existed long enough to find out
what's wrong with it, but then you are not allowed to correct the
mistakes because that would break existing code.

I think the suggested rule would be an improvement over what we now
have.  It is certainly easier to formulate and to remember.  But it
would also certainly break a lot of existing code.
--
James Kanze                             email: kanze@us-es.sel.de
GABI Software, Sarl., 8 rue du Faisan, F-67000 Strasbourg, France
Conseils en informatique industrielle --
                   -- Beratung in industrieller Datenverarbeitung




Author: progers@ajpo.sei.cmu.edu (Pat Rogers)
Date: Thu, 9 Sep 1993 16:27:40 EDT
In article <rfgCD2yDs.363@netcom.com> rfg@netcom.com (Ronald F. Guilmette) writes:
>
>...  It seems
>that there is a kind of reverse-entropy at work in the design of popular
>procedural programming language in the last ten years or so.  Things get
>vastly MORE complex rather than less so.  (See Ada for another example.)
>

Could you elaborate on the complexities of Ada, viz C++ ?

pat rogers
SBS Engineering




Author: rfg@netcom.com (Ronald F. Guilmette)
Date: Fri, 10 Sep 1993 16:59:39 GMT
In article <1993Sep9.162740.28355@sei.cmu.edu> progers@ajpo.sei.cmu.edu (Pat Rogers) writes:
>In article <rfgCD2yDs.363@netcom.com> rfg@netcom.com (Ronald F. Guilmette) writes:
>>
>>...  It seems
>>that there is a kind of reverse-entropy at work in the design of popular
>>procedural programming language in the last ten years or so.  Things get
>>vastly MORE complex rather than less so.  (See Ada for another example.)
>>
>
>Could you elaborate on the complexities of Ada, viz C++ ?

What is there to elaborate?  They are both huge languages.

--

-- Ronald F. Guilmette ------------------------------------------------------
------ domain address: rfg@netcom.com ---------------------------------------
------ uucp address: ...!uunet!netcom.com!rfg -------------------------------




Author: dak@hathi.informatik.rwth-aachen.de (David Kastrup)
Date: 11 Sep 1993 09:35:03 GMT
rfg@netcom.com (Ronald F. Guilmette) writes:

>In article <1993Sep9.162740.28355@sei.cmu.edu> progers@ajpo.sei.cmu.edu (Pat Rogers) writes:
>>In article <rfgCD2yDs.363@netcom.com> rfg@netcom.com (Ronald F. Guilmette) writes:
>>>
>>>...  It seems
>>>that there is a kind of reverse-entropy at work in the design of popular
>>>procedural programming language in the last ten years or so.  Things get
>>>vastly MORE complex rather than less so.  (See Ada for another example.)
>>>
>>
>>Could you elaborate on the complexities of Ada, viz C++ ?

>What is there to elaborate?  They are both huge languages.

Yes, but Ada was designed from scratch with its current complexity, and
thus is rather coherent in its design, whereas C++ grew out of C,
with compatibility with ANSI C in mind, and with a set of different concepts
added in separate leaps (templates (WHAT a stupefying syntax!), exceptions,
...).  Some C++ problems result from these design decisions, such as
the lack of usable multi-dimensional dynamic arrays, and the current
unbelievable complexity of the type conversion rules, which has baffled a lot
of users (and even specialists), as several threads have shown.
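
To make the multi-dimensional point concrete, a common workaround (an
invented example, written in present-day C++) is a small class that owns a
flat buffer and does its own index arithmetic, since the language offers
nothing built in for dynamically sized two-dimensional arrays:

#include <vector>

class Matrix {
    std::vector<double> data;   // one flat allocation
    int cols;
public:
    Matrix(int rows, int columns) : data(rows * columns), cols(columns) {}
    double& operator()(int r, int c)       { return data[r * cols + c]; }
    double  operator()(int r, int c) const { return data[r * cols + c]; }
};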
--
 David Kastrup        dak@pool.informatik.rwth-aachen.de
 Tel: +49-241-72419 Fax: +49-241-79502
 Goethestr. 20, D-52064 Aachen




Author: cs931003@ariel.yorku.ca (ANETA COSTIN)
Date: Sun, 12 Sep 1993 15:24:01 GMT
>The language is complex, but people seem to want it that way.  It seems
>that there is a kind of reverse-entropy at work in the design of popular
>procedural programming language in the last ten years or so.  Things get
>vastly MORE complex rather than less so.  (See Ada for another example.)
>

 Yeah, but what I have always considered the beautiful thing
about C and even more so C++ is that you don't HAVE to use the complex
features. You can just pretend it's a limited language like Pascal and
not use pointers, bitwise ops, the complex expression parsing available
in C, etc. That is why it is so easy to write an automatic Pascal to C
translator; there is an exact analog in C to everything in Pascal, and
if you were trained in Pascal, you will not find C limiting. The reverse
is not true.

 As for C++, you can start out of course with plain C, then try
your hand at a few classes, then some inheritance, then maybe virtual
functions, up to multiple inheritance and all the interesting problems
that produces, then on to overloading and types, etc... You can use as
much or as little as you want. That is how I learned C++:

 1) Hmmm... I have these related variables and functions..
wouldn't that be a good place for a class?

 2) Gee, this is almost the same as my class, only it does a
little bit more. Hey, inheritance, neat!

 3) I have this problem: I want to pass a pointer to an object to
a function, but it could be one of two types... Wow, virtual functions!

 4) The new class would combine both the properties of a Camera
and a WireObject (I was writing 3D graphics code at the time)...
Multiple inheritance fits here...

 5) Now I have this problem: casting the pointer to a Camera and
accessing a base member gives me a different value than casting it to a
WireObject pointer and accessing the base member. Hmm... virtual base
classes.... (A minimal sketch of this situation follows.)
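
A minimal sketch of the situation in point 5, as described above; everything
except the names Camera and WireObject is guessed, since the original code is
not shown:

struct Placed { double x, y, z; };              // hypothetical shared base

struct Camera     : virtual Placed { /* ... */ };
struct WireObject : virtual Placed { /* ... */ };

// Without "virtual" above, a WireCamera would carry two independent Placed
// subobjects, and the member seen through a Camera* could differ from the one
// seen through a WireObject*.  With virtual inheritance there is a single
// shared Placed, so both paths name the same members.
struct WireCamera : Camera, WireObject { /* ... */ };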

 At each step I just thumbed through the manual until I found
what I needed. And so, I learned it step by step.

 Note that even if you do not use OOP, C++  is a nice extension
to C in that it provides much better type checking, function
overloading, the ability to return and assign structures,
reference types (finally!), etc.

 In other words, yes, it's complex, but you don't have to use it
all!

 cs931003, aka Jonathan Shekter, from sunny Toronto, Canada





Author: rfg@netcom.com (Ronald F. Guilmette)
Date: Thu, 9 Sep 1993 09:03:53 GMT
In article <CCLEEM.K3z@sugar.NeoSoft.COM> daniels@NeoSoft.com (Brad Daniels) writes:
>
>My point was that the existing operator<<(ostream&,const char *) operator
>should not even be considered as a match because calling it would require
>the creation of a non-const temporary, meaning it is illegal...

I believe that it is a known problem with both the ARM and the current
X3J16 working paper that it is not really entirely clear if the process
of overload resolution involves first finding a set of functions which
could be legally called and then finding the "best match" among those, or
if it instead requires finding a "best match" from among all functions
of the given name and then checking to see if that best matching function
can legally be called.  (Different passages in chapter 13 seem to contradict
one another on this point.)

--

-- Ronald F. Guilmette ------------------------------------------------------
------ domain address: rfg@netcom.com ---------------------------------------
------ uucp address: ...!uunet!netcom.com!rfg -------------------------------




Author: rfg@netcom.com (Ronald F. Guilmette)
Date: Thu, 9 Sep 1993 09:29:51 GMT
In article <BRUCE.93Aug30224446@utafll.utafll.uta.edu> bruce@utafll.uta.edu (Bruce Samuelson) writes:
>The analyses offered in the recent postings to this thread seem too
>complex for an average programmer to comprehend.

Did anyone ever seriously claim that C++ was for the "average" programmer?
(I believe that it quite clearly requires an ABOVE AVERAGE  memory to
remember all of the rules and all of their special case exceptions...
and X3J16 isn't even done cooking the language yet!  They are still
adding things!)

>Is this complexity
>inherent in the language, or is it due to flaws in the reference
>manual?

Both of the above.  By any measure, C++ is 4-5 times as complex as C
(which was itself about on a par with FORTRAN 77).

>How could the language or manual be improved in this case?

The language is complex, but people seem to want it that way.  It seems
that there is a kind of reverse-entropy at work in the design of popular
procedural programming languages in the last ten years or so.  Things get
vastly MORE complex rather than less so.  (See Ada for another example.)

As regards the manual... don't get me started!


--

-- Ronald F. Guilmette ------------------------------------------------------
------ domain address: rfg@netcom.com ---------------------------------------
------ uucp address: ...!uunet!netcom.com!rfg -------------------------------




Author: rfg@netcom.com (Ronald F. Guilmette)
Date: Thu, 9 Sep 1993 09:47:41 GMT
In article <PENA.93Sep1134343@daredevil.hut.fi> pena@niksula.hut.fi (Olli-Matti Penttinen) writes:
>
>This, of course, does not mean that the language is simple in this
>respect.  The inherent problem here is much due to the somewhat
>arbitrary desicions that have to be made to disambiguate calls.  There
>just is no one definition of a "best match".  I can come up with only
>one clear and easy-to-understand rule: if no exact match is found and
>more than one function could be called, the call is ambiguous.  I
>could even happily live with such a rule, but many others couldn't.

I also could live with exactly that sort of nice simple rule, and I
sincerely wish that X3J16 would simply trash all of this undue complexity
and adopt *exactly* the rule you have suggested.

--

-- Ronald F. Guilmette ------------------------------------------------------
------ domain address: rfg@netcom.com ---------------------------------------
------ uucp address: ...!uunet!netcom.com!rfg -------------------------------




Author: bruce@utafll.uta.edu (Bruce Samuelson)
Date: Tue, 31 Aug 1993 04:44:46 GMT
The analyses offered in the recent postings to this thread seem too
complex for an average programmer to comprehend. Is this complexity
inherent in the language, or is it due to flaws in the reference
manual? How could the language or manual be improved in this case?
--
**********************************************************
* Bruce Samuelson Department of Linguistics  *
* bruce@ling.uta.edu University of Texas at Arlington *
**********************************************************






Author: daniels@NeoSoft.com (Brad Daniels)
Date: Tue, 31 Aug 1993 13:42:35 GMT
In article <courtney.746755834@nebula> courtney@parc.xerox.com (Antony Courtney) writes:
>daniels@NeoSoft.com (Brad Daniels) writes:
[...]
>>Indeed, I can see no advantage to allowing an illegal match to be considered a
>>best match other than a possible small gain in compile-time efficiency and a
>>very small reduction in match searching complexity.
>>
>
>One might consider that the compiler catching the bug in the original code
>posted by Fouts to be one such "advantage".

I agree to a certain extent, but to anyone unaware of the fact that
const-ness is ignored during function matching, the code looks unambiguous,
though the behavior may take a little while to track down.

>In *this* instance, I don't think the wording of the ARM is motivated by
>"compile-time efficiency" or "an appology for CFront", as your article
>suggests.  In this case, I think safety is the motivation.  The ARM uses a
>single set of rules for picking a function from the "namespace" of functions
>with an overloaded name.  If the function which matches may not be called in
>the given context, it is reported as a compile time error.

The problem I have with this section is that the ARM introduces a seemingly
arbitrary, highly counter-intuitive rule which creates unnecessary ambiguity.
Clearly, some (many?) compiler designers have agreed with me, since at
least one implementation of C++ handled Fouts' example the way I expected
before Ellis pointed out the text I hadn't seen.

>The potential for ambiguity with overloaded function names is altogether
>dangerous, as Fouts' example shows us.  In the presence of such ambiguity, I
>definitely do NOT want the compiler to guess at "which function the programmer
>must have _meant_ to call in the given context".

Agreed.  However, this is more a case of "which function *can* the programmer
call in this context".  Given the "best match" rule, it seems unreasonable that
a compiler should ever consider an illegal match to be "best".  If we are going
to have such rules at all, they should be as uniform and intuitive as possible,
so that a reasonably knowledgeable programmer can predict what the code will
do without needing to know obscure algorithmic details of function matching.

>Can you suggest a situation in which the compiler not flagging the type of
>call presented in the original example as an error would be an "advantage"?

That particular example would have benefited from the compiler flagging the
call, I agree.  It is difficult to contrive a concise example which would
show the benefit of allowing only legal matches to be considered candidates
for best match, since with most concise examples, there is an alternate way
to achieve the same effect.  I will, however, make an effort:

class ObjWithStatus {
    int stat;
public:
    ObjWithStatus() {}
    void set_status(int s) {stat=s;}
};

inline void FlagError(ObjWithStatus &s, const int errnum)
{
    s.set_status(errnum);
}

inline void FlagError(const ObjWithStatus &s, const char* msg)
{ /* Report msg */ }

class ErrorValue {
    int val;
public:
    enum vals { REALLY_UNUSUAL_ERROR, ERROR_WE_NEED_TO_TRACK };
    ErrorValue(int v) : val(v) {}
    static const char *LookUpErrMsg(int v);
    operator const int() const { return val; }
    operator const char *() const { return LookUpErrMsg(val); }
};

int uFunc1(const ObjWithStatus &ob)
{
    ...
    if (ReallyUnusualError) FlagError(ob, ErrorValue::REALLY_UNUSUAL_ERROR);
    // The above will either be an error, or will report the message
    // for ReallyUnusualError.
    ...
}

void userFunc1()
{
    ObjWithStatus ob;
    if (uFunc1(ob)) FlagError(ob, ErrorValue::ERROR_WE_NEED_TO_TRACK);
    ...
}

This admittedly contrived and incomplete example is in error under the
current ARM definition, but is unambiguous if illegal matches are not
considered when determining the best match.  I probably wouldn't advocate
the above approach in a real program, so I can't defend it too strongly.  My
main objection here is that the ARM requires behavior which is highly
counter-intuitive, and which causes programs for which there is a reasonable
and intuitive interpretation to be considered illegal.

- Brad

--
Brad Daniels   |  "Let others praise ancient times.
daniels@neosoft.com  |   I am glad I was born in these."
I don't work for NeoSoft, and | - Ovid (43 B.C. - 17 A.D)
don't speak for my employer. |




Author: kanze@us-es.sel.de (James Kanze)
Date: 31 Aug 93 20:32:35
In article <CCLEEM.K3z@sugar.NeoSoft.COM> daniels@NeoSoft.com (Brad
Daniels) writes:

|> Since we've gotten into an area where the ARM forwards one interpretation
|> and the working document promotes a different one, I thought I'd start
|> cross-posting to comp.std.c++.  For those just joining, here's the code
|> example (by Marty Fouts) which started the discussion:

|> #include <iostream.h>

|> class aClass {
|> public:
|>   char *message;
|>   aClass(const char *newmessage);
|>   ~aClass() { delete [] message; }
|> };

|> aClass::aClass(const char *newmessage)
|> {
|>   message = new char[strlen(newmessage)+1];
|>   strcpy(message, newmessage);
|> }

|> const ostream& operator<<(const ostream& o, const aClass &a)
|> {
|>   o << a.message;
|>   return o;
|> }

|> The output statement results in infinite recursion on Fouts' system.
|> John Ellis pointed out (correctly, according to the ARM) that the above
|> is not legal C++.  I, having only the working document, arrived at a
|> different interpretation.

|> In article <1993Aug30.192359.22584@parc.xerox.com>
|> ellis@parc.xerox.com (John Ellis) writes:

|> >Here are selected quotations from the May, 1991 reprint of the ARM:
|> >
|> >-   Note that functions with arguments of type T, const T, volatile T,
|> >    T&, const T&, and volatile T& accept exactly the same set of
|> >    values [for the purposes of argument matching]. [page 318]
|> >
|> >-   A temporary variable is needed for a formal argument of type T&
|> >    if the actual argument is not an lvalue, has a type different from
|> >    T, or is a volatile and T isn't.  This does not affect argument
|> >    matching.  It may, however, affect the legality of the resulting match
|> >    since a temporary may not be used to initialize a non-const
|> >    reference (r8.4.3). [page 318]
|> >
|> >-   In other words, "constness" acts as a tie-breaker where needed
|> >    but does not affect argument matching otherwise. [page 320]

Actually, it depends which page of the ARM you read.  The very first
sentence in section 13.2 states: "A call of a given function name
chooses, from among all functions by that name that are in scope AND
FOR WHICH A SET OF CONVERSIONS EXISTS SO THAT THE FUNCTION COULD
POSSIBLY BE CALLED, the function that best matches the actual
arguments."  If we are to believe this sentence (page 312), then the
sentence quoted several pages later implying that argument matching
may find a function which cannot be called (because it would require a
temporary for a non-const reference) seems a bit out of place.
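
For readers skimming the thread, a minimal illustration of the r8.4.3 rule
everything here turns on; the functions are invented and have nothing to do
with Fouts' code:

void modify(int& r)        { r = 0; }
void inspect(const int& r) { (void) r; }

void demo(const int c)
{
    inspect(c);     // fine: a const int may initialize a const int&
    inspect(42);    // fine: a temporary may initialize a const reference
    // modify(42);  // error: a temporary may not initialize a non-const reference
    // modify(c);   // error: a const object may not initialize a plain int&
}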

|> Clearly, this is a place where the working document needs significant
|> clarification.  Do you favor clarifying it in the direction of counter-
|> intuitive behavior such as the example from the ARM you quoted, or should
|> it be clarified to codify a more rational definition of "best match"?  Really,
|> an illegal match shouldn't even be considered... and I don't want to hear
|> about it making things more complex for compiler developers.  As Fouts' ex-
|> perience has pointed out, there are already compilers which work the way
|> I would expect.  The ARM approach, if incorporated into the standard, would
|> force compilers to fail unnecessarily in order to conform to a particular,
|> irrational matching algorithm.

As it happens, this point was discussed in Munich; while there is not
yet a formal resolution, there was an overwhelming consensus for
maintaining the interpretation presented in the first sentence quoted.
In this case, the only operator<< that can be called is the recursive
call.  There is no ambiguity.
--
James Kanze                             email: kanze@us-es.sel.de
GABI Software, Sarl., 8 rue du Faisan, F-67000 Strasbourg, France
Conseils en informatique industrielle --
                   -- Beratung in industrieller Datenverarbeitung




Author: kanze@us-es.sel.de (James Kanze)
Date: 31 Aug 93 20:34:37
In article <BRUCE.93Aug30224446@utafll.utafll.uta.edu>
bruce@utafll.uta.edu (Bruce Samuelson) writes:

|> The analyses offered in the recent postings to this thread seem too
|> complex for an average programmer to comprehend. Is this complexity
|> inherent in the language, or is it due to flaws in the reference
|> manual? How could the language or manual be improved in this case?

In this case, there is a definite flaw in the reference manual.  Which
is not to say that the language is simple.
--
James Kanze                             email: kanze@us-es.sel.de
GABI Software, Sarl., 8 rue du Faisan, F-67000 Strasbourg, France
Conseils en informatique industrielle --
                   -- Beratung in industrieller Datenverarbeitung




Author: pkt@lpi.liant.com (Scott Turner)
Date: Tue, 31 Aug 1993 17:51:28 GMT
In article <BRUCE.93Aug30224446@utafll.utafll.uta.edu>, bruce@utafll.uta.edu (Bruce Samuelson) writes:
> The analyses offered in the recent postings to this thread seem too
> complex for an average programmer to comprehend. Is this complexity
> inherent in the language, or is it due to flaws in the reference
> manual?

It's inherent in the language.

Others probably have an entirely different point of view of this, but
here's how I believe it works.  The programmer writes an expression
using an overloaded function, expecting the named function to do a
certain kind of thing.  The compiler looks through the available functions
and attempts to use the one which is the best match for the given arguments
or operands.  Sometimes none of the candidates is definitely superior,
in which case the compiler issues a diagnostic.  In this case the
programmer can use an explicit conversion of one or more operands to
select the desired particular function.
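
As a small, invented example of that last resort:

void draw(long)   { /* ... */ }
void draw(double) { /* ... */ }

void demo(int i)
{
    // draw(i);      // typically rejected as ambiguous: int -> long and
    //               // int -> double are both plain standard conversions
    draw((long) i);  // an explicit conversion selects draw(long)
}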

Sometimes the compiler selects a function which does the wrong thing.
In this case the blame can often be laid on the programmer, because there's
a general principle that functions with the same name should do the
same thing -- only they do it to different types of operands.
So by following this guideline and the above-described method of
writing a call to an overloaded function, programmers get by.
Judging from the rarity of postings like yours and Fouts's, I guess
they get by quite well.

Complaints do come from programmers whose compiler issued a diagnostic
when the programmer thought it should have been able to choose one of
the functions.  So the overloading rules are refined to make lots of
fine distinctions regarding why one function should be chosen over another.
Thus, it's the needs of programmers which have caused the rules to
be incomprehensible.

Disclaimer:  I acquired this point of view while implementing C++.
--
Prescott K. Turner, Jr.
Liant Software Corp. (developers of LPI languages)
959 Concord St., Framingham, MA 01701 USA    (508) 872-8700
UUCP: uunet!lpi!pkt                          Internet: pkt@lpi.liant.com




Author: pena@niksula.hut.fi (Olli-Matti Penttinen)
Date: 1 Sep 93 13:43:43
In article <KANZE.93Aug31203437@slsvhdt.us-es.sel.de> kanze@us-es.sel.de (James Kanze) writes:

   In article <BRUCE.93Aug30224446@utafll.utafll.uta.edu>
   bruce@utafll.uta.edu (Bruce Samuelson) writes:

   |> The analyses offered in the recent postings to this thread seem too
   |> complex for an average programmer to comprehend. Is this complexity
   |> inherent in the language, or is it due to flaws in the reference
   |> manual? How could the language or manual be improved in this case?

   In this case, there is a definite flaw in the reference manual.  Which
   is not to say that the language is simple.

The average programmer has no real need to understand the fine details
of argument matching until he encounters a surprising situation like
the one that started this thread.  During my five years of experience
of using and teaching the language, I have very seldom had to consider
the rules at all.  Usually, sound design prevents situations in which
several functions could *reasonably* be called from occurring.

This, of course, does not mean that the language is simple in this
respect.  The inherent problem here is much due to the somewhat
arbitrary decisions that have to be made to disambiguate calls.  There
just is no one definition of a "best match".  I can come up with only
one clear and easy-to-understand rule: if no exact match is found and
more than one function could be called, the call is ambiguous.  I
could even happily live with such a rule, but many others couldn't.


==pena
--
Olli-Matti Penttinen <pena@niksula.cs.hut.fi>
Lehdesniityntie 3 F 91
00340 HELSINKI, Finland            "When in doubt, use brute force."
tel. + 358 0 1399 0110                -- Ken Thompson




Author: rmartin@rcmcon.com (Robert Martin)
Date: Tue, 31 Aug 1993 21:33:41 GMT
bruce@utafll.uta.edu (Bruce Samuelson) writes:

>The analyses offered in the recent postings to this thread seem too
>complex for an average programmer to comprehend. Is this complexity
>inherent in the language, or is it due to flaws in the reference
>manual? How could the language or manual be improved in this case?


       "The reason for accepting this extra complexity is
       that it permits a wider range of concepts to be
       conveniently expressed."
                                 Ole-Johan Dahl
                                 Structured Programming,
                                 Academic Press, 1972
                                 Page 179.

Dahl was referring to the complexity of the object paradigm implemented
in the Simula language.

Complexity is a double-edged sword.  It is bad for those who have to deal
with it, but it greatly benefits those who don't: a microwave oven or a VCR,
for example.

In this case, the language user does not generally have to deal with the
weird complexities of the language spec.  But the implementors and
specifiers do.




--
Robert Martin       | Design Consulting   | Training courses offered:
Object Mentor Assoc.| rmartin@rcmcon.com  |   Object Oriented Analysis
2080 Cranbrook Rd.  | Tel: (708) 918-1004 |   Object Oriented Design
Green Oaks IL 60048 | Fax: (708) 918-1023 |   C++




Author: daniels@NeoSoft.com (Brad Daniels)
Date: Mon, 30 Aug 1993 21:59:09 GMT
Since we've gotten into an area where the ARM forwards one interpretation
and the working document promotes a different one, I thought I'd start
cross-posting to comp.std.c++.  For those just joining, here's the code
example (by Marty Fouts) which started the discussion:

#include <iostream.h>
#include <string.h>    // needed for strlen and strcpy below

class aClass {
public:
  char *message;
  aClass(const char *newmessage);
  ~aClass() { delete [] message; }
};

aClass::aClass(const char *newmessage)
{
  message = new char[strlen(newmessage)+1];
  strcpy(message, newmessage);
}

const ostream& operator<<(const ostream& o, const aClass &a)
{
  o << a.message;
  return o;
}

The output statement results in infinite recursion on Fouts' system.
John Ellis pointed out (correctly, according to the ARM) that the above
is not legal C++.  I, having only the working document, arrived at a
different interpretation.

In article <1993Aug30.192359.22584@parc.xerox.com> ellis@parc.xerox.com (John Ellis) writes:
>Brad Daniels writes:
>
>    In the example given, the first argument is const ostream&, not
>    ostream&.  In order to convert a const ostream& to an ostream&, a
>    temporary is required.  Section 13.2 expressly forbids the
>    creation of the temporary, however, meaning that the ostream
>    operator<< functions are not even examined.  Only operators taking
>    const ostream& as the first operand will be searched, meaning that
>    the given operator is the only legitimate one, resulting,
>    correctly, in infinite recursion.
>
>I believe Daniels has misread the ARM (not hard to do, unfortunately).
>My original analysis stands -- Fouts's example is not legal ARM C++.

Hmm, perhaps I'm being bitten yet again by my lack of an ARM.  I was using
the working document (Jan. 28 1993.)

>Daniels says:
>
>    In order to convert a const ostream& to an ostream&, a temporary
>    is required.
>
>ARM C++ never allows a T& to be initialized with a const T& (r8.4.3).
>In no circumstances would a temporary *ever* be introduced to
>"convert" a const T& to a T&.  So the part from 13.2 quoted by
>Daniels:

Yes...  I was assuming that you thought it could happen, and I was trying
to point out where the document says it can't.
I suppose it's irrelevant, in any case.

>    No temporaries will be introduced for this extra argument
>    [representing "this"]...  to achieve a type match.
>
>does not apply to Fouts's example.
>
>ARM 13.2 is explicit that const-ness and the creation of temporaries
>do not affect selection of overloaded functions.  A function is first
>selected without regard to const-ness and temporaries, and only then
>is the call examined to see if it is legal.

Is there equivalent text in any of the other documents?  This rule
means that the compiler would consider an illegal call a match, potentially
resulting in a case where there is no best match.  The closest I can find
in the working document is text like that quoted below, saying that T,
const T, etc. accept exactly the same set of values [for matching
purposes].  As the working document states it, that is just plain wrong:
they do not accept exactly the same set of values, they are merely treated
as though they did for matching purposes (a qualification the working
document leaves out).  As it stands, only the additional material you quote
below states the actual nature of the beast.
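
A small illustration of why the working document's statement, taken
literally, cannot be right (the names here are mine and appear in neither
document):

void h(int&) { }        // takes a non-const reference

int main()
{
  const int ci = 0;
  h(ci);                // ill-formed: ci cannot initialize int&, so int&
                        // and const int& plainly do not accept the same
                        // set of values; they are only treated that way
                        // for matching purposes
  return 0;
}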

My point was that the existing operator<<(ostream&,const char *) operator
should not even be considered as a match because calling it would require
the creation of a non-const temporary, meaning it is illegal.  Having
eliminated that possible match from consideration, only the
operator<<(const ostream&, const aClass &) is a legal match for the second
argument, meaning it is a valid best match for both arguments.
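
As a stripped-down sketch of the disagreement (all the names here, S, f and
g, are my own, and the real iostream library of course declares many more
operator<< overloads than the two at issue):

struct S {
  S(const char *) { }              // user-defined conversion from const char*
};

void f(int &, const char *) { }    // stands in for operator<<(ostream&, const char*)
void f(const int &, const S &) { } // stands in for the operator<< defined above

void g(const int &i, const char *p)
{
  f(i, p);
  // ARM reading (Ellis): f(int&, const char*) is selected, because it is
  // the better match on the second argument, and the call is then rejected
  // because i cannot initialize the int& parameter.
  // My reading of the working document: that candidate is discarded as
  // illegal before the best match is determined, leaving f(const int&,
  // const S&), which is called with a temporary S built from p.
}

int main()
{
  int x = 0;
  g(x, "text");
  return 0;
}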

>Here are selected quotations from the May, 1991 reprint of the ARM:
>
>-   Note that functions with arguments of type T, const T, volatile T,
>    T&, const T&, and volatile T& accept exactly the same set of
>    values [for the purposes of argument matching]. [page 318]
>
>-   A temporary variable is needed for a formal argument of type T&
>    if the actual argument is not an lvalue, has a type different from
>    T, or is a volatile and T isn't.  This does not affect argument
>    matching.  It may, however, affect the legality of the resulting match
>    since a temporary may not be used to initialize a non-const
>    reference (r8.4.3). [page 318]
>
>-   In other words, "constness" acts as a tie-breaker where needed
>    but does not affect argument matching otherwise. [page 320]

Reading these, I see your contention more clearly.  To summarize, the set
of functions best matching on the first argument is:

ostream &operator<<(ostream&,const char*)
const ostream &operator<<(const ostream&,const aClass&)

while the set best matching on the second argument is:

ostream &operator<<(ostream&,const char*)

Thus the intersection is exactly the one function which is not legal to
call, making the whole call illegal.

Is this truly the intent of the ARM, or is it simply a weird side effect?
It seems to me that illegal matches should be eliminated before the best
match is determined.  In the absence of wording to the contrary, one would
naturally assume that the language requires illegal matches not to be
considered when determining the best match; an illegal match does not
satisfy any reasonable definition of "best match".  Of course, there is
the later text:

>-   Note that the use of references does not change the set of values
>    accepted compared to their corresponding object types. ...  On the
>    other hand, a temporary may not be used to initialize a non-const
>    reference, so a call may be determined to be an error after the
>    selection of a function:
>
>        void f(char&);
>        void f(short);
>        void g() {
>            f('c'); }    // error: call f(char&), but requires temporary
>
>    Here, f(char&) is preferred to f(short) exactly as in the example
>    above, but since a temporary is required to hold 'c' and the
>    reference is not const, the call is not accepted.  This does *not*
>    lead to an otherwise legal call of f(short).  [page 323]
>
>    [This last example best illustrates the issue: const-ness of an
>    actual argument does not affect the selection of overloaded
>    functions.   The example directly contradicts Daniels' misreading.]

Is it just me, or does the above quote seem simply to be an apology for a
failing of Cfront?  It is certainly possible to write a compiler which
would handle the above case correctly according to a reasonable interpretation
of section 13.2 of the working document in the absence of the above annotations.
(The working document makes no mention of ignoring const-ness for matching
purposes.)  Indeed, I can see no advantage to allowing an illegal match
to be considered a best match other than a possible small gain in
compile-time efficiency and a very small reduction in match searching complexity.

Indeed, the annotations you quote all seem to be discussing a particular,
and in my opinion flaky, implementation of C++, rather than language
issues.  It's certainly relevant if one is using said implementation,
but it's hardly something that should be codified into a standard.

Clearly, this is a place where the working document needs significant
clarification.  Do you favor clarifying it in the direction of the
counterintuitive behavior in the ARM example you quoted, or should it be
clarified to codify a more rational definition of "best match"?  Really, an
illegal match shouldn't even be considered... and I don't want to hear
about it making things more complex for compiler developers.  As Fouts'
experience shows, there are already compilers which work the way I would
expect.  The ARM approach, if incorporated into the standard, would force
compilers to reject such calls unnecessarily in order to conform to a
particular, irrational matching algorithm.

[ Example by me and Ellis' response deleted.  I misunderstood
  the reasons behind his contentions, and so posted an irrelevant
  example. ]

- Brad
--
Brad Daniels   |  "Let others praise ancient times.
daniels@neosoft.com  |   I am glad I was born in these."
I don't work for NeoSoft, and | - Ovid (43 B.C. - 17 A.D)
don't speak for my employer. |






Author: courtney@parc.xerox.com (Antony Courtney)
Date: 31 Aug 93 00:10:34 GMT
Raw View
daniels@NeoSoft.com (Brad Daniels) writes:
>ellis@parc.xerox.com (John Ellis) writes:
>>
>>ARM 13.2 is explicit that const-ness and the creation of temporaries
>>do not affect selection of overloaded functions.  A function is first
>>selected without regard to const-ness and temporaries, and only then
>>is the call examined to see if it is legal.
>
>This rule means that the compiler would consider an illegal call a match,
>potentially resulting in a case where there is no best match.
>
>My point was that the existing operator<<(ostream&,const char *) operator
>should not even be considered as a match because calling it would require
>the creation of a non-const temporary, meaning it is illegal.
>
>Indeed, I can see no advantage to allowing an illegal match to be considered a
>best match other than a possible small gain in compile-time efficiency and a
>very small reduction in match searching complexity.
>

One might consider the compiler's catching the bug in the original code
posted by Fouts to be one such "advantage".

In *this* instance, I don't think the wording of the ARM is motivated by
"compile-time efficiency" or "an appology for CFront", as your article
suggests.  In this case, I think safety is the motivation.  The ARM uses a
single set of rules for picking a function from the "namespace" of functions
with an overloaded name.  If the function which matches may not be called in
the given context, the call is reported as a compile-time error.

The potential for ambiguity with overloaded function names is altogether
dangerous, as Fouts' example shows us.  In the presence of such ambiguity, I
definitely do NOT want the compiler to guess at "which function the programmer
must have _meant_ to call in the given context".
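
To spell out the bug being caught (this is a hand trace of what happens if
the compiler quietly settles for the aClass overload, not compiler output):

  cout << a;               // resolves to operator<<(const ostream&, const aClass&)
    o << a.message;        //   a.message is a char*, so a temporary aClass is
                           //   built from it via aClass(const char*), and the
                           //   same operator<< is entered again
      o << temp.message;   //     ... and again, and again

The library operator<<(ostream&, const char*) is never reached, so the
program recurses until it runs out of stack.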

Can you suggest a situation in which it would be an "advantage" for the
compiler not to flag the kind of call in the original example as an error?

 -antony
--
---
Antony Courtney      courtney@parc.xerox.com
Xerox PARC / Trinity College, Dublin   acourtny@unix1.tcd.ie