Topic: argument matching
Author: pstemari@well.sf.ca.us (Paul J. Ste. Marie)
Date: Fri, 10 Sep 1993 19:56:59 GMT
> In article <CCrpH2.H9t@well.sf.ca.us>, pstemari@well.sf.ca.us (Paul J. Ste. Marie) writes:
> > > pg. 317: "For a given actual argument, no sequence of
> > > conversions will be considered that contains more than one
> > > user-defined conversion or that can be shortened by deleting
> > > one or more conversions into another sequence that leads to
> > > the type of the corresponding formal argument of any function
> > > in consideration. Such a sequence is called a best-matching
> > > sequence.
> > >
> > > "For example, int -> float -> double is a sequence of
> > > conversions from int to double, but it is not a best-matching
> > > sequence because it contains the shorter sequence int ->
> > > double."
> >
> > That seems perfectly clear. Always take the shortest possible route,
> > in the absence of other considerations.
>
> That doesn't seem clear to me. In particular, what does it mean by
> "deleting one or more conversions"? The obvious ways in which to
> delete one conversion from
> int -> float -> double
> result in
> int -> float
> or float -> double
> not
> int -> double
> as in the example. I'd call what happens in the example "deleting
> an intermediate type from the conversion sequence". On the other hand,
> deleting an intermediate type from the conversion sequence can never
> lead to the "corresponding formal argument of any [other] function
> in consideration". So if the example demonstrates the intent, then
> why does the rule get involved with "*any* function in consideration"?
Say that we start with int, and have functions f(X) and f(Y), where X has
an int ctor and is derived from Y. Then we have the sequences
int->X->Y
vs
int->X
The second path is a subset of the first, visiting one fewer type, and
would seem to be preferred.
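To make that concrete, here is a minimal sketch of the situation I mean
(just an illustration in the style of the ARM examples, not code taken
from the ARM):

class Y { };

class X : public Y {
public:
    X(int);         // user-defined conversion int -> X
};

void f(X);          // reachable via int -> X
void f(Y);          // reachable via int -> X -> Y

void g()
{
    f(1);           // int -> X -> Y can be shortened into int -> X, which
                    // reaches the formal argument of f(X), so int -> X
                    // would seem to be the best-matching sequence
}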
Likewise, in the following case:
class T {
public:
    T(int);
    T(float);
};

void f(T);
it would seem that f(1) should use the conversion int->T and not
int->float->T under this rule.
> > > "If user-defined conversions are needed for an argument, no
> > ^^
> > > account is taken of any standard coercions that might also be
> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > > involved. For example:
[...]
> > >
> > > "The call to f(1) is ambiguous despite f(y(long(1))) needing
> > > one more standard conversion than f(x(1))."
> >
> > Am I reading this correctly?
>
> Not quite. Here the conversion sequences involved are
> int -> class x
> int -> long -> class y
> The latter cannot be shortened into the former by any means. The
> section you first quoted does not apply, so there is no contradiction.
True, but the wording is much more general, i.e. the previous example with
int -> class T
and
int -> float -> class T
would also seem ambiguous under this section, since the extra standard
conversion doesn't count once a user-defined conversion gets in there.
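Spelling that out as a self-contained sketch (this is only my reading of
the two passages, not something the ARM states for this case):

class T {
public:
    T(int);
    T(float);
};

void f(T);

void g()
{
    f(1);   // pg. 317 reading: int -> T beats int -> float -> T, so
            // T(int) is used; pg. 326-327 reading: the int -> float
            // standard conversion is not counted, so T(int) and T(float)
            // look equally good and the call is ambiguous
}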
> > My reading of the first parts of the ARM cited is that the conversions
> > considered for cout << i and cout << n should be IntWrapper& -> int&
> > -> int and IntWrapper& -> int, and that IntWrapper& -> int should win
> > by virtue of being shorter. The last part cited, OTOH, seems to say
> > that this is ambiguous. Which is the correct interpretation?
>
> Given the difficulty of interpreting the ARM, the fact that the
> standards committee's working paper says the same thing, and that
> implementations disagree, IMO there is no correct interpretation.
> Caveat programmer.
True enough.
Author: pstemari@well.sf.ca.us (Paul J. Ste. Marie)
Date: Fri, 3 Sep 1993 07:43:49 GMT
Well, I've been doing some ARM reading, attempting to resolve
inconsistent overloading and argument-matching behavior among
BC++, VC++, g++, and ObjectCenter, and I will now admit to
total confusion. Let me quote from the ARM:
> pg. 317: "For a given actual argument, no sequence of
> conversions will be considered that contains more than one
> user-defined conversion or that can be shortened by deleting
> one or more conversions into another sequence that leads to
> the type of the corresponding formal argument of any function
> in consideration. Such a sequence is called a best-matching
> sequence.
>
> "For example, int -> float -> double is a sequence of
> conversions from int to double, but it is not a best-matching
> sequence because it contains the shorter sequence int ->
> double."
That seems perfectly clear. Always take the shortest possible route,
in the absence of other considerations. Skipping over the commentary
and the five other considerations (trivial conversions, promotions,
standard conversions, user-defined conversions, and ellipsis), the
text picks up again on page 326:
> "User-defined conversions are selected based on the type of
> variable being initialized or assigned to.
>
> class Y {
>     // ...
> public:
>     operator int();
>     operator double();
> };
>
> void f(Y y)
> {
>     int i = y;      // call Y::operator int()
>     double d;
>     d = y;          // call Y::operator double()
>     float f = y;    // error: ambiguous
> }
This seems to confirm--go by the shortest route. If you can get to
int directly, you don't go by way of double. This is all as I
expected. Here comes the contradictory part, though (pg. 326-327):
> "If user-defined conversions are needed for an argument, no
^^
> account is taken of any standard coercions that might also be
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> involved. For example:
>
> class x {
> public:
>     x(int);
> };
>
> class y {
> public:
>     y(long);
> };
>
> void f(x);
> void f(y);
>
> void g()
> {
>     f(1);   // ambiguous
> }
>
> "The call to f(1) is ambiguous despite f(y(long(1))) needing
> one more standard conversion than f(x(1))."
Am I reading this correctly? This contradicts the prior comments. If
it works for class Y and its automatic conversions to int and double,
why should it not work for classes x and y? Why should the argument
matching be different between operator= and other function calls?
Dropping the shortest-sequence rule in the presence of user-defined
conversions doesn't seem to make much sense.
In my particular case, I have an object that wraps tightly around a
fundamental type, and conversion operators that (should) return T for
const objects and T& for non-const ones--i.e.
#include <iostream.h>

template <class T>
class Wrapper {
protected:
    T data;
public:
    Wrapper() {}
    Wrapper(const T& src) : data(src) {}
    operator T&() { return data; }
    operator T() const { return data; }
};

// note: typedef class Wrapper<int> IntWrapper;
// doesn't work for reasons unknown
typedef Wrapper<int> IntWrapper;

int main() {
    IntWrapper i = 5, n;
    n = 6;
    cout << "i = " << i
         << " n = " << n << endl;
    cout << "i: "; cin >> i;
    cout << "n: "; cin >> n;
    cout << "i = " << i
         << " n = " << n << endl;
    return 0;
}
My reading of the first parts of the ARM cited is that the conversions
considered for cout << i and cout << n should be IntWrapper& -> int&
-> int and IntWrapper& -> int, and that IntWrapper& -> int should win
by virtue of being shorter. The last part cited, OTOH, seems to say
that this is ambiguous. Which is the correct interpretation? FWIW,
BC++ does what I wanted it to do, VC++ is OK if you replace the
template decl with a hardcoded int, g++ produces code that only runs
if the debugger is loaded (go figure that one out--shades of DOS), and
ObjectCenter complains of a type ambiguity and lists *every* insertion
operator that takes an integral or pseudo-integral type.
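For reference, here is a cut-down, non-template version of the same
situation (my own reduction, with my own names, to show exactly where the
two readings diverge):

#include <iostream.h>

class IntWrap {
    int data;
public:
    IntWrap(int src) : data(src) {}
    operator int&()      { return data; }   // IntWrap& -> int&
    operator int() const { return data; }   // const IntWrap& -> int
};

int main()
{
    IntWrap i = 5;
    cout << i << endl;   // pg. 317 reading: IntWrap& -> int& -> int loses
                         // to IntWrap& -> int (shorter), so operator int()
                         // const is used; pg. 326-327 reading: both paths
                         // need one user-defined conversion and the standard
                         // coercions are ignored, so the call looks ambiguous
    return 0;
}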
BTW, the sequence IntWrapper& -> int& as used by n = 6 and cin >> n
seems to be definitely OK, and every compiler I've tried allows that.
--
--Paul Ste. Marie (psmarie@cbis.com, pstemari@well.sf.ca.us)
--
--Paul (psmarie@cbis.com, pstemari@well.sf.ca.us)
President, MIVARS, quondam Apogee production troll
(officer of rocket org == sucker!!)
Author: pkt@lpi.liant.com (Scott Turner)
Date: Fri, 3 Sep 1993 16:21:51 GMT
In article <CCrpH2.H9t@well.sf.ca.us>, pstemari@well.sf.ca.us (Paul J. Ste. Marie) writes:
> > pg. 317: "For a given actual argument, no sequence of
> > conversions will be considered that contains more than one
> > user-defined conversion or that can be shortened by deleting
> > one or more conversions into another sequence that leads to
> > the type of the corresponding formal argument of any function
> > in consideration. Such a sequence is called a best-matching
> > sequence.
> >
> > "For example, int -> float -> double is a sequence of
> > conversions from int to double, but it is not a best-matching
> > sequence because it contains the shorter sequence int ->
> > double."
>
> That seems perfectly clear. Always take the shortest possible route,
> in the absence of other considerations.
That doesn't seem clear to me. In particular, what does it mean by
"deleting one or more conversions"? The obvious ways in which to
delete one conversion from
int -> float -> double
result in
int -> float
or float -> double
not
int -> double
as in the example. I'd call what happens in the example "deleting
an intermediate type from the conversion sequence". On the other hand,
deleting an intermediate type from the conversion sequence can never
lead to the "corresponding formal argument of any [other] function
in consideration". So if the example demonstrates the intent, then
why does the rule get involved with "*any* function in consideration"?
> > "If user-defined conversions are needed for an argument, no
> ^^
> > account is taken of any standard coercions that might also be
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > involved. For example:
> >
> > class x {
> > public:
> >     x(int);
> > };
> >
> > class y {
> > public:
> >     y(long);
> > };
> >
> > void f(x);
> > void f(y);
> >
> > void g()
> > {
> >     f(1);   // ambiguous
> > }
> >
> > "The call to f(1) is ambiguous despite f(y(long(1))) needing
> > one more standard conversion than f(x(1))."
>
> Am I reading this correctly?
Not quite. Here the conversion sequences involved are
int -> class x
int -> long -> class y
The latter cannot be shortened into the former by any means. The
section you first quoted does not apply, so there is no contradiction.
> My reading of the first parts of the ARM cited is that the conversions
> considered for cout << i and cout << n should be IntWrapper& -> int&
> -> int and IntWrapper& -> int, and that IntWrapper& -> int should win
> by virtue of being shorter. The last part cited, OTOH, seems to say
> that this is ambiguous. Which is the correct interpretation?
Given the difficulty of interpreting the ARM, the fact that the
standards committee's working paper says the same thing, and that
implementations disagree, IMO there is no correct interpretation.
Caveat programmer.
--
Prescott K. Turner, Jr.
Liant Software Corp. (developers of LPI languages)
959 Concord St., Framingham, MA 01701 USA (508) 872-8700
UUCP: uunet!lpi!pkt Internet: pkt@lpi.liant.com