Topic: aliases, ptr coercions, and optimizations


Author: shap@thebeach.wpd.sgi.com (Jonathan Shapiro)
Date: 2 Aug 90 18:15:35 GMT
In article <56170@microsoft.UUCP>, jimad@microsoft.UUCP (Jim ADCOCK) writes:

> My general proposal is that C++ be very restrictive in terms of what pointer
> casts are "guaranteed" to work, allowing for maximum compiler optimizations,
> while minimizing the pointer hacks that are "guaranteed" to work.  IE,
> good C++ citizens should get well optimized code, but C++ hacker code may
> not work at all.

This is not in the spirit of the language at all.  There is nothing to
prevent the optimizer from recognizing the non-optimizable cases and
simply not optimizing them.
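
A minimal sketch of that case-by-case approach, with hypothetical names
(touch() stands in for any function whose body the compiler cannot see):

struct Counter { int n; };

int sum_local(Counter c, int k)
{
    // The address of c never escapes, so the compiler may keep c.n
    // in a register for the whole loop: the optimizable case.
    int total = 0;
    for (int i = 0; i < k; ++i)
        total += c.n;
    return total;
}

void touch(int* p);         // opaque: defined in another translation unit

int sum_aliased(Counter* c, int k)
{
    // Here &c->n escapes into an opaque call, so the compiler can
    // recognize the non-optimizable case and reload c->n each time.
    int total = 0;
    for (int i = 0; i < k; ++i) {
        touch(&c->n);
        total += c->n;
    }
    return total;
}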

Jon




Author: jimad@microsoft.UUCP (Jim ADCOCK)
Date: 6 Aug 90 21:45:25 GMT
In article <11295@odin.corp.sgi.com>, shap@sgi.com writes:
|In article <56170@microsoft.UUCP>, jimad@microsoft.UUCP (Jim ADCOCK) writes:
|
|> My general proposal is that C++ be very restrictive in terms of what pointer
|> casts are "guaranteed" to work, allowing for maximum compiler optimizations,
|> while minimizing the pointer hacks that are "guaranteed" to work.  IE,
|> good C++ citizens should get well optimized code, but C++ hacker code may
|> not work at all.
|
|This is not in the spirit of the language at all.  There is nothing to
|prevent the optimizer from recognizing the non-optimizable cases and
|simply not optimizing them.
|
|Jon

I disagree.  As I stated earlier, there is indeed something preventing
compilers from recognizing the non-optimizable cases.  That something is
the traditional "un*x" model of separate compilation and linking.  If, for
example, a vendor provides a precompiled library containing a function, say:
void doSomething(const FOO& foo);

then an optimizing compiler has two choices:

1) It can assume doSomething really does treat foo as a constant, in which
case the optimizing compiler can safely enregister fields of foo across the
doSomething call.

2) It can pessimistically assume doSomething violates its pledge of
const'ness, in which case any enregistered fields of foo must be reloaded
after the doSomething call.
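
To make the dilemma concrete, here is a minimal sketch of a caller; the
layout of FOO is a hypothetical stand-in, and only the doSomething
declaration comes from the example above:

struct FOO { int x; };
void doSomething(const FOO& foo);   // body lives in a precompiled library

int caller(FOO& foo)
{
    int before = foo.x;     // choice 1: trust the const, keep foo.x
    doSomething(foo);       //   enregistered, and reuse it below; or
    return before + foo.x;  // choice 2: reload foo.x here, in case
                            //   doSomething modified it anyway
}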

Either way, under the traditional "un*x" model of separate compilation and
linking -- including separate libraries delivered precompiled -- there is no
reasonable way for compilers to verify the truthfulness of the const'ness
of any function [short of automatic decompilation and analysis].

Certainly one can imagine adding informational libraries to state "is this
const function *really* const?" or adding additional name mangling to
indicate "this function says it's const but it really isn't"....  But
const'ness or the lack of it is just one flavor of the lies that a
prepackaged routine can present to the outside world.  Shouldn't compilers
be able to assume that functions honour the contract implied by their
signatures?  Shouldn't attempts by programmers to violate their signature
contracts be flagged as errors, or at least warnings?
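
For concreteness, one flavor of such a lie might look like the sketch
below, reusing the hypothetical FOO from above and the modern const_cast
spelling (in the C++ of the day this would be a plain C-style cast); the
cast is only well-defined when the referent was not originally declared
const:

void doSomething(const FOO& foo)
{
    // The signature promises not to modify foo, but the body breaks that
    // promise by casting the const away.  A compiler that trusted the
    // signature (choice 1 above) would generate wrong code at the call
    // site, yet nothing at the call site reveals the violation.
    const_cast<FOO&>(foo).x = 42;
}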