Topic: Is C++ to be made compatible with C99?


Author: kanze@gabi-soft.de (James Kanze)
Date: Wed, 13 Feb 2002 17:45:33 GMT
Pete Becker <petebecker@acm.org> wrote in message
news:<3C62A08B.3A1A4566@acm.org>...
> Garry Lancaster wrote:

> > Hmm. Seems to me this case (and others like it) require changing
> > the integral promotion rules, so that no implicit conversions
> > occur [see note 1].  Then it doesn't affect the universality of
> > any ban on lossy implicit conversions.

> unsigned char ch = UCHAR_MAX;
> int i = ch + 1;

> On a platform with 8-bit chars this change would quietly change the
> meaning of this code.

I suspect that changing integral promotion rules would break a lot of
code.  I suspect that it won't fly because of this.

I'd very much like to see some sort of restrictions on lossy
conversions.  But I don't pretend that it will be easy.  Integral
promotion, for example, is ingrained in every C programmer's mind, and
I imagine most DO write code which depends on it.  Another frequent
idiom is something like:

    istream input ;
    char* dest ;
    //  ...
    int ch = input.get() ;
    while ( ch != EOF /* && ... */ ) {
        *dest ++ = ch ;
    }

(You can write the same thing using std::string and +=, of course.)

In this case, I'd almost go for requiring an explicit cast, but it is
a common idiom.  (To begin with, in this case, because it is such a
common idiom, I'd also go for modifying the standard so that it is
defined, since if char is signed, the results of the conversion are
not defined for a lot of letters.  But that's another issue.)
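For illustration only, here is a sketch of the full idiom as it might have
to be written if that narrowing assignment required an explicit cast.
std::istream and the EOF macro from <cstdio> are assumed; this is just the
cost being weighed, not a proposal:

    #include <cstdio>       //  for EOF
    #include <istream>

    void copy_chars( std::istream & input , char* dest )
    {
        int ch = input.get() ;          //  int, so that EOF stays representable
        while ( ch != EOF /* && ... */ ) {
            *dest ++ = static_cast< char >( ch ) ;  //  the lossy conversion, spelled out
            ch = input.get() ;
        }
    }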

--
James Kanze                                   mailto:kanze@gabi-soft.de
Beratung in objektorientierter Datenverarbeitung --
                             -- Conseils en informatique orientée objet
Ziegelhüttenweg 17a, 60598 Frankfurt, Germany, Tél.: +49 (0)69 19 86 27

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: "Garry Lancaster" <glancaster@ntlworld.com>
Date: Thu, 14 Feb 2002 17:08:30 GMT
James Kanze:
> I suspect that changing integral promotion rules
> would break a lot of code.  I suspect that it won't
> fly because of this.

We must distinguish between two types of code
breakage:

1. Noisy. Code that compiles under C++98 but
won't under C++0x. The incompatibility is thus
drawn forcefully to the programmer's attention.

2. Quiet. Code that compiles under both C++98
and C++0x, but where the meaning silently
changes. The programmer may not notice the
change.

I hope we can agree that we should make strenuous
efforts to avoid quiet breakages. More controversially
perhaps, I believe the C++0x standard will be
seriously dull if no noisy breakages are allowed.

> I'd very much like to see some sort of restrictions
> on lossy conversions.  But I don't pretend that it
> will be easy.

I feel the same way.

I have thought about ways of changing the integral
promotion rules in order to work better with a type
system with no lossy implicit conversions. But I
haven't found a way of doing it that wouldn't cause
quiet breakages. Although I don't think the cases
of quiet breakage would be that common [1], it is
still a serious problem considering the relatively
modest benefits of the change.

> Integral promotion, for example, is
> ingrained in every C programmer's mind, and
> I imagine most DO write code which depends on it.

I'm not so sure. It was only when I looked
at the standard that I realised *exactly* what
the rules were.  The simpler heuristic I'd been
employing up to then seemed to work
reasonably well.

> Another frequent
> idiom is something like:
>
>     istream input ;
>     char* dest ;
>     //  ...
>     int ch = input.get() ;
>     while ( ch != EOF /* && ... */ ) {
>         *dest ++ = ch ;
>     }

(For anyone who was unclear (as I was,
initially) - this shows a lossy implicit
conversion but no integral promotion is
involved.)

> (You can write the same thing using std::string and +=, of course.)

> In this case, I'd almost go for requiring an explicit cast, but it is
> a common idiom.  (To begin with, in this case, because it is such a
> common idiom, I'd also go for modifying the standard so that it is
> defined, since if char is signed, the results of the conversion are
> not defined for a lot of letters.  But that's another issue.)

I'd prefer requiring a cast over making it a
special case [2]. But it seems like neither of us
would advocate either change without a lot more
cogitation.

One compromise I thought of: instead of
adopting the C99 fixed-size int types we define
some new built-in fixed-size types. We are then free
to define tighter conversion rules for the new types
without affecting existing code using ints, shorts
and so on. Unlike the current C++ system or
the C99 system, the conversions would be the
same on all implementations.
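To make the idea concrete, here is a minimal sketch of how such a type
could behave, emulated today as a class. The name checked_int16 and the
use of short as the underlying representation are inventions for this
illustration only, not part of the actual suggestion:

    #include <cassert>
    #include <climits>

    class checked_int16
    {
    public:
        explicit checked_int16( long v ) : value_( static_cast< short >( v ) )
        {
            assert( v >= SHRT_MIN && v <= SHRT_MAX );  // catch lossy values
        }
        operator long() const { return value_; }       // widening out is never lossy
    private:
        short value_;
    };

    void demo()
    {
        checked_int16 a( 3 );          // construction is always spelled out
        long wide = a;                 // implicit widening: safe everywhere
        // checked_int16 b = a + 1;    // ill-formed: a + 1 has type long, and the
        //                             // narrowing conversion must be explicit
        checked_int16 c( a + 1 );      // fine: the narrowing is visible in the source
        (void)wide; (void)c;
    }

The point is that the conversion rules of such a type are fixed by its
definition, not by how wide the built-in types happen to be on a given
implementation.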

Kind regards

Garry Lancaster
Codemill Ltd
Visit our web site at http://www.codemill.net

NOTES:

1. Off the top of my head I can't think of a non-contrived
example.

2. Even now, my style would be to write that code with
a cast.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: "Al Grant" <tnarga@arm.REVERSE-NAME.com>
Date: Mon, 11 Feb 2002 18:15:33 GMT
"Pete Becker" <petebecker@acm.org> wrote in message
news:3C62A08B.3A1A4566@acm.org...
> Garry Lancaster wrote:
> > Hmm. Seems to me this case (and others like it)
> > require changing the integral promotion rules, so
> > that no implicit conversions occur [see note 1].
>
> unsigned char ch = UCHAR_MAX;
> int i = ch + 1;
>
> On a platform with 8-bit chars this change would quietly change the
> meaning of this code.

No.  1 is int.  You have written unsigned char + int.  Either
  (a) type mismatch is illegal
  (b) unsigned char + int is defined, and presumably has result
      type at least int
  (c) for dyadic operators you do promote, to the wider type.

None is a quiet change.



---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: Pete Becker <petebecker@acm.org>
Date: Tue, 12 Feb 2002 15:43:15 GMT
Al Grant wrote:
>
> "Pete Becker" <petebecker@acm.org> wrote in message
> news:3C62A08B.3A1A4566@acm.org...
> > Garry Lancaster wrote:
> > > Hmm. Seems to me this case (and others like it)
> > > require changing the integral promotion rules, so
> > > that no implicit conversions occur [see note 1].
> >
> > unsigned char ch = UCHAR_MAX;
> > int i = ch + 1;
> >
> > On a platform with 8-bit chars this change would quietly change the
> > meaning of this code.
>
> No.  1 is int.  You have written unsigned char + int.  Either
>   (a) type mismatch is illegal

Maybe, if that's the rule you decide on.

>   (b) unsigned char + int is defined, and presumably has result
>       type at least int

Maybe, if that's the rule you decide on.

>   (c) for dyadic operators you do promote, to the wider type.

Maybe, if that's the rule that you decide on.

>
> None is a quiet change.
>

Maybe, if that's the rule you decide on. But do you really want adding 1
to be illegal?

--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: "Al Grant" <tnarga@arm.REVERSE-NAME.com>
Date: Wed, 13 Feb 2002 01:25:44 GMT
"Pete Becker" <petebecker@acm.org> wrote in message
news:3C6871E1.39FD8120@acm.org...
> Al Grant wrote:
> > "Pete Becker" <petebecker@acm.org> wrote in message
> > news:3C62A08B.3A1A4566@acm.org...
> > > Garry Lancaster wrote:
> > > > Hmm. Seems to me this case (and others like it)
> > > > require changing the integral promotion rules, so
> > > > that no implicit conversions occur [see note 1].
> > >
> > > unsigned char ch = UCHAR_MAX;
> > > int i = ch + 1;
> > >
> > > On a platform with 8-bit chars this change would quietly change the
> > > meaning of this code.
> >
> > No.  1 is int.  You have written unsigned char + int.  Either
> >   (a) type mismatch is illegal
>
> Maybe, if that's the rule you decide on.
>
> >   (b) unsigned char + int is defined, and presumably has result
> >       type at least int
>
> Maybe, if that's the rule you decide on.
>
> >   (c) for dyadic operators you do promote, to the wider type.
>
> Maybe, if that's the rule that you decide on.
>
> > None is a quiet change.
>
> Maybe, if that's the rule you decide on. But do you really want adding 1
> to be illegal?

No, but removing implicit conversions doesn't always give you
what you want!  But you don't get the 8-bit addition that makes
your "quiet change" unless you narrow 1 to a char.



---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: Pete Becker <petebecker@acm.org>
Date: Thu, 7 Feb 2002 18:30:46 GMT
Garry Lancaster wrote:
>
>
> Hmm. Seems to me this case (and others like it)
> require changing the integral promotion rules, so
> that no implicit conversions occur [see note 1].
> Then it doesn't affect the universality of any ban
> on lossy implicit conversions.
>

unsigned char ch = UCHAR_MAX;
int i = ch + 1;

On a platform with 8-bit chars this change would quietly change the
meaning of this code.
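Spelled out, as a sketch only (the "no promotion" arithmetic is the
hypothetical alternative under discussion, not anything in the current
standard):

    #include <climits>

    void quiet_change_example()
    {
        unsigned char ch = UCHAR_MAX;   // 255 on a platform with 8-bit chars
        int i = ch + 1;
        // Today: ch is promoted to int, the addition is done in int, and i == 256.
        // If the addition were instead done in unsigned char, with no promotion,
        // it would wrap modulo 256 and i would be 0 -- the same source text,
        // silently meaning something different.
        (void)i;
    }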

> NOTES:
>
> 1. Arguably, many C++ programmers (those
> who haven't swallowed a copy of the standard)
> would not expect the code fragment to lead to
> any conversions. It would be a good idea to
> make the language more intuitive unless there
> is any compelling reason not to.
>

Quietly breaking existing code is a good reason not to.

--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: "Garry Lancaster" <glancaster@ntlworld.com>
Date: Thu, 7 Feb 2002 04:29:42 GMT
I wrote:
> > James Kanze:

I should have written:
> > James Kuyper Jr.:

[snip wrongly attributed quote]

James Kanze:
> That quote is from James Kuyper, not James Kanze.
>
> This posting can be traced back to one of my postings, where I
> expressed myself in favour of banning implicit lossy conversions.

Oops. Apologies to both Jameses and to anyone who
was confused by my mistake.

> Pete Becker has pointed out some potential problems.  I feel rather
> certain that the problems can be solved satisfactorily, but since they
> are real, and I don't have time right now to work the solutions out in
> detail, I just shut up.
>
> FWIW: any restrictions with regards to implicit lossy conversions
> should still leave the following legal:
>
>     short a, b, c;
>     a = b + c;

I agree.

> Which means that any ban cannot be universal, and will have to be
> limited to certain contexts.

Hmm. Seems to me this case (and others like it)
require changing the integral promotion rules, so
that no implicit conversions occur [see note 1].
Then it doesn't affect the universality of any ban
on lossy implicit conversions.

Kind regards

Garry Lancaster
Codemill Ltd
Visit our web site at http://www.codemill.net

NOTES:

1. Arguably, many C++ programmers (those
who haven't swallowed a copy of the standard)
would not expect the code fragment to lead to
any conversions. It would be a good idea to
make the language more intuitive unless there
is any compelling reason not to.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: "Garry Lancaster" <glancaster@ntlworld.com>
Date: Mon, 4 Feb 2002 12:34:43 GMT
> > Pete Becker:
> > > Initialization from a constant is simple
> > > to resolve.

Garry Lancaster:
> > I listed a couple of ways of resolving it, but you have
> > not commented on them. As I mentioned before,
> > I want to reach an agreement on the initialization
> > before moving on to the assignment.

Pete Becker:
> It's not a good idea to decide whether to fly or to walk
> before you've decided where you're going.

I know where *I* want to go: a sane discussion on
removing lossy implicit conversions from C++.

> Having to write a cast in malloc(strlen(str)
> + 1) makes this idea untenable, so there is no point
> in designing its details.

If you want to resume a sane discussion, you should
clarify which language you're discussing. malloc is
largely an irrelevance in C++.

Kind regards

Garry Lancaster
Codemill Ltd
Visit our web site at http://www.codemill.net

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: "Garry Lancaster" <glancaster@ntlworld.com>
Date: Mon, 4 Feb 2002 18:51:37 GMT
Garry Lancaster:
> > None of the options I suggest require any change to
> > the behaviour of typedefs. Option 2 requires that
> > int_fast8_t, if it is adopted, become a proper type,
> > not a typedef'd alias. Sorry if that wasn't clear.

James Kuyper Jr.:
> It was not clear. int_fast8_t is a typedef in the only standard that I
> know of which currently specifies it. I can't find any wording in your
> previous messages that suggests you were calling for that to be changed,
> if and when it gets imported into C++.

The wording was "Requires additional language changes."
I gave a summary of each option I proposed. To discuss
every detail would be premature and require a message
of excessive length. You made an incorrect assumption
about the meaning of something I wrote, so I clarified my
meaning. If you want an additional clarification about
anything I write, you only have to ask. That is the way a
sensible discussion proceeds.

[snip discussion about nonportable C99 code]

Much as I'd like to resolve this point, this thread is
already discussing too many subjects.

I'm much more interested in discussing how lossy
implicit conversions might be removed from C++.

> > Which feature? We (at least you and I - I'm not even clear
> > which language Pete Becker is discussing) are discussing
> > two at once: C99 int types and changes to the implicit
> > conversion rules. That's one too many for a sane
> > discussion IMHO.
>
> The two discussions are tied together. Unless I've misunderstood you,
> the only changes you're suggesting for the implicit conversion rules are
> motivated by one of the options you suggested for implementing the C99
> int types.

I'm afraid you did misunderstand (this thread is starting
to do my head in too). It's the other way round: I was
motivated to discuss the code containing the C99 int types
because of my interest in discussing the removal of
lossy implicit conversions from C++. In retrospect I regret
discussing the C99 int types here - it has added to the
confusion.

Kind regards

Garry Lancaster
Codemill Ltd
Visit our web site at http://www.codemill.net

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: Pete Becker <petebecker@acm.org>
Date: Tue, 5 Feb 2002 01:46:52 GMT
Garry Lancaster wrote:
>
> > > Pete Becker:
> > Having to write a cast in malloc(strlen(str)
> > + 1) makes this idea untenable, so there is no point
> > in designing its details.
>
> If you want to resume a sane discussion, you should
> clarify which language you're discussing. malloc is
> largely an irrelevance in C++.
>

"Largely an irrelevance," even if correct, is not the same as "never
used." Is it your position that using typedef names for integral types
is so rare that it should not be given any consideration when
redesigning C++'s integer type system?

--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: kanze@gabi-soft.de (James Kanze)
Date: Tue, 5 Feb 2002 17:47:38 GMT
"Garry Lancaster" <glancaster@ntlworld.com> wrote in message
news:<Vqv68.6604$Cs.931922@news11-gui.server.ntli.net>...
> James Kuyper Jr. <kuyper@wizard.net> wrote in message
> news:3C59EC47.C82DF338@wizard.net...
> > Garry Lancaster wrote:

> > > > > James Kanze:
>  Garry Lancaster:
> > > > > 2. The above code is illegal (on any platform) because
> > > > > a cast is always required to convert from int to int_fast8_t.
> > > > > Requires additional language changes.

> Pete Becker:
> > > > Yup. Numerous. Beginning with making typedefs useless.

> > > I doubt it, but feel free to justify your remark.

> James Kanze:
> > The current C99 and C++ standards use only the actual type of an
> > expression to determine the validity of the code. Your suggestion
> > would treat a typedef as a different type from the one that it's a
> > typedef for.

That quote is from James Kuyper, not James Kanze.

This posting can be traced back to one of my postings, where I
expressed myself in favour of banning implicit lossy conversions.
Pete Becker has pointed out some potential problems.  I feel rather
certain that the problems can be solved satisfactorily, but since they
are real, and I don't have time right now to work the solutions out in
detail, I just shut up.

FWIW: any restrictions with regards to implicit lossy conversions
should still leave the following legal:

    short a, b, c;
    a = b + c;

Which means that any ban cannot be universal, and will have to be
limited to certain contexts.
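For context, a minimal sketch of why the current rules make this lossy,
and what the explicit form would look like under a blanket ban (the cast
here is only an illustration of the cost, not a proposal):

    void example()
    {
        short a, b = 1, c = 2;
        // b and c are promoted to int before the addition, so b + c has type
        // int; storing it back into a short is itself a lossy conversion.
        // Under a truly universal ban, every such assignment would need a cast:
        a = static_cast< short >( b + c );
        (void)a;
    }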

--
James Kanze                                   mailto:kanze@gabi-soft.de
Beratung in objektorientierter Datenverarbeitung --
                             -- Conseils en informatique orientée objet
Ziegelhüttenweg 17a, 60598 Frankfurt, Germany, Tél.: +49 (0)69 19 86 27

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: "Garry Lancaster" <glancaster@ntlworld.com>
Date: Tue, 5 Feb 2002 17:53:34 GMT
Pete Becker <petebecker@acm.org> wrote in message
news:3C5F31E8.916E4558@acm.org...
> Garry Lancaster wrote:
> >
> > > > Pete Becker:
> > > Having to write a cast in malloc(strlen(str)
> > > + 1) makes this idea untenable, so there is no point
> > > in designing its details.
> >
> > If you want to resume a sane discussion, you should
> > clarify which language you're discussing. malloc is
> > largely an irrelevance in C++.
> >
>
> "Largely an irrelevance," even if correct, is not the same as "never
> used." Is it your position that using typedef names for integral types
> is so rare that it should not be given any consideration when
> redesigning C++'s integer type system?

No.

Kind regards

Garry Lancaster
Codemill Ltd
Visit our web site at http://www.codemill.net

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: Pete Becker <petebecker@acm.org>
Date: Thu, 31 Jan 2002 10:46:49 CST
James Kanze wrote:
>
> In the case of lossy conversions, there is an alternative which
> removes little or nothing from the expressivity of the language, and
> provides significant added (but not perfect) safety.  That is to
> require them to be explicit.

There is also a high cost in non-portability. For example,

#include <stdint.h>

int_fast8_t val = 3;
val = val + 1;

And, of course, on your development system int_fast8_t happens to be a
typedef for int, so there is nothing wrong with this code. But when you
ship it to your customer's site for evaluation they use a compiler that
typedefs int_fast8_t to a type that's smaller than int, and suddenly
your code won't compile. Of course, the 8 in the name should have
suggested that you add apparently unnecessary casts all over the place,
so this is arguably a problem that shouldn't have arisen. But the same
thing can happen with size_t.
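As a sketch of that cost (assuming a C99-style <stdint.h> is visible to
the C++ compiler, which it is not in standard C++ today): since
int_fast8_t may be narrower than int on some other implementation, the
only portable form under such a rule would carry the cast on every line
of this kind:

    #include <stdint.h>

    void bump( int_fast8_t & val )
    {
        // val + 1 is computed in int; storing it back is potentially lossy
        // wherever int_fast8_t is narrower than int, so portable code would
        // have to write the cast everywhere, even where it is a no-op.
        val = static_cast< int_fast8_t >( val + 1 );
    }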

--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: "Garry Lancaster" <glancaster@ntlworld.com>
Date: Thu, 31 Jan 2002 11:49:39 CST
James Kanze:
> > In the case of lossy conversions, there is an alternative which
> > removes little or nothing from the expressivity of the language, and
> > provides significant added (but not perfect) safety.  That is to
> > require them to be explicit.

Pete Becker:
> There is also a high cost in non-portability. For example,
>
> #include <stdint.h>
>
> int_fast8_t val = 3;
> val = val + 1;

(Obviously, this is not standard C++. For the purposes
of answering, I'll assume int_fast8_t is added to the
language at the same time as lossy conversions have
been removed.)

There are at least 3 options for looking at this
situation:

1. The above code contains undefined behaviour
because a cast may be required to convert from
int to int_fast8_t. This option requires no additional
language changes.

2. The above code is illegal (on any platform) because
a cast is always required to convert from int to int_fast8_t.
Requires additional language changes.

2. The above code is legal (on any platform), because
the compiler can see that the value of the int literal is
within range for any legal underlying type for int_fast8_t.
Requires more complex additional language changes.

You assume option (1). Still, I don't see the problem -
your code is relying on undefined behaviour so its
non-portability should not be surprising.

Kind regards

Garry Lancaster
Codemill Ltd
Visit our web site at http://www.codemill.net

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: Pete Becker <petebecker@acm.org>
Date: Thu, 31 Jan 2002 12:08:15 CST
David R Tribble wrote:
>
> Garry Lancaster wrote:
> >> Very true. However, I don't think it's any secret that
> >> some of the worst bits of C++ were those inherited
> >> from C. One thinks of complicated variable
> >> declaration syntax and lossy implicit conversions,
> >> for example.
>
> Pete Becker wrote:
> > It's not at all clear that lossy implicit conversions are a bad thing.
> > Languages that insist on making those conversions explicit have failed
> > in the market. Not because they weren't based on C, but because
> > programmers have outgrown the game "Mother, May I".
>
> The Java programming community would disagree with you on that.

C++'s type system is much richer than Java's, so what may or may not be
tolerable in Java has little bearing on what's appropriate for C++.

> Frankly, I don't mind having the compiler force me to explicitly
> cast when losing precision.  It serves as documentation in the code,
> if nothing else.

Comments are the best way to document code, in part because they have no
side effects.

>
> On the other hand, they made a mistake with Java by making the
> 'byte' primitive type signed instead of unsigned.  My code is riddled
> with 'b[i] & 0xFF' expressions to force unsigned behavior.
>

It's not just bytes, but the complete lack of unsigned arithmetic types.
Try implementing java.math.BigInteger...

--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: Pete Becker <petebecker@acm.org>
Date: Thu, 31 Jan 2002 12:21:47 CST
Garry Lancaster wrote:
>
> James Kanze:
> > > In the case of lossy conversions, there is an alternative which
> > > removes little or nothing from the expressivity of the language, and
> > > provides significant added (but not perfect) safety.  That is to
> > > require them to be explicit.
>
> Pete Becker:
> > There is also a high cost in non-portability. For example,
> >
> > #include <stdint.h>
> >
> > int_fast8_t val = 3;
> > val = val + 1;
>
> (Obviously, this is not standard C++. For the purposes
> of answering, I'll assume int_fast8_t is added to the
> language at the same time as lossy conversions have
> been removed.)

Sigh. int_fast8_t is a typedef.

>
> There are at least 3 options for looking at this
> situation:
>
> 1. The above code contains undefined behaviour
> because a cast may be required to convert from
> int to int_fast8_t. This option requires no additional
> language changes.

Huh? Why would a typedef to an int be undefined behavior?

>
> 2. The above code is illegal (on any platform) because
> a cast is always required to convert from int to int_fast8_t.
> Requires additional language changes.

Yup. Numerous. Beginning with making typedefs useless.

>
> 2. The above code is legal (on any platform), because
> the compiler can see that the value of the int literal is
> within range for any legal underlying type for int_fast8_t.
> Requires more complex additional language changes.

But it can't be. If the initial value is the maximum that can be stored,
then adding 1 puts the result out of range. The compiler in general
can't "see" this.

>
> You assume option (1).

No, I assume that int_fast8_t is a typedef, defined in <stdint.h>, as it
is in standard C.

> Still, I don't see the problem -
> your code is relying on undefined behaviour so its
> non-portability should not be surprising.
>

Using typedefs is not undefined behavior. But since you don't seem to
like int_fast8_t, replace it with size_t, as I suggested.

--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: "Garry Lancaster" <glancaster@ntlworld.com>
Date: Thu, 31 Jan 2002 15:23:07 CST
> > James Kanze:
> > > > In the case of lossy conversions, there is an alternative which
> > > > removes little or nothing from the expressivity of the language, and
> > > > provides significant added (but not perfect) safety.  That is to
> > > > require them to be explicit.
> >
> > Pete Becker:
> > > There is also a high cost in non-portability. For example,
> > >
> > > #include <stdint.h>
> > >
> > > int_fast8_t val = 3;
> > > val = val + 1;
> >
> > (Obviously, this is not standard C++. For the purposes
> > of answering, I'll assume int_fast8_t is added to the
> > language at the same time as lossy conversions have
> > been removed.)

Pete Becker:
> Sigh. int_fast8_t is a typedef.

I know. In C99, not C++.

> > There are at least 3 options for looking at this
> > situation:
> >
> > 1. The above code contains undefined behaviour
> > because a cast may be required to convert from
> > int to int_fast8_t. This option requires no additional
> > language changes.
>
> Huh? Why would a typedef to an int be undefined behavior?

It isn't. It's the initialisation of a possibly-
smaller-than-int type with an int that is (or
rather *would be*, in our hypothetically
extended version of the language).

> > 2. The above code is illegal (on any platform) because
> > a cast is always required to convert from int to int_fast8_t.
> > Requires additional language changes.
>
> Yup. Numerous. Beginning with making typedefs useless.

I doubt it, but feel free to justify your remark.

> > 2.

I meant 3 ;-)

> > The above code is legal (on any platform), because
> > the compiler can see that the value of the int literal is
> > within range for any legal underlying type for int_fast8_t.
> > Requires more complex additional language changes.
>
> But it can't be. If the initial value is the maximum that can be stored,
> then adding 1 puts the result out of range. The compiler in general
> can't "see" this.

I'm not referring to that line. I'm talking about the
initialisation. Maybe if we agree on the status
of the first line, we can move along to the
second ;-)

> > You assume option (1).
>
> No, I assume that int_fast8_t is a typedef, defined
> in <stdint.h>, as it is in standard C.

As do I.

> > Still, I don't see the problem -
> > your code is relying on undefined behaviour so its
> > non-portability should not be surprising.
> >
>
> Using typedefs is not undefined behavior.

Agree. However, you can get undefined behaviour
by using the alias created by the typedef in certain
ways.

For example,

int_fast8_t i = 1000;

results in undefined behaviour in C99 even now,
even though it may work fine on some platforms
(working fine is one implementation of undefined
behaviour).

> But since you don't seem to
> like int_fast8_t, replace it with size_t, as I suggested.

It's not that I don't like it (though I don't, but
that's a separate issue), it's that it isn't part
of C++ and this is a C++ newsgroup.

Taking your suggestion:

size_t val = 3;

The non-exhaustive list of options now looks like
this:

1. The above code is illegal because a cast is
required to convert from an int literal to any
unsigned type.

2. The above code is legal (on any platform), because
the compiler can see that the value of the int literal is
within range for any legal underlying type for size_t.
Requires language changes.

Kind regards

Garry Lancaster
Codemill Ltd
Visit our web site at http://www.codemill.net

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: Pete Becker <petebecker@acm.org>
Date: Thu, 31 Jan 2002 16:06:43 CST
Garry Lancaster wrote:
>
> Pete Becker:
> > Sigh. int_fast8_t is a typedef.
>
> I know. In C99, not C++.
>

And the title of this thread is...

--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: Pete Becker <petebecker@acm.org>
Date: Thu, 31 Jan 2002 16:20:23 CST
Garry Lancaster wrote:
>
> Taking your suggestion:
>
> size_t val = 3;
>
> The non-exhaustive list of options now looks like
> this:
>
> 1. The above code is illegal because a cast is
> required to convert from an int literal to any
> unsigned type.
>
> 2. The above code is legal (on any platform), because
> the compiler can see that the value of the int literal is
> within range for any legal underlying type for size_t.
> Requires language changes.
>

Please stop dodging the issue. Initialization from a constant is simple
to resolve. That's why the example I gave involved assignment from a
computed expression. Do you want to require a cast for portable code
whenever someone computes a value to assign to an object of type size_t?
In particular:

void f(size_t sz)
{
    sz = sz + 1; // would this be non-portable without a cast?
    ++sz;        // how about this?
    sz += 1;     // and this?
}

Or how about the common usage malloc(strlen(str) + 1)?

--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: "James Kuyper Jr." <kuyper@wizard.net>
Date: Thu, 31 Jan 2002 19:06:30 CST
Pete Becker wrote:
>
> Garry Lancaster wrote:
....
> > 1. The above code contains undefined behaviour
> > because a cast may be required to convert from
> > int to int_fast8_t. This option requires no additional
> > language changes.
>
> Huh? Why would a typedef to an int be undefined behavior?

I don't know. Why do you ask? He made no such suggestion.
He did suggest something that would correspond to a new constraint on
conversions, a constraint that would only be workable if int_fast8_t
were a standard type in its own right, rather than being a typedef.
However, that's a completely different issue.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: "James Kuyper Jr." <kuyper@wizard.net>
Date: Thu, 31 Jan 2002 23:22:28 CST
Garry Lancaster wrote:
>
> > > James Kanze:
....
> > > 2. The above code is illegal (on any platform) because
> > > a cast is always required to convert from int to int_fast8_t.
> > > Requires additional language changes.
> >
> > Yup. Numerous. Beginning with making typedefs useless.
>
> I doubt it, but feel free to justify your remark.

The current C99 and C++ standards use only the actual type of an
expression to determine the validity of the code. Your suggestion would
treat a typedef as a different type from the one that it's a typedef
for. If that's to be done in general, then typedefs become nearly
useless. The only way you could do something like that would be to treat
int_fast8_t as a first-class type, rather than an ordinary typedef.

> int_fast8_t i = 1000;
>
> results in undefined behaviour in C99 even now,
> even though it may work fine on some platforms
> (working fine is one implementation of undefined
> behaviour).

Incorrect. It has undefined behavior on any C99 implementation platform
where INT_FAST8_MAX (#defined in <stdint.h>) is less than 1000. However,
it's perfectly well-defined behavior on other C99 platforms. Platforms
where INT_FAST8_MAX is 32767 or even 2147483647 should be fairly common
(once C99 itself is fairly common). What that code isn't, is "strictly
conforming"; a concept that isn't part of C++. The closest concept in
C++ is "well formed" which is a much less strict conformance category.

> > But since you don't seem to
> > like int_fast8_t, replace it with size_t, as I suggested.
>
> It's not that I don't like it (though I don't, but
> that's a separate issue), it's that it isn't part
> of C++ and this is a C++ newsgroup.

Yes, which means it's exactly the correct place to discuss whether or
not it's a good idea to add this feature to C++.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: "Garry Lancaster" <glancaster@ntlworld.com>
Date: Fri, 1 Feb 2002 16:25:34 GMT
James Kuyper Jr. <kuyper@wizard.net> wrote in message
news:3C59EC47.C82DF338@wizard.net...
> Garry Lancaster wrote:
> >
> > > > James Kanze:
Garry Lancaster:
> > > > 2. The above code is illegal (on any platform) because
> > > > a cast is always required to convert from int to int_fast8_t.
> > > > Requires additional language changes.

Pete Becker:
> > > Yup. Numerous. Beginning with making typedefs useless.
> >
> > I doubt it, but feel free to justify your remark.

James Kanze:
> The current C99 and C++ standards use only the actual type of an
> expression to determine the validity of the code. Your suggestion would
> treat a typedef as a different type from the one that it's a typedef
> for.

None of the options I suggest require any change to
the behaviour of typedefs. Option 2 requires that
int_fast8_t, if it is adopted, become a proper type,
not a typedef'd alias. Sorry if that wasn't clear.

> If that's to be done in general, then typedefs become nearly
> useless. The only way you could do something like that would be to treat
> int_fast8_t as a first-class type, rather than an ordinary typedef.

Exactly.

> > int_fast8_t i = 1000;
> >
> > results in undefined behaviour in C99 even now,
> > even though it may work fine on some platforms
> > (working fine is one implementation of undefined
> > behaviour).
>
> Incorrect. It has undefined behavior on any C99 implementation platform
> where INT_FAST8_MAX (#defined in <stdint.h>) is less than 1000. However,
> it's perfectly well-defined behavior on other C99 platforms. Platforms
> where INT_FAST8_MAX is 32767 or even 2147483647 should be fairly common
> (once C99 itself is fairly common). What that code isn't, is "strictly
> conforming"; a concept that isn't part of C++. The closest concept in
> C++ is "well formed" which is a much less strict conformance category.

It seems to fit the C99 definition of undefined behaviour
to me:

"3.4.3 undefined behavior

behavior, on use of a *nonportable* or erroneous program
construct or of erroneous data, for which this International
Standard imposes no requirements
....
EXAMPLE: An example of undefined behavior is the
behavior on integer overflow."

(my emphasis)

or maybe it's "unspecified behavior":

"3.4.4 unspecified behavior

behavior where this International Standard provides two
or more possibilities and imposes no further
requirements on which is chosen in any instance."

I'm certainly none the wiser. In a sense it doesn't
matter - since we all agree the problem is that the
code Pete Becker posted is nonportable, the answer is
as obvious as it is tautological: don't be
surprised if your nonportable code turns out not
to be portable ;-) Or, don't write nonportable code.

> > > But since you don't seem to
> > > like int_fast8_t, replace it with size_t, as I suggested.
> >
> > It's not that I don't like it (though I don't, but
> > that's a separate issue), it's that it isn't part
> > of C++ and this is a C++ newsgroup.
>
> Yes, which means it's exactly the correct place
> to discuss whether or not it's a good idea to
> add this feature to C++.

Which feature? We (at least you and I - I'm not even clear
which language Pete Becker is discussing) are discussing
two at once: C99 int types and changes to the implicit
conversion rules. That's one too many for a sane
discussion IMHO.

Kind regards

Garry Lancaster
Codemill Ltd
Visit our web site at http://www.codemill.net

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: "Garry Lancaster" <glancaster@ntlworld.com>
Date: Fri, 1 Feb 2002 16:38:46 GMT
Pete Becker <petebecker@acm.org> wrote in message
news:3C59BF4A.B42B672@acm.org...
> Garry Lancaster wrote:
> >
> > Taking your suggestion:
> >
> > size_t val = 3;
> >
> > The non-exhaustive list of options now looks like
> > this:
> >
> > 1. The above code is illegal because a cast is
> > required to convert from an int literal to any
> > unsigned type.
> >
> > 2. The above code is legal (on any platform), because
> > the compiler can see that the value of the int literal is
> > within range for any legal underlying type for size_t.
> > Requires language changes.
> >

Pete Becker:
> Please stop dodging the issue.

No dodging is occurring, but there is clearly
a failure to communicate, for which the blame
could lie with sender, receiver or both.

> Initialization from a constant is simple
> to resolve.

I listed a couple of ways of resolving it, but you have
not commented on them. As I mentioned before,
I want to reach an agreement on the initialization
before moving on to the assignment.

Kind regards

Garry Lancaster
Codemill Ltd
Visit our web site at http://www.codemill.net

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: "Garry Lancaster" <glancaster@ntlworld.com>
Date: Fri, 1 Feb 2002 16:38:29 GMT
Garry Lancaster:
>>>> (Obviously, this is not standard C++. For the purposes
>>>> of answering, I'll assume int_fast8_t is added to the
>>>> language at the same time as lossy conversions have
>>>> been removed.)

> > Pete Becker:
> > > Sigh. int_fast8_t is a typedef.

Garry Lancaster:
> > I know. In C99, not C++.

Pete Becker:
> And the title of this thread is...

there for everyone to see. Anyone who's been reading
this offshoot of the thread will also note that it had morphed
from the original subject in order to discuss removing
lossy implicit conversions from the C++ language.

You have either (a) abruptly reintroduced the original topic
in tandem with the new one or (b) switched from C++ to
C99.

I'm trying to work out which it is, but you're not making it
easy. At first I assumed you'd reintroduced the original
topic, but your response to my comment at the top of
this message ("Obviously this is not standard C++..."),
and your mention of the use of malloc as "common
usage" on a more recent message makes me think
you meant to change languages.

Kind regards

Garry Lancaster
Codemill Ltd
Visit our web site at http://www.codemill.net

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: Pete Becker <petebecker@acm.org>
Date: Sat, 2 Feb 2002 00:50:51 GMT
"James Kuyper Jr." wrote:
>
> Pete Becker wrote:
> >
> > Garry Lancaster wrote:
> ....
> > > 1. The above code contains undefined behaviour
> > > because a cast may be required to convert from
> > > int to int_fast8_t. This option requires no additional
> > > language changes.
> >
> > Huh? Why would a typedef to an int be undefined behavior?
>
> I don't know. Why do you ask?
>

Because the part that got snipped said:

> And, of course, on your development system int_fast8_t happens to be a
> typedef for int

--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: "James Kuyper Jr." <kuyper@wizard.net>
Date: Sat, 2 Feb 2002 16:17:23 GMT
Raw View
Garry Lancaster wrote:
>
> James Kuyper Jr. <kuyper@wizard.net> wrote in message
> news:3C59EC47.C82DF338@wizard.net...
....
> None of the options I suggest require any change to
> the behaviour of typedefs. Option 2 requires that
> int_fast8_t, if it is adopted, become a proper type,
> not a typedef'd alias. Sorry if that wasn't clear.

It was not clear. int_fast8_t is a typedef in the only standard that I
know of which currently specifies it. I can't find any wording in your
previous messages that suggests you were calling for that to be changed,
if and when it gets imported into C++.

....
> > Incorrect. It has undefined behavior on any C99 implementation platform
> > where INT_FAST8_MAX (#defined in <stdint.h>) is less than 1000. However,
> > it's perfectly well-defined behavior on other C99 platforms. Platforms
> > where INT_FAST8_MAX is 32767 or even 2147483647 should be fairly common
> > (once C99 itself is fairly common). What that code isn't, is "strictly
> > conforming"; a concept that isn't part of C++. The closest concept in
> > C++ is "well formed" which is a much less strict conformance category.
>
> It seems to fit the C99 definition of undefined behaviour
> to me:
>
> "3.4.3 undefined behavior
>
> behavior, on use of a *nonportable* or erroneous program
> construct or of erroneous data, for which this International
> Standard imposes no requirements

That's not what that definition means. Non-portable code does not in
itself qualify as allowing undefined behavior. The key phrase in that
definition is "for which this International Standard imposes no
requirements". Non-portable code for which the standard does impose
requirements does not have undefined behavior. On a platform where
INT_FAST8_MAX is greater than or equal to 1000, the standard does impose
requirements on the behavior of that statement. Specifically, 6.3.1.3 p1
says that "When a value with integer type is converted to another
integer type other than _Bool, if the value can be represented by the
new type, it is unchanged.", and 6.7.8p8 says "An initializer specifies
the initial value stored in an object."

It's implementation-defined whether or not INT_FAST8_MAX is less than
1000. On platforms where it is, the behavior is still not
necessarily undefined. Earlier I said that it was, but I've been doing
some checking since then. Per 6.3.1.3p3, either the result of the
conversion is an implementation-defined value, or an
implementation-defined signal is raised. Implementation-defined values
are, per 3.17.1 and 3.17.3, required to be valid values, which removes
the possibility that it is a trap representation. If the signal raised
is one of the ones listed in 7.14.1.1p3 (which seems likely), the
behavior becomes undefined as soon as the corresponding signal handler
returns. Otherwise, someone who develops code specifically for that
platform can choose to handle the implementation-defined signal. If such
code conforms to all the rules governing handling signals (which is
tricky, but possible), then the behavior is well-defined, and per
7.14.1.1p3, "the program will resume execution at the point it was
interrupted."

It's implementation-dependent whether or not this code has undefined
behavior. If there is any platform for which the code has undefined
behavior, then the code is not strictly conforming. However, on a
platform where INT_FAST8_MAX is greater than or equal to 1000, the
behavior is unambiguously defined.
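A small sketch of the distinction (again assuming a C99-style <stdint.h>
is available; whether the value fits is a property of the implementation,
and the program can test it):

    #include <stdint.h>
    #include <stdio.h>

    int main()
    {
    #if INT_FAST8_MAX >= 1000
        int_fast8_t i = 1000;   /* representable here: the behaviour is fully defined */
        printf("in range, i = %ld\n", (long)i);
    #else
        printf("1000 does not fit in int_fast8_t on this implementation\n");
    #endif
        return 0;
    }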

....
> or maybe it's "unspecified behavior":
>
> "3.4.4 unspecified behavior
>
> behavior where this International Standard provides two
> or more possibilities and imposes no further
> requirements on which is chosen in any instance."

No, it's implementation-defined, which is a different category from
unspecified behavior.

....
> Which feature? We (at least you and I - I'm not even clear
> which language Pete Becker is discussing) are discussing
> two at once: C99 int types and changes to the implicit
> conversion rules. That's one too many for a sane
> discussion IMHO.

The two discussions are tied together. Unless I've misunderstood you,
the only changes you're suggesting for the implicit conversion rules are
motivated by one of the options you suggested for implementing the C99
int types.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: Pete Becker <petebecker@acm.org>
Date: Sat, 2 Feb 2002 16:20:10 GMT
Garry Lancaster wrote:
>
> Pete Becker <petebecker@acm.org> wrote in message
> news:3C59BF4A.B42B672@acm.org...
> > Initialization from a constant is simple
> > to resolve.
>
> I listed a couple of ways of resolving it, but you have
> not commented on them. As I mentioned before,
> I want to reach an agreement on the initialization
> before moving on to the assignment.
>

It's not a good idea to decide whether to fly or to walk before you've
decided where you're going. Having to write a cast in malloc(strlen(str)
+ 1) makes this idea untenable, so there is no point in designing its
details.

--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: "James Kuyper Jr." <kuyper@wizard.net>
Date: Mon, 4 Feb 2002 08:06:41 GMT
Pete Becker wrote:
>
> "James Kuyper Jr." wrote:
> >
> > Pete Becker wrote:
> > >
> > > Garry Lancaster wrote:
> > ....
> > > > 1. The above code contains undefined behaviour
                                   ^^^^^^^^^^^^^^^^^^^
> > > > because a cast may be required to convert from
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > > > int to int_fast8_t. This option requires no additional
> > > > language changes.
> > >
> > > Huh? Why would a typedef to an int be undefined behavior?
> >
> > I don't know. Why do you ask?
> >
>
> Because the part that got snipped said:
>
> > And, of course, on your development system int_fast8_t happens to be a
> > typedef for int

It's not a point that's relevant to my question, so I snipped it. Let's
get back to my question. Why are you asking why a typedef would have
undefined behavior? It's not as if he had suggested that it would. It's
the conversion, not the typedef, which he's suggesting could have
undefined behavior.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: "Balog Pal" <pasa@lib.hu>
Date: Mon, 28 Jan 2002 20:49:32 GMT
"Pete Becker" <petebecker@acm.org> wrote in message news:3C504B4E.4F3FBF34@acm.org...

> > | It's easy to end up adding casts without thinking, just to
> > | make the compiler shut up. And once  programmers start thinking that way
> > | their code suffers.

I've seen this happening with newbies.

> > Then part of the problem is in education.  It is terrible an act to
> > add cast without thinking, just to make the compiler shut up.

And I have to agree with this. I shall shout "stop right there, and explain what you're doing!" when I spot it. Coding without thinking (which is the issue in this case) is something to be banished. And it is not limited to adding casts but extends to plenty of bad habits and practices.

> Education isn't sufficient when the compiler insists that it knows more
> than you do about what your code should do.

It doesn't. It just tells you something is not safe in general. So the likelihood is great that you mistyped something. (Most things I get compiler errors for are typos.)

When you know what you're doing, the cast goes right in the code. And I don't think the user code is supposed to be infested with casts. I'd rather isolate it in (inline) functions that have nothing but the conversion, possibly in the company of some asserts and checks.
Then the 'user code' calls that function, and may feel safe.
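Something like the following, as one possible sketch of such a helper (the name and the exact checks are just an example):

    #include <cassert>
    #include <climits>

    inline char to_char( int v )
    {
        assert( v >= CHAR_MIN && v <= CHAR_MAX );   // the range check lives here, once
        return static_cast< char >( v );            // the only narrowing cast the caller sees
    }

The 'user code' then calls to_char(value) instead of scattering casts around.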

As others pointed out, lossy conversions are not widespread enough to be considered 'I really meant that' by default.

> The problem is that it
> doesn't, so you have to keep on telling it that you know what you're
> doing, and you get more and more impatient with it. After enough of this
> your goal becomes to write code that the compiler will accept, and you
> lose sight of writing code that is correct.

Hmm, my goal always was to write _correct_ code. And a sub-goal to also make it look correct at a glance. So when something possibly dangerous is going on, there must be an explanation (using comments, or descriptive identifiers).
Experience shows that when there's a bug somewhere, people (me included) tend to search for it in those areas known to be problematic, and possibly spend time rewriting or re-evaluating correct portions. So it's better for such code to be marked as thoroughly designed the way it is, and tested.

> Tom Demarco suggested in one of his books that developers not be allowed
> to use compilers. If their code didn't compile that was a defect that
> would be logged and fixed, just like any other. That would eliminate the
> write-compile-write cycle that developers often fall into.

Catching typos in source code would be pretty impractical that way. Wasn't that suggestion worked out on old-time machines using paper cards as input? :-)

Paul

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: "Balog Pal" <pasa@lib.hu>
Date: Mon, 28 Jan 2002 20:49:36 GMT
"Pete Becker" <petebecker@acm.org> wrote in message news:3C52A141.21326C86@acm.org...

> Typos aren't the issue, and if that were the only thing that
> write-compile-write found it wouldn't be a problem.
> The problem is that
> programmers use compilers to find syntactic and semantic errors, which
> reflect lack of thought.

Well, that is another class of problem. My suggested cure would be simply to sack those (so-called) programmers, and let the rest follow their ways. :)

As I said elsewhere, programming and careful thinking must always go together. Otherwise it is just like driving a car blindfolded, including the possible consequences. I guess someone caught causing car crashes will find himself without a license soon. Too bad programming is not tied to having some sort of license, like medicine or engineering.

> And code that's written without sufficient
> thought often won't get the right result, even if it passes the
> compile-and-link test.

Sure, but it has no real relation to the original thread of discussion. IMHO no syntax or set of restrictions (or lack thereof) will make someone think, unless he wants to in the first place.

Paul

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: Pete Becker <petebecker@acm.org>
Date: Tue, 29 Jan 2002 09:56:10 CST
Raw View
Balog Pal wrote:
>
> "Pete Becker" <petebecker@acm.org> wrote in message news:3C504B4E.4F3FBF34@acm.org...
>
> > > | It's easy to end up adding casts without thinking, just to
> > > | make the compiler shut up. And once  programmers start thinking that way
> > > | their code suffers.
>
> I've seen this happening with newbies.
>
> > > Then part of the problem is in education.  It is terrible an act to
> > > add cast without thinking, just to make the compiler shut up.
>
> And I have to agree with this. I shall shout "stop right there, and explain what you're doing!" when I spot it. Coding without thinking (which is the issue in this case) is something to be banished. And it is not limited to adding casts; it extends to plenty of bad habits and practices.
>

Of course. But can you say that since you've applied this policy you've
never written without thinking, just to shut the compiler up?

--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: kanze@gabi-soft.de (James Kanze)
Date: Tue, 29 Jan 2002 10:39:14 CST
Raw View
Pete Becker <petebecker@acm.org> wrote in message
news:<3C504B4E.4F3FBF34@acm.org>...
> Gabriel Dos Reis wrote:

> > Pete Becker <petebecker@acm.org> writes:

> > [...]

> > | It's easy to end up adding casts without thinking, just to make
> > | the compiler shut up. And once programmers start thinking that
> > | way their code suffers.

> > Then part of the problem is in education.  It is terrible an act
> > to add cast without thinking, just to make the compiler shut up.

> Education isn't sufficient when the compiler insists that it knows
> more than you do about what your code should do. The problem is that
> it doesn't, so you have to keep on telling it that you know what
> you're doing, and you get more and more impatient with it. After
> enough of this your goal becomes to write code that the compiler
> will accept, and you lose sight of writing code that is correct.

I understand what you are saying.  When the compiler's complaints are,
most of the time, real errors, that's good.  When its complaints are,
most of the time, about things you really meant to do, that's bad.  I
agree with that.  Where we disagree is which category lossy
conversions fall into.  And I suppose that this really depends on the
environment; if the only lossy conversions in your code come after
range checks, then all of the lossy conversions are what you really
wanted.  In my opinion, however, the risk of introducing lossy
conversions during program maintenance is high, and the errors are
worth signalling.

This probably depends partially on the language.  I never felt
bothered by this aspect of Pascal, for example, but I'll admit that
I've been annoyed by warnings from compilers when assigning the
result of getc to a char (after having checked for EOF).  The worst
part of it is that this "standard practice" actually has an
implementation-defined result, since it involves converting a value in the range
[0...UCHAR_MAX] into a char, where it often doesn't fit.
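The idiom in question is roughly this (a sketch only; the destination
handling is purely illustrative):

    #include <cstdio>

    void read_into(std::FILE* in, char* dest)
    {
        int ch = std::getc(in);        // kept as int so that EOF can be detected
        while (ch != EOF /* && ... */) {
            *dest++ = ch;              // int -> char assignment: the warning appears here
            ch = std::getc(in);
        }
    }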

> Tom Demarco suggested in one of his books that developers not be
> allowed to use compilers. If their code didn't compile that was a
> defect that would be logged and fixed, just like any other. That
> would eliminate the write-compile-write cycle that developers often
> fall into.

It might not be a bad idea.  I know that if I were running things, I
wouldn't allow linking the code with anything until after code review.

But most of the problems I'd expect to be caught by a rule banning
implicit lossy conversions are a result of maintenance.  Someone
changes an int into a long, and forgets to verify all of the places
where the variable is used.  Suddenly, what wasn't a lossy conversion
is.  And the worst part is that the code with the error hasn't been
modified, so probably won't be reviewed, and that the regression tests
probably don't have any checks for the new values (since they weren't
supported before the modification).  The fact that the compiler can
detect such errors more than makes up for the added noise I have to
put up with.  IMHO, of course, and for this one particular case.
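Concretely, the scenario looks something like this (an illustrative
sketch, not code from any real project):

    long count = 0;                // was 'int count' before maintenance

    void log_count(int value);     // unmodified code elsewhere, so not re-reviewed

    void report()
    {
        log_count(count);          // now silently lossy wherever long is wider than int;
                                   // a ban on implicit lossy conversions would flag it
    }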

--
James Kanze                                   mailto:kanze@gabi-soft.de
Beratung in objektorientierer Datenverarbeitung --
                             -- Conseils en informatique orientée objet
Ziegelhüttenweg 17a, 60598 Frankfurt, Germany, Tél.: +49 (0)69 19 86 27

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: kanze@gabi-soft.de (James Kanze)
Date: Tue, 29 Jan 2002 14:32:38 CST
Raw View
Pete Becker <petebecker@acm.org> wrote in message
news:<3C503266.9E59B2C7@acm.org>...

> > The question is always, what are the alternatives?  If I need a
> > lossy conversion, I certainly should be able to get one.  It just
> > shouldn't be the default.  With regards to your other points:
> > certainly pointers lead to buggy programs, but what are the
> > alternatives?

> Exactly. Asserting that "X can lead to buggy programs" does not mean
> that X should be changed.

Right.  But it does mean that one should consider the alternatives,
and what effect they may have.  Floating point is probably the cause
of more errors than implicit lossy conversions, but what are the
alternatives?  And do they solve the problem?  As far as I know, the
only alternative which would solve the problem is infinite precision,
and there are definite problems in the hardware implementation of
that.  In the case of lossy conversions, there is an alternative which
removes little or nothing from the expressivity of the language, and
provides significant added (but not perfect) safety.  That is to
require them to be explicit.  The major cost is, I believe, one which
you pointed out in another posting: people would be required to use
casts too often, which would banalize them, and lead to their being
used without thought.  I'll admit that I'm sceptical about this one,
however.
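
For illustration, the difference under such a requirement would just be
(an illustrative fragment):

    double ratio = 3.75;
    int    a = ratio;                      // implicit lossy conversion: legal today
    int    b = static_cast<int>(ratio);    // the explicit form such a rule would require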

--
James Kanze                                   mailto:kanze@gabi-soft.de
Beratung in objektorientierer Datenverarbeitung --
                             -- Conseils en informatique orientée objet
Ziegelhüttenweg 17a, 60598 Frankfurt, Germany, Tél.: +49 (0)69 19 86 27

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: David R Tribble <david@tribble.com>
Date: Tue, 29 Jan 2002 21:11:53 CST
Raw View
Garry Lancaster wrote:
>> Very true. However, I don't think it's any secret that
>> some of the worst bits of C++ were those inherited
>> from C. One thinks of complicated variable
>> declaration syntax and lossy implicit conversions,
>> for example.

Pete Becker wrote:
> It's not at all clear that lossy implicit conversions are a bad thing.
> Languages that insist on making those conversions explicit have failed
> in the market. Not because they weren't based on C, but because
> programmers have outgrown the game "Mother, May I".

The Java programming community would disagree with you on that.
Frankly, I don't mind having the compiler force me to explicitly
cast when losing precision.  It serves as documentation in the code,
if nothing else.  And it's not really all that common (at least
in my code).

On the other hand, they made a mistake with Java by making the
'byte' primitive type signed instead of unsigned.  My code is riddled
with 'b[i] & 0xFF' expressions to force unsigned behavior.
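
For comparison, the same masking dance is needed in C++ wherever plain
char happens to be signed (an illustrative sketch, not taken from any
real code):

    // With a signed plain char, widening sign-extends; to treat the byte as a
    // value in [0, 255] it has to go through unsigned char (the C++ counterpart
    // of Java's 'b & 0xFF').
    int byte_value(char b)
    {
        return static_cast<unsigned char>(b);
    }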

Seems there is no such thing as a perfect programming language.

-- David R Tribble, mailto:david@tribble.com --

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: Michiel.Salters@cmg.nl (Michiel Salters)
Date: Wed, 30 Jan 2002 10:16:56 CST
Raw View
Pete Becker <petebecker@acm.org> wrote in message news:<3C52A141.21326C86@acm.org>...
> Michiel Salters wrote:
> >
> > Pete Becker <petebecker@acm.org> wrote in message news:<3C504B4E.4F3FBF34@acm.org>...
> >
> > > Tom Demarco suggested in one of his books that developers not be allowed
> > > to use compilers. If their code didn't compile that was a defect that
> > > would be logged and fixed, just like any other. That would eliminate the
> > > write-compile-write cycle that developers often fall into.
> >
> > At the expense of replacing it by a
> > write-paperwork-compile-paperwork-write cycle. There was a time that
> > computers were not the most efficient way to find your typos, but I
> > doubt that that was the case for professional programmers when Tom
> > DeMarco wrote his book (around 1975 IIRC?). It certainly isn't today.
> >
>
> Typos aren't the issue, and if that were the only thing that
> write-compile-write found it wouldn't be a problem. The problem is that
> programmers use compilers to find syntactic and semantic errors, which
> reflect lack of thought. And code that's written without sufficient
> thought often won't get the right result, even if it passes the
> compile-and-link test.

I agree with the latter part of your response. However, typos and other
trivial errors are indeed the problem, because they affect the quality
of the subsequent review. Reviewers won't find the complex issues if
they have to focus on the much more common typos and thinkos. Who hasn't
exchanged . and -> ? As a reviewer, that is hard to spot, especially
when smart pointers are involved.

The coding standard I currently use is restricted to 100 items; reviewers
can't check many more rules than that. So everything that can be found
mechanically is kept off that list, maximizing the total number of rules
checked (by human plus compiler).

Regards,
--
Michiel Salters

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: "Balog Pal" <pasa@lib.hu>
Date: Wed, 30 Jan 2002 11:08:12 CST
Raw View
"Pete Becker" <petebecker@acm.org> wrote in message news:3C55F4C2.E79EDC86@acm.org...

> Of course. But can you say that since you've applied this policy you've
> never written without thinking, just to shut the compiler up?

Honestly, I believe I can say that.
(But I know I'm an extremely poor reference, being very different from other people.)

Paul

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: Michiel.Salters@cmg.nl (Michiel Salters)
Date: Fri, 25 Jan 2002 15:17:56 GMT
Raw View
Pete Becker <petebecker@acm.org> wrote in message news:<3C504B4E.4F3FBF34@acm.org>...

> Tom Demarco suggested in one of his books that developers not be allowed
> to use compilers. If their code didn't compile that was a defect that
> would be logged and fixed, just like any other. That would eliminate the
> write-compile-write cycle that developers often fall into.

At the expense of replacing it by a
write-paperwork-compile-paperwork-write cycle. There was a time that
computers were not the most efficient way to find your typos, but I
doubt that that was the case for professional programmers when Tom
DeMarco wrote his book (around 1975 IIRC?). It certainly isn't today.

Regards,
--
Michiel Salters

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: Pete Becker <petebecker@acm.org>
Date: Sat, 26 Jan 2002 16:20:22 GMT
Raw View
Michiel Salters wrote:
>
> Pete Becker <petebecker@acm.org> wrote in message news:<3C504B4E.4F3FBF34@acm.org>...
>
> > Tom Demarco suggested in one of his books that developers not be allowed
> > to use compilers. If their code didn't compile that was a defect that
> > would be logged and fixed, just like any other. That would eliminate the
> > write-compile-write cycle that developers often fall into.
>
> At the expense of replacing it by a
> write-paperwork-compile-paperwork-write cycle. There was a time that
> computers were not the most efficient way to find your typos, but I
> doubt that that was the case for professional programmers when Tom
> DeMarco wrote his book (around 1975 IIRC?). It certainly isn't today.
>

Typos aren't the issue, and if that were the only thing that
write-compile-write found it wouldn't be a problem. The problem is that
programmers use compilers to find syntactic and semantic errors, which
reflect lack of thought. And code that's written without sufficient
thought often won't get the right result, even if it passes the
compile-and-link test.

--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: comeau@panix.com (Greg Comeau)
Date: Wed, 23 Jan 2002 09:53:12 CST
Raw View
In article <3c49afa8$0$22482$4c41069e@reader1.ash.ops.us.uu.net>,
P.J. Plauger <pjp@dinkumware.com> wrote:
>"Solosnake" <solosnake@solosnake.without_this.freeserve.co.uk> wrote in message news:a2c47f$mo2$1@news8.svr.pol.co.uk...
>> Is there any plans to incorporate C99 into ANSI/ISO C++ ? Many of the new
>> library functions guaranteed by C99 seem very attractive to me (cbrt [cube
>> root],  and fma(float a, float b, float c) [returns a*b+c, presumably
>> optimal] to name two) as well as some nice ideas: _Imaginary [keyword
>> denoting integral imaginary type] and similarly _Complex, and the strange
>> qualifier 'restrict'.
>>
>> If there are no plans then why not?

There are [informal] plans to consider them.  C++0x is still not
an official priority, so anything is fair game really.  But what
happens in the end is, well, exactly what the C++0x process will
be all about (obviously it won't be just about C99, but C99 will
surely come up, and parts will no doubt get incorporated).

>> It seems to me that one of C++ selling
>> points, at least initially, was its compatibility with legacy code. However
>> as C99 becomes more popular amongst those who choose to code in C, as I
>> presume it will, then we will be building new 'legacy' code, which will not
>> work with C++. This disadvantage might also bias C programmers against
>> taking advantage of the capabilities of the new language, which would be a
>> shame.

This is indeed a concern, if not an eventual problem outright.
There is no perfect solution.  And it'll be interesting
to see how things get resolved, or not.

>At the last C++ standards meeting in Redmond, there was general agreement
>that the various additions to C90 incorporated into C99 should at least be
>considered for inclusion in a future version of C++. In fact, I've agreed
>to provide a comprehensive proposal for adding essentially all the library
>changes, at least. Whatever the committee decides to pick up, it will see
>the light of day as part of a non-normative Technical Report well before
>the C++ standard is formally revised.

Somehow I missed this... Are you saying that an official TR
is going to be done by the C++ committee on C99?

>The proposal is based on work we've already done at Dinkumware. See our
>on-line copy of the Dinkum C Library Reference, which identifies C99
>additions and the additions we've made to C++ to gain access to them.
>We've managed, for example, to reconcile the existing C++ template class
>complex reasonably well with the new builtin C99 complex types. You can
>also license our C99/C++ library packaged for gcc or Comeau C++, on
>either PC Linux or Sparc Solaris, if you want to kick the tires.
>
>Please note, however, that these compilers have not yet fully reconciled
>the C99 language additions with C++. Sometimes you have to write separate
>modules in C and C++ to do all you might like.

We're taking a wait-and-see approach, but doing so with our eyes
wide open :)
--
Greg Comeau   What's next: additional Windows backends and 'export'!
Comeau C/C++ ONLINE ==>     http://www.comeaucomputing.com/tryitout
World Class Compilers:  Breathtaking C++, Amazing C99, Fabulous C90.
Comeau C/C++ with Dinkumware's Libraries... Have you tried it?

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: fjh@cs.mu.OZ.AU (Fergus Henderson)
Date: Wed, 23 Jan 2002 10:03:39 CST
Raw View
Pete Becker <petebecker@acm.org> writes:

>Garry Lancaster wrote:
>>
>> Very true. However, I don't think it's any secret that
>> some of the worst bits of C++ were those inherited
>> from C. One thinks of complicated variable
>> declaration syntax and lossy implicit conversions,
>> for example.
>
>It's not at all clear that lossy implicit conversions are a bad thing.

It's quite clear that lossy implicit conversions can easily lead to
accidentally buggy programs, and it's very clear that buggy programs
are a bad thing.

Good compilers already warn about lossy implicit conversions.
I'd like to see them deprecated in a future C++ standard.

>Languages that insist on making those conversions explicit have failed
>in the market.

I don't think that is reflected by recent history.  Java, for example,
seems to be pretty successful in a variety of markets.  C# also looks
likely to succeed in its niche (programming for the .NET CLR), having
already achieved backing from a major corporation and now having
three independent implementations.  And Ada seems to be the market
leader in the area of high-reliability applications, such as defence
and aviation.

>Not because they weren't based on C, but because
>programmers have outgrown the game "Mother, May I".

This is an appeal to emotion.

One could equally well say that the success of Java (and languages like it)
indicates that programmers have outgrown the game "Look Ma, no safety net!".

Or we could avoid emotive arguments, and consider which approach is
more likely to make programmers more productive and to lead to more
reliable programs.  If we do that, I think it is pretty clear that the
disadvantages of lossy implicit conversions far outweigh their advantages.

--
Fergus Henderson <fjh@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: "Garry Lancaster" <glancaster@ntlworld.com>
Date: Wed, 23 Jan 2002 10:04:35 CST
Raw View
> > > Garry Lancaster wrote:
> > > > Very true. However, I don't think it's any secret that
> > > > some of the worst bits of C++ were those inherited
> > > > from C. One thinks of complicated variable
> > > > declaration syntax and lossy implicit conversions,
> > > > for example.
> >
> > Pete Becker:
> > > It's not at all clear that lossy implicit conversions are a bad thing.
> > > Languages that insist on making those conversions explicit have failed
> > > in the market. Not because they weren't based on C, but because
> > > programmers have outgrown the game "Mother, May I".

Garry Lancaster:
> > I'm sure there are many factors that cause any given
> > programming language's failure to take off.
> >
> > Have you any specific examples?

Pete Becker:
> Pascal comes immediately to mind.
> The marketing claims for
> Pascal were that it was so picky that once you got your code to
> compile it was probably correct. The result was that Pascal
> was best written by teams of two: one to write the code and
> one to write the semicolons.

I don't recall whether Pascal has lossy implicit conversions
or not, but I assume you are correct.

If you say Pascal failed in the market, you set a very high
benchmark for success which only a small number of
languages have surpassed. Pascal was promoted as
a teaching language if I recall correctly, and was widely
used in academic institutions at one time. (The OOP
Pascal variant, Delphi, is very successful to this day,
incidentally.)

Moreover, the pickiness you describe extends much
further than the lack of lossy implicit conversion. Again,
if I recall correctly, Pascal heavily limits (prohibits?)
*any* lossy conversion, implicit or explicit. Also,
in its unextended form, it does not support
object-oriented programming well. Lastly, it is
very weak in the bit twiddling operator department,
lacks a C-like syntax and doesn't have any big
marketing department behind it. It's misleading
to claim, as you seem to be doing, that the lack of
lossy implicit conversions is the only reason it
isn't as popular as, say, C++ or Java.

The trend in general purpose language design is clearly
towards fewer lossy implicit conversions. C++ has fewer
than C. Java and C# have fewer than C++ (and it doesn't
seem to have done them any harm). I find this trend
agreeable and would like to see it continue in C++0x.

Kind regards

Garry Lancaster
Codemill Ltd
Visit our web site at http://www.codemill.net

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: "P.J. Plauger" <pjp@dinkumware.com>
Date: Wed, 23 Jan 2002 10:45:45 CST
Raw View
"Greg Comeau" <comeau@panix.com> wrote in message news:a2l9e5$cma$1@panix3.panix.com...

> >At the last C++ standards meeting in Redmond, there was general agreement
> >that the various additions to C90 incorporated into C99 should at least be
> >considered for inclusion in a future version of C++. In fact, I've agreed
> >to provide a comprehensive proposal for adding essentially all the library
> >changes, at least. Whatever the committee decides to pick up, it will see
> >the light of day as part of a non-normative Technical Report well before
> >the C++ standard is formally revised.
>
> Somehow I missed this... Are you saying that an official TR
> is going to be done by the C++ committee on C99?

No, I'm assuming that the already proposed library TR will probably
incorporate whatever the committee chooses to pick up from C99.
That was my understanding from the various discussions in Redmond.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com



---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: "Garry Lancaster" <glancaster@ntlworld.com>
Date: Wed, 23 Jan 2002 11:45:49 CST
Raw View
> > > Pete Becker:
> > > > It's not at all clear that lossy implicit conversions are a bad
thing.
> > > > Languages that insist on making those conversions explicit have
failed
> > > > in the market. Not because they weren't based on C, but because
> > > > programmers have outgrown the game "Mother, May I".
>
> Garry Lancaster:
> > > I'm sure there are many factors that cause any given
> > > programming language's failure to take off.
> > >
> > > Have you any specific examples?

> Pete Becker:
> > Pascal comes immediately to mind.

I now have it on good authority that Pascal includes
lossy implicit conversions.

Kind regards

Garry Lancaster
Codemill Ltd
Visit our web site at http://www.codemill.net

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: kanze@gabi-soft.de (James Kanze)
Date: Wed, 23 Jan 2002 11:50:51 CST
Raw View
Pete Becker <petebecker@acm.org> wrote in message
news:<3C4DBFAE.71DDF8ED@acm.org>...
> Garry Lancaster wrote:

> > > Garry Lancaster wrote:
> > > > Very true. However, I don't think it's any secret that some of
> > > > the worst bits of C++ were those inherited from C. One thinks
> > > > of complicated variable declaration syntax and lossy implicit
> > > > conversions, for example.
> > Pete Becker:
> > > It's not at all clear that lossy implicit conversions are a bad
> > > thing.  Languages that insist on making those conversions
> > > explicit have failed in the market. Not because they weren't
> > > based on C, but because programmers have outgrown the game
> > > "Mother, May I".

> > I'm sure there are many factors that cause any given programming
> > language's failure to take off.

> > Have you any specific examples?

> Pascal comes immediately to mind. The marketing claims for Pascal
> were that it was so picky that once you got your code to compile it
> was probably correct. The result was that Pascal was best written by
> teams of two: one to write the code and one to write the semicolons.

I've done some projects in Pascal, and the semicolons were never a
problem :-).  I suspect that the lack of standard separate compilation
alone would have been a sufficient technical reason for Pascal not to
have taken off.

More to the point, however, I suspect that one of the main reasons for
C's original success is that it addressed a very important practical
problem of that time: how to get reasonably efficient code from an
everyday compiler.  When C was first introduced, things like pointer
arithmetic allowed it to be used as a sort of a portable assembler,
and compilers of the day weren't very good at converting Pascal's
array references to pointer based loops.
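
For example, this is the kind of rewrite a C programmer could do by
hand, which a Pascal compiler of that era had to discover for itself
(an illustrative fragment, not from the original post):

    // Indexed form, which a naive compiler of the day translated literally:
    void clear_indexed(int a[], int n)
    {
        for (int i = 0; i < n; ++i)
            a[i] = 0;
    }

    // Pointer form, spelling out the tight loop directly:
    void clear_pointer(int* p, int n)
    {
        for (int* end = p + n; p != end; ++p)
            *p = 0;
    }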

Even more to the point, languages don't sink or swim on technical
issues.  If lossy implicit conversions were the key, PL/I would have
been a roaring success.  Political issues generally play more
important roles, and much of C's success is due to simply being in the
right place at the right time.

--
James Kanze                                   mailto:kanze@gabi-soft.de
Beratung in objektorientierer Datenverarbeitung --
                             -- Conseils en informatique orientée objet
Ziegelhüttenweg 17a, 60598 Frankfurt, Germany, Tél.: +49 (0)69 19 86 27

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: Pete Becker <petebecker@acm.org>
Date: Wed, 23 Jan 2002 11:52:23 CST
Raw View
Fergus Henderson wrote:
>
> Pete Becker <petebecker@acm.org> writes:
>
> >Garry Lancaster wrote:
> >>
> >> Very true. However, I don't think it's any secret that
> >> some of the worst bits of C++ were those inherited
> >> from C. One thinks of complicated variable
> >> declaration syntax and lossy implicit conversions,
> >> for example.
> >
> >It's not at all clear that lossy implicit conversions are a bad thing.
>
> It's quite clear that lossy implicit conversions can easily lead to
> accidentally buggy programs, and it's very clear that buggy programs
> are a bad thing.

Those are true statements. It's also quite clear that pointers can
easily lead to accidentally buggy programs, that explicit memory
management can easily lead to accidentally buggy programs, and that
floating point math can easily lead to accidentally buggy programs.
Indeed, there are many language features for which "it's quite clear
that <language feature X> can easily lead to accidentally buggy
programs." It doesn't follow that such things are "the worst bits" of C
or C++.

>
> Good compilers already warn about lossy implicit conversions.

Many compilers warn about lossy implicit conversions, and many
programmers find this annoying.

>
> >Languages that insist on making those conversions explicit have failed
> >in the market.
>
> I don't think that is reflected by recent history.  Java, for example,
> seems to be pretty successful in a variety of markets.  C# also looks
> likely to succeed in its niche

Java and C# have powerful marketing departments, which means that
technical considerations are not necessarily paramount.

> >Not because they weren't based on C, but because
> >programmers have outgrown the game "Mother, May I".
>
> This is an appeal to emotion.

No, it is an analogy, one that some people find insightful, and one that
you, apparently, find inciteful.

>
> One could equally well say that the success of Java (and languages like it)
> indicates that programmers have outgrown the game "Look Ma, no safety net!".
>
> Or we could avoid emotive arguments,

Are you referring here to tactics like labeling compilers that warn
about possible loss of precision as "good"? Or does this only apply to
arguments that support a position that differs from yours?

--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: Pete Becker <petebecker@acm.org>
Date: Wed, 23 Jan 2002 12:03:53 CST
Raw View
Garry Lancaster wrote:
>
> Java and C# have fewer than C++ (and it doesn't
> seem to have done them any harm).

Yup. Good marketing can overcome technical deficiencies.

--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: Pete Becker <petebecker@acm.org>
Date: Wed, 23 Jan 2002 12:13:16 CST
Raw View
James Kanze wrote:
>
> Even more to the point, languages don't sink or swim on technical
> issues.  If lossy implicit conversions were the key, PL/I would have
> been a roaring success.  Political issues generally play more
> important roles, and much of C's success is due to simply being in the
> right place at the right time.

For me, moving from Pascal to C was a great relief. I spent far less
time making the compiler happy, which made me far more productive.

--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: "Al Grant" <tnarga@arm.REVERSE-NAME.com>
Date: Wed, 23 Jan 2002 12:39:50 CST
Raw View
"Garry Lancaster" <glancaster@ntlworld.com> wrote in message
news:BnC38.9894$ka7.1549349@news6-win.server.ntlworld.com...
> I now have it on good authority that Pascal includes
> lossy implicit conversions.

Maybe Pete was thinking of Modula-2.

Knuth wrote TeX in Pascal.  It's a big program, and he was no
programming novice.



---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: Pete Becker <petebecker@acm.org>
Date: Wed, 23 Jan 2002 13:37:28 CST
Raw View
Al Grant wrote:
>
> "Garry Lancaster" <glancaster@ntlworld.com> wrote in message
> news:BnC38.9894$ka7.1549349@news6-win.server.ntlworld.com...
> > I now have it on good authority that Pascal includes
> > lossy implicit conversions.
>
> Maybe Pete was thinking of Modula-2.
>

Sigh. No. I was talking about Pascal. The fact that Pascal includes some
lossy implicit conversions in no way invalidates the statement that I
made.

--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: kanze@gabi-soft.de (James Kanze)
Date: Thu, 24 Jan 2002 09:51:16 CST
Raw View
"Al Grant" <tnarga@arm.REVERSE-NAME.com> wrote in message
news:<a2mvin$44n$1@cam-news1.cambridge.arm.com>...
> "Garry Lancaster" <glancaster@ntlworld.com> wrote in message
> news:BnC38.9894$ka7.1549349@news6-win.server.ntlworld.com...
> > I now have it on good authority that Pascal includes
> > lossy implicit conversions.

Which Pascal?  I don't think standard Pascal has any implicit
conversions.

> Maybe Pete was thinking of Modula-2.

> Knuth wrote TeX in Pascal.  It's a big program, nor was he a
> programming novice.

Actually, he wrote it in WEB, which generates both Pascal code and TeX
documentation.  And have you actually looked at the code?  It's not
typical of Pascal in any way.  There is practically only one type, which
is a variant record exposing different low-level types.  His code would
have been a lot cleaner in C.

--
James Kanze                                   mailto:kanze@gabi-soft.de
Beratung in objektorientierer Datenverarbeitung --
                             -- Conseils en informatique orientée objet
Ziegelhüttenweg 17a, 60598 Frankfurt, Germany, Tél.: +49 (0)69 19 86 27

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: kanze@gabi-soft.de (James Kanze)
Date: Thu, 24 Jan 2002 09:53:21 CST
Raw View
Pete Becker <petebecker@acm.org> wrote in message
news:<3C4EF629.1111082A@acm.org>...
> Fergus Henderson wrote:

> > Pete Becker <petebecker@acm.org> writes:

> > >Garry Lancaster wrote:

> > >> Very true. However, I don't think it's any secret that some of
> > >> the worst bits of C++ were those inherited from C. One thinks
> > >> of complicated variable declaration syntax and lossy implicit
> > >> conversions, for example.

> > >It's not at all clear that lossy implicit conversions are a bad
> > >thing.

> > It's quite clear that lossy implicit conversions can easily lead
> > to accidentally buggy programs, and it's very clear that buggy
> > programs are a bad thing.

> Those are true statements. It's also quite clear that pointers can
> easily lead to accidentally buggy programs, that explicit memory
> management can easily lead to accidentally buggy programs, and that
> floating point math can easily lead to accidentally buggy programs.

The question is always, what are the alternatives?  If I need a lossy
conversion, I certainly should be able to get one.  It just shouldn't
be the default.  With regards to your other points: certainly pointers
lead to buggy programs, but what are the alternatives?  Pointers
(implicit or explicit) are necessary to implement just about any
dynamic data structure.  Making pointer arithmetic require some extra
effort would probably be an improvement, but would represent such a
radical change that I don't see how we could fit it into the language.
Explicit memory management is necessary in a few cases, but C++ would
certainly be a better language if it weren't necessary for everything,
or even if it wasn't the default.  As for floating point math -- what
are the alternatives?

> Indeed, there are many language features for which "it's quite clear
> that <language feature X> can easily lead to accidentally buggy
> programs." It doesn't follow that such things are "the worst bits"
> of C or C++.

> > Good compilers already warn about lossy implicit conversions.

> Many compilers warn about lossy implicit conversions, and many
> programmers find this annoying.

It's annoying when you wanted the conversion, and a godsend when you
didn't.  It is simple to avoid the warning by means of a static_cast.
Of course, typing static_cast is a pain, but how frequently *do* you
really want lossy conversions?

Note that we're getting away from the original post, however.  I'd
rather see lossy conversions go, but given all of the other ways to
screw yourself because of values that don't fit (silent numeric
overflow, etc.), I find it a minor point.  Not in any way comparable
with the problems in the declaration syntax, for example.

> > >Languages that insist on making those conversions explicit have
> > >failed in the market.

> > I don't think that is reflected by recent history.  Java, for
> > example, seems to be pretty successful in a variety of markets.
> > C# also looks likely to succeed in its niche

> Java and C# have powerful marketing departments, which means that
> technical considerations are not necessarily paramount.

Technical considerations are never paramount.  C/C++ aren't as
important as they are because of technical considerations.  And many
good languages have failed.  You said "Languages that insist on making
those conversions explicit have failed in the market." Either you mean
to imply that requiring those conversions to be explicit will cause
failure in the market (in which case, you have to explain Java, or
even Ada, which while not as popular as C++, can't really be
considered a failure either), or your sentence doesn't mean anything
-- there are also many languages which didn't make them explicit which
have failed in the market.  (Most languages fail in the market.)

> > >Not because they weren't based on C, but because programmers have
> > >outgrown the game "Mother, May I".

> > This is an appeal to emotion.

> No, it is an analogy, one that some people find insightful, and one
> that you, apparently, find inciteful.

I wouldn't call it an analogy.  Following the preceding sentence, it
sounds like an explanation of why such languages failed.  (Emotively
expressed, but there is a real, arguable statement there.)  Again, if
languages fail because programmers don't like this sort of hand
holding, you have to explain the success of Java -- apparently, this
sort of hand-holding didn't prevent its success.

But this is getting ridiculous.  C wasn't successful because it
allowed lossy conversions, and Java wasn't a success because it banned
them.  Both were, in one way or another, "the right language at the
right time."  Certainly, Java had an enormous marketing push behind
it, but this push was largely made possible by poor marketing on the
part of C++.  The "right language" was one which *apparently*
addressed the *pretended* (and some real) flaws in C++.

--
James Kanze                                   mailto:kanze@gabi-soft.de
Beratung in objektorientierer Datenverarbeitung --
                             -- Conseils en informatique orientée objet
Ziegelhüttenweg 17a, 60598 Frankfurt, Germany, Tél.: +49 (0)69 19 86 27

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: Pete Becker <petebecker@acm.org>
Date: Thu, 24 Jan 2002 11:02:38 CST
Raw View
James Kanze wrote:
>
> Technical considerations are never paramount.  C/C++ aren't as
> important as they are because of technical considerations.  And many
> good languages have failed.  You said "Languages that insist on making
> those conversions explicit have failed in the market." Either you mean
> to imply that requiring those conversions to be explicit will cause
> failure in the market (in which case, you have to explain Java, or
> even Ada, which while not as popular as C++, can't really be
> considered a failure either), or your sentence doesn't mean anything
> -- there are also many languages which didn't make them explicit which
> have failed in the market.  (Most languages fail in the market.)
>

What I also said was:

> >Not because they weren't based on C, but because
> >programmers have outgrown the game "Mother, May I".

Languages that require programmers to say "I really mean this" are
tedious to program in (and compilers that require this are tedious to
use). When you keep having to say "I really mean this" it becomes less
meaningful. It's easy to end up adding casts without thinking, just to
make the compiler shut up. And once  programmers start thinking that way
their code suffers. C got a pretty good balance here; Pascal didn't.

--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: Gabriel Dos Reis <dosreis@cmla.ens-cachan.fr>
Date: Thu, 24 Jan 2002 11:39:45 CST
Raw View
Pete Becker <petebecker@acm.org> writes:

[...]

| It's easy to end up adding casts without thinking, just to
| make the compiler shut up. And once  programmers start thinking that way
| their code suffers.

Then part of the problem is in education.  It is a terrible act to
add a cast without thinking, just to make the compiler shut up.

--
Gabriel Dos Reis, dosreis@cmla.ens-cachan.fr

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: Pete Becker <petebecker@acm.org>
Date: Thu, 24 Jan 2002 11:53:20 CST
Raw View
James Kanze wrote:
>
> Pete Becker <petebecker@acm.org> wrote in message
> news:<3C4EF629.1111082A@acm.org>...
> > Fergus Henderson wrote:
>
> > > Pete Becker <petebecker@acm.org> writes:
>
> > > >Garry Lancaster wrote:
>
> > > >> Very true. However, I don't think it's any secret that some of
> > > >> the worst bits of C++ were those inherited from C. One thinks
> > > >> of complicated variable declaration syntax and lossy implicit
> > > >> conversions, for example.
>
> > > >It's not at all clear that lossy implicit conversions are a bad
> > > >thing.
>
> > > It's quite clear that lossy implicit conversions can easily lead
> > > to accidentally buggy programs, and it's very clear that buggy
> > > programs are a bad thing.
>
> > Those are true statements. It's also quite clear that pointers can
> > easily lead to accidentally buggy programs, that explicit memory
> > management can easily lead to accidentally buggy programs, and that
> > floating point math can easily lead to accidentally buggy programs.
>
> The question is always, what are the alternatives?  If I need a lossy
> conversion, I certainly should be able to get one.  It just shouldn't
> be the default.  With regards to your other points: certainly pointers
> lead to buggy programs, but what are the alternatives?

Exactly. Asserting that "X can lead to buggy programs" does not mean
that X should be changed.

--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: Pete Becker <petebecker@acm.org>
Date: Thu, 24 Jan 2002 12:02:38 CST
Raw View
Gabriel Dos Reis wrote:
>
> Pete Becker <petebecker@acm.org> writes:
>
> [...]
>
> | It's easy to end up adding casts without thinking, just to
> | make the compiler shut up. And once  programmers start thinking that way
> | their code suffers.
>
> Then part of the problem is in education.  It is terrible an act to
> add cast without thinking, just to make the compiler shut up.
>

Education isn't sufficient when the compiler insists that it knows more
than you do about what your code should do. The problem is that it
doesn't, so you have to keep on telling it that you know what you're
doing, and you get more and more impatient with it. After enough of this
your goal becomes to write code that the compiler will accept, and you
lose sight of writing code that is correct.

Tom Demarco suggested in one of his books that developers not be allowed
to use compilers. If their code didn't compile that was a defect that
would be logged and fixed, just like any other. That would eliminate the
write-compile-write cycle that developers often fall into.

--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: "Garry Lancaster" <glancaster@ntlworld.com>
Date: Thu, 24 Jan 2002 12:07:50 CST
Raw View
Garry Lancaster:
> > > I now have it on good authority that Pascal includes
> > > lossy implicit conversions.

James Kanze:
> Which Pascal?  I don't think standard Pascal has any implicit
> conversions.

I'm no Pascal expert, but my source tells me that:

"[In Pascal] an integer type could always be assigned to a
subranged integer type that was not necessarily wide
enough to hold the result, and the conversion was implicit."

If you're just searching for implicit conversions and don't care
whether they're lossy or not, Pascal also appears to support
the implicit conversion of integer to real. See, for example,
the web site "Pascal and C++  Side By Side" at

http://www.skylit.com/pascpp/#assign

Kind regards

Garry Lancaster
Codemill Ltd
Visit our web site at http://www.codemill.net



---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: "P.J. Plauger" <pjp@dinkumware.com>
Date: Sat, 19 Jan 2002 18:15:38 GMT
Raw View
"Solosnake" <solosnake@solosnake.without_this.freeserve.co.uk> wrote in message news:a2c47f$mo2$1@news8.svr.pol.co.uk...

> Is there any plans to incorporate C99 into ANSI/ISO C++ ? Many of the new
> library functions guaranteed by C99 seem very attractive to me (cbrt [cube
> root],  and fma(float a, float b, float c) [returns a*b+c, presumably
> optimal] to name two) as well as some nice ideas: _Imaginary [keyword
> denoting integral imaginary type] and similarly _Complex, and the strange
> qualifier 'restrict'.
>
> If there are no plans then why not? It seems to me that one of C++ selling
> points, at least initially, was its compatibility with legacy code. However
> as C99 becomes more popular amongst those who choose to code in C, as I
> presume it will, then we will be building new 'legacy' code, which will not
> work with C++. This disadvantage might also bias C programmers against
> taking advantage of the capabilities of the new language, which would be a
> shame.

At the last C++ standards meeting in Redmond, there was general agreement
that the various additions to C90 incorporated into C99 should at least be
considered for inclusion in a future version of C++. In fact, I've agreed
to provide a comprehensive proposal for adding essentially all the library
changes, at least. Whatever the committee decides to pick up, it will see
the light of day as part of a non-normative Technical Report well before
the C++ standard is formally revised.

The proposal is based on work we've already done at Dinkumware. See our
on-line copy of the Dinkum C Library Reference, which identifies C99
additions and the additions we've made to C++ to gain access to them.
We've managed, for example, to reconcile the existing C++ template class
complex reasonably well with the new builtin C99 complex types. You can
also license our C99/C++ library packaged for gcc or Comeau C++, on
either PC Linux or Sparc Solaris, if you want to kick the tires.

Please note, however, that these compilers have not yet fully reconciled
the C99 language additions with C++. Sometimes you have to write separate
modules in C and C++ to do all you might like.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com



---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: "Scott Robert Ladd" <scott@coyotegulch.com>
Date: Sun, 20 Jan 2002 05:29:32 GMT
Raw View
Hi,

David Tribble has a comprehensive list of "Incompatibilities between ISO C
and ISO C++" at the following link:

http://david.tribble.com/text/cdiffs.htm

As you will see, the task goes beyond a simple set of function
implementations and the matter of a few keywords. The problems are
surmountable, but not easily or quickly. For at least the next few years,
C99 and C++ will not be compatible at the standards level.

That isn't to say that some compiler writers don't have an interest in
producing a C99/C++98 hybrid. The GNU Compiler Collection (gcc) is already
moving to support C99, for example, and Dinkumware (as mentioned in P.J.'s
reply) is producing a C99 library. I also suspect that Intel will
support C99 with their compilers, especially in the Linux world, where they
want source code compatibility with gcc.

--
Scott Robert Ladd
Master of Complexity, Destroyer of Order and Chaos
  Visit CoyoteGulch at http://www.coyotegulch.com
    No ads -- just info, algorithms, and very free code.


---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: "Garry Lancaster" <glancaster@ntlworld.com>
Date: Sun, 20 Jan 2002 15:51:35 GMT
Raw View
Solosnake:
> Is there any plans to incorporate C99 into ANSI/ISO C++ ? Many of the new
> library functions guaranteed by C99 seem very attractive to me (cbrt [cube
> root],  and fma(float a, float b, float c) [returns a*b+c, presumably
> optimal] to name two) as well as some nice ideas: _Imaginary [keyword
> denoting integral imaginary type] and similarly _Complex, and the strange
> qualifier 'restrict'.
>
> If there are no plans then why not? It seems to me that one of C++ selling
> points, at least initially, was its compatibility with legacy code.

Very true. However, I don't think it's any secret that
some of the worst bits of C++ were those inherited
from C. One thinks of complicated variable
declaration syntax and lossy implicit conversions,
for example. So there was a price for the (extensive,
but not total) backwards compatibility, but without
paying that price C++ would probably not have taken
off.

> However
> as C99 becomes more popular amongst those who choose to code in C, as I
> presume it will, then we will be building new 'legacy' code, which will
not
> work with C++. This disadvantage might also bias C programmers against
> taking advantage of the capabilities of the new language, which would be a
> shame.

There are fewer C99 programmers now than there were
C90 programmers when C++ was being first developed
then standardised. So there are fewer people to be annoyed by
any breaks in compatibility. Also (and I don't think I'm
alone here) the advantages of the C compatibility that
is already present seem smaller and smaller, particularly
as more people are learning C++ without first having
learnt C.

Nonetheless, I suspect what will happen is more or
less what you want: C++0x will become an almost
superset of C99.

Kind regards

Garry Lancaster
Codemill Ltd
Visit our web site at http://www.codemill.net

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: Francis Glassborow <francis.glassborow@ntlworld.com>
Date: Mon, 21 Jan 2002 17:29:33 GMT
Raw View
In article <2hq28.377453$oj3.73586995@typhoon.tampabay.rr.com>, Scott
Robert Ladd <scott@coyotegulch.com> writes
>As you will see, the task goes beyond a simple set of function
>implementations and the matter of a few keywords. The problems are
>surmountable, but not easily or quickly. For at least the next few years,
>C99 and C++ will not be compatible at the standards level.

But they never have been. The target should be to support a compatible
kernel. To that end, the long-term incompatibilities of things such as
const are unfortunate, to say the least.

It is my belief that we should look to some joint TR (if such is
possible) that provides guidance (therefore a Type 3 TR, I think) on writing
source code that is portable between C and C++.


--
Francis Glassborow
Check out the ACCU Spring Conference 2002
4 Days, 4 tracks, 4+ languages, World class speakers
For details see: http://www.accu.org/events/public/accu0204.htm

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: Pete Becker <petebecker@acm.org>
Date: Mon, 21 Jan 2002 17:36:02 GMT
Raw View
Garry Lancaster wrote:
>
> Very true. However, I don't think it's any secret that
> some of the worst bits of C++ were those inherited
> from C. One thinks of complicated variable
> declaration syntax and lossy implicit conversions,
> for example.

It's not at all clear that lossy implicit conversions are a bad thing.
Languages that insist on making those conversions explicit have failed
in the market. Not because they weren't based on C, but because
programmers have outgrown the game "Mother, May I".

--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: "Al Grant" <tnarga@arm.REVERSE-NAME.com>
Date: Mon, 21 Jan 2002 17:49:46 GMT
Raw View
"Scott Robert Ladd" <scott@coyotegulch.com> wrote in message
news:2hq28.377453$oj3.73586995@typhoon.tampabay.rr.com...
> David Tribble has a comprehensive list of "Incompatibilities between ISO C
> and ISO C++" at the following link:
>
> http://david.tribble.com/text/cdiffs.htm
>
> As you will see, the task goes beyond a simple set of function
> implementations and the matter of a few keywords. The problems are
> surmountable, but not easily or quickly. For at least the next few years,
> C99 and C++ will not be compatible at the standards level.

One difference is that C does lvalue-to-rvalue conversion in
more places than C++.  Is it planned to change C to be like C++?
If not, how can the languages ever be compatible?
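
For example, the results of assignment and of the conditional operator
are lvalues in C++ but not in C, so (a minimal sketch):

    int a = 0, b = 1, c = 2;
    (a = b) = c;            // well-formed C++, where a = b is an
                            // lvalue; a constraint violation in C,
                            // where it is not
    (a < b ? a : b) = 7;    // likewise: the conditional expression is
                            // an lvalue in C++ when both operands are
                            // lvalues of the same type, never in C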



---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: Jack Klein <jackklein@spamcop.net>
Date: Mon, 21 Jan 2002 17:52:06 GMT
Raw View
On Sun, 20 Jan 2002 15:51:35 GMT, "Garry Lancaster"
<glancaster@ntlworld.com> wrote in comp.std.c++:

 [snip]

> There are fewer C99 programmers now than there were
> C90 programmers when C++ was being first developed
> then standardised.

 [snip]

I am just curious as to whether the statement above is a "gut feeling"
or if you have some actual data to back it up.

There are enormously more programmers in general now than when C++ was
first being developed.  There were no C90 programmers then, as
development of C++ began long before there was an ANSI 89 standard or
an ISO 90 one.

In those days embedded systems were programmed almost exclusively in
assembly language, incredibly weak tools like PL/M, or niche languages
like Forth.  Today embedded systems are programmed almost exclusively
in C, including things like DSPs that didn't really even exist in the
days you are talking about.  And you can add in the large number of
programmers working on open source projects like Linux, Free BSD, etc.

While there is no doubt that there are more C++ programmers than C
programmers today, especially in the desktop/GUI/multitasking arena,
it is quite possible that there are more programmers writing C today
than there were 16 or 17 years ago.

--
Jack Klein
Home: http://JK-Technology.Com

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: "Garry Lancaster" <glancaster@ntlworld.com>
Date: Mon, 21 Jan 2002 19:18:03 GMT
Raw View
Jack Klein <jackklein@spamcop.net> wrote in message
news:gspn4usq60tr6q8u42b82j25o4mp3q2e44@4ax.com...
> On Sun, 20 Jan 2002 15:51:35 GMT, "Garry Lancaster"
> <glancaster@ntlworld.com> wrote in comp.std.c++:
>
> [snip]
>
> > There are fewer C99 programmers now than there were
> > C90 programmers when C++ was being first developed
> > then standardised.
>
> [snip]
>
> I am just curious as to whether the statement above is a "gut feeling"
> or if you have some actual data to back it up.

A gut feeling. I don't know where one could find
reliable historical "programmer population" data.

> There are enormously more programmers in general now than
> when C++ was first being developed.

More, certainly.

> There were no C90 programmers then, as
> development of C++ began long before there was an ANSI 89 standard or
> an ISO 90 one.

Picky, picky. There *were* C90 programmers in the period
I referred to, which ended just prior to C++ standardisation in
1998. But you do have a point: I meant "C90 and K&R C
programmers".

> In those days embedded systems were programmed almost exclusively in
> assembly language, incredibly weak tools like PL/M, or niche languages
> like Forth.  Today embedded systems are programmed almost exclusively
> in C, including things like DSPs that didn't really even exist in the
> days you are talking about.  And you can add in the large number of
> programmers working on open source projects like Linux, Free BSD, etc.

Most of them won't be using C99 though. Hardly any
compilers support it yet, although this is improving.

> While there is no doubt that there are more C++ programmers than C
> programmers today, especially in the desktop/GUI/multitasking arena,
> it is quite possible that there are more programmers writing C today
> than there were 16 or 17 years ago.

It's possible, but I doubt it. Anyway, that isn't quite the
statement I made as I specifically referred to C99
programmers.

Kind regards

Garry Lancaster
Codemill Ltd
Visit our web site at http://www.codemill.net

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: "Michael S. Terrazas" <michael.s.terrazas@worldnet.att.net>
Date: Tue, 22 Jan 2002 15:40:37 GMT
Raw View
Just a bit of background: I teach C++ to a lot of people, many of
whom program for embedded systems.  I have kept track of what
my students are using for the past 7 years, overall and with
a distinction between embedded and hosted.  My numbers
may or may not be a statistically significant sample, but I felt
somewhat qualified to comment...

"Jack Klein" <jackklein@spamcop.net> wrote in message
news:gspn4usq60tr6q8u42b82j25o4mp3q2e44@4ax.com...
> On Sun, 20 Jan 2002 15:51:35 GMT, "Garry Lancaster"
> <glancaster@ntlworld.com> wrote in comp.std.c++:
>
> [snip]
>
> > There are fewer C99 programmers now than there were
> > C90 programmers when C++ was being first developed
> > then standardised.
>
> [snip]
>
> I am just curious as to whether the statement above is a "gut feeling"
> or if you have some actual data to back it up.

To date I have yet to teach anyone who has access to a C
compiler that supports much of C99 at all.  As far as C90
goes, the figures for my students who use C are:

                                       KRVE    C90
    Embedded                            87%    13%
    Hosted                              41%    59%
    Overall                             78%    22%

(KRVE = strictly K&R, with vendor extensions.)

> In those days embedded systems were programmed almost exclusively in
> assembly language, incredibly weak tools like PL/M, or niche languages
> like Forth.  Today embedded systems are programmed almost exclusively
> in C, including things like DSPs that didn't really even exist in the
> days you are talking about.  And you can add in the large number of
> programmers working on open source projects like Linux, Free BSD, etc.

I guess it depends on your definition of "almost
exclusively".  Several of my clients are moving to C++
because they are being told by their chipset or compiler
provider that C++ is the way they are going and that the
only C they will support will be that which is a strict
subset of C++.

WRT DSPs: I was working on them in 1982.

> While there is no doubt that there are more C++ programmers than C
> programmers today, especially in the desktop/GUI/multitasking arena,
> it is quite possible that there are more programmers writing C today
> than there were 16 or 17 years ago.

But given the lack of available C99 compilers and the
reluctance of most C programmers that I have worked
with to adopt a 12-year-old standard, I don't see the
relevance.  I agree that a compiler vendor may want to
support both, but that is only one consideration in
standards work.  I don't see C99 having any effect at
all on the popularity of C++, rather the other way
around.

--
Mike Terrazas
MLT & Associates


---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: "Garry Lancaster" <glancaster@ntlworld.com>
Date: Tue, 22 Jan 2002 19:02:10 GMT
Raw View
> Garry Lancaster wrote:
> > Very true. However, I don't think it's any secret that
> > some of the worst bits of C++ were those inherited
> > from C. One thinks of complicated variable
> > declaration syntax and lossy implicit conversions,
> > for example.

Pete Becker:
> It's not at all clear that lossy implicit conversions are a bad thing.
> Languages that insist on making those conversions explicit have failed
> in the market. Not because they weren't based on C, but because
> programmers have outgrown the game "Mother, May I".

I'm sure there are many factors that cause any given
programming language's failure to take off.

Have you any specific examples?

Kind regards

Garry Lancaster
Codemill Ltd
Visit our web site at http://www.codemill.net


---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: Pete Becker <petebecker@acm.org>
Date: Tue, 22 Jan 2002 20:28:11 GMT
Raw View
Garry Lancaster wrote:
>
> > Garry Lancaster wrote:
> > > Very true. However, I don't think it's any secret that
> > > some of the worst bits of C++ were those inherited
> > > from C. One thinks of complicated variable
> > > declaration syntax and lossy implicit conversions,
> > > for example.
>
> Pete Becker:
> > It's not at all clear that lossy implicit conversions are a bad thing.
> > Languages that insist on making those conversions explicit have failed
> > in the market. Not because they weren't based on C, but because
> > programmers have outgrown the game "Mother, May I".
>
> I'm sure there are many factors that cause any given
> programming language's failure to take off.
>
> Have you any specific examples?
>

Pascal comes immediately to mind. The marketing claims for Pascal were
that it was so picky that once you got your code to compile it was
probably correct. The result was that Pascal was best written by teams
of two: one to write the code and one to write the semicolons.

--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: "Solosnake" <solosnake@solosnake.without_this.freeserve.co.uk>
Date: Sat, 19 Jan 2002 10:51:54 CST
Raw View
Hello

Are there any plans to incorporate C99 into ANSI/ISO C++?  Many of the new
library functions guaranteed by C99 seem very attractive to me (cbrt [cube
root], and fma(double x, double y, double z) [returns x*y+z computed with a
single rounding, so presumably optimal] to name two) as well as some nice
ideas: _Imaginary [keyword denoting an imaginary floating type] and
similarly _Complex, and the strange qualifier 'restrict'.
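
To give a flavour, a minimal sketch of my own (assuming an
implementation that already exposes the C99 <math.h> additions to
C++ code):

    #include <math.h>       // C99 header; these functions are not
                            // (yet) part of standard C++

    double r = cbrt(27.0);          // cube root: 3.0, without the
                                    // precision loss of pow(x, 1.0/3.0)
    double y = fma(2.0, 3.0, 1.0);  // 2.0*3.0 + 1.0 computed with a
                                    // single rounding; hardware with a
                                    // fused multiply-add does this in
                                    // one instruction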

If there are no plans then why not? It seems to me that one of C++'s selling
points, at least initially, was its compatibility with legacy code. However
as C99 becomes more popular amongst those who choose to code in C, as I
presume it will, then we will be building new 'legacy' code, which will not
work with C++. This disadvantage might also bias C programmers against
taking advantage of the capabilities of the new language, which would be a
shame.

Thanks,

D. Stockdale





---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]