Topic: C++0x int types


Author: "James Kuyper Jr." <kuyper@wizard.net>
Date: Wed, 23 Jan 2002 00:54:56 GMT
Garry Lancaster wrote:
....
> Could I just clarify something? I was interested in the use
> of these types for C++, not C (sorry if I didn't make that clear

You made that clear. However, since these types currently exist only in
C, I've been talking about them as they exist there. There are also
important issues about the proper way to bring them into C++, which
haven't been brought up in this thread yet. I've seen arguments for
several different options. The two most popular are:

 int var:32;
 int<16> var;
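
Purely for illustration (this is a rough sketch, not either proposal's
actual specification), the second form can be approximated with today's
templates, at least up to 32 bits:

 // Sketch only: pick a built-in type with at least N bits (N <= 32).
 template <int N, bool SmallEnough = (N <= 16)> struct int_n;
 template <int N> struct int_n<N, true>  { typedef int  type; };
 template <int N> struct int_n<N, false> { typedef long type; };

 int_n<16>::type a;   // some type with at least 16 bits
 int_n<32>::type b;   // some type with at least 32 bits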

....
> at the start). I assume you would use int_fastN_t in C++ too,
> were it standard?

I would, but my shop wouldn't. We're only authorized to use C90, Fortran
77, Fortran 90, or Ada in delivered code. Very annoying. If I can
arrange it, my next project will involve C++. Scientific applications
written in C++ seem to be fairly rare, unfortunately.

....
> Are you saying that if you changed to the new
> standard int types, were they to become available,
> you wouldn't do this testing? Otherwise, you're not
> comparing like for like.

Yes, I would not bother with detailed testing of int_fast32_t on
multiple platforms, just like I don't currently test to make sure that
malloc() works as defined by the standard on all the platforms we use.
I'd be responsible for the portability of "my_stdint.h", if I used one;
the implementor is responsible for the portability of <stdint.h>. We do
whole-program tests on all the platforms we have to, but we don't have
the resources to do detailed tests on more than one platform, except
when there's a known platform-dependent bug to be investigated (which
hasn't happened yet).
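
For concreteness, a "my_stdint.h" might look something like the sketch
below; the platform tests and type choices are invented for the
example, not taken from any real project file:

 /* my_stdint.h - roll-your-own sketch; the platform macros and the
    choices made for them are illustrative assumptions only. */
 #ifndef MY_STDINT_H
 #define MY_STDINT_H

 #if defined(__alpha) || defined(__LP64__)  /* assumed 64-bit targets */
 typedef int  int_fast32_t;   /* int is 32 bits and fast there         */
 #else
 typedef long int_fast32_t;   /* conservative default: long >= 32 bits */
 #endif

 #endif /* MY_STDINT_H */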

....
> Sounds like you'd get an advantage from optimizing
> int types just on this one platform and leaving all the
> others on safe defaults.

Yes, and int_fast32_t would be one of those safe defaults, if C99 were
widely enough available.

---
[ comp.std.c++ is moderated.  To submit articles, try just posting with ]
[ your news-reader.  If that fails, use mailto:std-c++@ncar.ucar.edu    ]
[              --- Please see the FAQ before posting. ---               ]
[ FAQ: http://www.research.att.com/~austern/csc/faq.html                ]





Author: "Garry Lancaster" <glancaster@ntlworld.com>
Date: Wed, 23 Jan 2002 11:49:04 CST
Garry Lancaster:
> > Are you saying that if you changed to the new
> > standard int types, were they to become available,
> > you wouldn't do this testing? Otherwise, you're not
> > comparing like for like.

James Kuyper Jr.:
> Yes, I would not bother with detailed testing of int_fast32_t on
> multiple platforms, just like I don't currently test to make sure that
> malloc() works as defined by the standard on all the platforms we use.
> I'd be responsible for the portability of "my_stdint.h", if I used one;
> the implementor is responsible for the portability of <stdint.h>. We do
> whole-program tests on all the platforms we have to, but we don't have
> the resources to do detailed tests on more than one platform, except
> when there's a known platform-dependent bug to be investigated (which
> hasn't happened yet).

The difference between the two cases is that adding
a new malloc won't  require textual changes throughout
your existing codebase.

I feel there is a hole in your coding standard if you
would really do this without retesting.

> > Sounds like you'd get an advantage from optimizing
> > int types just on this one platform and leaving all the
> > others on safe defaults.
>
> Yes, and int_fast32_t would be one of those safe defaults, if C99 were
> widely enough available.

long being the safe default you'd have to use now
instead of int_fast32_t.

Kind regards

Garry Lancaster
Codemill Ltd
Visit our web site at http://www.codemill.net






Author: "James Kuyper Jr." <kuyper@wizard.net>
Date: Thu, 24 Jan 2002 09:30:12 CST
Garry Lancaster wrote:
....
> James Kuyper Jr.:
> > Yes, I would not bother with detailed testing of int_fast32_t on
> > multiple platforms, just like I don't currently test to make sure that
> > malloc() works as defined by the standard on all the platforms we use.
....
> The difference between the two cases is that adding
> a new malloc won't  require textual changes throughout
> your existing codebase.

What textual changes are you referring to? Switching our code to use
int_fast32_t would require textual changes, but once we've made those
changes, we don't need to make any further changes to port the code to a
new platform. Are you referring to the fact that <stdint.h> itself would
be different on different platforms? The same is true about malloc; it
could be a function-like macro, whose definition could be different on
different platforms.

> I feel there is a hole in your coding standard if you
> would really do this without retesting.

No, just a shortage of funding. We perform the testing we can afford to
perform. Actually, we perform somewhat more testing than our client
would like us to perform.

> > > Sounds like you'd get an advantage from optimizing
> > > int types just on this one platform and leaving all the
> > > others on safe defaults.
> >
> > Yes, and int_fast32_t would be one of those safe defaults, if C99 were
> > widely enough available.
>
> long being the safe default you'd have to use now
> instead of int_fast32_t.

Correct; and I prefer int_fast32_t; it might be faster than long, or at
least smaller than it.






Author: "Garry Lancaster" <glancaster@ntlworld.com>
Date: Thu, 24 Jan 2002 11:52:53 CST
> > James Kuyper Jr.:
> > > Yes, I would not bother with detailed testing of int_fast32_t on
> > > multiple platforms, just like I don't currently test to make sure that
> > > malloc() works as defined by the standard on all the platforms
> > > we use.

Garry Lancaster:
> > The difference between the two cases is that adding
> > a new malloc won't  require textual changes throughout
> > your existing codebase.

James Kuyper Jr.:
> What textual changes are you referring to? Switching our code
> to use int_fast32_t would require textual changes, ...

Yes, they're the ones I was referring to.

What you seem to be saying is that you'd
make those changes to a tested codebase,
then build and ship without any further testing.

Garry Lancaster:
> > > > Sounds like you'd get an advantage from optimizing
> > > > int types just on this one platform and leaving all the
> > > > others on safe defaults.

> > James Kuyper Jr.:
> > > Yes, and int_fast32_t would be one of those safe defaults,
> > > if C99 were widely enough available.

Garry Lancaster:
> > long being the safe default you'd have to use now
> > instead of int_fast32_t.

> Correct; and I prefer int_fast32_t; it might be faster than long,
> or at least smaller than it.

And nothing in the C99 standard currently prevents
it from being slower (for your particular application)
and/or bigger than a long. You pays your money
and you takes your choice.

Kind regards

Garry Lancaster
Codemill Ltd
Visit our web site at http://www.codemill.net






Author: "Garry Lancaster" <glancaster@ntlworld.com>
Date: Sat, 19 Jan 2002 17:34:55 GMT
> "James Kuyper Jr." wrote:
> > For instance, a desire to
> > achieve a fixed type size across multiple platforms might force a
> > sub-optimal choice on some of them.

In theory, yes. But how common has this been in practice?

Ron Natalie:
> Tell me about it.  While it was pretty obvious on a PDP-11 that
> an int should be 16 bits and on a VAX it should be 32, it was more
> fun on bigger architectures.  When we did the compilers for the
> Denelcor HEP supercomputer, the "natural" size really is 64.  We
> were perplexed as to what to do about the sizes of the integers.
> We could do:  short-16, int-32, long-64 or we could have done the
> short-16 int-64 long-64.  This meant we had to figure out what to
> call the 32 bit type.  There were discussions of "short longs"
> and a new keyword called "medium", we settled on _int32.
>
> Of course things were really fun on the Crays where:
> char - 8
> short - 64
> int - 64
> long - 64

So, in summary, int was the fastest integer type on all
the platforms you mention?

[snip]

Garry Lancaster:
> > > ... Has anyone here ever changed the integer
> > > type they used in a program to anything other than an int (or
> > > it's unsigned equivalent), on the basis of speed measurements?
> >
> > I've never bothered, because it was inconvenient, and because there was
> > no way to do it portably. int_fastN_t is convenient, and it should
> > become portable once there's sufficiently many C99 implementations out
> > there.

Certainly rolling your own int_fastN_t is more trouble
than using a standard one. But not significantly more
work I would have thought. I guess it can't have been
that important for your program if you didn't think it
worth doing.

> Of course, nothing says that it's possible to properly express what you
> need with even the C99 types.  We have spent time tuning our app on a
> couple of platforms.  If you wrote your program portably, you probably
> already have the infrastructure to do the per-platform tuning in place
> in your design.
>
> > > 2. Given that most platforms have 8 bit bytes, can we really
> > > justify adding int_leastX_t? How many people are using C++
> > > to program machines that don't have 8 bit bytes? What are
> > > these machines?
> >
> > There have been machines where the addressable unit was 9 bits or 12
> > bits, though I've never heard anyone confirm whether there was C
> > implementation for those machines, much less C++.
>
> The computers based on the IBM 709 (which include the UNIVAC/SPERRY/UNISYS
> and DEC 10/20 mainframes) all had 36 bit words and a fairly rich partial
> word format that let you carve it up into arbitrary subwords.  Of course
> 8 doesn't pack well into 36, so the C implementations tended to use 9 bit
> chars.
>
> The CRAY technically had a 24 bit quantity (the A registers) but the C
> compiler just tended to ignore that entirely.

Useful background information for sure, but the question
I posed at the start was not whether any machines ever
had non-8-bit bytes, nor whether there were ever any C
compilers for these machines, but whether anyone was
now using C++ on one.

Kind regards

Garry Lancaster
Codemill Ltd
Visit our web site at http://www.codemill.net






Author: "James Kuyper Jr." <kuyper@wizard.net>
Date: Sat, 19 Jan 2002 21:54:04 GMT
Garry Lancaster wrote:
....
> Garry Lancaster:
> > > > ... Has anyone here ever changed the integer
> > > > type they used in a program to anything other than an int (or
> > > > it's unsigned equivalent), on the basis of speed measurements?
> > >
> > > I've never bothered, because it was inconvenient, and because there was
> > > no way to do it portably. int_fastN_t is convenient, and it should
> > > become portable once there's sufficiently many C99 implementations out
> > > there.
>
> Certainly rolling your own int_fastN_t is more trouble
> than using a standard one. But not significantly more

Sure, you can roll your own, but why should you have to? Do you want to
roll your own FLT_MAX, or your own offsetof()? These are
implementation-specific details. They should be done once by the
implementation itself, not by the user.
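
(The usual roll-your-own offsetof, for example, looks something like
the macro below. It happens to work on many implementations, but
nothing guarantees it, which is exactly why the real one belongs in
<stddef.h>.)

 /* Classic non-portable offsetof emulation: forms a member address
    from a null pointer constant and converts it to size_t. */
 #define MY_OFFSETOF(type, member) ((size_t)&(((type *)0)->member))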

To use the standard one, you add "#include <stdint.h>" to your code.
Once. You're done.
To achieve the same effect by rolling your own, you'd have to do
detailed testing to determine the fastest integer type. Then, every time
you port to a new platform, you'd have to re-do the testing on that
platform. That's far too expensive a task for me to bother with.

> work I would have thought. I guess it can't have been
> that important for your program if you didn't think it
> worth doing.

If your personal computer system were destroyed, and someone offered you
an exact replacement for $100,000, would you accept? I assume not
(unless you're very rich). Should that person therefore assume your
computer system wasn't important to you? After all, you weren't willing
to pay their price for it.

The price for a roll-your-own is too high; the price for a
standard-defined int_fastN_t is almost 0.






Author: "Garry Lancaster" <glancaster@ntlworld.com>
Date: Mon, 21 Jan 2002 17:30:09 GMT
> > Garry Lancaster:
>>>>> ... Has anyone here ever changed the integer
>>>>> type they used in a program to anything other than an int (or
>>>>> it's unsigned equivalent), on the basis of speed measurements?

James Kuyper Jr:
>>>> I've never bothered, because it was inconvenient, and because there was
>>>> no way to do it portably. int_fastN_t is convenient, and it should
>>>> become portable once there's sufficiently many C99 implementations out
>>>> there.

Garry Lancaster:
> > Certainly rolling your own int_fastN_t is more trouble
> > than using a standard one. But not significantly more

[snipped analogy]

James Kuyper Jr:
> To achieve the same effect by rolling your own, you'd have to do
> detailed testing to determine the fastest integer type. Then, every time
> you port to a new platform, you'd have to re-do the testing on that
> platform. That's far too expensive a task for me to bother with.

Most processor manufacturers document what the fastest
int types are on their chip or chip families.

If desired, programmers can do extensive testing in order to
determine what is fastest for a particular program. But it is
misleading to compare this with how standard int_fastN_t
types would be assigned. The compiler writers won't
have access to the programmer's program. They will read the
relevant part of the processor docs and/or run simple
benchmarks of their own.
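
Something along the lines of the sketch below would do; the workload
and iteration count are arbitrary, and it is purely illustrative:

 // Crude timing of a tight loop for each candidate integer type.
 // XOR is used so that no signed overflow can occur for narrow types.
 #include <cstdio>
 #include <ctime>

 template <typename Int>
 double time_loop()
 {
     std::clock_t start = std::clock();
     Int acc = 0;
     for (long i = 0; i < 100000000L; ++i)
         acc ^= static_cast<Int>(i & 0x7F);
     std::clock_t stop = std::clock();
     std::printf("(acc=%ld) ", static_cast<long>(acc));  // keep acc live
     return double(stop - start) / CLOCKS_PER_SEC;
 }

 int main()
 {
     std::printf("short: %.2f s\n", time_loop<short>());
     std::printf("int:   %.2f s\n", time_loop<int>());
     std::printf("long:  %.2f s\n", time_loop<long>());
     return 0;
 }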

I reckon that digging out the correct section of the processor
docs and running a simple int performance benchmark would
take a competent programmer less than a day per platform.
That's why I wrote...

> >I guess it can't have been that important for your
> > program if you didn't think it worth doing.

[snipped another analogy]

Kind regards

Garry Lancaster
Codemill Ltd
Visit our web site at http://www.codemill.net






Author: "James Kuyper Jr." <kuyper@wizard.net>
Date: Mon, 21 Jan 2002 22:50:07 GMT
Garry Lancaster wrote:
....
> Garry Lancaster:
> > > Certainly rolling your own int_fastN_t is more trouble
> > > than using a standard one. But not significantly more
>
> [snipped analogy]
>
> James Kuyper Jr:
> > To achieve the same effect by rolling your own, you'd have to do
> > detailed testing to determine the fastest integer type. Then, every time
> > you port to a new platform, you'd have to re-do the testing on that
> > platform. That's far too expensive a task for me to bother with.
....
> I reckon that digging out the correct section of the processor
> docs and running a simple int performance benchmark would
> take a competent programmer less than a day per platform.

It would take less than a day to run those tests, which is several
thousand times longer than it would take to type "#include <stdint.h>".
Integer processing performance isn't important enough in our shop to
justify such tests - programmer time, portability, file I/O speeds, and
floating point performance are all much more important. However, integer
processing performance is still important enough to justify spending the
minimal amount of programmer time needed to insert "#include
<stdint.h>". That is, it will be justified, as soon as C99 compilers are
sufficiently widely available to meet our portability needs. That won't
occur any time soon. :-(

In any event, our coding standards prohibit me from writing the kind of
implementation-specific code needed to "roll our own". Even if they
allowed it, our testing standards would require a full re-test of every
module using any of those hand-rolled types that differed on some
platform from our home system. That brings the total time up to about a
week per platform (per release of our software - regression testing).
As it currently stands, we restrict detailed testing to the one platform
that forms 90% of our target market, and only do regression testing on
the other platforms. We promise that our code should work on any
standard-conforming implementation that also meets a small list of
additional requirements, and we test claims of non-portability when
users report them. So far, none of those claims have proven valid;
they've all been traced back to mis-configuration of their systems by
our users.

> That's why I wrote...
>
> > >I guess it can't have been that important for your
> > > program if you didn't think it worth doing.

I suspect there are shops where integer processing speeds are important
enough to justify the approach you describe, though I can't vouch for
their existence. However, there's lots of shops where it's important
enough to justify the minimal effort needed by C99's <stdint.h>.






Author: "Garry Lancaster" <glancaster@ntlworld.com>
Date: Tue, 22 Jan 2002 15:34:55 GMT
> > Garry Lancaster:
> > > > Certainly rolling your own int_fastN_t is more trouble
> > > > than using a standard one. But not significantly more
> >
> > [snipped analogy]
> >
> > James Kuyper Jr:
> > > To achieve the same effect by rolling your own, you'd have to do
> > > detailed testing to determine the fastest integer type. Then, every time
> > > you port to a new platform, you'd have to re-do the testing on that
> > > platform. That's far too expensive a task for me to bother with.

Garry Lancaster:
> > I reckon that digging out the correct section of the processor
> > docs and running a simple int performance benchmark would
> > take a competent programmer less than a day per platform.

James Kuyper Jr:
> It would take less than a day to run those tests, which is several
> thousand times longer than it would take to type "#include <stdint.h>".

If you compare them relatively it's a big difference, but
in absolute terms a day here and there is not so much,
I think.

> Integer processing performance isn't important enough in our shop to
> justify such tests - programmer time, portability, file I/O speeds, and
> floating point performance are all much more important.

Thanks, that's what I thought you meant, but I'm
pleased to see you clarify it.

> However, integer
> processing performance is still important enough to justify spending the
> minimal amount of programmer time needed to insert "#include
> <stdint.h>".
> That is, it will be justified, as soon as C99 compilers are
> sufficiently widely available to meet our portability needs. That won't
> occur any time soon. :-(

Could I just clarify something? I was interested in the use
of these types for C++, not C (sorry if I didn't make that clear
at the start). I assume you would use int_fastN_t in C++ too,
were it standard?

> In any event, our coding standards prohibit me from writing the kind of
> implementation-specific code needed to "roll our own". Even if they
> allowed it, our testing standards would require a full re-test of every
> module using any one of those hand-rolled types which was different on
> one platform than on our home system. That brings the total time up to
> about a week per platform (per release of our softare - regression
> testing).

Are you saying that if you changed to the new
standard int types, were they to become available,
you wouldn't do this testing? Otherwise, you're not
comparing like for like.

> As it currently stands, we restrict detailed testing to the
> one platform that form 90% of our target market, and only do regression
> testing on the other platforms.

Sounds like you'd get an advantage from optimizing
int types just on this one platform and leaving all the
others on safe defaults.

> We promise that our code should works on
> any standard-conforming implementation that also meets a small list of
> additional requirements, and test claims of non-portability when users
> report them. So far, none of those claims have proven valid; they've all
> traced back to mis-configuration of their systems by our users.
>
> > That's why I wrote...
> >
> > > >I guess it can't have been that important for your
> > > > program if you didn't think it worth doing.
>
> I suspect there are shops where integer processing speeds are important
> enough to justify the approach you describe, though I can't vouch for
> their existence.

Neither can I. I was trying to find someone who'd done
it and had some real data by posting my question here.

> However, there's lots of shops where it's important
> enough to justify the minimal effort needed by C99's <stdint.h>.

Which is a reasonable argument for inclusion in
C++0x, if it's true.

But don't forget that on most (possibly all, in practice)
platforms C++'s int is effectively int_fast16_t. It's only
if you use any of the other int_fastN_t types that you
stand much of a chance of achieving any performance
improvement. The more marginal the general gains
are, the more likely they are to be swamped by program-
specific performance issues.

Kind regards

Garry Lancaster
Codemill Ltd
Visit our web site at http://www.codemill.net






Author: "Garry Lancaster" <glancaster@ntlworld.com>
Date: Wed, 16 Jan 2002 16:14:41 GMT
Hi All

C99 includes various new integer types, some of them optional.
Among them are:

* intX_t for types with exactly X bits, e.g. int16_t.
* int_leastX_t for the smallest types with at least X bits.
* int_fastX_t for the fastest types with at least X bits.

C++0x may include these.
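
To show how the three families read in code (assuming a C99 <stdint.h>,
or whatever header a C++ binding would provide):

 #include <stdint.h>

 int32_t       raw_field;  /* exactly 32 bits, e.g. for a file layout */
 int_least16_t counter;    /* smallest type with at least 16 bits     */
 int_fast16_t  index;      /* "fastest" type with at least 16 bits    */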

Although I think intX_t could be useful, I wonder if the others
are unnecessary.

1. Given that int is likely to be the fastest integer type on any
given platform, can we really justify adding int_fastX_t to select
fast integer types? Has anyone here ever changed the integer
type they used in a program to anything other than an int (or
its unsigned equivalent), on the basis of speed measurements?

2. Given that most platforms have 8 bit bytes, can we really
justify adding int_leastX_t? How many people are using C++
to program machines that don't have 8 bit bytes? What are
these machines?

Kind regards

Garry Lancaster
Codemill Ltd
Visit our web site at http://www.codemill.net






Author: "James Kuyper Jr." <kuyper@wizard.net>
Date: Thu, 17 Jan 2002 15:58:53 GMT
Garry Lancaster wrote:
....
> 1. Given that int is likely to be the fastest integer type on any
> given platform, ...


While the standard says that 'int' is the natural type for a given
machine, that's not necessarily the same as the fastest type. It might
not even be possible for 'int' to be the fastest type, if that type
holds less than 16 bits. In any event, "natural" is too vague a
specification to be enforceable. Implementors often let considerations
other than speed affect their decision. For instance, a desire to
achieve a fixed type size across multiple platforms might force a
sub-optimal choice on some of them.

> ... can we really justify adding int_fastX_t to select
> fast integer types? ...

Even if 'int' is the fastest integer type, it's not guaranteed to be big
enough for any given context. 'int' is not an acceptable substitute for
'int_fast32_t', because on some platforms it's not big enough. That's
even more true of 'int_fast64_t'. There are, of course, even fewer speed
guarantees (read: none) for 'long', so please don't suggest that as an
alternative.
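
As a hypothetical illustration: on an implementation where int is 16
bits the first loop below is broken, while the second works wherever
int_fast32_t exists:

 int total1 = 0;
 for (int i = 0; i < 100000; ++i)       /* overflows where int is 16 bits */
     total1 += 1;

 int_fast32_t total2 = 0;
 for (int_fast32_t i = 0; i < 100000L; ++i)   /* at least 32 bits */
     total2 += 1;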

> ... Has anyone here ever changed the integer
> type they used in a program to anything other than an int (or
> it's unsigned equivalent), on the basis of speed measurements?

I've never bothered, because it was inconvenient, and because there was
no way to do it portably. int_fastN_t is convenient, and it should
become portable once there's sufficiently many C99 implementations out
there.

> 2. Given that most platforms have 8 bit bytes, can we really
> justify adding int_leastX_t? How many people are using C++
> to program machines that don't have 8 bit bytes? What are
> these machines?

There have been machines where the addressable unit was 9 bits or 12
bits, though I've never heard anyone confirm whether there was a C
implementation for those machines, much less C++. There have even been
machines where it was 7 bits, though you can't implement C/C++ with that
byte size. More to the point, there have also been implementations of C
where some integer types were implemented using the mantissa bits of a
floating point type, which was usually not a multiple of 8 bits; I don't
know about C++, though.






Author: Ron Natalie <ron@sensor.com>
Date: Thu, 17 Jan 2002 17:12:39 GMT

"James Kuyper Jr." wrote:
> For instance, a desire to
> achieve a fixed type size across multiple platforms might force a
> sub-optimal choice on some of them.

Tell me about it.  While it was pretty obvious on a PDP-11 that
an int should be 16 bits and on a VAX it should be 32, it was more
fun on bigger architectures.  When we did the compilers for the
Denelcor HEP supercomputer, the "natural" size really was 64.  We
were perplexed as to what to do about the sizes of the integers.
We could do:  short-16, int-32, long-64 or we could have done the
short-16 int-64 long-64.  This meant we had to figure out what to
call the 32 bit type.  There were discussions of "short longs"
and a new keyword called "medium"; we settled on _int32.

Of course things were really fun on the Crays where:
char - 8
short - 64
int - 64
long - 64

Not only was there a naming problem, there were no 16 or 32 bit
integers.  This made it real fun to rewrite things like the TIFF
library, which assumed you could come up with "some" type that
overlaid the binary layout of what was in the file.
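
(The sort of overlay the exact-width types make expressible; the field
names below are invented rather than the real TIFF header, and byte
order and padding still have to be handled separately.)

 struct file_header {      /* hypothetical on-disk record */
     int16_t byte_order;
     int16_t version;
     int32_t first_offset;
 };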

> > ... Has anyone here ever changed the integer
> > type they used in a program to anything other than an int (or
> > it's unsigned equivalent), on the basis of speed measurements?
>
> I've never bothered, because it was inconvenient, and because there was
> no way to do it portably. int_fastN_t is convenient, and it should
> become portable once there's sufficiently many C99 implementations out
> there.
>

Of course, nothing says that it's possible to properly express what you
need with even the C99 types.  We have spent time tuning our app on a
couple of platforms.  If you wrote your program portably, you probably
already have the infrastructure to do the per-platform tuning in place
in your design.

> > 2. Given that most platforms have 8 bit bytes, can we really
> > justify adding int_leastX_t? How many people are using C++
> > to program machines that don't have 8 bit bytes? What are
> > these machines?
>
> There have been machines where the addressable unit was 9 bits or 12
> bits, though I've never heard anyone confirm whether there was C
> implementation for those machines, much less C++.

The computers based on the IBM 709 (which include the UNIVAC/SPERRY/UNISYS
and DEC 10/20 mainframes) all had 36 bit words and a fairly rich partial
word format that let you carve it up into arbitrary subwords.  Of course
8 doesn't pack well into 36, so the C implementations tended to use 9 bit
chars.

The CRAY technically had a 24 bit quantity (the A registers) but the C compiler
just tended to ignore that entirely.






Author: "Al Grant" <tnarga@arm.REVERSE-NAME.com>
Date: Thu, 17 Jan 2002 17:36:40 GMT
"Ron Natalie" <ron@sensor.com> wrote in message
news:3C46FDEF.309EB38F@sensor.com...
> Tell me about it.  While it was pretty obvious on a PDP-11 that
> an int should be 16 bits and on a VAX it should be 32, it was more
> fun on bigger architectures.  When we did the compilers for the
> Denelcor HEP supercomputer, the "natural" size really is 64.  We
> were perplexed as to what to do about the sizes of the integers.
> We could do:  short-16, int-32, long-64 or we could have done the
> short-16 int-64 long-64.  This meant we had to figure out what to
> call the 32 bit type.  There were discussions of "short longs"
> and a new keyword called "medium", we settled on _int32.

I wonder why nobody suggested following Algol 68 and having
"short short int".  It didn't seem to be a major syntactic
problem to add "long long int" later.


