Topic: larger int types?


Author: mharriso@digi.lonestar.org (Mark Harrison)
Date: 23 Aug 93 18:53:47 GMT
In article <258m8m$m4q@agate.berkeley.edu> jbuck@forney.eecs.berkeley.edu (Joe Buck) writes:
>mharriso@digi.lonestar.org (Mark Harrison) writes:
>
>>Some implementations use "long long" for this.  Will this be addressed
>>by the standard?  Or will C++ be a "32-bit" language for the foreseeable
>>future?
>
>C++ is not a 32-bit language, except for those with "all the world's a
>Vax" disease.  Since that is a common disease, many implementations
>provide "long long".

Unfortunately, this disease *is* common.  Not in my code, and probably
not in your code either.  I suspect, however, that if we were to redefine
longs to be 8 bytes, many of the libraries and support systems we use
regularly would break, taking our well-written programs with them.
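
To make the breakage concrete, here is the kind of (made-up) fragment
that quietly changes behavior when long grows from 4 to 8 bytes; the
record format and names are illustrative only:

  #include <stdio.h>

  /* A file format defined as "4-byte id, 4-byte count".  Writing the
   * struct raw with sizeof(long) == 8 silently changes the on-disk
   * layout, breaking every reader built with 4-byte longs. */
  struct Record {
      long id;       /* assumed to be exactly 4 bytes */
      long count;
  };

  void save(FILE* fp, const Record& r)
  {
      fwrite(&r, sizeof(Record), 1, fp);  /* 8 bytes then, 16 now */
  }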

My compiler vendors (Sun and Lucid) both use the excuse that they are
tracking the ARM, and the ARM doesn't have a method of specifying an
int longer than a long.  Since this is on Unix, they point out,
redefining the size of long is not feasible for the reason mentioned
above.

[Disclaimer: my biggest portable program is a commercial data query
program that runs on VMS, Unix, and MS-DOS.  It supports European
(8-bit) and Kanji (16-bit) character sets, in both ASCII and EBCDIC.
All of this by setting a single TARGET= macro when running make.
I *know* what trouble it is to be really portable.]

So, any hope for being able to declare a "longer than long"?
--
Mark Harrison, mharriso@dsccc.com, (214)519-6517
                        ^^^^^^^^^ No matter what the headers say!!!




Author: spitzak@mizar.usc.edu (William Spitzak)
Date: 23 Aug 1993 20:04:11 -0700
Insane suggestion:

There are only two integral types called "int" and "unsigned int".  However
you can specify how big these are by using bitfield syntax on the type:

int i;  // i is whatever size is most "efficient" on the machine,
  // in the opinion of the compiler designer.
int:43 foo; // foo has at least 43 bits of precision, possibly more

int:12:20 bar; // bar has at least 12 bits of precision, no more than 20

The compilers would come with (compiler-specific) built-in typedefs,
documented in the compiler documentation; these seem to be the most
popular definitions:

typedef int:8:8 char;
typedef int:32  int; /* not really */
typedef int:32:32 long;

The compiler does not have to support all possible field sizes; it
produces a fatal error if a definition is outside what it supports.

In a structure definition the :n:n may be put on the end of a field
name, to match existing bitfield syntax.  There is no guarantee that
an int:12 outside a structure is actually the same size as one inside
a structure.
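
For what it's worth, part of the effect can be faked today with
templates; a rough sketch (the names are mine, only a few exact
widths are mapped, and the mappings are guesses about a typical
implementation):

  // Library-level approximation of "int:N": pick a built-in type
  // with at least N bits.  Unsupported widths fail to compile,
  // mirroring the fatal-error rule above.
  template <int Bits> struct IntAtLeast;
  template <> struct IntAtLeast<8>  { typedef signed char type; };
  template <> struct IntAtLeast<16> { typedef short       type; };
  template <> struct IntAtLeast<32> { typedef long        type; };

  IntAtLeast<16>::type counter;   // at least 16 bits of precision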

Will this work?  Or have I just stumbled into the same morass as
everybody else?

Bill







Author: db@argon.Eng.Sun.COM (David Brownell)
Date: 24 Aug 1993 15:38:02 GMT
> So, any hope for being able to declare a "longer than long"?

... or a "doubled double"?  We now start to see 128-bit floating
point numbers as well as 64-bit ints.  Though Alpha doesn't have the
128-bit floats native, the architecture seems to have plans afoot for
them, and some of the other 64-bit architectures (e.g. SPARC v9) have
those "doubled double" values native.

--
David Brownell                        db@Eng.Sun.COM.
Distributed Object Management





Author: pabloh@hpwala.wal.hp.com (Pablo Halpern )
Date: Tue, 24 Aug 1993 18:09:27 GMT
This is not a complete solution to the problem, but don't forget that you
can define a new class:

  class VeryLong
  {
      long          hiWord;
      unsigned long loWord;

    public:
      VeryLong();
      VeryLong(const VeryLong& v);
      VeryLong(long lo);
      VeryLong(long hi, long lo);
      VeryLong(const char* textrep);

      friend VeryLong operator+ (VeryLong v1, VeryLong v2);
      ...
  };

  class UnsignedVeryLong
  {
    ...
  };

Then, implementations that have a "long long" type (or whatever) could
replace the class definitions above with:

  typedef long long VeryLong;
  typedef unsigned long long UnsignedVeryLong;

Like I said, it's not perfect (a VeryLong might hold more than 64 bits
if long has more than 32, for example), but it may help some people.
Anybody have a functional VeryLong class out there?
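
The carry logic that operator+ needs is short, at least; a sketch,
assuming the hiWord/loWord layout above and 32-bit longs (untested):

  VeryLong operator+ (VeryLong v1, VeryLong v2)
  {
      VeryLong sum;
      sum.loWord = v1.loWord + v2.loWord;  // unsigned: wraps mod 2^32
      sum.hiWord = v1.hiWord + v2.hiWord;
      if (sum.loWord < v1.loWord)          // wraparound means a carry
          sum.hiWord++;                    // out of the low word
      return sum;
  }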

--

- Pablo

-------------------------------------------------------------------------
Pablo Halpern          Permanent: (508) 435-5274   phalpern@world.std.com
                       Thru 3/94: (508) 659-4639   pabloh@wal.hp.com
                       (Mail to either address)
-------------------------------------------------------------------------




Author: mharriso@digi.lonestar.org (Mark Harrison)
Date: 21 Aug 93 14:54:42 GMT
There are currently three sizes of integral types: char, short, and long.
These usually indicate 1, 2, and 4-byte values, respectively.

Now that 8-byte integers are becoming common, how can we represent them?
A simplistic answer is "Redefine sizeof(long) == 8", but this would break
too much existing code to be a realistic solution.

Some implementations use "long long" for this.  Will this be addressed
by the standard?  Or will C++ be a "32-bit" language for the foreseeable
future?
--
Mark Harrison, mharriso@dsccc.com, (214)519-6517
                        ^^^^^^^^^ No matter what the headers say!!!




Author: garry@ithaca.com (Garry Wiegand)
Date: Sun, 22 Aug 1993 05:57:14 GMT
In a recent article mharriso@digi.lonestar.org (Mark Harrison) wrote:
>There are currently three sizes of integral types: char, short, and long.
>These usually indicate 1, 2, and 4-byte values, respectively.

Not to be contrary, but there are currently four choices of integer
type in C and C++: char, short, int, and long.

>Now that 8-byte integers are becoming common, how can we represent them?
>A simplistic answer is "Redefine sizeof(long) == 8", but this would break
>too much existing code to be a realistic solution.

On DEC Alpha machines char, short, int, and long map to 8, 16, 32,
and 64 bits respectively. Works fine, of course, unless one's code is
so unportable as to break when an integral type has *more* than the
minimum precision required by the standard.

The penalty is that you have to think of 'int' as 'sizeof(short) <=
sizeof(int) <= sizeof(long)' rather than as 'the integral type most
"natural" to this machine.'

When 128-bit machines arrive, the language spec might indeed
be forced to take cognizance. But for now we're OK.
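
In other words, portable code checks orderings and minimums, never
exact sizes; a small sketch of the checks one can legitimately make:

  #include <assert.h>
  #include <limits.h>

  int main()
  {
      /* What portable code may rely on: the ordering of the
       * integral types and their minimum ranges, not exact sizes. */
      assert(sizeof(short) <= sizeof(int));
      assert(sizeof(int)   <= sizeof(long));
      assert(LONG_MAX >= 2147483647L);  /* long: at least 32 bits */
      return 0;
  }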

garry

--
Garry Wiegand --- garry@ithaca.com --- Ithaca Software, Alameda, California




Author: jbuck@forney.eecs.berkeley.edu (Joe Buck)
Date: 22 Aug 1993 20:51:34 GMT
mharriso@digi.lonestar.org (Mark Harrison) writes:
>There are currently three sizes of integral types: char, short, and long.
>These usually indicate 1, 2, and 4-byte values, respectively.

You can no longer count on this; many 64-bit machines may define long to
be 8 bytes.

>Now that 8-byte integers are becoming common, how can we represent them?
>A simplistic answer is "Redefine sizeof(long) == 8", but this would break
>too much existing code to be a realistic solution.

Code that assumes long is exactly 32 bits is broken.  Code that assumes
that a long has 32 bits OR MORE is portable.
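
A concrete (made-up) instance of the difference:

  /* Broken: relies on arithmetic wrapping at 32 bits.  With 64-bit
   * longs this computes a different value than a 32-bit machine. */
  unsigned long hash(const unsigned char* p, int n)
  {
      unsigned long h = 0;
      while (n-- > 0)
          h = h * 31 + *p++;  /* wraps at 2^32 only on 32-bit longs */
      return h;
  }

  /* Portable: force the 32-bit wrap explicitly. */
  unsigned long hash32(const unsigned char* p, int n)
  {
      unsigned long h = 0;
      while (n-- > 0)
          h = (h * 31 + *p++) & 0xFFFFFFFFUL;
      return h;
  }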

>Some implementations use "long long" for this.  Will this be addressed
>by the standard?  Or will C++ be a "32-bit" language for the foreseeable
>future?

C++ is not a 32-bit language, except for those with "all the world's a
Vax" disease.  Since that is a common disease, many implementations
provide "long long".

If you have code that relies on a value being exactly 32 bits, the
portable way to program is to use a typedef like "int32".  The person who
ports your code then must change only one typedef.  This is a case where
streams are superior to printf.  Consider this:

  typedef ??? int32; // int? long? short?

  int32 value;

  printf("value = %d\n", value);
  printf("value = %ld\n", value);
  cout << "value = " << value << "\n";

Notice that only one of the two printfs can be right (is value an int or
a long?), but that the cout statement is always right.
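
The port-time choice is usually made once with the preprocessor; the
macro name here is made up, and each platform's real test would differ:

  #if defined(LONGS_ARE_64_BITS)  /* hypothetical config macro */
  typedef int  int32;             /* int is 32 bits on such machines */
  #else
  typedef long int32;             /* long is 32 bits on most others  */
  #endif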



--
Joe Buck jbuck@ohm.EECS.Berkeley.EDU




Author: bagpiper@netcom.com (Michael Hunter)
Date: Sun, 22 Aug 1993 22:11:16 GMT
: If you have code that relies on a value being exactly 32 bits, the
: portable way to program is to use a typedef like "int32".  The person who
: ports your code then must change only one typedef.  This is a case where
: streams are superior to printf.  Consider this:

: typedef ??? int32; // int? long? short?

:  int32 value;

:  printf("value = %d\n", value);
:  printf("value = %ld\n", value);
:  cout << "value = " << value << "\n";
try:

  printf("value = %ld\n", (long)value);

And yes, you will pay for the cast when value isn't a long, but hey,
if you are writing any sort of code that needs to be efficient and are
using printf, something is seriously wrong.
: Notice that only one of the two printfs can be right (is value an int or
: a long?), but that the cout statement is always right.
I like streams better too, but this is a bad example of why....
: Joe Buck jbuck@ohm.EECS.Berkeley.EDU
 Michael Hunter bagpiper@netcom.com





Author: gjb@fig.citib.com (Greg Brail)
Date: Mon, 23 Aug 1993 16:22:53 GMT
In article <258m8m$m4q@agate.berkeley.edu> jbuck@forney.eecs.berkeley.edu (Joe Buck) writes:
>mharriso@digi.lonestar.org (Mark Harrison) writes:

>>Some implementations use "long long" for this.  Will this be addressed
>>by the standard?  Or will C++ be a "32-bit" language for the foreseeable
>>future?

>C++ is not a 32-bit language, except for those with "all the world's a
>Vax" disease.  Since that is a common disease, many implementations
>provide "long long".

I disagree. There are many applications that benefit from a 64-bit
integer type. Unfortunately, there's no type that's guaranteed to be
at least 64 bits across platforms. It'd be great to see the standard
address this.

>If you have code that relies on a value being exactly 32 bits, the
>portable way to program is to use a typedef like "int32".  The person who
>ports your code then must change only one typedef.  This is a case where
>streams are superior to printf.  Consider this:

This is a good solution regardless.

    greg

--
Greg Brail ------------------ Citibank -------------------- gjb@fig.citib.com




Author: mharriso@digi.lonestar.org (Mark Harrison)
Date: 23 Aug 93 19:59:11 GMT
In article <CC5CJF.4px@ithaca.com> garry@ithaca.com (Garry Wiegand) writes:
>In a recent article mharriso@digi.lonestar.org (Mark Harrison) wrote:
>
>>Now that 8-byte integers are becoming common, how can we represent them?
>>A simplistic answer is "Redefine sizeof(long) == 8", but this would break
>>too much existing code to be a realistic solution.
>
>On DEC Alpha machines char, short, int, and long map to 8, 16, 32,
>and 64 bits respectively. Works fine, of course, unless one's code is
>so unportable as to break when an integral type has *more* than the
>minimum precision required by the standard.

Precisely.  And too much existing code breaks when this is the case.
I'm not arguing that this is good; I'm just repeating the reason
people give me for why they can't change ("break") their compilers.

>The penalty is that you have to think of 'int' as 'sizeof(short) <=
>sizeof(int) <= sizeof(long)' rather than as 'the integral type most
>"natural" to this machine.'
>
>When 128-bit machines arrive then the language spec might indeed
>be forced to take cognizance. But for now we're OK.

*You* may be OK, but I'm stuck with compiler vendors who tell me they
won't add a 64-bit integer to their compiler because it's not a part
of the standard.  All the world is not an Alpha. :-)
--
Mark Harrison, mharriso@dsccc.com, (214)519-6517
                        ^^^^^^^^^ No matter what the headers say!!!