Topic: short enum, bool
Author: andys@thone.demon.co.uk (Andy Sawyer)
Date: Fri, 20 Jan 1995 22:17:22 +0000
In article <D2JJ3z.Cvw@ukpsshp1.serigate.philips.nl>
baynes@ukpsshp1.serigate.philips.nl "Stephen Baynes" writes:
[snip stuff about 'short enums']
> Some C and C++ compilers do offer this as an extension. However for space
> saving it is unnecessary - just buy yourself a good compiler - this will
> choose the size of an enum according to the range of the enumeration values
> you specified for it.
I've wondered about this for a while, as it leads me to think that
sizeof( enum ) may be meaningless. Since the compiler is, as you say, free
to choose the size of an enum, what happens in the following case:
enum little { l_lo = 0, l_hi = 1 }; // Will fit in a single bit
enum big { b_lo = 0, b_hi = 256 }; // Needs at least 9 bits
cout << ( ( sizeof( little ) == sizeof( big )) ? "l==b" : "l!=b" ) << endl;
cout << ( ( sizeof( little ) == sizeof( enum )) ? "l==e" : "l!=e" ) << endl;
cout << ( ( sizeof( big ) == sizeof( enum )) ? "b==e" : "b!=e" ) << endl;
Even more curious would be the case of
enum big_x { big_x_lo = 1, big_x_hi = 256 };
Would it be legal for a compiler to implement this as an 8 bit value,
applying a +/- 1 adjustment on conversions to/from the type? What about
a single bit? (Probably not the latter - but the former?)
sizeof( big_x ) == ?;
Any takers?
Regards,
Andy
--
* Andy Sawyer ** e-mail:andys@thone.demon.co.uk ** Compu$erve:100432,1713 **
The opinions expressed above are my own, but you are granted the right to
use and freely distribute them. I accept no responsibility for any injury,
harm or damage arising from their use. -- The Management.
Author: andys@thone.demon.co.uk (Andy Sawyer)
Date: Fri, 20 Jan 1995 22:20:47 +0000
In article <790596622snz@wslint.demon.co.uk>
Kevlin@wslint.demon.co.uk "Kevlin Henney" writes:
>
> Hey, isn't it about time somebody asked about enum inheritance? (HHOK)
> Nobody's suggested adding that to the language for at least a couple of months
> <grin>
>
.. how about being able to derive from int then? <even bigger grin>
(In case you were wondering, I wanted to do this once, many years ago!)
Regards
virtual Andy() = 0;
--
* Andy Sawyer ** e-mail:andys@thone.demon.co.uk ** Compu$erve:100432,1713 **
The opinions expressed above are my own, but you are granted the right to
use and freely distribute them. I accept no responsibility for any injury,
harm or damage arising from their use. -- The Management.
Author: pstemari@erinet.com (Paul J. Ste. Marie)
Date: Fri, 20 Jan 1995 22:20:33 EST
In article <790640242snz@thone.demon.co.uk> andys@thone.demon.co.uk (Andy Sawyer) writes:
> ... I've wondered about this for a while, as it leads me to think that
>sizeof( enum ) may be meaningless.
As far as I know, it is. What does sizeof(struct) mean?
>Since the compiler is, as you say, free
>to choose the size of an enum, what happens in the following case:
> enum little { l_lo = 0, l_hi = 1 }; // Will fit in a single bit
> enum big { b_lo = 0, b_hi = 256 }; // Needs at least 9 bits
> cout << ( ( sizeof( little ) == sizeof( big )) ? "l==b" : "l!=b" ) << endl;
> cout << ( ( sizeof( little ) == sizeof( enum )) ? "l==e" : "l!=e" ) << endl;
> cout << ( ( sizeof( big ) == sizeof( enum )) ? "b==e" : "b!=e" ) << endl;
Implementation (at best?) defined?
> Even more curious would be the case of
> enum big_x { big_x_lo = 1, big_x_hi = 256 };
> Would it be legal for a compiler to implement this as an 8 bit value,
>applying a +/- 1 adjustment on conversions to/from the type? What about
>a single bit? (Probably not the latter - but the former?)
As long as (int) big_x_lo == 1 && (big_x) 1 == big_x_lo, and likewise for
big_x_hi, I rather suspect it's entirely up to the compiler.
Paul J. Ste. Marie,
pstemari@well.sf.ca.us, pstemari@erinet.com
Author: clamage@Eng.Sun.COM (Steve Clamage)
Date: 21 Jan 1995 05:03:42 GMT
andys@thone.demon.co.uk (Andy Sawyer) writes:
>In article <790596622snz@wslint.demon.co.uk>
> Kevlin@wslint.demon.co.uk "Kevlin Henney" writes:
>>
>> Hey, isn't it about time somebody asked about enum inheritance? (HHOK)
>> Nobody's suggested adding that to the language for at least a couple of months
>> <grin>
You can get the equivalent of inheriting from enum under the new rules.
An enum object must contain enough bits to represent all its values.
You can create new values of that enum type, provided they do not fall
outside the range representable in that many bits. Example:
enum error { info=0, warning=500, serious=1000, fatal=1500 };
Any constant or object of type 'error' may hold any value between 0 and
2047, since the highest enumerator, 1500, needs 11 bits. So we can now write
const error resources_low = error(1);
const error obsolescent = error(501);
const error unknown_request = error(1001);
const error out_of_memory = error(1501);
We can extend the enum whenever and wherever we like.
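For instance, a minimal sketch that uses the extended range (assuming the
'error' declarations above):

    // Classify an extended 'error' value by comparing against the
    // declared enumerators; this works because extended values such as
    // out_of_memory (1501) are valid values of type 'error'.
    const char* severity(error e)
    {
        if (e >= fatal)   return "fatal";    // 1500 and up
        if (e >= serious) return "serious";  // 1000..1499
        if (e >= warning) return "warning";  // 500..999
        return "info";                       // 0..499
    }

    // e.g. severity(out_of_memory) yields "fatal".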
> .. how about being able to derive from int then? <even bigger grin>
> (In case you were wondering, I wanted to do this once, many years ago!)
This has been informally proposed, and would make a lot of things
nicer. I do not believe it will be in the first C++ standard.
Maybe some day.
--
Steve Clamage, stephen.clamage@eng.sun.com
Author: clamage@Eng.Sun.COM (Steve Clamage)
Date: 21 Jan 1995 05:34:59 GMT
andys@thone.demon.co.uk (Andy Sawyer) writes:
>In article <D2JJ3z.Cvw@ukpsshp1.serigate.philips.nl>
> baynes@ukpsshp1.serigate.philips.nl "Stephen Baynes" writes:
>[snip stuff about 'short enums']
>> Some C and C++ compilers do offer this as an extension. However for space
>> saving it is unnecessary - just buy yourself a good compiler - this will
>> choose the size of an enum according to the range of the enumeration values
>> you specified for it.
> I've wondered about this for a while, as it leads me to think that
>sizeof( enum ) may be meaningless. Since the compiler is, as you say, free
>to choose the size of an enum, what happens in the following case:
> enum little { l_lo = 0, l_hi = 1 }; // Will fit in a single bit
> enum big { b_lo = 0, b_hi = 256 }; // Needs at least 9 bits
> cout << ( ( sizeof( little ) == sizeof( big )) ? "l==b" : "l!=b" ) << endl;
> cout << ( ( sizeof( little ) == sizeof( enum )) ? "l==e" : "l!=e" ) << endl;
> cout << ( ( sizeof( big ) == sizeof( enum )) ? "b==e" : "b!=e" ) << endl;
The sizeof any enumeration type or object is well-defined in a
program, although implementation-defined. The language rules do
not address things like compiler command-line options to change
the sizes of types. You are on your own if you use such options
inconsistently. (It violates the One-Definition Rule, since type
'big', for example, would have different definitions in different
translation units in the same program.)
There are no language requirements governing the relative sizes
of your two enums. They could be the same size, or 'little' could
be larger than 'big'.
> Even more curious would be the case of
> enum big_x { big_x_lo = 1, big_x_hi = 256 };
> Would it be legal for a compiler to implement this as an 8 bit value,
>applying a +/- 1 adjustment on conversions to/from the type? What about
>a single bit? (Probably not the latter - but the former?)
No. Type big_x is required to be able to represent all values
from 0 through 511 under the new language rules. It must therefore
occupy at least 9 bits.
Type 'little' can be represented in one bit, but every object
must have a size of at least 1. Two different objects of type 'little'
must have different addresses (except for bitfields) so as a practical
matter, the object will take up at least as much space as a char.
(By definition, sizeof(char)==1.)
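If you want to see what your own compiler actually does, here is a minimal
(and purely exploratory) sketch:

    #include <iostream.h>   // pre-standard header, as used in this thread

    enum little { l_lo = 0, l_hi = 1 };   // fits in one bit, but sizeof >= 1
    enum big    { b_lo = 0, b_hi = 256 }; // needs at least 9 bits of range

    int main()
    {
        // Both results are implementation-defined; the language only
        // guarantees the minimums discussed above.
        cout << "sizeof(little) = " << sizeof(little) << endl;
        cout << "sizeof(big)    = " << sizeof(big)    << endl;
        return 0;
    }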
--
Steve Clamage, stephen.clamage@eng.sun.com
Author: dag@control.lth.se (Dag Bruck)
Date: 21 Jan 1995 16:00:30 GMT
>>>>> "A" == Andy Sawyer <andys@thone.demon.co.uk> writes:
A> In article <D2JJ3z.Cvw@ukpsshp1.serigate.philips.nl>
A> baynes@ukpsshp1.serigate.philips.nl "Stephen Baynes" writes:
A> [snip stuff about 'short enums']
>> Some C and C++ compilers do offer this as an extension. However for
>> space saving it is unnecessary - just buy yourself a good compiler -
>> this will choose the size of an enum according to the range of the
>> enumeration values you specified for it.
A> Since the compiler is, as you
A> say, free to choose the size of an enum, what happens in the
A> following case:
The compiler is somewhat constrained by the rules of the C++ working
paper, so we can say a few things about your examples.
A> enum little { l_lo = 0, l_hi = 1 }; // Will fit in a single bit
An enum which is not a bitfield must be an addressable unit, so
sizeof(any enum type) >= 1.
A> enum big { b_lo = 0, b_hi = 256 }; // Needs at least 9 bits
so sizeof(big) >= 2.
A> enum big_x { big_x_lo = 1, big_x_hi = 256 };
A>
A> Would it be legal for a compiler to implement this as an 8 bit
A> value, applying a +/- 1 adjustment on conversions to/from the type?
A> What about a single bit? (Probably not the latter - but the
A> former?)
No, neither is legal, so sizeof(big_x) >= 2.
-- Dag Bruck
Author: Viktor Yurkovsky <n4mation@panix.com>
Date: 21 Jan 1995 21:23:58 GMT
clamage@Eng.Sun.COM (Steve Clamage) wrote:
> > .. how about being able to derive from int then? <even bigger grin>
> > (In case you were wondering, I wanted to do this once, many years ago!)
>
> This has been informally proposed, and would make a lot of things
> nicer. I do not believe it will be in the first C++ standard.
> Maybe some day.
I've seriously considered this as an option in my implementation,
but it does introduce a lot of problems. Also, it is a lot easier
to write a compiler with a restricted set of internal types that
are not derivable from.
-------------------------------
n4mation@panix.com |
Data In Formation Inc. |
|
Victor Yurkovsky |
|
Compiler maker |
|
Special discounts for |
weddings and funerals. |
_______________________________|
Enjoy your compile time. I WILL take it away from you!
Author: clamage@Eng.Sun.COM (Steve Clamage)
Date: 22 Jan 1995 03:58:53 GMT
dag@control.lth.se (Dag Bruck) writes:
>A> enum big { b_lo = 0, b_hi = 256 }; // Needs at least 9 bits
>so sizeof(big) >= 2.
Well, no, because a char might be 9 (or more) bits -- and it is on
some systems (especially those with 36-bit words).
I don't think any basic type is required to have size > 1.
That is, I believe sizeof(long double)==1 is allowed. (Imagine
a system where the smallest addressable unit has more than 64 bits.)
--
Steve Clamage, stephen.clamage@eng.sun.com
Author: kevlin@wslint.demon.co.uk (Kevlin Henney)
Date: Mon, 16 Jan 1995 16:41:18 +0000
In article <3fa97p$b85@ixnews3.ix.netcom.com>
PGoodwin@ix.netcom.com "Phil Goodwin" writes:
[stuff about choosing efficient sizes for enums deleted]
>That behavior takes just a little too much power out of the programmer's hands.
>If an "efficient" size isn't the same size as an 'int' (I apologize for not
>being quite up to date) I think that this would also lead to unexpected
>behavior.
Given that the programmer didn't have that 'power' in the first place, in
either C or C++, nothing is lost I'm afraid.
[...]
>If I remember correctly the DEVMODE structure in the Windows operating system
>uses the same declaration for both 16 and 32 bit environments. Although the
>declarations are the same the sizes of the members are not because of the
>native size of an 'int' on the different platforms. It would be handy to be
>able to declare a member of such a structure as 'short' or 'long' in order to
>have some guarantees about its size that are consistent with the guarantees
>made for the other members of that structure, and it would be nice to be able
>to further modify the type of that member by using an 'enum' or 'bool'
>specifier that limits the values that the member can legally hold.
You get guarantees about minimum precision with short and long, and that's
your lot. A bool can legally hold true or false, and an enum can legally
hold any value from zero up to one less than the smallest power of 2
greater than its highest declared constant. These are also minimum guarantees.
They sound pretty good to me :-)
Only problems need solutions, and since there is no problem here no change
to the language is needed ;-)
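To make the enum guarantee concrete, a minimal sketch:

    // The highest declared constant is 5, so the range rounds up to the
    // next power of 2: a 'flags' value may legally hold 0 through 7.
    enum flags { f_max = 5 };

    flags ok = flags(7);    // fine: 7 is within the guaranteed range 0..7
    // flags bad = flags(8);   // outside the range: behavior not guaranteed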
+---------------------------+-------------------------------------------+
| Kevlin A P Henney | Human vs Machine Intelligence: |
| kevlin@wslint.demon.co.uk | Humans can wreck a nice beach more easily |
| Westinghouse Systems Ltd | |
+---------------------------+-------------------------------------------+
Author: clamage@Eng.Sun.COM (Steve Clamage)
Date: 16 Jan 1995 22:14:41 GMT
bill@amber.ssd.csd.harris.com (Bill Leonard) writes:
>In article <3f9n22$es3@engnews2.Eng.Sun.COM>, clamage@Eng.Sun.COM (Steve Clamage) writes:
>> However, if you know you need to use a smaller size in a struct,
>> use a bitfield:
>>
>> ...
>This is not semantically equivalent to what was desired in all implementations,
Different responses in this thread show that different people have
different interpretations of the original post. I was responding
to how I interpreted it: I want to use less space than what the compiler
likely allocates for a bool or enum. I showed that you do not need a
language extension to accomplish this.
>even those that have the same sizes for char, short, and int. In fact
> struct foo {
> short a1;
> short a2;
> short a3;
> };
>is not equivalent to
> struct foo {
> int a1 : 16;
> int a2 : 16;
> int a3 : 16;
> };
>on implementations where short is 16 bits and ints are 32 bits. The reason
>is alignment. On many machines, the struct containing shorts will require
>an alignment of 16 bits, while the one containing the int bit fields will
>require an alignment of 32 bits. Thus, if you use the struct inside
>another struct, you will not get the same results.
Quite true, and there is no portable solution for that. But then, a
compiler is free to align shorts on 32-bit boundaries as well.
>> If you need a large array of bools or enums and are concerned about
>> total storage, consider a class that packs them as tightly as is
>> appropriate. ...
>This doesn't really address the issue of embedding bools in a structure
>that needs to match (in both size and alignment) an existing structure that
>does not use bool.
>> If you are trying to match an external format with a C or C++ struct
>> declaration, that is always unportable, even among compilers on the
>> same platform. Being able to declare a "char enum" or "bit bool"
>> would not change that, which was what the original post was about.
>Well, I would expect most C++ implementations would guarantee (somehow) that
>a C structure declaration would yield the same layout in C++ as in the C
>implementation on the same platform.
This is in fact required by the draft standard. The layout requirements
for a valid C struct are the same in C++ as in C, for otherwise
compatible compilers.
>With that assumption, then, one could safely say that matching an external
>format specified in C is not unportable if you use exactly the same
>declaration. I think what the original poster was wanting was some limited
>ways that a declaration could be modified without changing its layout.
I did not interpret the original post that way. You cannot get the
exact effect without a language extension. But you can approximate it:
struct foo_base { // match the C declaration of 'foo' exactly
short a1_; // we want a1, a2, a3 to act like enums
short a2_;
short a3_;
};
class foo : public foo_base { // or "private foo_base"
public:
enum1 a1() const { return (enum1)a1_; }
enum2 a2() const { return (enum2)a2_; }
enum3 a3() const { return (enum3)a3_; }
enum1& a1() { return (enum1&)a1_; }
enum2& a2() { return (enum2&)a2_; }
enum3& a3() { return (enum3&)a3_; }
};
BTW, this technique (deriving from a C struct) is a general one for
achieving C compatibility with a C++ class.
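A hypothetical usage sketch (it assumes enum1 was declared as an enum with
an enumerator e1_some before class foo above, and that c_api_call stands in
for whatever the real C header declares):

    extern "C" void c_api_call(foo_base*);   // stand-in for the real C API

    void demo(foo& f)
    {
        f.a1() = e1_some;   // the reference accessor writes through to a1_
        c_api_call(&f);     // with public derivation, foo* converts to
    }                       // foo_base*, and the layout is the C layout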
--
Steve Clamage, stephen.clamage@eng.sun.com
Author: scotty@netcom.com (J Scott Peter)
Date: Mon, 16 Jan 1995 23:34:49 GMT
Kevlin Henney (kevlin@wslint.demon.co.uk) says:
> [stuff about choosing efficient sizes for enums deleted]
> >That behavior takes just a little too much power out of the programmer's hands.
> >If an "efficient" size isn't the same size as an 'int' (I apologize for not
> >being quite up to date) I think that this would also lead to unexpected
> >behavior.
> Given that the programmer didn't have that 'power' in the first place, in
> either C or C++, nothing is lost I'm afraid.
I am the original poster who requested the `short bool' and `short enum'
feature. I think the many replies stating that the feature would add
nothing, is not needed, and would result in more confusion than clarity
are really obfuscating the issue. They imply that I want to do
something special, that is outside the area of normal C/C++, or that I want
more portability than is intrinsically possible, or something like that.
Let me simply restate the issue:
I just want to do with bools and enums what I can already do with ints.
Everything the replies have said of bools and enums applies also to ints:
By default, the compiler chooses the most efficient storage size for an int.
If you want a different size, you don't really *need* short/long/byte
keywords. You could do it with bitfields, you could do it with a specialised
array template, blah blah blah.
The fact is, though, that we have these very useful keywords `long' and
`short', which modify the size of `int' (and `char', which declares a
different size). True, short and long don't guarantee a particular byte
size on every machine. But, using them, one can write interface code
to just about any structure used by a particular operating system on a
particular machine. Furthermore, one can create space-saving arrays without
C++ gymnastics. (And one could even guarantee an absolute byte size
on practically any machine by using machine-specific defines; e.g.
`typedef long int4;' or `typedef int int4;', depending on the machine.)
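For instance, a minimal sketch of that idiom (the platform macro is
hypothetical):

    /* Machine-specific: pick the branch that gives 32 bits on your box. */
    #if defined(MY_32BIT_LONG_MACHINE)   /* hypothetical platform macro */
    typedef long int4;                   /* long is 32 bits here */
    #else
    typedef int int4;                    /* int is 32 bits here */
    #endif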
I just want the same size-specifying ability to extend to the other integral
types.
> Only problems need solutions, and since there is no problem here no change
> to the language is needed ;-)
Yeah, so since we don't *need* short or long ints either, wouldn't the
language have been better off without them? In fact, there *is* a
need I have in particular for them: I'm trying to write C++ wrapper code
for the Amiga OS calls. Like most windowing operating systems, the Amiga's
is very structure-dependent, with dozens of structures around which the
OS calls revolve. It cries out to have a class-oriented interface, with
simple inline member functions, laid on top of it. Anyway, my problem
has nothing to do with OOP, but with just trying to simplify the interface
in general. Tell me, which is more elegant:
#define HEAVE_NORMAL 0
#define HEAVE_CHUNKY 1
#define HEAVE_PROJECTILE 2
...
struct Excretion {
...
short heave_type; // One of the HEAVE_ values
};
Or this:
enum Heave {HEAVE_NORMAL, HEAVE_CHUNKY, HEAVE_PROJECTILE};
struct Excretion {
...
short Heave heave_type;
};
The answer is obvious, except I can't do it that way, because it's short.
These fields are all over the place. Again, for those not getting the point:
It has nothing to do with whether a short-sized variable is more efficient
than an int-sized variable. *I'm trying to interface to already existing
structures!*.
Sorry, bitfields are not good enough. First, they're just ugly. They're
problematic in general: declaring a field 16 bits might not result in the
same alignment as declaring a field `short'. And as for bools, gcc won't
even let me make a bitfield bool of more than 8 bits.
Furthermore, in this case, a `short' int variable already existed in C code
for the API. In order to change it to a bitfield, I have to know how many
bits a short is on this machine, and the alignment behavior it imposes
on surrounding fields. Otherwise, I can't make it an enum, and I lose
convenience.
Short and long exist in C because they're useful. Their usefulness applies
to bools and enums exactly as much as they do to ints, and the language
should reflect that.
And as for `byte': I need that for the same reason: some fields in existing
Amiga structures are byte-sized bools and enums, they just aren't declared
that way. If I could declare a `char enum', that would be fine too. But
it would require changing the way `char' parses (you'd have to allow `char
int'), and then you'd have a confusion regarding overloading: does a `char
enum' overload as a char or as an enum? That's why a new `byte' keyword
would be better: like short and long, it would specify only size, not
underlying type behavior.
--
J Scott Peter XXXIII // Wrong thinking is punishable.
scotty@netcom.com // Right thinking is as quickly rewarded.
Los Angeles // You will find it an effective combination.
Author: kevlin@wslint.demon.co.uk (Kevlin Henney)
Date: Wed, 18 Jan 1995 09:15:56 +0000
In article <scottyD2Iuu2.Lz2@netcom.com>
scotty@netcom.com "J Scott Peter" writes:
>Kevlin Henney (kevlin@wslint.demon.co.uk) says:
>> [stuff about choosing efficient sizes for enums deleted]
>> >That behavior takes just a little too much power out of the programmer's
>> >hands. If an "efficient" size isn't the same size as an 'int' (I apologize
>> >for not being quite up to date) I think that this would also lead to
>> >unexpected behavior.
>
>> Given that the programmer didn't have that 'power' in the first place, in
>> either C or C++, nothing is lost I'm afraid.
>
>I am the original poster who requested the `short bool' and `short enum'
>feature. I think the many replies stating that the feature would add
>nothing, is not needed, and would result in more confusion than clarity
>are really obfuscating the issue. They imply that I want to do
>something special, that is outside the area of normal C/C++, or that I want
>more portability than is intrinsically possible, or something like that.
>
>Let me simply restate the issue:
>
> I just want to do with bools and enums what I can already do with ints.
>
>Everything the replies have said of bools and enums applies also to ints:
>By default, the compiler chooses the most efficient storage size for an int.
>If you want a different size, you don't really *need* short/long/byte
>keywords. You could do it with bitfields, you could do it with a specialised
>array template, blah blah blah.
The difference between bool and enum, and the other integer types, relates
to range and not just byte size: bool takes one of two values, an enum's
range runs up to the next power of 2 above its highest declared constant,
whereas the maximum value of an integer changes with the byte size. If you
change the byte size of a bool or enum, does the range increase? Or does
the byte size just change?
If it is the former, then that is nonsensical. If it is the latter, then this
is not the same as for ints, and so the analogy / prior art you are referring
to does not exist.
Also, the compiler does not necessarily choose the most efficient size
for an int - this is an efficiency myth.
>The fact is, though, that we have these very useful keywords `long' and
>`short', which modify the size of `int' (and `char', which declares a
>different size). True, short and long don't guarantee a particular byte
>size on every machine. But, using them, one can write interface code
>to just about any structure used by a particular operating system on a
>particular machine. Furthermore, one can create space-saving arrays without
>C++ gymnastics. (And one could even guarantee an absolute byte size
>on practically any machine by using machine-specific defines; e.g.
>`typedef long int4;' or `typedef int int4;', depending on the machine.)
These aren't just different sizes, they are different types.
>Tell me, which is more elegant:
>
> #define HEAVE_NORMAL 0
> #define HEAVE_CHUNKY 1
> #define HEAVE_PROJECTILE 2
> ...
> struct Excretion {
> ...
> short heave_type; // One of the HEAVE_ values
> };
>
>Or this:
>
> enum Heave {HEAVE_NORMAL, HEAVE_CHUNKY, HEAVE_PROJECTILE};
>
> struct Excretion {
> ...
> short Heave heave_type;
> };
>
>The answer is obvious, except I can't do it that way, because it's short.
If you were writing this code from scratch, then the answer would be obvious.
However, you are trying to interface to an existing API. The latter leaves
a bad taste, and the former should be rejigged to use consts or an anonymous
enum. Leave the short in there, as it's the OS API. Put your abstraction
at the next level up and use the native structure (which I presume the API
provides) rather than providing your own. Wrap it up.
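Something like this minimal sketch (OsWindow is a hypothetical stand-in for
the native structure):

    // The native structure, exactly as the OS declares it:
    struct OsWindow { short heave_type; /* ... */ };

    enum Heave { HEAVE_NORMAL, HEAVE_CHUNKY, HEAVE_PROJECTILE };

    class Window {             // your abstraction, one level up
    public:
        Heave heave() const  { return Heave(rep.heave_type); }
        void  heave(Heave h) { rep.heave_type = short(h); }
        OsWindow* native()   { return &rep; }  // for passing to OS calls
    private:
        OsWindow rep;          // the OS structure, wrapped, not changed
    };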
>These fields are all over the place. Again, for those not getting the point:
>It has nothing to do with whether a short-sized variable is more efficient
>than an int-sized variable. *I'm trying to interface to already existing
>structures!*.
Use the existing structures, then :-)
+---------------------------+-------------------------------------------+
| Kevlin A P Henney | Human vs Machine Intelligence: |
| kevlin@wslint.demon.co.uk | Humans can wreck a nice beach more easily |
| Westinghouse Systems Ltd | |
+---------------------------+-------------------------------------------+
Author: kevlin@wslint.demon.co.uk (Kevlin Henney)
Date: Fri, 20 Jan 1995 10:10:22 +0000
In article <scottyD2DFML.MDt@netcom.com>
scotty@netcom.com "J Scott Peter" writes:
>Forgive me if this has been addressed already.
Forgiven, but yes it has been.
>In the same way that one can specify an int or unsigned int as long or short,
>one should be able to specify other integral types (i.e. enums and bools) as
>long, short, or char sized.
You're not just specifying the byte size, you are specifying the range.
They are different types with different limits. A bool can only hold one
of two values, and an enum also has a fixed range. Live with the types
as they come.
>This is desirable in order to:
> Save space. Enums are a great convenience, and the new bool type is as
> well. But if one wants to have a char-or short-sized variable, to save
> space in a structure or array, one has to declare a char or short. One
> loses the semantics of the enum or bool.
This is of no use for general programming.
> Conform to operating-system structures. Many structures used by OS
> system calls contain char-or short-sized fields which have the semantics
> of a bool or enum. It would be better to declare them as such in the
> structure definition, but this is currently impossible.
At this level, it would be preferable to use the OS primitives provided
and wrap them up at the next level. Review the program's design instead
of changing the language.
>It is *not* sufficient to have a compiler-flag for `short' enums or bools.
>This would not allow one to choose when and where to make enums/bools shorter
>than int, to conform to a particular structure.
>
>Suggested behavior:
> The following constructs are syntactically equivalent. They serve
> to modify the semantics of a byte/short/int/long variable:
>
> signed
> unsigned
> bool
> enum <tag>
>
> In order to allow byte-sized bools/enums, a new keyword `byte', with the
> same syntactical behavior as `short' and `long', should be created.
>
> This is better than using `char' to declare byte-sized integers, because
> you don't have to change the syntactical behavior of `char'.
> Furthermore, you could then distinguish between ASCII characters and
> byte-sized integers. A byte variable would promote to an int, and
> overload like an int. A byte enum <tag> variable would overload like
> enum <tag>, etc. A char variable would still overload like a char.
The C standard in effect defines a byte as the size of a char. A char is
an integral type, so you would gain nothing. If you want a small integer,
use a signed char; if you want a small unsigned integer, use an unsigned
char. This is
existing practice and no new types are required.
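That is, something like the following (the typedef names are just for
illustration):

    typedef signed char   int8;    /* small signed integer */
    typedef unsigned char uint8;   /* small unsigned integer */

    struct Packed {
        int8  level;   /* byte-sized; promotes to int in expressions */
        uint8 flags;
    };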
> (Without creating a new keyword, one could make `short short' do the same
> thing. Any reasonable programmer would then `#define byte short short').
Oh yeah? ;-)
>Example usages:
> enum greeting { Yo, Mama, Eats, My, Shorts };
>
> greeting a; // int-sized
> short greeting b; // short-sized
> byte greeting d; // byte-sized
> greeting byte e; // same thing
>
> byte bool f;
> bool byte g;
>
> typedef byte greeting grtng;
> grtng h; // byte-sized
What did you just gain with this? AFAICS, nothing :-(
Hey, isn't it about time somebody asked about enum inheritance? (HHOK)
Nobody's suggested adding that to the language for at least a couple of months
<grin>
+---------------------------+------------------------------------+
| Kevlin A P Henney | Money confers neither intelligence |
| kevlin@wslint.demon.co.uk | nor wisdom on those who spend it |
| Westinghouse Systems Ltd | |
+---------------------------+------------------------------------+
Author: scotty@netcom.com (J Scott Peter)
Date: Sat, 14 Jan 1995 01:18:21 GMT
Forgive me if this has been addressed already.
In the same way that one can specify an int or unsigned int as long or short,
one should be able to specify other integral types (i.e. enums and bools) as
long, short, or char sized.
This is desirable in order to:
Save space. Enums are a great convenience, and the new bool type is as
well. But if one wants to have a char-or short-sized variable, to save
space in a structure or array, one has to declare a char or short. One
loses the semantics of the enum or bool.
Conform to operating-system structures. Many structures used by OS
system calls contain char-or short-sized fields which have the semantics
of a bool or enum. It would be better to declare them as such in the
structure definition, but this is currently impossible.
It is *not* sufficient to have a compiler-flag for `short' enums or bools.
This would not allow one to choose when and where to make enums/bools shorter
than int, to conform to a particular structure.
Suggested behavior:
The following constructs are syntactically equivalent. They serve
to modify the semantics of a byte/short/int/long variable:
signed
unsigned
bool
enum <tag>
In order to allow byte-sized bools/enums, a new keyword `byte', with the
same syntactical behavior as `short' and `long', should be created.
This is better than using `char' to declare byte-sized integers, because
you don't have to change the syntactical behavior of `char'.
Furthermore, you could then distinguish between ASCII characters and
byte-sized integers. A byte variable would promote to an int, and
overload like an int. A byte enum <tag> variable would overload like
enum <tag>, etc. A char variable would still overload like a char.
(Without creating a new keyword, one could make `short short' do the same
thing. Any reasonable programmer would then `#define byte short short').
Example usages:
enum greeting { Yo, Mama, Eats, My, Shorts };
greeting a; // int-sized
short greeting b; // short-sized
byte greeting d; // byte-sized
greeting byte e; // same thing
byte bool f;
bool byte g;
typedef byte greeting grtng;
grtng h; // byte-sized
--
J Scott Peter XXXIII // Wrong thinking is punishable.
scotty@netcom.com // Right thinking is as quickly rewarded.
Los Angeles // You will find it an effective combination.
Author: pollardj@jba.co.uk (Jonathan de Boyne Pollard)
Date: 16 Jan 1995 12:06:25 -0000
J Scott Peter (scotty@netcom.com) wrote:
: Forgive me if this has been addressed already.
Not only addressed, but at least one compiler vendor (Metaware) implements
it as an extension to C++.
With High C++, the default for an enum is a "short enum", where the
compiler chooses the minimum-width type necessary. A "long enum" always
gives the enumerated type the same size as int.
( Incidentally, High C++ also has "long long", another extension, for
64-bit integers. )
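I don't have the exact spelling to hand, but presumably something like:

    /* Presumed High C++ extension syntax -- check the Metaware manual: */
    short enum state { off, on };         /* minimum-width type */
    long  enum mode  { m_a, m_b, m_c };   /* always the same size as int */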
: This is desirable in order to:
: Save space.
Which most C++ compilers default to doing anyway.
: Conform to operating-system structures. Many structures used by OS
: system calls contain char-or short-sized fields which have the semantics
: of a bool or enum.
Fortunately, the default integral promotions and the definitions for
operators (such as operator! and operator&&) mean that most operations on
these fields behave sensibly.
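For example (OsRec is a hypothetical OS-style structure):

    struct OsRec { short is_open; };   /* short field with bool semantics */

    int is_closed(const OsRec& r)
    {
        return !r.is_open;   /* is_open promotes to int; operator! gives a
                                sensible boolean result, as do && and || */
    }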
: [ Suggested addition of a 'byte' modifier. ]
:
: This is better than using `char' to declare byte-sized integers,
I think that you are trying to solve a non-problem here, simply in the name
of completeness. I don't know about anyone else, but char, signed char,
and unsigned char have served me well for handling byte-sized data for
years, and apart from the niggles about signedness I've never really wanted
more.
: Furthermore, you could then distinguish between ASCII characters and
: byte-sized integers.
Further to my point above, there are very few (if any) practical situations
where "byte-sized integers" (I think that I can guess what you mean) *need*
to be distinguished from ASCII characters.