> 1- What does the writer mean by word length?

A word is a fundamental unit of memory, like [a page](http://en.wikipedia.org/wiki/Page_%28computer_memory%29) -- actually, there's an article [on word](http://en.wikipedia.org/wiki/Word_%28computer_architecture%29) too, which I won't regurgitate. The significance to C is, as your author says, that it is not always the same size but is determined by hardware characteristics. This may be one reason the C standard doesn't dictate the literal size of basic types; the most obvious case to contemplate is the size of pointers, which will be 4 bytes on 32-bit systems and 8 bytes on 64-bit systems, reflecting the size of the address space.

> 2- Why are short int and long data types inherently non-portable?

More accurately: they're as portable as C, *but* their size is not standardized, which may make them useless for many applications where a fixed, specific size is required.

> 3- Is typedef a microprocessor directive, and if it is, what is its function?!

No, it's not a processor (or preprocessor) directive; it's handled by the compiler as part of the C language itself. It's a nice piece of [syntactic sugar](http://en.wikipedia.org/wiki/Syntactic_sugar) which enables you to define custom names for types.

> 4- Can I write typedef unsigned char (anything) instead of typedef unsigned char INT8U;?

Yep, that's the idea. Beware that the C standard doesn't even dictate the number of bits in a `char` (only a minimum), although I've never heard of an implementation where it is anything but 8 bits *[but someone in the comments has]*.

> 5- Why did the author code typedef unsigned char INT8U; and then #define UBYTE INT8U? Can't I just use typedef unsigned char UBYTE; directly?

You could, yes. Possibly the author wanted to restrict the number of places such a type is defined. Since `#define` is a pre-processor directive, it might also slightly streamline compilation (although not to a degree that could generally be considered significant). A sketch of both forms follows.
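To make the size questions above concrete, here is a minimal sketch (the `INT8U`/`INT16U`/`UBYTE` names are taken from the question; the printed sizes are platform-dependent by design, which is the whole point):

```c
#include <stdio.h>

/* Fixed-size aliases in the style of the book; the widths in the
 * comments hold on the author's target, but are NOT guaranteed by C. */
typedef unsigned char  INT8U;   /* intended: 8-bit unsigned  */
typedef unsigned short INT16U;  /* intended: 16-bit unsigned */
#define UBYTE INT8U             /* alias-of-an-alias, as in question 5 */

int main(void)
{
    /* These results depend on compiler and hardware, which is exactly
     * why short/long are "non-portable" in size. */
    printf("char:    %zu byte(s)\n", sizeof(char));
    printf("short:   %zu byte(s)\n", sizeof(short));
    printf("int:     %zu byte(s)\n", sizeof(int));
    printf("long:    %zu byte(s)\n", sizeof(long));
    printf("pointer: %zu byte(s)\n", sizeof(void *)); /* 4 on 32-bit, 8 on 64-bit */

    UBYTE b = 255;  /* the #define makes UBYTE fully interchangeable with INT8U */
    INT8U c = b;
    printf("UBYTE/INT16U: %zu/%zu byte(s), c = %u\n",
           sizeof(UBYTE), sizeof(INT16U), (unsigned)c);
    return 0;
}
```

On a typical 64-bit desktop this prints 8 bytes for pointers; on the kind of 16-bit target the MicroC documentation assumes, it could print 2 instead, with no change to the source.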
> 6- There is a double use of typedef unsigned char: one of them is typedef unsigned char INT8U; and the other typedef unsigned char BOOLEAN; Why did he do that?!

Again, the use of typedefs is largely about "sugar"; they can make your code cleaner, easier to read, and (presuming they are done properly) more robust. "Boolean" is a math-derived CS term for a type which has only two meaningful values: zero (false) or not zero (true). So in theory it could be implemented with just one bit, but that is neither easy nor, in the end, efficient (there are no processors with 1-bit registers; they would have to slice, dice, and fake such a thing anyway). Defining a "bool" or "boolean" type is common in C and indicates that the significance of the value is either true or false -- it works well with, e.g., `if (var)` (true) and `if (!var)` (false), since C already evaluates conditions that way (0 and NULL are the only values that will pass `if (!var)`). Using something like `INT8U`, by contrast, indicates you are dealing with a value that ranges from decimal 0 to 255, since it is unsigned. I think putting the `U` up front (`UINT8`) is the more common practice, but if you are used to the concepts it is reasonably clear. And of course the typedef/define is not hard to check.

---

# About `stdint.h`

Integer types are the ones with the greatest range of variation, and in fact the ISO C standard requires that an implementation include definitions for various integer types with certain *minimum* sizes in `stdint.h`. These have names like `int_least8_t`. Of course, types with a truly *fixed* size (not just a minimum) are needed for many things, and most common implementations do provide them. The C99 standard dictates that, if they are available, they must be accessible via names following the pattern `intN_t` (signed) and `uintN_t` (unsigned), where `N` is the number of bits. The signed types are further specified as [two's complement](http://en.wikipedia.org/wiki/Two_complement), so one can work with such values in all kinds of highly portable ways.

As a final note: while I'm not familiar with MicroC, I would not take its documentation as representative of C generally -- it is intended for use in a somewhat restricted and specialized environment. A 16-bit `int`, implied by the typedefs, is unusual, so if you ran that code elsewhere, `INT16U` could be 32 bits, etc. I'd guess MicroC conforms only to ANSI C, which is the oldest and most minimal standard; evidently it has no `stdint.h`.
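For comparison, here is what the same fixed-width idea looks like with C99's `stdint.h` (a sketch: the exact-width `uintN_t`/`intN_t` types are optional and exist only where the hardware supports them, while the `_least` variants are always present):

```c
#include <inttypes.h>   /* PRIu8, PRId16 printf format macros */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t      u8  = UINT8_MAX;  /* exactly 8 bits, unsigned          */
    int16_t      s16 = INT16_MIN;  /* exactly 16 bits, two's complement */
    int_least8_t l8  = 42;         /* at least 8 bits; always available */

    printf("u8  = %" PRIu8 "\n",  u8);
    printf("s16 = %" PRId16 "\n", s16);
    printf("int_least8_t occupies %zu byte(s) here\n", sizeof l8);
    return 0;
}
```

Unlike the hand-rolled `INT8U`/`INT16U` typedefs, these names mean the same thing on every C99 platform that defines them, which is what makes them the portable way to ask for a specific size.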