Between this and your previous questions, it sounds like there are several fundamental confusions here:

1. If your program is going to be run on a 64-bit machine, you should compile and unit-test it on a 64-bit machine. Running unit tests on a 32-bit machine can give you confidence that the program is correct in that environment, but it doesn't necessarily mean the code is correct for a 64-bit environment.

2. You seem to be confused about how 32- and 64-bit architectures relate to endianness. 32-bit machines are not always little-endian, and 64-bit machines are not always big-endian. They are two separate concepts and can vary independently.

3. Endianness only matters for single values consisting of multiple bytes: for example, the integer 305,419,896 (0x12345678) requires 4 bytes to represent, and a UTF-16 character usually requires 2. For these, the order of storage matters because the bytes are interpreted as a single unit. It sounds like what you are working with is a sequence of raw bytes (like a checksum or hash). Values like this, where the bytes are not interpreted in groups, are not affected by the endianness of the processor. In your case, casting the byte array to a `long long *` actually *creates* a potential endianness problem (on a little-endian architecture, your bytes will be interpreted in the opposite order), not the other way around. (There is a sketch of this effect at the end of this answer.)

4. Endianness also doesn't matter unless the little-endian and big-endian versions of your program actually have to *communicate* with each other. For example, if the little-endian program writes a file containing multi-byte integers without swapping them, and the big-endian program reads it in, the big-endian program will probably misinterpret the data. It sounds like you think code that works on a little-endian platform will suddenly break on a big-endian platform even if the two never exchange data. You generally don't need to worry about the endianness of the architecture if the two versions never talk to each other. (A sketch of the usual fix for programs that do communicate is also at the end of this answer.)

5. Another point of confusion, perhaps a bit pedantic: a byte does not store a "hex value" versus a "decimal value"; it stores an integer. Decimal and hexadecimal are just two different ways of representing (printing) a particular integer value. It's all binary in the computer's memory anyway; hexadecimal converts easily to and from binary, and decimal is convenient to our brains because we have ten fingers.

Assuming what you're trying to do is print the value of each byte of the array as decimal, you could do this:

```c
#include <stdio.h>

int main(void) {
    unsigned char bytes[] = {0x12, 0x34, 0x56, 0x78};
    for (size_t i = 0; i < sizeof(bytes) / sizeof(bytes[0]); ++i) {
        printf("%u ", (unsigned int)bytes[i]);
    }
    printf("\n");
    return 0;
}
```

Output should be something like:

> 18 52 86 120
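To make point 3 concrete, here is a minimal sketch (not part of the original answer) showing how reinterpreting a raw byte array as one multi-byte integer makes the result depend on the host's byte order. The byte values are arbitrary, and `memcpy` is used instead of a pointer cast to sidestep alignment and strict-aliasing issues:

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    /* Four raw bytes, e.g. part of a hash or checksum. */
    unsigned char bytes[4] = {0x12, 0x34, 0x56, 0x78};

    /* Printed byte by byte, the output is identical on every machine. */
    for (size_t i = 0; i < sizeof bytes; ++i) {
        printf("%02x ", (unsigned)bytes[i]);
    }
    printf("\n");

    /* Reinterpreted as a single 32-bit integer, the value depends on
       the host: 0x78563412 on a little-endian machine, 0x12345678 on
       a big-endian one. */
    uint32_t as_int;
    memcpy(&as_int, bytes, sizeof as_int);
    printf("0x%08" PRIx32 "\n", as_int);
    return 0;
}
```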
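And for point 4: when the two programs genuinely do exchange multi-byte integers, the usual fix is to agree on one byte order for the data itself, independent of either host. A common convention is big-endian "network order", which the POSIX `htonl`/`ntohl` functions convert to and from. A sketch under that assumption (the file/socket framing is illustrative, not from the original answer):

```c
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include <arpa/inet.h>  /* htonl / ntohl (POSIX) */

int main(void) {
    uint32_t host_value = 0x12345678;

    /* Writer side: convert to the agreed wire order (big-endian)
       before the value leaves this machine. */
    uint32_t wire_value = htonl(host_value);
    /* ... wire_value would be written to the shared file or socket ... */

    /* Reader side: convert from wire order back to host order.
       This round-trips correctly on both little- and big-endian hosts. */
    uint32_t decoded = ntohl(wire_value);
    printf("0x%08" PRIx32 "\n", decoded);  /* 0x12345678 everywhere */
    return 0;
}
```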