
If you look very closely at my previous post, you’ll notice that I initialize a 128-bit integer with a 64-bit value. The 128-bit unsigned integer represents the internal state of a random number generator. Why not initialize it to a 128-bit value? I was trying to keep the code simple. A surprising limitation of C compilers, at least of GCC and Clang, is that you cannot initialize a 128-bit integer from a 128-bit integer literal. You can’t directly print a 128-bit integer either, which is why the previous post introduced a function print_u128. The code

```c
__uint128_t x = 0x00112233445566778899aabbccddeeff;
```

produces the following error message:

```
error: integer literal is too large to be represented in any integer type
```

The problem isn’t initializing a 128-bit number to a 128-bit value; the problem is that the compiler cannot parse the literal expression 0x00112233445566778899aabbccddeeff. One solution to the problem is to introduce the macro

```c
#define U128(hi, lo) (((__uint128_t)(hi) << 64) | (lo))
```

…
