That was my best guess at what purpose it served. In the example I posted, I believe the u would not be required because the variable foo was declared as unsigned. Is the u used in that instance for consistency/stylistic reasons?
As I understand it, since you are declaring the variable as an unsigned integer, adding the "u" suffix makes no difference in that case.
It does make a difference in arithmetic such as multiplication or division involving negative numbers: the "u" suffix makes the literal unsigned, so the negative operand gets converted to unsigned before the operation. Assuming a 16-bit int (see the sketch after this list):
1) -100 / 10 = -10 (0xFFF6 in 16-bit two's complement), a negative result as expected.
2) -100 / 10u = 0x198F (6543). This is because -100 is first converted to the unsigned value 0xFF9C (65436), which is then divided by 10.
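Here is a minimal, compilable sketch of that effect (nothing is assumed beyond the behaviour of the "u" suffix; the exact unsigned result depends on the width of int, as noted in the comments):

#include <stdio.h>

int main(void)
{
    /* Both operands are signed int, so this is ordinary signed division. */
    printf("%d\n", -100 / 10);     /* prints -10 */

    /* 10u is an unsigned int, so -100 is converted to unsigned before
     * the division (usual arithmetic conversions). With a 16-bit int
     * this is 0xFF9C / 10 = 6543; with a 32-bit int it is
     * 0xFFFFFF9C / 10 = 429496719. */
    printf("%u\n", -100 / 10u);

    return 0;
}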
uint8 foo = 1u;
the 'u' isn't really required, since the data type is already unsigned. The suffix mostly makes sense for use with the preprocessor, e.g.
#define FOO 1u
to avoid misuse (meaning you will get a warning from the compiler),
or to get an implicit conversion, e.g.
#define BAR 1.0f
for float (note the literal needs the decimal point; a plain 1f is not valid C), so that BAR now has float type; a short demo follows below.
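A minimal sketch of how such suffixed macro constants behave in expressions (FOO and BAR as defined above; the variable names are just for illustration):

#include <stdio.h>

#define FOO 1u      /* unsigned int constant */
#define BAR 1.0f    /* float constant */

int main(void)
{
    /* FOO is unsigned, so the whole expression is evaluated in
     * unsigned arithmetic; a compiler may warn if it is mixed with
     * signed operands. */
    unsigned int count = FOO + 41u;

    /* BAR is a float, so the division is done in floating point
     * instead of being truncated to 0. */
    float half = BAR / 2;

    printf("%u %f\n", count, half);  /* prints 42 0.500000 */
    return 0;
}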