Hello,
I have noticed that if you add one to a char variable which holds 2^16 - 1, the variable will hold zero as the new value.
Can someone explain to me why this is happening?
Thank you
First you have to understand how computers represent numbers and perform math. For this example let's consider a 4-bit number, but it's the same for 8-bit, 16-bit, 32-bit, etc. I'm only going to consider unsigned numbers here, though it's similar for signed numbers.
A 4-bit number can have the values 0000 through 1111.
When you have the maximum 4-bit number, 1111, and add 1 to it, the result should be 10000. However, this number can't be represented in 4 bits, so any bits beyond the rightmost 4 bits (the least significant bits) are discarded. That leaves 0000.
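You can simulate this 4-bit arithmetic in Java by masking the result down to the 4 least significant bits. A minimal sketch (the class name is just for illustration):

```java
public class FourBitWrap {
    public static void main(String[] args) {
        int max4 = 0b1111;       // 15, the largest 4-bit value
        int sum  = max4 + 1;     // 0b10000 = 16, which needs 5 bits
        int kept = sum & 0b1111; // discard everything beyond the 4 least significant bits

        System.out.println(Integer.toBinaryString(sum)); // 10000
        System.out.println(kept);                        // 0
    }
}
```

The `& 0b1111` mask plays the role of the hardware register: bits that don't fit simply fall off.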
char - and all the other primitive data types - can only hold a fixed number of distinct values. The limits are documented in Oracle's Tutorial on the Primitive Data Types.
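For char specifically, the maximum value is 2^16 - 1 = 65535 (`Character.MAX_VALUE`), and incrementing past it wraps to zero, which is exactly what the original poster observed:

```java
public class CharWrap {
    public static void main(String[] args) {
        char c = Character.MAX_VALUE;  // 2^16 - 1 = 65535
        System.out.println((int) c);   // 65535
        c++;                           // 65535 + 1 doesn't fit in 16 bits...
        System.out.println((int) c);   // ...so it wraps around to 0
    }
}
```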
Floating point values don't wrap: in Java an overflowing float or double goes to Infinity rather than throwing an exception. But the integral types (byte, short, char, int, long) "wrap around" as you have found.
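A quick sketch contrasting the two behaviours (class name is illustrative):

```java
public class OverflowDemo {
    public static void main(String[] args) {
        // Integral type: silently wraps around to the minimum value.
        int i = Integer.MAX_VALUE;     // 2147483647
        System.out.println(i + 1);     // -2147483648

        // Floating point type: overflows to Infinity, no exception.
        double d = Double.MAX_VALUE;
        System.out.println(d * 2);     // Infinity
    }
}
```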
---
I don't really know why... you'd have to ask the language's authors! Perhaps someone here can explain in what sort of context one behaviour (wrap-around vs. overflow to Infinity) is preferable to the other, and especially why one group of data types is handled differently from the other.