Literals: What's the point?
So, I just came upon the concept of "literals," and I'm not getting it. Apparently it's the "source code representation of a primitive data type." In other words, the number you typed in the program. I won't ask why that even needs a name; here's what I'm not getting:
It says that you can make a number a long type, for instance, by adding an L after it. So 30000 is an int, and 30000L is a long.
But if you're declaring a variable, you can already type "long x = 30000". Why would you need the extra letter? Even if you're feeding a genuinely huge number to the program through input, you'd have to feed it into a long-type field anyway if it were too big to fit in an int. So why is the L ever needed at all?
As a matter of fact, if I type "long x = <really big number>" into my IDE, it gives me an error that the "integer value" is too big (even though I just declared it as a long) until I append an L to the end. And if I assign a huge number with an L appended to an int variable, it tells me "possible loss of precision" (never mind the fact that it should be too big for an int anyway, which it only complains about if I drop the ending L).
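To show exactly what I mean, here's roughly what I'm typing (3000000000 is just a stand-in for my actual number):
long x = 3000000000;  // stand-in value; error: "integer number too large"
long y = 3000000000L; // fine once I add the L
int z = 3000000000L;  // error: "possible loss of precision"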
I know this has to be a stupid question, I can feel it - the answer is going to be very de-mystifying I'm sure, but right now, I don't get the point of all this.
Thanks a lot,
Re: Literals: What's the point?
Literals aren't limited to numbers. They can be booleans, characters, strings, null, and so on (not a complete list).
They're called literals because they represent a fixed value, written literally in the source code, which cannot be interpreted as anything else. We need a name to refer to what these things are, and that's the name.
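A quick sketch of some of the kinds (variable names are just for illustration):
boolean flag = true;   // boolean literal
char letter = 'x';     // character literal
String word = "hello"; // string literal
Object nothing = null; // the null literal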
There is a big difference between these:
int val = '4'; // not actually the integer literal 4, but the character literal '4' which in ASCII is equivalent to 52
int val2 = 4; // this is the integer literal 4
int val3 = (int) 4f; // this is the floating point literal 4f. Because floats are not implicitly castable to int, we must explicitly cast it.
There are complicated reasons (some of them historical) why integer literals and long literals are treated differently, and I won't pretend to know/understand all of it. My guess is that it has something to do with implementation details of the compiler (especially on older 32-bit systems), and possibly has historical roots in C/C++, where ints originally held 16-bit values and longs held 32-bit values. What I can say for sure is that in Java the compiler works out a literal's type from the literal itself, before it ever looks at the variable you're assigning it to: a bare integer literal is always an int, so a number too big for an int is an error all by itself, no matter what's on the left of the =.
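And here's a minimal sketch of where the suffix matters even when the variable is already a long (milliseconds in 30 days; the variable names are mine):
long ms = 1000 * 60 * 60 * 24 * 30;    // compiles, but the right side is computed with int arithmetic and silently overflows
long msOk = 1000L * 60 * 60 * 24 * 30; // the L makes the whole calculation use longs, giving 2592000000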