I don't think necessity is the mother of invention - invention, in my opinion, arises directly from idleness, possibly also from laziness.
    - Agatha Christie

To do nothing at all is the most difficult thing in the world, the most difficult and the most intellectual.
    - Oscar Wilde

Every man is, or hopes to be, an idler.
    - Samuel Johnson
My first encounter with the idea of lazy evaluation was using files in Pascal, back in 1977. A Pascal file is syntactically the same as a pointer. For example:
VAR p: ^CHAR;
VAR f: FILE OF CHAR;
VAR ch: CHAR;
ch := p^;
ch := f^;
Those two assignment statements are both syntactically correct. The first will access the character pointed to by p, and the second will access the character at the current position in the file f.
The problem, though, is that you generally use the READ() procedure to access files. "READ(f, ch)" is defined as equivalent to "ch := f^; GET(f);". So READ() not only accesses the file, it also advances the file position and fetches the next data item. This abstraction works fine with disk files, but when you do a READ() on an interactive file - e.g. stdin in Unix terminology - it always tries to read one character ahead of where you are. If your program wants to read a line of input, Pascal will read the line and then try to read one extra character, to fill in f^. So you hit return and nothing happens.
The solution back then was to make dereferencing a file pointer a lazy operation. When the GET() procedure is executed, all it does is set a flag saying that it needs to do the real GET(). Some time later in the program's execution, when the data pointed to by the file pointer is called for, the flag gets noticed and the real GET() is performed, filling in the data just in time for it to be used. This was fairly simple to implement in the Pascal compiler, and it made interactive files behave the way one would expect.