Pure functional programming is not about denying time. It's about separating computing time from the programming model's time. Or, equivalently, about separating the computing order from the order of real-world events.
That's why monads are an extremely powerful concept, and not just a hack to "avoid time". Using monads, you can compose evaluations in a certain order (model time), but that doesn't automatically mean they will actually be evaluated in that order (computing time). Well, if you use the IO monad the two will actually coincide. But you are free to define other monads that allow you to do strange things like looking into the future or checking multiple alternatives at once.
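One concrete way to see "checking multiple alternatives at once" is the list monad, where each bind fans out over every alternative. A minimal sketch (the function name is mine):

```haskell
import Control.Monad (guard)

-- Reads sequentially in model time ("pick x, then pick y"),
-- but evaluation explores every branch of the search space.
pairsSummingTo :: Int -> [(Int, Int)]
pairsSummingTo n = do
  x <- [1 .. n]        -- every candidate x
  y <- [x .. n]        -- every candidate y >= x
  guard (x + y == n)   -- prune branches that don't sum to n
  return (x, y)
```

Here `pairsSummingTo 5` yields `[(1,4),(2,3)]`: the code is written as one sequential recipe, but it describes a whole space of evaluations at once.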
Pure functional programming forces you to always make an explicit distinction between these two kinds of time, and that feels bureaucratic to those who are used to conflating them. However, making that distinction usually yields much better interfaces. So there is an extra effort, but also a big gain.
I strongly recommend reading Philip Wadler's "The essence of functional programming" before making undifferentiated claims about the broad concept of time: http://homepages.inf.ed.ac.uk/wadler/papers/essence/essence....
I'm more or less a novice with FP, yet to learn a language more functional than Python, but this, and the article, are along the lines of something I was thinking of. Why exactly is it that I/O is stateful? A program, after all, is something that takes in input and spits out output, right? Why can't it be:
[Byte] -> [Byte]
main [data:rest] = ...
(sorry if I'm making up syntax or types, haven't properly learned Haskell yet). Well, my friends tell me, the problem is that oftentimes you have to deal with timing, exceptions, etc. Well, fine:
[(Byte, Integer, Error)] -> [Byte]
main [(data, time, err):rest] = ...
As you said, no reason actual evaluation has to correspond to the logical order of evaluation.
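For what it's worth, something close to this `[Byte] -> [Byte]` shape does exist in Haskell's Prelude: `interact :: (String -> String) -> IO ()` runs a pure function from the whole (lazily read) stdin stream to the whole stdout stream. A toy example:

```haskell
import Data.Char (toUpper)

-- The entire program logic is the pure function `map toUpper`;
-- `interact` lazily feeds it stdin and writes what it returns.
main :: IO ()
main = interact (map toUpper)
```

The pure core is testable in isolation: `map toUpper "hello"` is `"HELLO"` on every call, regardless of when or whether any actual I/O happens.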
I think this roughly corresponds to the original I/O system in Haskell, based on lazy streams. They were, by all accounts, an absolute bitch to program with. Continuation based I/O was slightly better, but monads were still greeted as liberators when they arrived on the scene.
In Haskell, a function is expected to return the same value whenever it is called with the same arguments, just like sin(pi) gives the same answer every time you call it. If you start writing functions that take a file name and return the bytes, then such a function might return a different value the next time it is called if the file contents changed on disk. The same line of reasoning applies to DB calls, user I/O, anything with random numbers, etc.
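That distinction shows up directly in the types. A small illustration (the function names are mine):

```haskell
-- Pure: same argument, same result, on every call, forever.
area :: Double -> Double
area r = pi * r * r

-- Impure: the IO in the type records that the result may differ
-- between calls even for the same FilePath, because the world
-- (the file on disk) may have changed in between.
bytesOf :: FilePath -> IO String
bytesOf = readFile
```

The compiler won't let you treat `bytesOf name` as a plain `String`; the `IO` wrapper is exactly how the language keeps track of which values depend on when you ask for them.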
But I'm not taking a file name; I'm taking a stream of data, for which the output should always be the same. This should be good enough for applications that deal with standard in and standard out, yes?
Though that's a good point: when it comes time to open files on disk, it inevitably becomes stateful again.
In fact, you can have functions of type [Byte] -> [Byte]; the problem, however, is when you need to get the first set of bytes from a file or stdin. You could sort of read this as a function of type FileHandle -> [Byte], which will only very rarely return the same value twice, let alone be guaranteed to always do so.
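The FileHandle -> [Byte] reading corresponds roughly to `hGetContents :: Handle -> IO String` from System.IO, which is why the result lives in IO. A sketch of using it, with the stream itself processed purely (the helper name is mine):

```haskell
import System.IO

-- Open a handle, lazily stream its contents, and process the
-- stream with a pure function. Only opening and closing live
-- in IO; takeWhile is ordinary pure list code.
firstLineOf :: FilePath -> IO String
firstLineOf path = do
  h <- openFile path ReadMode
  s <- hGetContents h            -- lazy [Char] view of the file
  let l = takeWhile (/= '\n') s  -- pure processing of the stream
  length l `seq` hClose h        -- force the line before closing
  return l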