That was back in the day when computers were very slow, so every cycle counted. On modern machines, clock speeds are in the multi-GHz range, so losing a few cycles here and there costs literally nanoseconds (and the OS itself saps far more cycles than that anyway). Plus, modern compilers can optimize code better than your average programmer (and even good programmers), and in a fraction of the time.
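As a small illustration of that last point (a sketch, not a guarantee; the exact output depends on the compiler version and flags): mainstream compilers like GCC and Clang at `-O2` will typically recognize this naive summation loop and replace it with the closed-form `n*(n+1)/2`, an optimization few programmers would bother hand-coding.

```c
#include <stdio.h>

/* A naive loop summing 1..n. At -O2, GCC and Clang typically
 * recognize the induction pattern and compile this down to the
 * closed-form n*(n+1)/2 -- no loop at all. Hand-tuning the loop
 * body would gain nothing here. */
unsigned long long sum_to(unsigned long long n) {
    unsigned long long total = 0;
    for (unsigned long long i = 1; i <= n; i++) {
        total += i;
    }
    return total;
}

int main(void) {
    printf("%llu\n", sum_to(1000000ULL)); /* prints 500000500000 */
    return 0;
}
```

You can check this yourself by inspecting the generated assembly (e.g. `gcc -O2 -S`): the loop disappears entirely.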
Okay, if you insist. But people who have programmed in machine-oriented languages like assembly, especially on legacy systems, habitually optimise to free up processor cycles for other work.