I am dismissive of the notion that because someone has knocked a couple of orders of magnitude off the work-factor needed to break a well studied crypto algorithm, one should abandon it when the remaining factor still suffices for the job. The most persuasive argument for that notion is perhaps here, which, by the way, is an excellent source that I recommend.
WEP misused RC4, and that misuse is what led to breaking WEP. I think the break yielded no general attack, only an attack on that style of misuse. Certainly there are applications for which RC4 should not be used, and if, in a particular situation, there are well studied alternatives that are just as cheap, RC4 should be abandoned. Economics still holds. In short, ‘broken’ is not a binary attribute of a crypto system, even for DES.
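To make the misuse concrete, here is a minimal Python sketch, not the actual WEP packet format: textbook RC4 plus the WEP-style keying in which a short public IV is prepended to the shared secret, so that successive packet keys are closely related while the secret part never changes. (The IV and key values below are invented for illustration.)

```python
def rc4_keystream(key: bytes, n: int) -> bytes:
    """Textbook RC4: key-scheduling (KSA) followed by output generation (PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # KSA
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for _ in range(n):                        # PRGA
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

# WEP's misuse: a 24-bit IV, sent in the clear, is prepended to the shared
# secret to form each per-packet RC4 key, so packet keys differ only in the
# public IV.  (Values here are made up for illustration.)
iv = bytes([0x01, 0x02, 0x03])     # public, per-packet
secret = b"\x0b" * 13              # 104-bit shared WEP secret (hypothetical)
packet_key = iv + secret
keystream = rc4_keystream(packet_key, 16)
print(keystream.hex())
```

It is this related-key structure, not RC4 itself, that the practical WEP attacks exploited.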
I am not aware of “cracks foretelling total breakage”, which is a common fear. The original cost of breaking DES (in boolean operations) was known from the beginning and still stands. For DES, the algorithm is as good as its key length allows (perhaps one bit less).
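As a back-of-envelope sketch of those magnitudes (the “one bit less” is read here as the DES complementation property, which is my assumption about what is meant):

```python
# A 56-bit DES key gives an exhaustive search of at most 2**56 trials.
full_search = 2 ** 56
print(f"exhaustive DES key search: {full_search:.3e} trials")    # ~7.2e16

# The complementation property, DES(~k, ~p) = ~DES(k, p), lets a
# chosen-plaintext attacker rule out two keys per trial encryption --
# one plausible reading of "perhaps one bit less".
halved_search = 2 ** 55
print(f"with complementation:      {halved_search:.3e} trials")  # ~3.6e16
```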
Cracks reported for SHA1 knock off about 11 bits, but they remain theoretical for lack of the compute power to demonstrate them. And then the concern is only to protect against some opponent who can afford to compute and store 2^71 generation parameter sets while he searches for a useful collision.
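Rough magnitudes, as a sketch: the generic birthday bound for a 160-bit hash is 2^80 hash evaluations, and the reported results shave about 11 bits off that exponent.

```python
# Generic birthday-bound collision search for a 160-bit hash such as SHA-1.
generic = 2 ** 80
# The reported cryptanalysis shaves roughly 11 bits off that exponent.
reported = 2 ** (80 - 11)
print(f"generic collision search: {generic:.2e} operations")    # ~1.2e24
print(f"reported attack:          {reported:.2e} operations")   # ~5.9e20
print(f"improvement factor:       {generic // reported}x (= 2**11)")
```

The reduced figure is still far beyond any demonstration anyone can afford today, which is the point being made above.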
Introducing new crypto algorithms is itself error prone, and that is a reason to stall. Sometimes it is indeed time to move on, however, and early adopters are needed.
Schneier (above link) quotes the NSA saying “Attacks always get better; they never get worse.” True, but worse how fast? Does a ‘broken’ crypto system degrade faster than one not yet broken? I think the evidence is spotty. Rather than the longer hashes Schneier reports, I would love to see more secure hashes of the same length. Perhaps we just don’t know how to do that now.