In light of the EMI/iTunes announcement, Slate's Explainer, which I am a huge fan of, explores the theoretical audible differences between 256 kbps AAC files and the 128 kbps versions. Christopher Beam says that 256 kbps files, though packed with twice as much data, do not sound twice as sharp as the 128 kbps versions. Agreed: the added info isn't as important to your ears. And according to the tests he cites, you likely can't distinguish anything encoded above 128 kbps anyway. That's where things get fuzzy.
Hedging his statement, he says:
But a listener's ability to distinguish sound quality depends on many factors, like age, hearing ability, and attentiveness, not to mention the style of music and where one listens to it. For example, music with delicate timbres—a string quartet, say—might sound noticeably choppy at lower bitrates, whereas compressing an AC/DC song might not be so bad.
Sounds right, if a little inconclusive and safe. I wish he'd mentioned earbud quality as a factor, too. Little white iPod earbuds definitely won't separate the two rates, but with an expensive set of speakers or buds, the difference comes a lot closer to being apparent.