
which bitrate?

Started by The Man With Brass Eyes, September 27, 2004, 10:48:03 AM


gazzyk1ns

With VBR you specify a quality that the encoder should produce, and it will use as many bits as it thinks it needs to achieve that quality. Obviously this can vary greatly - a quiet piano solo needs far fewer bits to reach a given quality than thrash metal does.

ABR (average bitrate) is a form of VBR where you specify what you want the average bitrate to be - useful when you want VBR-style quality but have strict space requirements.
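A back-of-envelope calculation shows why ABR suits a fixed space budget (a sketch only - real files also carry frame headers and ID3 tags):

```python
# Rough size of an MP3's audio stream: bitrate times duration.
def stream_size_bytes(bitrate_kbps, seconds):
    return bitrate_kbps * 1000 // 8 * seconds

# A 4-minute track at a 192 kbps average:
print(stream_size_bytes(192, 240))  # 5760000 bytes, about 5.5 MiB

# With ABR (or CBR) you can budget a whole collection up front; with
# plain VBR the per-track size depends on how hard the music is to encode.
```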

Fraunhofer encoders are OK (there are a few), probably the "best of the rest", but LAME is light years ahead. LAME is also the only encoder where quality is a primary goal and which has been tuned for music via the kind of blind listening test mentioned above; other MP3 encoders were just written to do a job for a commercial application. There's a half-decent encoder called Gogo which is based on LAME but tuned for speed with some quality compromises... it sounds good, but for whatever reason the version of LAME it's based on was about three years old and a beta. Oh well.

The question "At what bitrate does MP3 reach CD quality?" is both an easy and a hard question to answer. The actual answer is "never": like you said, it's a lossy encoder, so lots of the original will be discarded whatever quality settings you specify. The trick is finding out whether you can tell that those things have been discarded - your pet bat might appreciate you encoding to CBR 320, but not many humans would benefit.

I don't mind typing about this at all and I'm not trying to get you to stop asking questions, but I'd bet my house on "--alt-preset 192" with LAME 3.90.3 never producing any artifacts you'll hear. If your portable doesn't like VBR then use "--alt-preset cbr 192".

That guide linked up there for use with CDex is very good; you can replace "--alt-preset standard" with what I recommended above if you like. I use "--alt-preset standard" myself - it's about as good as MP3 quality is ever going to get and usually produces files at about 220kbps. There are loads of code-level tweaks applied with that setting, and on paper it has quite a few advantages over any other. I'm aware it's probably overkill, though; I doubt I'd ever tell the difference between that and what I'm recommending to you... but I've got enough HD space to use the setting I do, so I always just plump for that.
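For reference, those recommendations spelled out as full command lines (a sketch - assumes a `lame` binary on your PATH, and the filenames are placeholders):

```python
# The LAME presets recommended above, as argument lists.
# "--alt-preset 192" is the ABR form; "--alt-preset cbr 192" forces CBR.
PRESETS = {
    "abr_192":  ["lame", "--alt-preset", "192", "in.wav", "out.mp3"],
    "cbr_192":  ["lame", "--alt-preset", "cbr", "192", "in.wav", "out.mp3"],
    "standard": ["lame", "--alt-preset", "standard", "in.wav", "out.mp3"],
}

# To actually encode, run one with subprocess.run(PRESETS["standard"]).
print(" ".join(PRESETS["cbr_192"]))
```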

Elastic Spastic Shashlik

Thanks, you guys. Thanks a bunch.

I was using a bitrate of 128 for my jukebox and it has been fine except on a couple of tracks where the excessive cymbals sounded like they were underwater at times. But now you've got me thinking about it. You bastards. I'm now convinced I've got to rip my CDs again at 192.

I've got a Creative Zen Extra 30GB jukebox. The Creative MediaSource software supplied does a great job of encoding. I've only been disappointed three times at 128kbps: "National Express" by The Divine Comedy has underwater cymbals, as do most tracks on Electric Guitarist by John McLaughlin. Recently I downloaded The Beatles' Abbey Road and Rubber Soul using BitTorrent (yes, I have the albums on vinyl... I'm not into piracy). The tracks were originally encoded at 256, so I converted them to 128. The result is shit - there are whining noises present throughout. Has anybody else experienced such distortion when converting a high-bitrate track down to 128?

In any case, I've got over 2000 tracks on the jukebox and still over 20GB to play with, so I figure I may as well re-rip the lot at 192.

I haven't got round to doing any of these "tests" yet.
Regarding "artefacts I can hear" – I'm still listening to these 128 MP3s (and this is all without comparing them to anything else), and I can't detect any artefacts. I've bunged the volume up and listened hard. The cymbals go "tink", the drums sound right... (you can probably guess I haven't got a Masters in music science here).

I'm really keen to find out what "I'm not hearing", as I bet my ears have become accustomed to 128kbps MP3s. With headphones on it's a different experience from listening to a CD on a hi-fi in a room - some songs I've "missed" become apparent when I hear them at home on the hi-fi.
The other thing with MP3s is that, at times, you're not hearing "an album" in the original sequence as mastered on the disc.

Yeah, I suppose so – all lossy encoders will remove some degree of the original... and the only way to have the purest sound is either to listen to the uncompressed CD or to the corresponding WAV.

Me and my pet bat are not on speaking terms at the moment.  I don't really want to talk about it at the moment.  He's been dissing the Hives again... something about the Makeup being better.

Asking questions is good – thanks for the answers. It puts ideas in my head and educates my simple mind.

Quote from: "Elastic Spastic Shashlik"I was using a bitrate of 128... fine except on a couple of tracks where the excessive cymbals sounded like they were underwater at times.

Tell you what, the Libertines have just sprung up in my mp3 playlist now.  Take something like "Horrorshow" – real mad cymballing.  I don't have the original to compare to here, but it does sound slightly faulty – like continuous hissing (it's not that fast).

The "re-ripping" thing: I might well be in the same boat as you.  Probably around 20gigs... </Sergeant Pepper's Bleeding Heart's Club Band>.

Quote from: "Elastic Spastic Shashlik"converted them to 128

Oh boy. That's bad news - re-encoding an already-encoded file is trouble. Lossy encoders throw away parts of the waveform that they guess people won't hear, so re-encoding an already-lossy file throws away even more.
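A toy model of why transcoding 256 down to 128 hurts: quantize some numbers coarsely, then re-quantize the result on a different grid, and the combined error is worse than encoding to the coarse grid directly. (Plain scalar quantization, nothing like actual MP3 coding, but the compounding works the same way.)

```python
import random

def quantize(xs, step):
    # Snap each value to the nearest multiple of `step` - the lossy part.
    return [round(x / step) * step for x in xs]

random.seed(1)
samples = [random.uniform(0.0, 15.0) for _ in range(10000)]

direct = quantize(samples, 5.0)                  # encode once, coarsely
cascade = quantize(quantize(samples, 3.0), 5.0)  # fine encode, then re-encode

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

print(mse(samples, direct) < mse(samples, cascade))  # True: two passes lose more
```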

(I figure if I keep using the word lossy, it sounds like I'm knowledgeable and clever)

(listen to me, I sound like a mother... now go to your room and think about what you have done!!)

Mr Colossal

Yes, you're right - saving an MP3 that was previously an MP3 really fucks up the quality. It doesn't just do it a tiny bit, either. I've made tunes where I used some samples that were originally downloaded as MP3s, then saved as WAV to be used by my sequencing program.

When I got around to finishing the tune and saved it as an MP3, the samples that were MP3s originally had a real high-pitched, tinny screech to them.

I guess it's similar to making copies of a video tape: if you make a copy of the copy, and keep going, the quality will get poorer and poorer.

Here's a very in-depth and technical experiment, with waveform examples and graphs and everything. That should clear up a few things, or alternatively confuse the fuck out of you.


http://www.eecs.umich.edu/~holtk/mp3tfd/

They use the word 'lossy' too ; )

EDIT: I've just noticed that the tests jump from the original to 256 and then 128, missing out 192. But you can still see the difference from the original to 128, and the difference between 256 and 128 will just be an exaggerated version of the difference between 192 and 128.

Quote
The fundamental method behind MPEG audio compression is to take advantage of the inability of the ear to perceive a sound that is much weaker than another sound which is temporally in close proximity, i.e. a weaker sound will tend to be hidden under a stronger neighboring sound. Although one approach is to simply remove sounds that are hidden under strong sounds, the alternative that MPEG takes is to include the weaker sounds, but allow a higher degree of distortion. Since the distortion will be hidden under the stronger sound, the ear is less likely to notice a slightly distorted weak sound than the complete absence of the weak sound. Hence, the method that MPEG audio uses is to vary the amount of signal quantization in different frequency bands so that the weaker sounds are still present, but at a lower quantization level than that of the stronger signals.



So I'm assuming the smaller the bitrate, the more sounds are considered weak, and thus rendered with coarser quantization and more distortion. As the high and low ends of the frequency range seem more prominent to us than the midrange, I guess more of this will appear to be affected - but seeing as bass is low and often distorted anyway, we tend to notice the difference in the higher end first?

The 'tinny phase' effect could be accounted for by the missing/weaker sounds being filled in with more distortion... well, that's my take on things anyway.
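The band-wise quantization the quote describes can be sketched crudely: keep the weak sound, but spend fewer quantization levels on it because the strong neighbour masks the extra distortion. (Toy numbers, not the real MPEG psychoacoustic model.)

```python
def quantize(value, step):
    return round(value / step) * step

# Two spectral bands: a strong tone and a weak one nearby in frequency.
strong, weak = 0.90, 0.07

# Fine quantization for the strong band, coarse for the masked weak band.
strong_q = quantize(strong, 0.01)  # error ~0
weak_q = quantize(weak, 0.05)      # 0.05, error 0.02

# The weak sound survives (it isn't simply deleted)...
print(weak_q != 0)
# ...but carries far more distortion, hidden under the strong tone.
print(abs(weak - weak_q) > abs(strong - strong_q))
```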

And back to something I posted earlier: I read that it's impossible for the average human ear to detect the difference between anything higher than 192kbps and CD quality. Ripping at anything higher would be like somebody not folding their clothes when they put them in the drawers, or something.

gazzyk1ns

The key word there is average. There's a lot of... I'm not sure how to put it... "confusion" regarding this. It's very difficult not to sound like some kind of audio snob in this sort of discussion; I always feel like I'm claiming to be some kind of encoding perfectionist with extremely good hearing.

Just to reiterate, I would agree that when using a good encoder (i.e. LAME), 192 CBR is "as good as perfect" for 99.99% of people (including me).

But the confusion comes from people and websites, who have the right idea and experience with lossy compression quality, misunderstanding what causes which artifacts.

With any lossy compression, the "widely understood" scenario is true - the lower the bitrate, the worse the results will sound. With MP3, artifacts become less noticeable when you get above 128 and virtually disappear at 192. The format is good enough for that to be the case.

But the format isn't good enough to eliminate some artifacts, no matter what the bitrate - the number of bits available for the encoder to use on a given sample isn't the problem. MP3 has poor time resolution, so an artifact called "pre-echo" always has the potential to occur. The bitrate of the file does have some effect: when a higher bitrate (or VBR quality) is specified, the encoder will use more "short blocks", which are what they sound like - a sample made up of short blocks has higher time resolution than the same sample made up of long blocks. There is obviously a "bloat" penalty for using short blocks, hence how many are used depends on the bitrate/quality specified.
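The long-block/short-block trade-off can be sketched with a toy transform coder - a plain DCT with scalar quantization, nothing like a full MP3 encoder, but the smearing mechanism is the same:

```python
import math

def dct(x):
    """Orthonormal DCT-II (the analysis transform)."""
    n = len(x)
    return [
        (math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n))
        * sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
        for k in range(n)
    ]

def idct(c):
    """Orthonormal DCT-III (the exact inverse of dct above)."""
    n = len(c)
    return [
        c[0] * math.sqrt(1.0 / n)
        + sum(c[k] * math.sqrt(2.0 / n) * math.cos(math.pi * (i + 0.5) * k / n)
              for k in range(1, n))
        for i in range(n)
    ]

def code(block, step):
    """Lossy round trip: transform, quantize coefficients, transform back."""
    return idct([round(c / step) * step for c in dct(block)])

N, STEP = 64, 0.1
signal = [0.0] * N
signal[48] = 1.0  # a sharp transient near the end of the frame

long_block = code(signal, STEP)  # one 64-sample block
short_blocks = []                # eight 8-sample blocks
for b in range(0, N, 8):
    short_blocks += code(signal[b:b + 8], STEP)

def pre_echo(rec):
    # Error energy in the silence *before* the transient.
    return sum((signal[i] - rec[i]) ** 2 for i in range(48))

# Quantization noise smears ahead of the transient only when the whole
# frame shares one long block; short blocks confine it to the transient.
print(pre_echo(long_block) > pre_echo(short_blocks))  # True
```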

Pre-echo isn't the only artifact that can occur at any bitrate, but I won't pretend to fully understand what the others are or how to "spot" them. I think in the past I've tried listening to "problem samples" over at Hydrogen Audio and been unable to hear anything. Some people obviously can, though.

The basic thing I'm trying to get across is that the MP3 format has, and will always have, problems with perfectly encoding some things. Fortunately, most of us will never notice these "problems".