So I've been writing music and watching mixing and mastering tutorials. One of the many channels I follow is Rick Beato, a record producer with his own YouTube channel. I was watching this video, and according to Rick you really cannot tell the difference between 44.1 kHz WAV and 320 kbps MP3. He says, "You can only guess."
Post by nausearockpig on Feb 9, 2019 6:08:33 GMT 1
People claim to hear the difference. Maybe they can, maybe they can’t. I don’t think I’ve ever heard it, but that’s not to say it’s not there!
I guess if you took a track, converted it to MP3, burned it to CD, and repeated those steps over and over, it MIGHT degrade the sound to the point it's audible, but I've never bothered to try.
Let’s face it, A LOT of people either listen to compressed music thru high end headphones or listen to FLAC/WAV thru iPhone headphones, there wouldn’t be that many people out there listening to lossless music thru high end speakers on a hifi, or on studio quality headphones....
Also, thumbs up to Rick Beato. Check out his "What Makes This Song Great" series. Awesome.
Last Edit: Feb 9, 2019 6:09:48 GMT 1 by nausearockpig
If you have a lead on Brisbane 21 August 1992 - CT version, for the love of Bob, let me know. Please!
there wouldn’t be that many people out there listening to lossless music thru high end speakers on a hifi, or on studio quality headphones....
Sadly this ^^^^^^^^^^^^^^

MP3 was developed in a time when storage was at a premium and content delivery was primitive (anyone remember dial-up?). Today that isn't really the case, but the format is still used to cut costs for bandwidth and storage; those were its primary concerns at the outset, not the quality of the resulting audio. It's kind of perverse that a codec was developed around an antiquated infrastructure, and yet today most major tech companies design their audio infrastructure around the codec. That's not progress, just laziness and greed.

Back on point though, the perceived quality of the resulting audio is largely dependent on the equipment you listen to it on. Sound cards in computers, cheapo MP3 players, state-of-the-art amplifier circuitry, car radios and so on will all have an impact on what you actually hear. Not to mention the state of your own hearing, which naturally deteriorates as you get older. If you're going to load your phone up with music to listen to on the built-in speakers or some cr@ppy earbuds, then it makes no sense to fill that storage with 24-bit hi-res WAV (or other lossless) files; you simply won't hear any difference. But take that same lossy file set you loaded onto your phone and play it on a decent mid-to-high-end audio system at home, and it will sound tragic.

The thing is, not many people actually do comparisons. They'll just play a source file and assume that's how it should sound. Try it yourself: get a good-quality lossless audio file of something with a good array of instrumentation all the way from the bottom to the top of the frequency spectrum. Make a copy and convert it to a lossy format. Then play them one after the other on your home system without other things to disturb you (washing machines, vacuum cleaners and so on). Play the lossy one first, then the original after. The original will actually seem fractionally slower: not because it is, but because there is so much more information in the lossless audio that your brain takes a tiny bit more time to process it. It's like putting on an audio equivalent of a pair of glasses.

I found a really good comparison of a short guitar sequence that makes the difference very clear. The first sample is uncompressed WAV, the second is compressed Vorbis, the third is compressed MP3. Pay attention to the ringing of the strings: on the compressed samples they are obviously degraded and sound quite "splashy" compared to the original WAV sample, which gives a clean ring, as no information has been stripped from it. The algorithm used for creating MP3s makes a decision for "us" as to what we can and cannot hear, and arbitrarily removes sounds that an artist and/or recording engineer went to great lengths to include in the original source.

Fun fact!! The development of the MP3 algorithm used Suzanne Vega's "Tom's Diner", and in 2015 a composer called Ryan Maguire released a track called "moDernisT" (an anagram of "Tom's Diner") which is just the audio that was stripped out of the original song.
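The moDernisT idea (keeping only what the encoder throws away) can be sketched without a real codec. In the toy below, a crude moving-average low-pass filter stands in for lossy encoding, and the residual is the "stripped" signal. The filter width, tone frequencies and sample rate are made-up illustration values; real MP3 uses a psychoacoustic model, not a simple filter.

```python
import math

def make_tone(freqs, n=1000, rate=8000.0):
    """Sum of sine waves sampled at `rate` Hz (toy stand-in for music)."""
    return [sum(math.sin(2 * math.pi * f * i / rate) for f in freqs)
            for i in range(n)]

def moving_average(signal, width=8):
    """Crude low-pass filter: each sample becomes the mean of its recent neighbours."""
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - width + 1):i + 1]
        out.append(sum(window) / len(window))
    return out

# "Music" with a low (200 Hz) and a high (3500 Hz) component.
original = make_tone([200.0, 3500.0])
lossy = moving_average(original)                      # high frequencies attenuated
residual = [o - l for o, l in zip(original, lossy)]   # what was "stripped out"

def energy(sig):
    """Sum of squares: a rough measure of how much signal is present."""
    return sum(x * x for x in sig)

print(energy(original), energy(lossy), energy(residual))
```

The residual is non-empty: the "codec" really did remove signal, which is exactly what moDernisT makes audible.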
Steve, what I know most about you in almost 10 years of friendship is that you are the king when it comes to audio. I did buy a great pair of speakers (monitors, actually) for recording, so I will try what you say and report back! I appreciate the input greatly. People do NOT realize what a band goes through when mixing and mastering a song. One song, let alone an album. It is VERY time consuming.
Thanks nausearockpig also. Rick is good but my favorite of all is Warren Huart. He is the bomb. He interviewed Junkie XL and his studio is phenomenal.
Check it out if you have time. It's a long video but worth it.
Ok Steve, I listened to that file on my monitors. There is a head-slappingly obvious difference, and the WAV is of course the best. The Vorbis is wobbly and the MP3 is muffled. BUT is that MP3 320 kbps? Because that is where he is saying you can't tell the difference. Because there is 128 kbps etc... and 24-bit vs 16-bit, yeah?
BTW, all of the songs I'm working on are in 44.1 kHz WAV right now.
Oh, and by the way, Warren mentions The Cure in that video. He also mentions Siouxsie, Bauhaus and several others from our genre. He talks about the settings The Cure use on their amps to get their sound. I'll try to find the timestamp so you can all go right to it if you want, and edit this post.
I believe it's a very low bitrate, but as a comparison it's alarmingly obvious, isn't it? Naturally, the higher the bitrate used, the less obvious the difference, but it's still there.

As for 24-bit vs 16-bit, that's something entirely different really. It refers to the number of bits of information per sample. The higher the number, the better the resolution, as you typically have a much clearer picture of the dynamic range (the difference between the quietest and loudest signals in the audio track). Lossy compression has virtually no bearing on that; it only serves to strip "unwanted" information to facilitate a smaller storage requirement and better streaming performance on lower bandwidths.

If you work on your masters at as high a resolution as you can handle (24/48 or 24/96), then you have much more information to work with as you tinker with settings and effects. After you're done playing and are happy with the finished item, you can dither it back down to 16/44.1, which will be more compatible with playback gear. Even working at 24/44.1 is going to give you a good base to work from. Effectively it means you're sampling 24 bits of data 44,100 times per second as opposed to just 16 bits in the same timeframe. But beware: the high-res files will take up a lot more storage space.
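The storage penalty is simple arithmetic: uncompressed PCM size is sample rate x bytes per sample x channels x duration. A quick sketch (stereo, one minute per track, WAV header overhead ignored; the helper name is just for illustration):

```python
def wav_bytes(rate_hz, bits, channels=2, seconds=60.0):
    """Approximate uncompressed PCM size in bytes (header overhead ignored)."""
    return rate_hz * (bits // 8) * channels * seconds

cd    = wav_bytes(44_100, 16)   # 16/44.1 "CD quality", one minute, stereo
hires = wav_bytes(96_000, 24)   # 24/96 hi-res, one minute, stereo

print(f"16/44.1: {cd / 1e6:.1f} MB/min")
print(f"24/96:   {hires / 1e6:.1f} MB/min")
print(f"ratio:   {hires / cd:.2f}x")        # roughly 3.3x more storage
```

So a 24/96 master eats storage more than three times as fast as a 16/44.1 file, which is why dithering down for distribution makes sense.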
Ultimately, it comes down to personal preference though. Your ears are yours & nobody else's. But recording engineers/ artists try to cater for everyone's ears & the myriad systems the work will be reproduced on.
Post by nausearockpig on Jun 14, 2019 12:57:27 GMT 1
So, off the back of a member here questioning why I would FLAC a lossy stream, I thought I'd do a wee experiment.
I imported the TS file of the webrip of The Cure 2019-06-09 Pinkpop (MPEG2 TS stream that was shared on Dime, and probably here) into Audacity, trimmed a random section down to 15 seconds of audio, exported the audio to 24-bit WAV and 8 kbps MP3, imported both tracks back into Audacity, then did a spectral analysis on both. I saw some interesting things.
Firstly, you can see the MP3 track has what looks like a short section of silence at the beginning. Listening to the samples, though, you can't hear it. I guess it could be too short to hear, but I don't know.
If you look at the original audio snip, you can see this "gap" is not present. So I'm guessing that the MP3 conversion created this gap.
Look at the spectrograms of the two tracks, WAV on the left, MP3 on the right. You can see that the MP3 version is truncated at ~1800 Hz, whereas the WAV file drops off way past the 8000 Hz mark. You can also see that the MP3 frequencies peak at -25 dB but go up to -21 dB on the WAV.
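For anyone curious, the spectral view Audacity draws is essentially a Fourier transform: energy per frequency band. Here's a dependency-free toy version using a brute-force DFT on a synthetic two-tone signal (the tone frequencies, rate, and 2000 Hz cutoff are made up for illustration; real tools use an FFT):

```python
import math

def dft_magnitudes(signal, rate):
    """Brute-force DFT: returns (frequency_hz, magnitude) pairs up to Nyquist."""
    n = len(signal)
    out = []
    for k in range(n // 2):
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
        im = sum(-x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
        out.append((k * rate / n, math.hypot(re, im)))
    return out

rate = 8000
n = 256
# Test signal: a 500 Hz tone plus a 3000 Hz tone.
sig = [math.sin(2 * math.pi * 500 * i / rate) +
       math.sin(2 * math.pi * 3000 * i / rate) for i in range(n)]

spectrum = dft_magnitudes(sig, rate)
# Energy below/above a 2000 Hz "cutoff". On a heavily lossy encode the high
# band collapses, which is exactly the truncation visible in the spectrogram.
low = sum(m for f, m in spectrum if f < 2000)
high = sum(m for f, m in spectrum if f >= 2000)
print(low, high)
```

Both bands show strong energy here because the source has content in both; a spectrogram of the 8 kbps MP3 would show the high band gutted.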
Sure, sure, sure piggy, you may say.. but the proof is in the pudding, have a listen to the two audio samples downloadable at the link below and you'll hear the difference. I was shocked.
There's a caveat or two here: it's unlikely anyone would transcode down to 8 kbps. I used such a low bitrate to see whether the MP3 conversion actually loses any audible audio.
I bet I couldn't tell the difference between FLAC and 320 kbps MP3, but that wasn't the point I was making when talking about FLACing a lossy stream. I was trying to keep the original signal as pristine as possible, rather than saying "oh well, it's lossy anyway, why bother keeping it as good as it can be". The point was never "you won't hear the difference"; that's a different conversation.
I'm sure I had another point, but oh well.. one caveat is enough.
Anyway, if anyone here with actual audio engineering skills can weigh in, I'd appreciate a professional opinion. I hope you enjoyed my presentation.
There are two popular Cure shows that are frequently shared as fake FLAC. Almost no one ever complains. They were originally uploaded by David McQuay (pale amber glow) around 2003 on his website as 160 kbps MP3 files. The shows are:
01.05.1982 London - Hammersmith Odeon
30.05.1984 's-Hertogenbosch (complete AUD)
The latter was pitch-fixed, but AFAIK the source is the same fake FLAC as well.
David himself of course didn't share these shows as FLAC; someone downloaded them and started to trade (?) them as the real thing. I have the original MP3s of 30-05-1984 floating around somewhere if someone wants proof.
One good thing about music, when it hits you feel no pain So hit me with music, hit me with music
Very clear explanation. I would add another variable: the recording/mastering/remastering process of the track itself. At some point, most recently remastered music (including The Cure, of course) has had its frequencies really damaged. In those cases there wouldn't really be a big difference between WAV and MP3. (I hope my English is good enough...)
Your English is fine. But what you're talking about is dynamic compression rather than lossy compression; they are two completely different things. There will still be a clearly audible difference between a badly mastered lossless track and the same badly mastered lossy track (depending on the listening environment, as already mentioned). But you are completely right about recent remasters. Most of them are far inferior to the originals, and very few have had a really good job done on them. And here's a top tip for all of you: if you see a vinyl reissue with "digitally remastered" printed on it, don't waste your money. It's the same master as on the CD. More on that here: thecurecommunity.freeforums.net/thread/5535/loudness-war
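For the curious, the difference between the two kinds of "compression" can be made concrete in a few lines. Dynamic range compression changes sample levels but keeps every sample; lossy compression discards signal content entirely. A toy hard-knee compressor (the threshold and ratio are arbitrary illustration values, not any real mastering chain):

```python
def compress_dynamics(samples, threshold=0.5, ratio=4.0):
    """Hard-knee dynamic compressor: above `threshold`, the excess level is
    divided by `ratio`. Every sample survives; only its level changes."""
    out = []
    for x in samples:
        mag = abs(x)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if x >= 0 else -mag)
    return out

quiet_and_loud = [0.1, -0.2, 0.9, -1.0, 0.4]
squashed = compress_dynamics(quiet_and_loud)
print(squashed)  # loud peaks pulled toward the threshold, quiet parts untouched
```

Note the output has exactly as many samples as the input: nothing is thrown away, the dynamics are just squashed. That's the loudness-war problem, and it survives intact in both FLAC and MP3 versions of the same bad master.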