WanderingKid
Feb 27, 2005

lives here...

IronChef Chris Wok posted:

2) What are balanced cables, and why do they cost a billion dollars?

It's a cable with two signal-carrying conductors instead of one. Each end is terminated in 3 places instead of 2. So instead of tip (signal) and sleeve (ground) on 1/4" jacks, you have tip (+ve signal), ring (-ve signal) and sleeve (ground).

It's called balancing because the impedance of the source and load at either end of both signal conductors is the same. Therefore, any interference will induce an equal noise voltage in both wires. The receiver passes the difference between the two signal-carrying lines and rejects whatever is common to both lines (the noise voltage).

Balanced cables are not expensive in the sense that you can get cheap as poo poo balanced cables just like you can get cheap as poo poo unbalanced ones. It's just that the balanced variety tends to be a little more expensive than the unbalanced variety. I picked up 8 x 6.0 meters of the cheapest ones Thomann were selling, which worked out at €7.30 each I think, and I'm pretty much done synthwise. They aren't going to be stepped on or flexed a lot, so whatever.

Short version: you use balanced cables for common-mode noise rejection. You need gear capable of differential signalling for it to work; otherwise it's just like any other unbalanced cable. You can totally cheap out if you want to.
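To make the common-mode rejection concrete, here's a toy sketch (pure Python arithmetic on sample lists, not any real audio API): the same interference lands on both conductors, and the receiver's subtraction recovers the signal almost exactly.

```python
import random

def send_balanced(signal, noise):
    """Differential pair: hot carries +signal, cold carries -signal.
    The same interference (common-mode noise) lands on both wires."""
    hot = [s + n for s, n in zip(signal, noise)]
    cold = [-s + n for s, n in zip(signal, noise)]
    return hot, cold

def receive_balanced(hot, cold):
    """Differential receiver: (hot - cold) / 2 == signal.
    Common-mode noise cancels exactly in this idealized,
    perfectly impedance-matched case."""
    return [(h - c) / 2 for h, c in zip(hot, cold)]

signal = [random.uniform(-1, 1) for _ in range(1000)]
noise = [random.uniform(-0.5, 0.5) for _ in range(1000)]  # induced interference

hot, cold = send_balanced(signal, noise)
recovered = receive_balanced(hot, cold)

worst_error = max(abs(r - s) for r, s in zip(recovered, signal))
print(f"worst-case error after rejection: {worst_error:.2e}")
```

Real cables only approximate this, since the impedances are never perfectly matched, but that's the whole trick.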

Hippie Hedgehog posted:

2. How exactly is it "balanced"? Looks like an ordinary RCA lead to me.

I'm not sure. I think they took the whole "3 lines" thing rather literally and are attempting to sell 3x separate unbalanced RCA to RCA cables. Hey, if the sleight of hand works they sell 3 times as much cable right?

WanderingKid fucked around with this message at 18:07 on Jan 19, 2011


WanderingKid
Feb 27, 2005

lives here...
That has more to do with amplitude than anything else. Bass at high amplitude is going to thump your chest, but I can't see many people having living arrangements where they can get away with playing music that loud in their own home.

Most people I know do not want or need a very powerful sound system. I have a pair of 100 watt desktop monitors (Dynaudio BM5as) for production work and I barely ever used them at home. I never cranked them because it would just piss everyone off. You can get desktop speakers that go much louder, but if it's for home use, what's the point? My house isn't designed to be an auditorium and they sound like poo poo in small rooms anyway.

For the past year, a couple of buddies and I have been renting studio space, and now I just leave my desktop monitors in the studio.

WanderingKid fucked around with this message at 15:17 on May 25, 2011

WanderingKid
Feb 27, 2005

lives here...
What does FFT/block size have to do with anything in this thread? :stare:

WanderingKid
Feb 27, 2005

lives here...
I use FFTs nearly every day (mainly in GlissEQ) and I can't for the life of me figure out what you are trying to say...

WanderingKid
Feb 27, 2005

lives here...

Neurophonic posted:

Agreed. A simple trick to making a drum more 'kicky' in PA circles is a tight Q, 1.5 to 3dB boost at 91Hz. It gives more of that 'hammer knocking wood' sound, particularly from front loaded horn cabinets. Quite often, you'll end up notching down the harmonic range to prevent horrible ringing from the hits affecting your vocals or guitars.

Bass drums and most drum-type sounds are atonal, so you can't post specific EQ settings because they will change depending on the drum sound.

Let's take a Roland TR-808 bass drum:

[schematic of the TR-808 bass drum voice]

It's a control voltage + trigger pulse input going into a mixer, which determines how loud the initial impulse into a positive feedback oscillator is. That goes into a low pass filter (LPF) and voltage controlled amplifier (VCA) in series.

When you increase the positive feedback, the decay of the drum gets longer. If you keep increasing it (beyond what a TR-808 lets you do with the decay controls) then it will just sustain to infinity and you will have a constant sine wave.

This oscillator is sometimes called a bridged T network filter. There's a low pass filter with input from the oscillator and then the mixer, probably so that changing the decay by decreasing positive feedback does not decrease the pitch of the oscillator as well.

At short decays, this drum sound doesn't have a fixed pitch. At long decays, the drum sound decays to a sine wave, and that part of the sound does have a constant pitch reference that you can tune an instrument to. If you sample a TR-808 you can pitch it up and down in your sampler and play bass drum hits as notes with long decays (like in old school drum and bass).
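Here's a rough numerical sketch of that idea. This is not the actual 808 circuit, just a generic two-pole resonator in Python where a feedback coefficient stands in for the decay control: below 1.0 the hit decays, and as it approaches 1.0 the hit rings out toward a constant sine.

```python
import math

def drum_voice(freq_hz, feedback, n_samples, sr=44100):
    """Two-pole resonator excited by a single impulse (the trigger).
    feedback < 1.0 -> decaying sine; feedback -> 1.0 -> rings forever.
    Output is not normalized to [-1, 1]."""
    w = 2 * math.pi * freq_hz / sr
    y1 = y2 = 0.0
    out = []
    for n in range(n_samples):
        x = 1.0 if n == 0 else 0.0            # trigger impulse
        y = x + 2 * feedback * math.cos(w) * y1 - feedback * feedback * y2
        y2, y1 = y1, y
        out.append(y)
    return out

short = drum_voice(55.0, 0.995, 44100)    # tight 808-ish thump
long_ = drum_voice(55.0, 0.99999, 44100)  # decay so long it's nearly a sine

# After one second the tight hit has died away; the long one still rings.
print(round(max(abs(s) for s in short[-1000:]), 6))
print(round(max(abs(s) for s in long_[-1000:]), 3))
```

The constant-pitch part of the long decay is exactly why you can tune an instrument to a sampled 808 with the decay cranked.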

When you get into tunable bass drums that give you lots of modulation and decay control, it becomes impossible to say "+3 dB boost at 90 Hz with a Q factor of 1" because there is no constant pitch reference.

Either way, you can pitch a snare drum up and down but it isn't harmonic, so you can't (for example) tune a guitar to it. EQ is a strange thing because it has a constant, static effect on music whose frequency and harmonic content is always in flux.

It's easy to dial up settings that make one song sound subjectively better, but then you skip to a different song by a different artist and it subjectively sounds like rear end, because all of the pitch references, fundamental oscillating modes, proportions of the mix etc. are different. Say you boost treble on a bass-heavy song, and for a while it's banging. Then you skip to a treble-heavy song; now the treble is overpresent and your ears start burning.

EQ and production? Re-evaluate it on a case by case basis. EQ on an entire mix is best kept wide band and low gain, for slight emphasis. If you are using EQ for full-blown sound shaping in the mixdown, then your recording and sound design is probably fubar'ed.
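For anyone curious what a single EQ band actually is under the hood, here's the standard peaking-EQ biquad from the RBJ Audio EQ Cookbook in Python. The 90 Hz / +3 dB / Q = 1 numbers are just example settings, not a recipe; the point is that the boost is exact only at the center frequency and falls away on either side.

```python
import cmath
import math

def peaking_eq(f0, gain_db, q, sr=44100):
    """Biquad coefficients for a peaking EQ (RBJ cookbook formulas)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / sr
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    return b, a

def gain_at(f, b, a, sr=44100):
    """Magnitude response in dB at frequency f (z = e^{-jw})."""
    z = cmath.exp(-1j * 2 * math.pi * f / sr)
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return 20 * math.log10(abs(num / den))

b, a = peaking_eq(90.0, 3.0, 1.0)       # +3 dB at 90 Hz, Q = 1
print(round(gain_at(90.0, b, a), 2))    # the boost is exact at center: 3.0
print(round(gain_at(5000.0, b, a), 2))  # far away, the band barely touches
```

Which is exactly why settings dialed in for one kick drum do nothing useful for a different one: the energy just isn't sitting under the bell anymore.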

WanderingKid fucked around with this message at 11:14 on May 27, 2011

WanderingKid
Feb 27, 2005

lives here...
Stuff like jitter is really an engineering concern, with engineering solutions. I have no idea why hobbyists or even music pros need to think about it. Dan Lavry summed it up over at the prosoundweb forums years and years ago. He pretty much said that the only way to evaluate jitter is to measure it, and the equipment needed to measure timing discrepancies in the picosecond range is very costly (several tens of thousands of dollars, apparently). Realistically, nobody can measure jitter in their own home.

I'm pretty sure he also said that it was a much bigger concern in markets outside professional audio, with mission critical goals - high speed telecommunications, medical sciences etc.

WanderingKid
Feb 27, 2005

lives here...
The first Google hit is a pretty good, no-bullshit explanation of what early reflection is. It's a property of reverberation. Figures 3.77 and 3.78 give you some idea of just how much of the sound you hear is not direct from the source, and that's a hypothetical room with no floor or ceiling and no objects inside it.

I'd love to spend a few minutes in an anechoic chamber, which is a completely dead room designed so you only hear sound direct from the source. I remember reading a comment by Bruce Swedien where he said that most people are amazed at how dull and soft everything sounds in such a room.

If you increase the dimensions of our hypothetical room, it takes longer for the sound wave to travel to the walls and reflect back. Eventually the time delay will be great enough that you perceive the first (early) reflection as an echo.

Sound also attenuates with distance, so the bigger the environment, the louder the source has to be to project across the room, reflect off a wall or some solid object within the room, and still be heard. This is one reason why you can stand in the nave of a cathedral and hear an echo when you yell, but not when you whisper.
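The distance-to-delay arithmetic is simple enough to sketch. These are free-field numbers that ignore absorption, and the room dimensions are made up for illustration:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def first_reflection_delay_ms(source_to_wall_m, wall_to_listener_m,
                              direct_path_m):
    """Extra time (ms) the wall bounce arrives after the direct sound."""
    reflected = source_to_wall_m + wall_to_listener_m
    return (reflected - direct_path_m) / SPEED_OF_SOUND * 1000

def level_drop_db(near_m, far_m):
    """Inverse-square spreading loss between two distances (free field)."""
    return 20 * math.log10(far_m / near_m)

# Small room: the bounce arrives ~9 ms late and fuses with the direct sound.
print(round(first_reflection_delay_ms(2.0, 4.0, 3.0), 1))
# Cathedral-sized path: ~175 ms late, heard as a distinct echo.
print(round(first_reflection_delay_ms(40.0, 40.0, 20.0), 1))
# A reflection that travels 30 m arrives ~30 dB down on the same
# sound heard from 1 m, which is why the whisper's echo disappears.
print(round(level_drop_db(1.0, 30.0), 1))
```

The usual rule of thumb is that reflections arriving more than about 30-50 ms after the direct sound start being heard as separate echoes rather than reverberation.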

WanderingKid
Feb 27, 2005

lives here...
You should paint go faster stripes on your cables. Improves velocity factor, but only if you have aerodynamic plugs too.

WanderingKid
Feb 27, 2005

lives here...
They need to make edible versions of those rice paper wraps so I can rebalance my tongue on the sub-molecular level and take my Kobe beef cuisine to the next level. Advantage: it's not a fire hazard after you've eaten it.

WanderingKid
Feb 27, 2005

lives here...
I don't know if that forum is real or what, but hot drat, post a link. Looks like a riot of a party over there.

WanderingKid
Feb 27, 2005

lives here...
I guess the point is that you can't trust the validity of any of the information, because the ratio of right to wrong is such that you are better off fishing for opinions and then guessing at the correct answer yourself using applied logic.

Gearslutz is like this. There are real pros on that forum, and technology insiders like Paul Frindle and Dan Lavry, and they are pretty good at explaining the concepts behind digital audio as far as that's possible without math. But for every Paul Frindle there's a hundred posters who are walking exhibitions of the Dunning-Kruger effect. You have to carefully parse everything that is said for factual errors, formal errors and cognitive biases, which is exhausting.

WanderingKid
Feb 27, 2005

lives here...
1210s were the business. When I used to spin records I had a beat to poo poo 4th hand pair that were still trucking when I sold them. Everywhere I went to spin records there was a pair of beat to poo poo 1210s that kept on trucking after what looked like 15 years of raining pint glasses, cigarette/spliff ash and heavy handed DJs.

I'd still have at least one of them if vinyl wasn't so freakishly expensive.

WanderingKid
Feb 27, 2005

lives here...
What's wrong with the tone arms? I've never had a problem with them as long as they aren't broken and haven't been tampered with (like people fiddling with the pivot screw). :|

WanderingKid
Feb 27, 2005

lives here...

Jerry Cotton posted:

The non-beat-to-poo poo units also work pretty well :) But the main thing is that it also works really well as a home player (which is probably why DJs adopted them). Anyone with wobbly floors can attest to this.

1210s were the first decks where I learned to mix by riding the pitch only, mainly because I previously had cheap Gemini belt drives where I could slow the platter with my index finger without shaving it off. Also nipple-twisting kind of worked. On 1210s the motors just keep on powering on and oh god why won't it slow down make it slow dow-. Whip the crossfader over <TRAIN WRECK> pretend like nothing happened.

They were expensive back then, so I could only afford hand-me-downs (which were still kind of expensive even though they looked like poo poo). A few years ago I remember seeing people dumping their 1210s for like 120 euros a pop and I was tempted to buy one for old times' sake. But I really don't want to get back into vinyl. I don't want to pay 5 quid for singles and like 12 for EPs. Yeesh. :)

WanderingKid
Feb 27, 2005

lives here...
Are 1200s prone to mistracking? That's news to me. :\

Simple and age-old test: plonk a spirit level on your deck and make sure it's level and flat. Take off the cartridge and unscrew the counterweight until it's about to come off (but don't take it off). Set the anti-skate to 3 and release the tone arm. Turn the anti-skate off and the tone arm should swing all the way in. Set the anti-skate to 3 again and the tone arm returns to where it was.

If it gets stuck along the way then the arm isn't set right, or someone has tinkered with the pivot screws. Time to get your tone arm assembly replaced, because it's cheaper and easier to just buy a new tone arm. You need to de-solder/solder like 4 wires. I had to do it in the beginning because my tables were trashed, but after that they were good until I sold them.

WanderingKid fucked around with this message at 08:03 on Jan 2, 2012

WanderingKid
Feb 27, 2005

lives here...
Hmmm, that's to do with setting the anti-skate properly, which has nothing to do with the above.

WanderingKid
Feb 27, 2005

lives here...
Oh ok, that makes sense. Also agreed on the second part. Part of the whole 1210 thing stems from them being ubiquitous, I guess. The 1210 is a bit like Pro Tools: every post production facility or recording studio has some form of Pro Tools rig, so it just becomes convenient when all your mates and every place you want to play have the same gear. I can relate to that because I don't use Pro Tools for production. I use FL Studio, which is not an industry standard, so it's difficult for me to do stuff with other people. :\

I don't think it's as much of an issue now, what with everything going digital.

WanderingKid
Feb 27, 2005

lives here...
Sometimes when you guys talk, I feel like I don't know anything anymore, despite producing my own music for 8 years. I reckon a lot of this stuff shouldn't ever be described in words, because everyone has a different idea of what "bright" and "detailed" sound like. Terms of reference like these are only useful if you keep their internal logic to yourself.

If you have to use words to describe what something sounds like, you gotta put it into practice so you make a connection in your brain between what you are reading, seeing on your screen and hearing in your headphones.

I don't understand what is meant by "hearing frequency response". I use FFT analysis a lot, so I am familiar with some of the concepts as they relate to music production. I have a basic understanding of the maths behind Fourier transforms, but any sound engineering student would have me beat.

WanderingKid fucked around with this message at 13:44 on May 23, 2012

WanderingKid
Feb 27, 2005

lives here...
In production, however, you have a lot of sound shaping control, so when people talk about a song sounding muddy, it is something you have control over during the mixdown with EQs and filters and just good mixing technique. When I mix something and it sounds like crap, I usually don't think of the problem as being with my speakers; it's almost always because my mixing was sloppy. If the hihats don't have enough sibilance you can compensate by amping up the higher frequencies with an EQ, or by just mixing better. Part of the reason hihats get lost in the mix is that they are being masked by other instruments and you have failed to make them stand out.

In the end I think a lot of the issues come down to mixing technique. If you listen to the finished mix on a set of speakers that are really different from the ones used to mix the song, then of course it will sound different, but after so many years of mixing I've gotten into the habit of thinking about the whole thing in very relative terms. I use reference tracks for everything, so it's not terribly important what speakers I'm using to mix. I just need some anchor (the reference track) to tell me when I'm going way off base.

For example, the last reference track I used was Under the Influence (Chemical Brothers), which has enormous bass. Why? I was writing a dance track and I wanted enormous bass. I often mix at home on crappy earbuds where you don't really feel that bass, so you tend to be mixing blind. If my mix has way more bass energy than Under the Influence, however, I know I've done something wrong, because if I play it on a soundsystem that can put out big bass, it will be too much.

So I think of the whole thing in relative terms, not in specific terms, whereas that glossary attempts to nail down very subjective and relative things by giving them absolute definitions.

The other thing I never do is say stuff like "to fix weak sounding kick drums you need to boost 60 Hz and 180 Hz by x dB", because put simply, it will be different for every kick drum. There are approximate ranges where it is appropriate to use EQ, but I couldn't be exact unless I had a specific sound to work with. I never think of sibilance in terms of an exactly defined frequency range because it varies depending on the sound. To de-ess vocals, the range where you want de-essing to occur will vary depending on the singer and, to a lesser extent, the microphone used to record their voice. Really good technical vocalists can work with any mic (Michael Jackson is an oft-used example because he could do barnstorming performances on anything and regularly used cheap mics like the SM58).

WanderingKid fucked around with this message at 14:04 on May 25, 2012

WanderingKid
Feb 27, 2005

lives here...
Regardless of whether it's digital or not, you cannot restore something that was never present to begin with. So if you really are overloading a mic, then the only way to fix that properly is to do the recording again, probably with a less sensitive mic or with a -10 dB pad.

Similarly, you cannot restore dynamic range to a song that is brickwalled in the mix. There's nothing there to restore, since it was destructively edited to be the way it is, deliberately or otherwise. If brickwalling wasn't your intention, then the cold hard truth is you need to redo the mix.

Lastly, an increase in dynamic range is one possible byproduct of using an expander. It is basically a noise gate with an envelope, so you use it to de-emphasize low level noise in a recording. But there are limits to what you can do with it and it is not a fix for a bad recording. If you have low level noise in your recording and you want to get rid of it, the proper way to address the problem is to go back to the recording and eliminate the noise at the source. So if you are recording your voice in your room and your computer fan can be heard whirring away in the background, the best thing to do is not record so close to your computer.
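Here's a toy downward expander in Python to show the "noise gate with an envelope" idea. The threshold, ratio and time constants are arbitrary example values, and the quiet "fan noise" stand-in is just a low-level tone:

```python
import math

def downward_expander(samples, threshold_db=-40.0, ratio=4.0,
                      attack=0.01, release=0.1, sr=44100):
    """Attenuate material below the threshold by ratio:1.
    A smoothed envelope follower decides the current level, so the
    gain doesn't chatter on and off like a hard gate would."""
    env = 0.0
    a_coef = math.exp(-1.0 / (attack * sr))
    r_coef = math.exp(-1.0 / (release * sr))
    out = []
    for x in samples:
        mag = abs(x)
        coef = a_coef if mag > env else r_coef
        env = coef * env + (1 - coef) * mag       # envelope follower
        level_db = 20 * math.log10(max(env, 1e-9))
        if level_db < threshold_db:
            # below threshold: push the signal down further
            gain_db = (level_db - threshold_db) * (ratio - 1)
            out.append(x * 10 ** (gain_db / 20))
        else:
            out.append(x)                          # loud material untouched
    return out

# A loud tone followed by a second of low-level "noise": the noise gets
# pushed way down while the loud part passes through unchanged.
loud = [0.5 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(4410)]
hiss = [0.001 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
processed = downward_expander(loud + hiss)
print(f"noise tail max after expansion: {max(abs(v) for v in processed[-1000:]):.2e}")
```

Note it only de-emphasizes the noise during the quiet parts; whatever noise is present underneath the loud material stays there, which is exactly why it can't fix a bad recording.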

When people point out flaws in professionally recorded and produced music, it is sometimes the case that some people do not regard them as flaws. For better or worse, these things just are what they are. The imperfections and the mistakes are part of what makes it unique and part of what makes it human.

WanderingKid
Feb 27, 2005

lives here...

Jerry Cotton posted:

Regarding recorded music, there are some records where one needs to turn up the <gasp> loudness knob just to hear all the instruments being played at sensible volumes. If you don't, you'll end up asking the band "hey when did you get a horn section?" after the first live gig you attend and the answer is: "We've always had them". These are almost always self-produced, of course.

The thing about self production is that it can be a blessing and a curse, but it's all one package, for better or worse.

If you know what you are doing, then you will have more control over how your songs sound when recorded. Production really becomes an extension of your song writing process (which can change how you go about writing songs).

The bad part is that you know at every level how the illusion is constructed, so you don't see it the same way other people do. It's easy to make errors of judgement, or to spend a lot of time and effort on parts of the mix that, in the scheme of things, nobody really cares about except you.

A good way to experience this limbo state is to just have a go yourself; you can get a lot of the tools you need via trials and freeware. It is common for bands to sound different live, even when the intention is to recreate the recorded sound as faithfully as possible, so it is inevitable that some parts of the song will have different emphasis and you will notice some parts more than others compared to the record.

The simple fact is that you can't have everything in a mix at uniform amplitude. If something is getting drowned out, that is not necessarily a bad thing in itself; it's part of the illusion you are trying to create. If you want to draw attention to one sound, you have to recess all the other sounds so they do not compete for the listener's attention. Beyond a few maxims there isn't, strictly speaking, a list of rules you should adhere to when mixing.

WanderingKid
Feb 27, 2005

lives here...

longview posted:

It may just be the live mixing is that way in the first place sometimes, but a combination of eq (based on the sound and the known characteristic of the microphone used) and dynamics processing made a big difference in making it sound less flat.

That is normal. If you use an EQ with drastic low and high shelves, you are de-emphasizing all the bass and treble, so everything sounds like it's playing through a telephone. You have just distorted the sound in a very localized way.

If you boost the mid range a tonne, it will also sound like it's playing through a telephone. It will just be vastly louder until you normalize, so there is an aspect of this that is highly relative.

The point is that if you are listening to a song and the bass drum isn't banging hard enough, you can use an EQ to emphasize certain parts of the drum (and everything else in the mix that "overlaps" the same frequency range). Those EQ settings are specific to that one song. If you play a different song with a different sounding bass drum, your EQ settings are now useless for emphasizing the bass drum, because it doesn't have the same emphasis across the same frequency range.

To emphasize this new bass drum, you'll have to change all your EQ settings. If you try to fix mix problems with post processing, you will have to create new post processing settings for every song, and there's no guarantee it will work for the intended purpose (since the arrangement of sounds in every song is different).

All the EQ I do when mixing my own songs is on a per-song basis. This is also the reason I don't like to change sounds mid-mix: it throws off all your signal processing and automation, which can have far-reaching effects.

If I mix a song and decide I don't like the bass drum, swapping the drum sample for another one creates a lot of problems, because I mixed everything around that particular sound. All my reactive signal processing was set up with that sound present, so I have to follow the entire chain of signal processors connected in any way to the bass drum and may have to rework all the settings.

If you find yourself wanting to change specific aspects of a mix after you've mixed it then it is often easier to just start over rather than try to follow the tangle of threads.

In the end, a track that has been mixed with an anemic bass drum has some other focal point for the listener's attention. I can't say whether that was deliberate, because I didn't mix it. To me it is what it is, and if it ever needed fixing, the proper place to do it was in the mixdown. Once it's baked and released, it is what it is, for better or worse. There is some limited correction you can do with EQ to compensate for room modes, but that's an acoustic science and has nothing really to do with EQing songs.

WanderingKid
Feb 27, 2005

lives here...

Gromit posted:

My Yamaha amp has a setting for playback of MP3s and the like called "compressed music enhancer mode". The manual says:

"Enhances your listening experience by regenerating the missing harmonics in a compression artifact. As a result, flattened complexity due to the loss of high-frequency fidelity as well as lack of bass due to the loss of low-frequency bass is compensated, providing improved performance of the overall sound system."

Presumably that's some sort of EQ tweak rather than magic, but I don't know a thing about the subject.

That's a load of handwaving to avoid having to say that your Yamaha receiver has a digital signal processor (DSP) chip in it, which crunches a non-programmable suite of sound effects. Honestly, it could be anything, and if they wanted you to know what it was, it would be easier for them to just say so.

If I had to guess from the description, I'd say it's a multiband harmonic exciter effect, which is ehhh. It's basically a (subtle) controlled distortion effect that is frequency dependent, so you can move the bands around and decide how much distortion and phase shift you get in each band. Except it's designed for someone who knows nothing about sound design, so they take away all the control and just give you a "wow!" button which you press to turn it on or off. They might give you some presets to cycle through. Shrugs.

And rather than get into the ins and outs of distortion, equalization and phase shift, Yamaha uses vague but strangely appealing language, like how wine connoisseurs talk about mahogany overtones. This way, they don't have to explain why you should buy a soundsystem with really low THD and then deliberately add distortion after the fact to make things sound (subjectively) better.

WanderingKid fucked around with this message at 17:22 on Jul 4, 2012

WanderingKid
Feb 27, 2005

lives here...

longview posted:

Regarding 192 kHz/24-bit: this has theoretical advantages. At 192 kHz you get the same effect as using one of those "oversampling" CD players, moving the sampling frequency up simplifies low-pass filter design. If that provides any noticeable improvement is questionable, but it's definitely easier to design a good filter if it doesn't have to have as sharp a drop-off.

From a practical perspective, I don't even use a 96 kHz native sampling rate when recording and producing music, because the file sizes are huge and your CPU load just takes a poo poo if you are using heavy post processing.
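The file-size point is just arithmetic. A quick sketch (raw stereo PCM, ignoring container overhead; the 5-minute track is an arbitrary example):

```python
def wav_bytes(sample_rate, bit_depth, channels, seconds):
    """Raw PCM size in bytes (ignores the ~44-byte WAV header)."""
    return sample_rate * (bit_depth // 8) * channels * seconds

minutes = 5
for rate, bits in [(44100, 16), (96000, 24), (192000, 24)]:
    size_mb = wav_bytes(rate, bits, 2, minutes * 60) / 1e6
    print(f"{rate / 1000:g} kHz / {bits}-bit stereo, {minutes} min: {size_mb:.0f} MB")
```

So one 5-minute stereo track goes from roughly 53 MB at CD quality to about 346 MB at 192 kHz/24-bit, and that's before you multiply by every channel in a big project.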

A large part of what people associate with sound fidelity comes down to the art of musical performance, recording and production. Some friends gave me stems to mix several years ago, and I remember mentioning that the drums weren't recorded very well; there was an audible buzzing in the snare mic. They also had a professional mix the track, which he did in (I poo poo you not) 2 hours, and it was so much better than my attempt I wanted to cry. Biggest serving of humble pie I've been given to date. He used exactly the same source material I did.

All that stuff related to sampling theorem is an engineering problem with engineering solutions and has been largely solved for and on behalf of consumers.

Modern sound recording/design software tends to do things like oversampling without you even knowing it, a bit like how Windows manages virtual memory without the end user ever having to dick around with Windows internals. It has been abstracted to the point where the end user doesn't see it happening or need to understand how it works.

At some point I took an interest in the mechanics of sampling, but you need to absorb decades of research and application of theory. It's not particularly useful to a musician. It's useful for building the tools that musicians use to make the music that consumers buy.

WanderingKid
Feb 27, 2005

lives here...
Yeah, consumers don't need to concern themselves with any of that stuff, like anti-aliasing filters. I don't even need to think about aliasing, really.

Lots of VST signal generators/processors have per plugin oversampling now. The most noticeable difference is how much of a poo poo your computer will take when you 8x oversample everything.

WanderingKid
Feb 27, 2005

lives here...

cheese-cube posted:

Yeah that's what I thought. You'd have to be running on some pretty lovely hardware to encounter buffer under-runs.

Edit: or have your settings seriously misconfigured.

You don't need lovely hardware, although you can get dropouts because your PC is very old and you are running very expensive signal processing, or because your soundcard has lovely drivers or misconfigured ASIO or DMA buffer settings at high CPU loads.

High CPU load can be the result of many things: badly designed signal processors/generators, inefficient/expensive processes (like real time convolution), stupid amounts of oversampling. Those are all at the software design level. I don't design software, I just use it as a tool to make sounds.

On the user side, your music projects and workflow could just be inefficient, i.e. bad channel routing and use of send buses, resulting in lots of duplication of CPU-destroying plugins.

Nevertheless, you will notice when you are dropping buffers: you get loud clicking noises or very short bursts of noise (that sound like clicks), sound dropouts, "stuttering". The most common solution is to increase the buffer size until you no longer get dropouts. Eventually that results in a big enough delay between key press and sound that you can't play a keyboard in time.
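The buffer-size vs. delay trade-off is just division (sketch assuming a 44.1 kHz sample rate; real round-trip latency is higher because there's more than one buffer in the chain):

```python
def buffer_latency_ms(buffer_samples, sample_rate=44100):
    """Time one audio buffer of the given size represents, in ms."""
    return buffer_samples / sample_rate * 1000

for size in [64, 256, 1024, 4096]:
    print(f"{size:5d} samples -> {buffer_latency_ms(size):6.1f} ms")
```

64 samples is under 2 ms and feels instant but is easy to underrun; 4096 samples is around 93 ms, which is rock solid but hopeless for playing a keyboard live.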

If you want to look into the mechanics of it, you need to go deep into Windows internals and how interrupt handling and direct memory access work.

It's a different mechanism to jitter, which (I think) is a time domain quantization error. I don't build AD/DA converters so I don't really know exactly what it is, because it's an engineering thing.

I sort of know what a quantization error is, because some of the tools I use let you do things like truncation (discarding bits) and decimation (discarding samples). If you discard enough bits the sound just collapses into noise, but it's a distinctive sort of noise. It isn't random.

For the kind of bit depths that commercial software and industry standards operate at, things like dither and quantization error just define the noise floor.

Under specific and typically unnatural listening conditions you may be able to hear it: for example, listening to the fade-out of a pure (audio frequency) sine wave at ungodly listening levels in a room with perfectly controlled acoustics. Ehhh, I dunno. The only reason I even encountered this poo poo is the prevalence of home mastering software. All of that stuff is still in engineering territory.

I dabbled with mastering my own songs for a bit, but the conclusion I arrived at was this: don't try that poo poo at home. Pay a mastering engineer to do it for you, or train to be a mastering engineer. Apart from the lack of technical expertise, you don't think of your own work the way other people do, because you know how the auditory illusion is constructed.

WanderingKid fucked around with this message at 18:23 on Oct 8, 2013

WanderingKid
Feb 27, 2005

lives here...
Huh, it's some variation of 16/24 bits and 44.1/48/96 kHz. Use a bit crusher to throw away bits; you can throw away 15 bits and listen to the result (noise). I only understand these things in terms of music production, not in terms of engineering, in rather the same way that a person can understand how to use a computer without having to know how Windows internals or any of the hardware actually work. All of that has been abstracted from the user.
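If you want to see why throwing away bits just raises the noise floor, here's a toy bit crusher in pure Python. It uses simple floor-truncation (real plugins like LOFI do more than this), and the signal-to-noise ratio drops by roughly 6 dB for every bit you discard:

```python
import math

def truncate_bits(sample, keep_bits):
    """Quantize a float in [-1, 1) down to keep_bits of resolution."""
    levels = 2 ** (keep_bits - 1)
    return math.floor(sample * levels) / levels

def snr_db(signal, quantized):
    """Signal power vs. quantization-error power, in dB."""
    sig_power = sum(s * s for s in signal)
    err_power = sum((s - q) ** 2 for s, q in zip(signal, quantized))
    return 10 * math.log10(sig_power / err_power)

# One second of a near-full-scale 440 Hz sine at 44.1 kHz.
sine = [0.99 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
for bits in (16, 8, 4, 1):
    crushed = [truncate_bits(s, bits) for s in sine]
    print(f"{bits:2d} bits: SNR ~ {snr_db(sine, crushed):5.1f} dB")
```

At 16 bits the error sits around 90 dB down, which is why it just reads as a noise floor; crush to a few bits and the "noise" is loud, correlated with the signal, and clearly not random.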

Some signal processors are computationally expensive, like convolution reverb plugins. You can duplicate these over as many channels as you want, until your computer can't handle it anymore. But you can tidy things up by not duplicating essentially the same effect, i.e. you put one convolution reverb effect on a send bus and route 4 channels into it, rather than using 4 convolution reverb effects on 4 separate channels.

WanderingKid
Feb 27, 2005

lives here...
I never pretended that I understood the engineering side of production? If you want to try it out yourself you can get the FL Studio demo for free (fully functional, save disabled) and a free bit crusher/decimator plugin like e-phonic LOFI. Have fun. It's easier to get a grasp of what quantization noise is when you literally throw away 15 out of 16 bits and listen to what the result sounds like. Turn your speakers down. Beyond that, if you want to know precisely what it is, you're gonna have to get into math that is beyond what I'm capable of describing. Have fun with that too if it interests you.


WanderingKid
Feb 27, 2005

lives here...
Yeah, the whole thing is just stupid. Now it's just misspeaking and jumping on it. Now we are conflating bit depth and bitrate, asking when noise is not noise, and playing semantics like it's a dick waving contest. I'll stop this from disappearing any further down the rabbit hole by saying I'm wrong. Nothing more to see. Move along.
