Dexo
Aug 15, 2009

A city that was to live by night after the wilderness had passed. A city that was to forge out of steel and blood-red neon its own peculiar wilderness.

88h88 posted:

How you feel about this will depend on your thoughts about what happens after you die. I'll say one thing though, the thought of someone uploading a bunch of poo poo I hated just to be a oval office would make me haunt their rear end from beyond the grave.

Introducing a coffin with an 'audiophile' speaker setup.

http://catacombosoundsystem.com/index.html

https://www.youtube.com/watch?v=SDpC5ZYcA7M


Holy poo poo. That's like the perfect combination to take stupid people's money.

Rich people already pay way too much for designer coffins. Mixing them and audiophiles is a match made in heaven.


The Locator
Sep 12, 2004

Out here, everything hurts.





How many minutes or hours does the power supply last after they unplug it? Music forever (well, until the battery dies).

Neurophonic
May 2, 2009

Dexo posted:

Holy poo poo. That's like the perfect combination to take stupid people's money.

Rich people already pay way too much for designer coffins. Mixing them and audiophiles is a match made in heaven.

What's even better is that it quite proudly states that they use a $30 Tripath amp module.

KillHour
Oct 28, 2007


And what the hell is a 1600MHz HDD? Do they mean RAM?

Iamthegibbons
Apr 9, 2009
I stumbled upon a new line of audiophile claims this week. Apparently if you use DirectSound in Windows, the kernel mixer introduces imperfections (i.e. it's not 'bit perfect' due to on-the-fly processing and resampling). They are all using ASIO or WASAPI drivers in exclusive mode instead. They claim the difference is immense, none of it controlled-tested, of course. Seems kinda impractical, considering if you use ASIO or WASAPI in exclusive mode no other application will be able to play any audio simultaneously! Is this as much bullshit as I suspect?

Edit: It's kinda funny that the ASIO foobar plugin they use explicitly states on the homepage that it will not improve the sound :butt:

Iamthegibbons fucked around with this message at 16:29 on Aug 26, 2013

BANME.sh
Jan 23, 2008

What is this??
Are you some kind of hypnotist??
Grimey Drawer
What's funnier is that plenty of sound cards aren't really designed for ASIO, so when you force them to use ASIO drivers, there could actually be more bugs and errors in the audio processing. Not sure if WASAPI is the same or not.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Iamthegibbons posted:

Is this as much bullshit as I suspect?
The lowest interface is Kernel Streaming, where you set the soundcard to a specific format (sampling rate, channels, bitdepth) and play your poo poo. That's bit-perfect.

On top of that you have WASAPI (WinMM in old rear end versions of Windows), which is a mixer through which all applications channel their audio via the different interfaces that sit on top of it, like DirectSound, WinMM, XAudio1/2, and so on. The WASAPI mixer operates at a set format (the default is 44.1kHz, stereo, 16-bit, but the user can change it). Anything an application plays that doesn't match that sampling rate is going to be resampled, and the channel count up- or downmixed. Application-specific volume levels are applied, though. WASAPI uses 32-bit IEEE floats internally, which are then dithered and converted to whatever bit depth your output format is set to.

WASAPI has an exclusive mode, which is essentially a pass-through mode to KS, which lets you select a format and play your music bit-perfect. It doesn't do any resampling or channel mixing, nor apply volume changes to the data. It blocks any other audio.

If you're playing audio in the same format WASAPI is set to, the volume slider for your application is maxed and nothing else is playing, the audio should be bit-perfect, too, since WASAPI doesn't need to resample or mix channels. I doubt that going to IEEE floats and back to integers will involve any rounding errors.

--edit:
That said, I'm not sure about the resampling. I know for sure that it doesn't do that when recording audio. Playback, I don't know. I heard yes, back when WASAPI was introduced; the Internet, however, says no.
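
On the float round-trip point above, here's a quick sanity check anyone can run (a sketch in Python, assuming numpy is installed): 16-bit PCM survives the trip to 32-bit IEEE floats and back exactly at unity gain, because a float32 mantissa can hold every 16-bit integer. Rounding only becomes a question once the mixer actually scales the samples (volume below 100%, resampling, and so on).

```python
# Sketch: verify that int16 PCM -> float32 -> int16 is lossless at unity gain.
# This backs up the claim above; it does not model resampling or volume scaling.
import numpy as np

pcm = np.arange(-32768, 32768, dtype=np.int16)           # every possible 16-bit sample
as_float = pcm.astype(np.float32) / 32768.0               # how a float mixer would hold it
restored = np.round(as_float * 32768.0).astype(np.int16)  # back to integer PCM

assert np.array_equal(pcm, restored)
print("int16 -> float32 -> int16 round trip is bit-exact")
```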

Combat Pretzel fucked around with this message at 20:01 on Aug 26, 2013

jonathan
Jul 3, 2005

by LITERALLY AN ADMIN
I use WASAPI in XBMC for movie watching and music listening. It's the recommended option if it works. DirectSound with Blu-ray rips sometimes gives me lip-sync issues.

Is it IMMENSELY better than DirectSound? Well, given that DirectSound introduces timing errors and performance issues, which, while watching a movie with a bunch of guests, are completely unacceptable, I'd say WASAPI is immensely better. Not because of sound quality, but because of performance.

I also bitstream everything to the receiver, so I'm using high bitrate lossless "HD" audio formats usually.


Edit:

From http://wiki.xbmc.org/index.php?title=Windows_audio_APIs

The best line from below is:

quote:

I myself have a dislike of Windows' cutesy system sounds happening at 110dB


quote:

Since Vista SP1, Windows has two primary audio interfaces, DirectSound and Wasapi (Windows Audio Session Application Programming Interface). The latter was a replacement for XP's Kernel Streaming mode.
DirectSound acts as a program-friendly middle layer between the program and the audio driver, which in turn speaks to the audio hardware. With DS, Windows controls the sample rate, channel layout and other details of the audio stream. Every program using sound passes its data to DS, which then resamples as required so it can mix audio streams from any program together with system sounds.
The advantages are that programs don't need resampling code or other complexities, and any program can play sounds at the same time as others, or the same time as system sounds, because they are all mixed to one format.
The disadvantages are that other programs can play at the same time, and that a program's output gets mixed to whatever the system's settings are. This means the program cannot control the sampling rate, channel count, format, etc. Even more important for this thread is that you cannot pass through encoded formats, as DS will not decode them and it would otherwise bit-mangle them, and there is a loss of sonic quality involved in the mixing and resampling.
Partly to allow for cleaner, uncompromised or encoded audio, and for low-latency requirements like mixing and recording, MS re-vamped their Kernel Streaming mode from XP and came up with WASAPI.
WASAPI itself has two modes, shared and exclusive. Shared mode is in many ways similar to DS, so I won't cover it here.
WASAPI exclusive mode bypasses the mixing/resampling layers of DS, and allows the application to negotiate directly with the audio driver what format it wishes to present the data in. This often involves some back-and-forth depending on the format specified and the device's capabilities. Once a format is agreed upon, the application decides how it will present the data stream.
The normal manner is in push mode - a buffer is created which the audio device draws from, and the application pushes as much data in as it can to keep that buffer full. To do this it must constantly monitor the levels in the buffer, with short "sleeps" in between to allow other threads to run.
WASAPI, and most modern sound devices, also support a "pull" or "event-driven" mode. In this mode two buffers are used. The application gives the audio driver a call-back address or function, fills one buffer and starts playback, then goes off to do other processing. It can forget about the data stream for a while. Whenever one of the two buffers is empty, the audio driver "calls you back", and gives you the address of the empty buffer. You fill this and go your way again. Between the two buffers there is a ping-pong action: one is in use and draining, the other is full and ready. As soon as the first is emptied the buffers are switched, and you are called upon to fill the empty one. So audio data is being "pulled" from the application by the audio driver, as opposed to "pushed" by the application.
WASAPI data is passed-through as-is, which is why you must negotiate capabilities with the audio driver (i.e. it must be compatible with the format you want to send it as there is no DS between to convert it), and why encoded formats like DTS can reach the receiver unchanged for decoding there.
Because WASAPI performs no mixing or resampling, it is best used in exclusive mode, and as a result the application gets exclusive rights to the audio buffers, to the exclusion of all other sounds or players. WASAPI shared mode does allow other audio to play alongside, but that's not a common mode and not what we want for an HTPC. I myself have a dislike of Windows' cutesy system sounds happening at 110dB.
Hope some of you found today's primer of use. Please pick up a scorecard from the desk and drop it in the big round "collection box" on your way out.
Cheers, Damian

jonathan fucked around with this message at 20:19 on Aug 26, 2013
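
The "pull"/event-driven mode in the quoted primer is easy to picture as two buffers being swapped back and forth. Here's a toy Python sketch of just that control flow; no real audio API is involved and all the names are made up.

```python
# Toy model of event-driven ("pull") playback: the driver drains one buffer
# while asking the application to refill the other, ping-ponging between them.
from collections import deque
import itertools

BUFFER_FRAMES = 8                 # tiny for the demo; real buffers hold a few ms of audio
samples = itertools.count()       # stand-in for whatever the player decodes next

def app_fill(buf):
    for i in range(len(buf)):
        buf[i] = next(samples)

buffers = deque([[0] * BUFFER_FRAMES, [0] * BUFFER_FRAMES])

def on_buffer_empty():
    """What the (imaginary) driver calls each time it finishes draining a buffer."""
    drained = buffers.popleft()   # the buffer the hardware just played out
    app_fill(drained)             # application refills it...
    buffers.append(drained)       # ...and hands it back while the other one plays

for _ in range(4):                # pretend the driver has drained four buffers
    on_buffer_empty()
print(buffers)
```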

Philthy
Jan 28, 2003

Pillbug

Iamthegibbons posted:

I stumbled upon a new line of audiophile claims this week. Apparently if you use DirectSound in Windows, the kernel mixer introduces imperfections (i.e. it's not 'bit perfect' due to on-the-fly processing and resampling). They are all using ASIO or WASAPI drivers in exclusive mode instead. They claim the difference is immense, none of it controlled-tested, of course. Seems kinda impractical, considering if you use ASIO or WASAPI in exclusive mode no other application will be able to play any audio simultaneously! Is this as much bullshit as I suspect?

Edit: It's kinda funny that the ASIO foobar plugin they use explicitly states on the homepage that it will not improve the sound :butt:

Other forums have tested bit-perfect playback methods by sampling the output from one machine directly into a second machine. That's how they check whether the output from the various applications is actually bit perfect or not. The amusing thing is people buying expensive plugins/players like Audirvana, Pure Music, etc., and claiming the sound is better than X bit-perfect plugin/player, when they are actually 100% the same, because bit-perfect output is going to be the same no matter what platform/plugin/software you play it back with. That's kind of the point. So on Windows you can grab something like Foobar2000 for free and get the best, or JRiver MC on OSX for $50 instead of $200 or whatever those other ones cost.
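
The comparison itself is trivial once you have the two captures lined up (a sketch; the file names are placeholders, and a real test has to trim and align the recordings first):

```python
# Compare a loopback capture against the source, sample for sample.
import numpy as np

source = np.fromfile("source_pcm_s16le.raw", dtype=np.int16)    # placeholder file names
capture = np.fromfile("capture_pcm_s16le.raw", dtype=np.int16)

n = min(len(source), len(capture))
diff = np.count_nonzero(source[:n] != capture[:n])
print("bit-perfect" if diff == 0 else f"{diff} samples differ")
```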

Iamthegibbons
Apr 9, 2009
Some very informative answers, thanks! :cheers: So there is something to be said for bit-perfect playback, it seems, at least for time-critical applications. As for sound quality, I guess it's difficult to differentiate bit-perfect and bit-imperfect playback in ABX testing? Usually just playing audio files isn't really time-sensitive. I don't think I ever have timing issues, even on the cheapest sound cards.

Shame that WASAPI exclusive mode playback is still pretty impractical for me. I often have two sources playing audio at once on my PC (black metal just isn't the same without some Tito Puente mixed in). Makes sense for a dedicated media player though!

Iamthegibbons fucked around with this message at 16:45 on Aug 27, 2013

Philthy
Jan 28, 2003

Pillbug
You can certainly see more detail in the recordings when comparing different versions of the songs. It's how HDTracks got caught selling an upsampled Metallica album and passing it off as HD. They pulled it, saying that's what got sent to them, and gave full refunds to everyone who bought it.

Whether or not you can notice it is where this thread comes into play.

longview
Dec 25, 2006

heh.
There can be cases where the Windows sound system does a very poor job of sample rate conversion and adds distortion to the sound (see http://support.microsoft.com/kb/2653312), which is a legitimate concern (and a step back, since XP apparently does this just fine).

More details at https://www2.iis.fraunhofer.de/AAC/ie9.html

However, I can't say I've heard any errors on my stock Windows 7 + IE9 install, so it may have been fixed in a patch or something.

So for high-quality playback, matching the sound card's sample rate or using a high-quality converter certainly makes sense, but I really like having a working volume control, so no bit-perfect playback for me.

GWBBQ
Jan 2, 2005


Opensourcepirate posted:

Getting back to actual science and using the correct hardware in the correct place. Not all RCA outputs are created equally.

wikipedia posted:

A line level describes a line's nominal signal level as a ratio, expressed in decibels, against a standard reference voltage...

The most common nominal level for consumer audio equipment is −10 dBV, and the most common nominal level for professional equipment is 4 dBu. By convention, nominal levels are always written with an explicit sign symbol. Thus 4 dBu is written as +4 dBu.

Expressed in absolute terms, a signal at −10 dBV is equivalent to a sine wave signal with a peak amplitude of approximately 0.447 volts, or any general signal at 0.316 volts root mean square (VRMS). A signal at +4 dBu is equivalent to a sine wave signal with a peak amplitude of approximately 1.736 volts, or any general signal at approximately 1.228 VRMS

So according to wikipedia, consumer level hardware should have a line out of about half a volt, and professional equipment should be about 1.5 volts. I think that modern consumer equipment probably goes up towards the pro range, just based on how low I can turn amplifiers and how well I can drive headphones, but I could be wrong about that.
I really expected a post starting with "Getting back to actual science and using the correct hardware" to at least involve an oscilloscope or multimeter to back up claims about waveforms and voltage levels. In my experience, consumer level equipment almost always outputs unbalanced audio from RCA jacks at -10dBV, and tolerances are looser than professional equipment. Professional equipment almost always outputs balanced audio from TRS, XLR, or captive screw connections at +4dBu, some units have mic/line level switches, and a lot will offer an unbalanced output on RCA jacks at -10dBv.
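
For reference, the quoted Wikipedia numbers fall straight out of the dB definitions; a quick Python check (just arithmetic, nothing measured here):

```python
# Convert nominal line levels to volts: dBV is referenced to 1 V RMS,
# dBu to 0.7746 V RMS (1 mW into 600 ohms). Peak = RMS * sqrt(2) for a sine.
import math

def db_to_vrms(db, ref):
    return ref * 10 ** (db / 20)

consumer = db_to_vrms(-10, 1.0)      # -10 dBV
pro = db_to_vrms(+4, 0.7746)         # +4 dBu

print(f"-10 dBV: {consumer:.3f} Vrms, {consumer * math.sqrt(2):.3f} V peak")
print(f" +4 dBu: {pro:.3f} Vrms, {pro * math.sqrt(2):.3f} V peak")
# Prints roughly 0.316 Vrms / 0.447 V peak and 1.228 Vrms / 1.736 V peak,
# matching the figures quoted above.
```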

Speaking of expensive cables and actual science, I'll be spending tomorrow replacing my old RG59 cable TV wire terminated with screw-on F connectors with a remnant of Belden 1694A I got for free and some decent compression fittings. Crossing my fingers that I can get the MER up into the acceptable range for the box in my room to start working again.

HFX
Nov 29, 2004

GWBBQ posted:

I really expected a post starting with "Getting back to actual science and using the correct hardware" to at least involve an oscilloscope or multimeter to back up claims about waveforms and voltage levels. In my experience, consumer level equipment almost always outputs unbalanced audio from RCA jacks at -10dBV, and tolerances are looser than professional equipment. Professional equipment almost always outputs balanced audio from TRS, XLR, or captive screw connections at +4dBu, some units have mic/line level switches, and a lot will offer an unbalanced output on RCA jacks at -10dBv.

Speaking of expensive cables and actual science, I'll be spending tomorrow replacing my old RG59 cable TV wire terminated with screw-on F connectors with a remnant of Belden 1694A I got for free and some decent compression fittings. Crossing my fingers that I can get the MER up into the acceptable range for the box in my room to start working again.


Other than very short runs, say from a wall to a box, I've never understood why cable companies will often wire a house with RG-59. I usually wire up friends' houses with RG-6 or better. RG-59 is only good for wall-to-box/TV runs.

GWBBQ
Jan 2, 2005


The RG59 was 15+ years old. With new cable and connectors, SNR improved by 20dB (I have to imagine the twist-on F connectors had a lot to do with that) and all the channels that didn't come in are back. Aside from macroblocking and losing frames due to poor signal, can bad SNR and MER degrade picture quality noticeably or is it just placebo effect that everything looks better?

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Digital streams are either fine, or broken (blocking, wrong vectors warping the image, etc.). There's nothing in-between.

Wasabi the J
Jan 23, 2008

MOM WAS RIGHT
He's talking about SNR, which is very much an IF/RF thing. If your SNRs are good, you're going to get better accuracy out of the decoding equipment.

Bit Errors are a real thing, guys. Just because it's "binary" doesn't mean it's just "on/off"; you can get most, some, or all of the data. That being said, if it's video, it's definitely susceptible to artifacts caused by interference, blocking and lost frames aside; but I'm not an expert on TV and video, I just know that's the case with things like satcom modems.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I count bit errors as broken, because they don't let the data decompress into what it's supposed to be. lovely SNR on analog means noise. lovely bits mean corrupted macroblocks and wrong motion vectors, both creating broken frames. Which is an issue, considering that several successive frames will be reconstructed from motion vectors, smearing the broken image sections all over the place.

So technically, a stream is either correct or not. I guess the main point, however, is that a higher SNR, beyond what's sufficient to read a reliable stream, doesn't make for more vivid, crisp or contrasty video. Signal fuckery may achieve that effect on an analog TV signal, but your digital decoder would just go ape poo poo and show a blank image.

That said, digital TV, at least in Europe, is overcompressed to boot. Ringing everywhere.

Khablam
Mar 29, 2012

Wasabi the J posted:

Bit Errors are a real thing, guys. Just because it's "binary" doesn't mean it's just "on/off"
Yes, it does. In fact, that is the literal definition of it.
SNR issues will almost never (in extreme unlikelihood) cause a bit to read flipped, and if one does, the byte it was part of is nonsensical, and the frame that byte helped make up will either block or, depending on cache technology, simply be re-read and delivered normally.*
The idea that bit errors can cause subtle errors (or a 'different' image... less black blacks!) is pure audio/videophile nonsense, so don't subscribe to it.
I'm not having a go at you, but there are a lot of people (usually trying to sell you something) who come out with what you're saying, and the truth of it is there are really only two states - failed or working. Everything else amounts to "you could be getting a better picture and just don't know it!", which is a sales pitch.

* Supported by USB & SCSI, making "this USB cable is better" arguments particularly laughable.

longview
Dec 25, 2006

heh.
The problem is that it may not be obvious whether a bitstream is in fact coming through properly; there may be errors that only show up occasionally, and a marginal digital TV signal may be clear most of the time but break up on scene changes, or be more susceptible to noise causing somewhat random errors to creep in.

I've also had more subtle errors with DVI cables, where only pixels of certain dark values would be turned blue; that's an error where it's definitely not clear from looking at a normal screen that there's anything wrong. Even though it's digital, there's still an analog component in there, and if error correction is being used silently, it's possible to have a degraded link and not notice anything most of the time.

KozmoNaut
Apr 23, 2008

Happiness is a warm
Turbo Plasma Rifle


longview posted:

I've also had more subtle errors with DVI cables, where only pixels of certain dark values would be turned blue; that's an error where it's definitely not clear from looking at a normal screen that there's anything wrong. Even though it's digital, there's still an analog component in there, and if error correction is being used silently, it's possible to have a degraded link and not notice anything most of the time.

I had that as well; it turned out to be a marginal DVI cable at first, cheap and probably not fully up to DVI spec. Then I upgraded to a 1920x1200 monitor and the error returned. Technically, single-link DVI can just barely deliver 1920x1200 at like 56 Hz or something, if you have a card that's 100% to spec with no cost-cutting. Most of the time it's OK, but it's one of those times where you're riding right on the edge of what your hardware can do. Upgrading to a dual-link-capable graphics card solved the issue for good.

Apart from the blue specks, the picture was perfect, though.

longview
Dec 25, 2006

heh.
Yeah, in my case it went away when I tightened the connector properly, but my point was mostly that HDMI is the same way. No matter what, all electrical and optical signals are transmitted as analog waveforms, so it's always worth striving for the best SNR even when it "looks fine".

The highest possible SNR without overloading the receiver is always what you want for reliable transmission; noise is not constant, and the stronger the signal, the more resistant the link will be to pulses or other EMI problems.

Just because it's digital doesn't mean cables don't matter; that's silly, because cables definitely matter. You can make a USB cable out of whatever and it'll work somewhat, but it's not going to be just as good as a real shielded cable. Some cables have better shielding, like RG-6, which has braid and foil shielding, meaning it's better at rejecting RF intrusion, especially at higher frequencies. This can make a big difference if you live near a transmitter site, since otherwise RF can couple into the cable system and overload the receivers, as well as cause additional intermod in amplifiers. It also has significantly lower loss than RG-59, especially at UHF frequencies, and this can matter if you use a cable modem too, where better SNR translates directly to higher speeds.

But I guess we're just supposed to ignore all that, since it's a digital TV signal: it'll either get there reliably every single time or it'll completely not work! And TV systems all use QAM AFAIK, so they're not even binary when transmitted :v:

LRADIKAL
Jun 10, 2001

Fun Shoe
This is a little ridiculous. You're pointing to DVI signal issues as an example of a digital signal that needs a cleaner signal? Firstly, it's really likely the issues described above were a DVI cable running in analog, VGA mode. Secondly, I'm not sure that DVI has any sort of error correction. When you're talking about issues with one color channel being "off", that sounds pretty analog to me.
https://en.wikipedia.org/wiki/Digital_Visual_Interface#Specifications

Digital signals are very sensitive to flipped or missing bits. In the best-case scenario, error correction fixes it, or the data is re-transmitted, as someone mentioned. In almost all other scenarios you are going to have significant, obvious artifacts or signal dropouts. You're not going to get a fuzzy picture or the loss of a color channel from a bad digital signal.

Also, wtf are you talking about "even though it is digital there is an analog component in there"?

Khablam
Mar 29, 2012

None of what you were experiencing related to issues of SNR. It's simply orders of magnitude away from being the case. It's somewhat interesting that you reference the DSL scenario in your post, yet still don't appreciate how far off the mark talking about SNR issues in short-run high-grade digital cables is.
Back in ~2003, ADSL hit my student house, of Victorian origin and with 50-year-old phone cables installed. Despite getting very poor SNR thanks to:
- 3500 metres of copper wire in dodgy condition between my house and the exchange,
- a couple dozen metres of old copper wire in the house (in one part we found the main line had been chewed down by mice to a few strands of copper),
- said wire being literally wrapped around the main power line coming into the house,
it worked, and some years later the same cables synced at 6.5 Mbps.

The connection was bit-perfect nearly all the time. I know this because, wary of the issues, I ran MD5 checks on any large download. I can count on one hand the number of times I had a failure - three. DSL connections are incredibly complex, because they split the available frequency range into multiple bands to deliver enough bandwidth, circumventing frequency-limitation issues. Decent routers run a stable, bit-perfect connection over a line that offers 10-20 dB of SNR.
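
The check itself is a few lines of Python (the file name and expected digest below are placeholders):

```python
# Hash a finished download and compare it against the published checksum.
import hashlib

def md5sum(path, chunk_size=1 << 20):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "0123456789abcdef0123456789abcdef"  # placeholder digest
print("OK" if md5sum("big_download.iso") == expected else "corrupted")
```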

Or hey, let's talk about my digital TV signal, which goes ~56,000 metres through hills and a busy city with all that entails, and arrives at my house all but bit-perfect (my receiver counts errors - it's a single-digit number to date) with an SNR in the single digits.

So let's look at USB, whose typical SNR is anywhere between 100 and 120 dB; to put it another way, the signal is 100,000+ times the amplitude of the noise going through the link, or 10,000 times better than other technologies that can deliver bit-perfect data.

It's miles and miles away from being an issue with home cable lengths.
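
To put those decibel figures in plainer numbers, here's a quick Python sketch using the standard definitions (20·log10 for amplitude ratios, 10·log10 for power ratios):

```python
# Turn an SNR in dB into plain ratios.
def db_to_ratios(db):
    voltage_ratio = 10 ** (db / 20)   # dB = 20*log10(V_signal / V_noise)
    power_ratio = 10 ** (db / 10)     # dB = 10*log10(P_signal / P_noise)
    return voltage_ratio, power_ratio

for db in (10, 20, 100, 120):
    v, p = db_to_ratios(db)
    print(f"{db:4d} dB  ->  {v:,.0f}:1 in amplitude, {p:,.0f}:1 in power")

# 20 dB (the improvement GWBBQ measured) is 10x the amplitude / 100x the power;
# 100 dB is 100,000:1 in amplitude, ten billion to one in power.
```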

quote:

this can make a big difference if you live near a transmitter site since otherwise RF can couple into the cable system and overload the receivers, as well as cause additional intermod in amplifiers
No, it really can't, you're orders of magnitude off again. RF induction is a few millionths of a volt, with essentially unrecordable current.

If you're getting RF radiation into your home at a level to blow out electronics then you probably have other concerns, like why is my skin peeling and oh god my brain is liquid what help.

KozmoNaut
Apr 23, 2008

Happiness is a warm
Turbo Plasma Rifle


Jago posted:

This is a little ridiculous. You're pointing to DVI signal issues as an example of a digital signal that needs a cleaner signal? Firstly, it's really likely the issues described above were a DVI cable running in analog, VGA mode. Secondly, I'm not sure that DVI has any sort of error correction. When you're talking about issues with one color channel being "off", that sounds pretty analog to me.
https://en.wikipedia.org/wiki/Digital_Visual_Interface#Specifications

Digital signals are very sensitive to flipped or missing bits. In the best-case scenario, error correction fixes it, or the data is re-transmitted, as someone mentioned. In almost all other scenarios you are going to have significant, obvious artifacts or signal dropouts. You're not going to get a fuzzy picture or the loss of a color channel from a bad digital signal.

Also, wtf are you talking about "even though it is digital there is an analog component in there"?

Nope, DVI-D cable. No friggin' way anything analog was coming through that cable. I know how noise on a VGA or analog DVI connection looks on a monitor; this didn't look like that at all.

My best guess is that my video card simply couldn't properly run the DVI output at the frequency needed, although it's not like 1280x1024 is the most demanding resolution out there. Part of the signal got too noisy, and for some reason that made certain very dark colors default to 100% blue. There's no error correction in DVI, though, so I don't know why that would happen; it would have to be something vaguely defective in the graphics card or in the monitor.

When I changed over to a DVI-I dual link cable, the problem went away until I upgraded to a 1920x1200 monitor and then it was back. Changing to a newer, higher-spec dual-link-capable graphics card (Geforce 5900XT to 7600GT) cured the problem.

Wasabi the J
Jan 23, 2008

MOM WAS RIGHT

Khablam posted:

Yes, it does. In fact, that is the literal definition of it.
SNR issues will almost never (in extreme unlikelihood) cause a bit to read flipped, and if one does, the byte it was part of is nonsensical, and the frame that byte helped make up will either block or, depending on cache technology, simply be re-read and delivered normally.*
The idea that bit errors can cause subtle errors (or a 'different' image... less black blacks!) is pure audio/videophile nonsense, so don't subscribe to it.
I'm not having a go at you, but there are a lot of people (usually trying to sell you something) who come out with what you're saying, and the truth of it is there are really only two states - failed or working. Everything else amounts to "you could be getting a better picture and just don't know it!", which is a sales pitch.

* Supported by USB & SCSI, making "this USB cable is better" arguments particularly laughable.

Sorry, what I was getting at was more along the lines of clarifying that broken data doesn't mean the signal just doesn't work; the implication that I was addressing was the commonly misunderstood "data either works or it doesn't" phrase -- which is true, but rarely to the point the entire data stream is lost.

I absolutely agree with everything you said, though; I was, again, arguing more that you CAN have a broken/hosed-up data stream to your device and still get a picture.


Jago posted:

Also, wtf are you talking about "even though it is digital there is an analog component in there"?

"Digital" information through most mediums, including copper, are still analog impulses; those impulses are modulated and demodulated by the equipment, interpreting the analog electrical impulses and determining what the bits should have been from the transmitting source.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Khablam posted:

The connection was bit-perfect nearly all the time. I know this because, wary of the issues, I ran MD5 checks on any large download.
Anything that runs on top of TCP, like HTTP, FTP and whatever else is usually used for downloads, is supposed to be bit-perfect, because it does checksums and requests retransmits if they fail. BitTorrent runs on UDP these days, which doesn't guarantee delivery on its own, but the BitTorrent protocol is based on chunks, each with its own checksum. If a chunk fails to verify, it'll be re-requested, too.
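
A rough sketch of the chunk-level verification being described (the piece size and data here are made up; real BitTorrent takes the expected SHA-1 values from the .torrent metadata):

```python
# Verify downloaded data piece by piece and report which pieces need re-requesting.
import hashlib

def failed_pieces(data, piece_length, expected_hashes):
    """Return the indices of pieces whose SHA-1 doesn't match the expected value."""
    bad = []
    for i, expected in enumerate(expected_hashes):
        piece = data[i * piece_length:(i + 1) * piece_length]
        if hashlib.sha1(piece).digest() != expected:
            bad.append(i)
    return bad

# Example: one corrupted piece out of four.
piece_len = 4
good = b"aaaabbbbccccdddd"
hashes = [hashlib.sha1(good[i:i + piece_len]).digest() for i in range(0, len(good), piece_len)]
corrupted = b"aaaabbbbccccdXdd"
print(failed_pieces(corrupted, piece_len, hashes))   # -> [3]
```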

Wasabi the J
Jan 23, 2008

MOM WAS RIGHT
I'm on Amazon browsing for a snazzy-looking USB cable for my new keyboard (just for aesthetics), and I came across this review for an $89 flat USB cable.

Steve "Disappointed" Dai posted:

I just bought a Meridian Explorer USB DAC, and it is wonderful. It has the meridian house sound which is detailed, musical, and non-fatigue. However the stock USB cable comes with it seems to be average grade, so I figure I can get better sound with a more expensive USB cable. I did some search and seem Wireworld Starlight is a good choice.

I received the new cable today, but after connecting it to my stereo, i was disappointed. The sound is closed, edgy, and non-musical. I switch back to the stock cable and everything is better. I will return it, and for $99 it is just worthy it.

grack
Jan 10, 2012

COACH TOTORO SAY REFEREE CAN BANISH WHISTLE TO LAND OF WIND AND GHOSTS!
I was on Head-Fi reading the IEM discovery thread because there's been some interesting new headphones released from KEF and Onkyo and then they started talking about cables and then someone said he was waiting for his cable to properly burn in and then :eng99:

Just when you think there might be a useful, interesting thread boom, right back to the stupidity.

Wasabi the J
Jan 23, 2008

MOM WAS RIGHT

grack posted:

I was on Head-Fi reading the IEM discovery thread because there's been some interesting new headphones released from KEF and Onkyo and then they started talking about cables and then someone said he was waiting for his cable to properly burn in and then :eng99:

Just when you think there might be a useful, interesting thread boom, right back to the stupidity.

Head-Fi was the place that turned me onto buying dumb headphones because I thought they knew what they were talking about, and seemed well moderated.

I should have known something was up when I got probated for asking whether someone had double-blind tested two rigs I was looking at.

And when someone was doing a legitimate review for like $1300 worth of equipment (described by some as an entry-level system) with J-Pop.

Wasabi the J fucked around with this message at 08:35 on Sep 4, 2013

longview
Dec 25, 2006

heh.
Overloading an RF receiver doesn't mean destroying it; it usually means a strong signal causing interference, for example desensing the receiver, which will seem to make it "deaf". This can be a concern with poorly shielded cables if part of the run passes near a transmitter; keying a 5W UHF transmitter near receivers can often cause problems, in my experience.

As mentioned, TCP data transfers are checksummed; you'd need to look at how many packets are retransmitted with a network analyzer to say anything.

RF induction is a real problem. TV signals tend to be around 60 dBµV, around a millivolt, and EMI and RFI from fairly normal devices can cause surprisingly large fields to develop. Using unshielded cables for data transmission is virtually unheard of in military applications, in part because of requirements that interference be controlled.

And yes, every single digital transmission exists as an analog signal in a transmission line, an optical signal or an RF field in the air. The fact that you can get a good TV signal from far away means that you bought a good antenna and pointed it the right way; good job.

There is a point where it's good enough, but I hate this notion that coat hangers are just as good as coax for transmitting a signal. They're not; in some cases it's fine, but try stringing a 10-metre run of S/PDIF over coat hangers and over coax past a fluorescent lamp, and I'd bet one would perform better.

Khablam
Mar 29, 2012

longview posted:

There is a point where it's good enough, but I hate this notion that coat hangers are just as good as coax for transmitting a signal. They're not; in some cases it's fine, but try stringing a 10-metre run of S/PDIF over coat hangers and over coax past a fluorescent lamp, and I'd bet one would perform better.

If I transmit 010011011010 and 010011011010 is read at the other end, then one is as good as the other no matter how many theories you come up with as to how interference can be happening.
Like I said, the SNRs on short cable runs are so good that you need to cause massive physical damage before it starts impacting the signal. 100 dB is a signal-to-noise ratio of 100,000:1 in amplitude. You could find something that increases the noise by 20,000 times the norm and still be absolutely nowhere near causing an error. Your connector can be so dirty that it only conducts 1/1000th of the voltage it's meant to, and voilà, you still have a perfect transfer. RF induction is, again, orders of magnitude below causing an issue with HDMI, USB or any other digital cables. Can it cause interference at the aerial? Yes, of course, because there we're dealing with microvolts and the SNR is strained as it is.

In the context of a home, digital cables need to be either faulty or unsupported to cause issues. There aren't really two ways about it.

Philthy
Jan 28, 2003

Pillbug

Wasabi the J posted:

And when someone was doing a legitimate review for like $1300 worth of equipment (described by some as an entry-level system) with J-Pop.

I'd like to know how 'entry level' is defined. I read all the equipment mags. I love seeing all the cool-looking stuff I'd buy if I won the lotto. Most everything in the Euro mags is priced around $10,000 and described as entry level. Nothing in these magazines usually costs less than $5,000 when converted from the English pound. They often compare a piece to the same manufacturer's other offerings, which are usually four times the price, but this one is a steal because it does 90% of what the other thing does at a quarter of the price! Speaker cable advertisements have financing plans available. Seriously.

I would kill for an audio rag that kept a $1,000 ceiling on any and all equipment reviewed.

Philthy fucked around with this message at 15:26 on Sep 4, 2013

Opensourcepirate
Aug 1, 2004

Except Wednesdays
It's been suggested that I work for Blue Jeans Cable because all I do is bring them up in this thread. I keep bringing them up because I think their articles are super relevant to this kind of discussion, even if you would never buy their products - either that or I've just been brainwashed by them.

I know that a lot of people here have never seen an HDMI/DVI signal at the edge of failure, but when you do, you start to lose individual pixels or groups of pixels. It is true, of course, that either the picture is 100% digitally right or it's not, but it's not always clear at first that you're having issues with HDMI when you have weird intermittent problems with a few pixels.

Blue Jeans Cable posted:

We tend to assume, when thinking about wire, that when we apply a signal to one end of a wire, it arrives instantaneously at the other end of that wire, unaltered. If you've ever spent any time studying basic DC circuit theory, that's exactly the assumption you're accustomed to making. That assumption works pretty well if we're talking about low-frequency signals and modest distances, but wire and electricity behave in strange and counterintuitive ways over distance, and at high frequencies. Nothing in this universe--not even light--travels instantaneously from point to point, and when we apply a voltage to a wire, we start a wave of energy propagating down that wire which takes time to get where it's going, and which arrives in a different condition from that in which it left. This isn't important if you're turning on a reading lamp, but it's very important in high-speed digital signaling. There are a few considerations that start to cause real trouble:

Time: electricity doesn't travel instantaneously. It travels at something approaching the speed of light, and exactly how fast it travels depends upon the insulating material surrounding the wire. As the composition and density of that insulation changes from point to point along the wire, the speed of travel changes.
Resistance: electricity burns up in wire and turns into heat.
Skin effect: higher frequencies travel primarily on the outside of a wire, while lower frequencies use more of the wire's depth; this means that higher frequencies face more resistance, and are burned up more rapidly, than lower frequencies.
Capacitance: some of the energy of the signal gets stored in the wire by a principle known as "capacitance," rather than being delivered immediately to the destination. This smears out the signal relative to time, making changes in voltage appear less sudden at the far end of the wire than they were at the source. This phenomenon is frequency-dependent, with higher frequencies being more strongly affected.
Impedance: if the characteristic impedance of the cable doesn't match the impedance of the source and load circuits, the impedance mismatch will cause portions of the signal to be reflected back and forth in the cable. The same is true for variations in impedance from point to point within the cable.
Crosstalk: when signals are run in parallel over a distance, the signal in one wire will induce a similar signal in another, causing interference.
Inductance: just as capacitance smears out changes in voltage, inductance--the relationship between a current flow and an induced electromagnetic field around that flow--smears out changes in the rate of current flow over time.
Impedance, in particular, becomes a really important concern any time the cable length is more than about a quarter of the signal wavelength, and becomes increasingly important as the cable length becomes a greater and greater multiple of that wavelength. The signal wavelength, for one of the color channels of a 1080p HDMI signal, is about 16 inches, making the quarter-wave a mere four inches--so impedance is an enormous consideration in getting HDMI signals to propagate along an HDMI cable without serious degradation.

Impedance is a function of the physical dimensions and arrangement of the cable's parts, and the type and consistency of the dielectric materials in the cable. There are two principal sorts of cable "architecture" used in data cabling (and HDMI, being a digital standard, is really a data cable), and each has its advantages. First, there's twisted-pair cable, used in a diverse range of computer-related applications. Twisted-pair cables are generally economical to make and can be quite small in overall profile. Second, there's coaxial cable, where one conductor runs down the center and the other is a cylindrical "shield" running over the outside, with a layer of insulation between. Coaxial cable is costlier to produce, but has technical advantages over twisted pair, particularly in the area of impedance.

It's impossible to control the impedance of any cable perfectly. We can, of course, if we know the types of materials to be used in building the cable, create a sort of mathematical model of the perfect cable; this cable has perfect symmetry, perfect materials, and manufacturing tolerances of zero in every dimension, and its impedance is fixed and dead-on-spec. But the real world won't allow us to build and use this perfect cable. The dimensions involved are very small and hard to control, and the materials in use aren't perfect; consequently, all we can do is control manufacturing within certain technical limits. Further, when a cable is in use, it can't be like our perfect model; it has to bend, and it has to be affixed to connectors.

So, what do we get instead of perfect cable, with perfect impedance? We get real cable, with impedance controlled within some tolerance; and we hope that we can make the cable conform to tolerances tight enough for the application to which we put it. As it happens, some types of impedance variation are easier to control than others, so depending on the type of cable architecture we choose, the task of controlling impedance becomes harder or easier. Coaxial cable, in this area, is clearly the superior design; the best precision video coaxes have superb bandwidth and excellent impedance control. Belden 1694A, for example, has a specified impedance tolerance of +/- 1.5 ohms, which is just two percent of the 75 ohm spec; and that tolerance is a conservative figure, with the actual impedance of the cable seldom off by more than half an ohm (2/3 of one percent off-spec). Twisted pair does not remotely compare; getting within 10 or 15 percent impedance tolerance is excellent, and the best bonded-pair Belden cables stay dependably within about 8 ohms of the 100 ohm spec.

If we were running a low bit-rate through this cable, it wouldn't really matter. Plus or minus 10 or 15 ohms would be "good enough" and the interface would work just great. But the bitrate demands placed on HDMI cable are severe. At 1080i, the pixel clock runs at 74.25 MHz, and each of the three color channels sends a ten-bit signal on each pulse of the clock, for a bitrate of 742.5 Mbps. What's worse, many devices now send or receive 1080p/60, which requires double that bitrate, and certain features which are beginning to appear on the market--Deep Color and 3D--increase the bitrate even further.

Impedance mismatch, at these bitrates, causes all manner of havoc. Variations in impedance within the cable cause the signal to degrade substantially, and in a non-linear way that can't easily be EQ'd or amplified away. The result is that the HDMI standard will always be faced with serious limitations on distance. We have found that, up to 1080p/60 at standard color depth, well-made cables up to around 50 feet will work properly with most, but not all, source/display combinations. As Deep Color and other bandwidth-stretching applications become more common, more and more cables which have been satisfactory will begin to fail.

In June 2005, the HDMI organization announced the HDMI 1.3 spec. Among other things, the 1.3 spec offers new color depths which require more bits per pixel. The HDMI press release states:

"HDMI 1.3 increases its single-link bandwidth from 165MHz (4.95 gigabits per second) to 340 MHz (10.2 Gbps) to support the demands of future high definition display devices, such as higher resolutions, Deep Color and high frame rates."

So, what did they do to enable the HDMI cable to convey this massive increase in bitrate? If your guess is "nothing whatsoever," you're right. The HDMI cable is still the same four shielded 100-ohm twisted pairs, still subject to the same technical and manufacturing limitations. And don't draw any consolation from those modest "bandwidth" requirements, stated in Megahertz; those numbers are the frequencies of the clock pulses, which run at 1/10 the rate of the data pairs, and why the HDMI people chose to call those the "bandwidth" requirements of the cable is anyone's guess. The only good news here is that the bitrates quoted are the summed bitrates of the three color channels -- so a twisted pair's potential bandwidth requirement has gone up "only" to 3.4 Gbps rather than 10.2.
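
The bitrate arithmetic in that piece is easy to reproduce yourself. A small Python sketch; the only assumption is TMDS's 10 bits per pixel clock on each of the three data pairs, which the article itself describes:

```python
# Reproduce the article's numbers: each TMDS data pair carries 10 bits per
# pixel clock, and there are three pairs (one per colour channel).
def tmds_rates(pixel_clock_hz):
    per_pair = pixel_clock_hz * 10
    return per_pair, per_pair * 3

for label, clock in [("1080i (74.25 MHz)", 74.25e6),
                     ("1080p/60 (148.5 MHz)", 148.5e6),
                     ("HDMI single-link max (165 MHz)", 165e6),
                     ("HDMI 1.3 max (340 MHz)", 340e6)]:
    per_pair, total = tmds_rates(clock)
    print(f"{label}: {per_pair / 1e6:.1f} Mbps per pair, {total / 1e9:.2f} Gbps total")

# 74.25 MHz -> 742.5 Mbps per pair; 165 MHz -> 4.95 Gbps total;
# 340 MHz -> 10.2 Gbps total, matching the figures quoted above.
```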

Zorak of Michigan
Jun 10, 2006

KozmoNaut posted:

Nope, DVI-D cable. No friggin way anything analog was coming through that cable. I know how noise on a VGA or analog DVI connection looks on a monitor, this didn't look like that at all.

My best guess is that my video card simply couldn't properly run the DVI output at the frequency needed, although it's not like 1280x1024 is the most demaning resolution out there. Part of the signal got too noisy and for some reason that made certain very dark colors default to 100% blue. There's no error correction in DVI, though, so I don't know why that would happen, it would have to be something vaguely defective in the graphics card or in the monitor.

When I changed over to a DVI-I dual link cable, the problem went away until I upgraded to a 1920x1200 monitor and then it was back. Changing to a newer, higher-spec dual-link-capable graphics card (Geforce 5900XT to 7600GT) cured the problem.

I can't understand how a cable noise issue could cause a specific pattern of video problem, such as dark blue turning black. As I understand it, if you had so much noise that the ADC couldn't reliably tell 1 from 0 anymore, you'd expect to see random flipped bits, which would in turn cause decoder errors, and thus create pixellation and splotches. It would be awfully weird if the noise was only affecting the handful of low-order 1 bits that made the difference between blue and black, but not the high order bits, or the low order blue and red bits, or whatever. Is there something in the wire protocol for DVI that would do that?

Opensourcepirate
Aug 1, 2004

Except Wednesdays

Blue Jeans Cable posted:

We're often asked why that's so bad. After all, CAT 5 cable can run high-speed data from point to point very reliably--why can't one count on twisted-pair cable to do a good job with digital video signals as well? And what makes coax so great for that type of application?

First, it's important to understand that a lot of other protocols which run over twisted-pair wire are two-way communications with error correction. A packet that doesn't arrive on a computer network connection can be re-sent; an HDMI or DVI signal is a real-time, one-way stream of pixels that doesn't stop, and doesn't repair its mistakes--it just runs and runs, regardless of what's happening at the other end of the signal chain.

Second, HDMI runs fast--at 1080p, the rate is around 150 Megapixels/second. CAT5, by contrast, is rated at 100 megabits per second--and that's bits, not pixels.

Third, HDMI runs parallel, not serially. There are three color signals riding on three data pairs in an HDMI cable, with a clock circuit running on the fourth. These signals can't fall out of time with one another, or with the clock, without trouble--and the faster the bitrate, the shorter the bits are, and consequently the tighter the time window becomes for each bit to be registered.

The color channels are all carried on separate twisted pairs. It's totally possible that the blue channel would have issues while the other channels would be fine.

Edit: DVI and HDMI carry video data the same way, if that wasn't clear.
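
A toy illustration of why that failure mode stays confined to one colour: flip rare, random bits on just the "blue" plane of a frame and the red and green planes are untouched. This is a pure Python/numpy simulation of the idea, not a model of how a real TMDS receiver behaves.

```python
# Simulate rare bit errors on just one colour channel of a frame.
import numpy as np

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(1080, 1920, 3), dtype=np.uint8)   # R, G, B planes
original = frame.copy()

blue = frame[:, :, 2]                                # view of the blue plane only
hit = rng.random(blue.shape) < 1e-5                  # a handful of errored pixels
bits = rng.integers(0, 8, size=blue.shape)
blue[hit] ^= (1 << bits[hit]).astype(np.uint8)       # flip one random bit in each hit pixel

changed = (frame != original).any(axis=(0, 1))       # which planes actually changed?
print("R/G/B planes corrupted:", changed)            # -> [False False  True]
```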

GWBBQ
Jan 2, 2005


Opensourcepirate posted:

The color channels are all carried on separate twisted pairs. It's totally possible that the blue channel would have issues while the other channels would be fine.

Edit: DVI and HDMI carry video data the same way, if that wasn't clear.
The most common problem I've seen with cheap cables is bad solder joints. Pull apart a cheap HDMI cable and you'll usually find inconsistent soldering. Most of the time it's good enough to not be a problem, but cold joints aren't terribly rare and if they're plugged in and unplugged frequently, or just bent at the wrong angle, the connection can get flaky. I'd be interested to see the ends torn apart if anyone happens to have a bad cable that caused visible but minor problems.

One place I'm more than willing to spend a few extra bucks on cables is Neutrik connectors for cables that see frequent use. The strain reliefs they use beat any other I've seen hands down, and they're typically rated for twice as many insertion/removal cycles as the cheap ones. Part of my job is designing and building high-tech classroom consoles where the cables will typically be plugged in and unplugged several times a day in addition to taking a lot of abuse from being moved around, tripped over, etc. Neutrik rebrands their connectors as Rean and sells them for half the price with nickel plating instead of gold and powder coat, so it's not even that much more.

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.

Zorak of Michigan posted:

I can't understand how a cable noise issue could cause a specific pattern of video problem, such as dark blue turning black. As I understand it, if you had so much noise that the ADC couldn't reliably tell 1 from 0 anymore, you'd expect to see random flipped bits, which would in turn cause decoder errors, and thus create pixellation and splotches. It would be awfully weird if the noise was only affecting the handful of low-order 1 bits that made the difference between blue and black, but not the high order bits, or the low order blue and red bits, or whatever. Is there something in the wire protocol for DVI that would do that?
That is a matter of software. The video transfer standards are set up so that they degrade gracefully.
So if a block is too degraded to extract the data, it gets switched to black or blue, or to a lower-bandwidth parallel signal if such is supported.

Khablam
Mar 29, 2012

Opensourcepirate posted:

It's been suggested that I work for Blue Jeans Cable because all I do is bring them up in this thread. I keep bringing them up because I think their articles are super relevant to this kind of discussion, even if you would never buy their products - either that or I've just been brainwashed by them.

I know that a lot of people here have never seen an HDMI/DVI signal at the edge of failure, but when you do, you start to lose individual pixels or groups of pixels. It is true, of course, that either the picture is 100% digitally right or it's not, but it's not always clear at first that you're having issues with HDMI when you have weird intermittent problems with a few pixels.

Well, you're lambasted for it because you link stuff like this. Now, whilst there is some technical merit to the points they're making, they are nowhere near the "problem" being talked up in that piece. It's propaganda, designed to make designing good cables seem very difficult, so people are more likely to buy theirs, because they look like they know more than other people and oh god isn't this complicated, I'd better trust the experts.
HDMI complicates things much more than it needs to, in large part because it's a huge money-making racket built on selling people $100 cables when $10 cables will suit 99% of all applications. Elsewhere, the same USB extension cable I bought to carry 1Mbit USB 1 speeds kept working at 12Mbit USB 2.0 speeds, despite also being 3m longer than the recommended maximum length (that is to say, until I moved house and didn't need it anymore).

Nothing beats Cat5e network cable as an example of how easily one can transmit digital data, though. Originally designed to operate at 10BASE-T speeds, the same cables will operate at 100BASE-TX and 1000BASE-T. Yep, that's 100 times the data on the same cable. HDBaseT is a method of carrying video (10Gbit/s) over Cat5e cables at up to 100m lengths. Not bad, considering HDMI 1.4 is only 10.2Gbit/s for 15m - both therefore able to carry 4K images at 30fps, for a theoretical future where we actually have 3840×2160p content.

FWIW, Cat5e cables are rated to 100MHz, some 33% higher than the "severe" 75MHz they talk about poor little HDMI having to deal with (though HDMI can indeed exceed this; it's more a jab at their ridiculous language).
It should tell you something when a cable designed in the '90s doesn't need any physical changes to work at many, many times the speed it was designed for.

HDMI cables exist for two reasons:
- To make money off people
- HDCP
The ridiculous need for point-to-point encryption is why we have HDMI, and don't see better (cheaper) alternatives like HDBaseT or USB hooking up digital devices.

Khablam fucked around with this message at 10:33 on Sep 5, 2013


Olympic Mathlete
Feb 25, 2011

:h:

GWBBQ posted:

One place I'm more than willing to spend a few extra bucks on cables is Neutrik connectors for cables that see frequent use. The strain reliefs they use beat any other I've seen hands down, and they're typically rated for twice as many insertion/removal cycles as the cheap ones. Part of my job is designing and building high-tech classroom consoles where the cables will typically be plugged in and unplugged several times a day in addition to taking a lot of abuse from being moved around, tripped over, etc. Neutrik rebrands their connectors as Rean and sells them for half the price with nickel plating instead of gold and powder coat, so it's not even that much more.

I love Neutrik stuff; their entire line is great. I never knew Rean was a rebrand of theirs. I'm also a fan of Amphenol kit, particularly for stuff like instrument cable plugs, as they're just a joy to solder.
