USB Cable type AB recommendation

thatguy

"Human mind is a fickle thing. We can give reality any shape we like. The sun revolved around the earth a thousand years ago."

sorry for going OT, but here is what Russell says, and perhaps more could be implied than just the physics in contention...

"Before Copernicus, people thought the earth stood still and the heavens revolved around it once a day. Copernicus taught that 'really' the earth rotates once a day, and the daily revolution of the stars and the sun is only 'apparent'.
.......Astronomy is easier if we take the sun as fixed than if we take the earth, just as accounts are easier in decimal coinage. But to say more for Copernicus is to assume absolute motion which is fiction. All motion is relative and it is a mere convention to take one body as at rest. All such conventions are equally legitimate, though not all are equally convenient."
 
Hi folks,

Some time has passed since I last wrote in this thread. Since I still see a lot of posts flowing, I decided to join in again.

As I already said before, let me clearly say that I am indifferent to the subject of buying expensive USB cables, simply because I do not have enough knowledge and/or experience in this subject. For that matter, I have not bought very expensive analogue cables either. But I appreciate the use of good quality analogue cables (not necessarily super expensive). I have a bit of experience in this field and also know a bit of the physics involved in analogue cables.

Despite several pages of posting (and I have not followed it all, I must confess), I am yet to get an answer to some of the questions I asked in my last post, for example, the one involving a NAT and a MZ. I also did not get an answer to my query on 'late collisions' mentioned in the Cisco web pages. Maybe these questions are so trivial or such absolute garbage that nobody bothered to answer, but somebody should then say so.

I am wondering what could be the switches that 'thatguy' and his colleagues are making (designing?) where these late collisions are only of 'academic interest'. Perhaps these are Infiniband switches or Cray Seastar 2+ (or the new Gemini) switches that massively parallel architectures use? BTW, while I do not know the technical sides of these switches, I use them (especially the Cray switches in their XT5) on a daily basis for my research.

I was quite precise in my first post of this thread, where I said something like: whenever there is broadcast, there is in principle a possibility of collisions. Since I am not a professional in this field, I want to know if the current implementation of Ethernet completely does away with broadcasts, even for making the initial connection. If that is true, I think the protocol should be called something other than Ethernet.

On the more recent discussions of this thread, I have a simple question.

I presume that an electrical signal of some kind is used in digital communication, and that the carriers are photons in a fibre medium and electrons in the conduction band of a metal. Now, can somebody tell me what exactly this signal is? Knowing that, I or my more experienced physicist colleagues around the world (condensed matter physics is not my field) could say something about the role of the transmission medium from a fundamental point of view. Usually the role of the medium is scattering of the signal, which generally changes the form of the wave-packets, even if by a tiny amount. I acknowledge that this problem can also be looked at phenomenologically, with data from experiments, but I'd like to understand it from a fundamental physics point of view.

I agree digital data transmission is more robust. However, I have tons of everyday experience to infer that it is also not 100% precise. Since the days in '86, when I used the Pittsburgh Supercomputing Center remotely, or in '93-94, when I used to transfer a few hundred MBs from Oak Ridge National Lab over several hundred miles, to the present day, when I want to back up a data file of a few GB over the net from the Cray to another machine in our LAN, at times the integrity of the data is lost. If this did not happen, there would be no use for programs like 'checksum'. Even when somebody copies a file from the HDD to a CD/DVD at any speed, there is a check for faithful data transfer. In the last 30 years I have either worked at or had close connections with some of the best academic supercomputer installations around the world, and still, in this day and age, with computers as advanced as the Cray or the IBM Blue Gene, or with advanced clusters that make use of the hyper-transport facility of the processors themselves, MPI errors related to faulty communication occur quite frequently.

Comparing the digital access of a webpage a thousand miles away to an analogue transmission is, I am sorry to say, irrelevant and slightly irritating, I confess honestly. There are different things for different purposes. I am not going to measure an airport runway with a metre-scale.

I am sorry if I am asking too many questions. Some of it is surely due to my ignorance and I have no shame in admitting it. But I am reluctant to take things too far and for granted too easily or be dismissive when I have not understood something from the fundamental point of view. Because of my profession, I am trained to ask questions :).

Regards.
 
I belong to the school of thought that any review is only a guideline. Ultimately the buyer of such equipment will have to experience it for himself or herself and see if he or she can relate to the experience of the reviewer. Another reason for reviews is that there are a gazillion products out there, and it is impossible for either reviewers or buyers to audition them all; hence the use of magazines, websites, fora etc. to try and categorize these products.
Cheers,
Sid

Sidvee

I agree reviews can be helpful, as long as they are treated as guidelines and not as gospel. But one needs to have the capability to distinguish between a good review and a bad review. Between a 'motivated' review and a relatively impartial review. Most reviews in audio magazines seem to be 'sponsored' reviews. But a reviewer who wants to maintain his credibility in the long run, needs to maintain a 'balanced' perspective about the brands he is reviewing.

I have bought both my pre/power amplifier and source without an audition, based to a large extent upon particular reviews which I felt had gone to the heart of the matter. I have found the actual performance of the Brystons and the Esoteric to be very close to the way it was described in those reviews. So in this case the reviews gave me good advice and helped me to acquire components which I can live with happily ever after :)
 
Sidvee

Most reviews in audio magazines seem to be 'sponsored' reviews.

Ajay124, to some extent this is true, but more importantly it is the case with everything reviewed in print media that depends on advertising as a form of revenue. In fact I find the car magazines quite hilarious, recommending willy-nilly the brands advertising in them, and buying a car is a significantly higher investment than audio (exceptions aside). I see the same in home improvement products, reviewed favourably in interior magazines while the ads for that product appear on the same or the next page. I see the same in wines, restaurants, movies, shoes, watches - you name it, and it has a great review. How many times do you see a bad review?

So as I said before, a review has to be treated strictly as a guideline, and buyer beware. Of course there are good companies who really have good products, and one can benefit from that as well by using reviews. So I find it amazing when one hears adverse stories of snake oil in Hi-Fi and audio (as well as deep emotional statements, arguments, disagreements etc.) only there, when in fact it exists in every product that we buy with disposable income (and that has been reviewed somewhere by someone), and no one appears to be bothered by that - least of all the audiophool bashers.
Cheers,
Sid
 
I managed to get through my education with a minimum of science and maths too. Disgracefully so: I was thrown out of the maths class as a hopeless case!

(I was talking to a big man in cryptology a few weeks ago. He spoke with such love about maths and numbers, and such sadness for the many of us who suffer from the way it is taught. Wish I'd had a teacher like him.)

(Errr... did I tell that story before? Apologies if I did. I keep on getting bad checksum and bad magic number errors from my brain.)

Asit, the conversation touching on ethernet technology and networking was interesting. I'd have questioned more, but I felt guilty about going off topic.

At a guess, I'd say that CD/DVD discs and drives are probably the least reliable links in the digital chains that most of us use, and that remains the case however powerful the computer.

Still following this conversation with interest, as well as continuing with my digital projects --- but I'm a bit off being a nerd for a few days, as I had to fix a broken partition table (Oh yeah... digital does go wrong sometimes, and often the most important bits!) yesterday, and I've already mentioned just how well I [don't] get on with numbers. :sad:
 
Asit, Thad, HiFiVision and others - the fundamental question here is different. The question is whether digital data moves from Point A to Point B without any loss (of any kind - drop, collisions, etc).

I checked with a few people in my office and they have not even heard of the concept of data loss. There is, of course, a huge age difference between them and me. I wonder if colleges have stopped talking about data losses in digital transmission systems completely as a waste of precious semester time.

I have seen HDMI cables dropping data completely, and it is an accepted fact that HDMI cables beyond 30 feet are problematic. In regular digital transmission, repeaters and signal amplifiers are common for long runs, re-emphasising the fact that signal strength does become weak and needs to be boosted. This is directly related to the quality of the transmission medium used and, in this case, we are talking about cables. Of course, the corollary to that is whether short lengths of 1 to 2 metres make any difference at all.

The second point is the difference between moving digital data between two entities that can talk to each other, and the method of transmission itself. In synchronous transmission, the data is sent along one line, let us say a wire, called the signal bus. The control information, basically a clock that says when each bit starts and ends, is sent along another bus called the control bus. The receiver thus knows exactly how to identify the data. Once a connection is established, the transmitter sends out a signal, and the receiver sends back data about that transmission and what it received. There is also error correction information being exchanged constantly. On top of this is the checksum that allows the receiver to verify each packet received.
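The per-packet checksum idea can be sketched as follows. This is a toy illustration, not any specific protocol's checksum; the function names and the 16-bit sum are my own inventions for the example:

```python
def checksum16(payload: bytes) -> int:
    """Toy 16-bit checksum: sum of all bytes, modulo 2**16."""
    return sum(payload) % 65536

def make_packet(payload: bytes) -> tuple[bytes, int]:
    """Transmitter side: send the payload along with its checksum."""
    return payload, checksum16(payload)

def verify_packet(payload: bytes, received_checksum: int) -> bool:
    """Receiver side: recompute the checksum and compare."""
    return checksum16(payload) == received_checksum

# A clean transfer verifies; a single flipped bit is caught.
data, chk = make_packet(b"hello, world")
assert verify_packet(data, chk)                 # intact payload
corrupted = bytes([data[0] ^ 0x01]) + data[1:]  # flip one bit
assert not verify_packet(corrupted, chk)        # corruption detected
```

A simple sum like this misses some error patterns (for example, two bytes swapping places), which is why real links use CRCs instead; the detect-and-reject principle is the same.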

Digital data for music uses an asynchronous system where there is no control bus. In this, the transmitter simply transmits. The receiver uses its own internal logic to identify when a bit starts and ends. That is why DAC manufacturers scream about the importance of the clock, word clock, etc. Logically, a minuscule fluctuation in the data flow can force the receiving station to re-clock the incoming data. Since no control information is sent, nor is there any negotiation, the receiving station is almost blind.

I will be checking with a few professors at IIT/M. Asit, if you could talk to a few of your colleagues at your institute, we could correlate and present the correct scientific answers to these points. Beyond that, what the readers want to conclude is their own choice.

I just want to settle whether there could be data losses or drops in digital transmission.

Cheers
 
I just want to settle whether there could be data losses or drops in digital transmission.

Digital data is encoded as two voltage levels - x volts is considered a 1, y volts is considered a 0. x volts has to be of a certain minimum threshold to be interpreted as a 1. Ditto for y. Since x and y are (analog) electrical signals, they follow the laws of physics and the voltage and current drops over distance, as does the phase which changes over the transmission line.
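The thresholding described above can be illustrated with made-up numbers. The 3.0 V / 0.5 V levels, the midpoint threshold, and the example voltages below are purely illustrative, not taken from any real standard:

```python
# Toy receiver: decide 1 or 0 by comparing the received voltage
# against a midpoint threshold. All numbers are invented for
# illustration only.
HIGH, LOW = 3.0, 0.5          # nominal volts for a 1 and a 0
THRESHOLD = (HIGH + LOW) / 2  # 1.75 V decision point

def decide(voltage: float) -> int:
    """Interpret a received voltage as a bit."""
    return 1 if voltage >= THRESHOLD else 0

# An attenuated 1 that is still above threshold is read correctly.
assert decide(2.0) == 1
# Attenuated below threshold, it is misread as 0: a bit error.
assert decide(1.5) == 0
```

This is exactly why a too-weak signal stops being "discernible as a 1 or 0": the voltage drifts across the decision threshold and the receiver starts guessing wrong.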

For example, over single-mode fiber transmission lines, the typical repeater distance is 130 km. Meaning you need to amplify the signal by the time it has travelled this distance, or else it will no longer be discernible as a 1 or 0.
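A back-of-the-envelope link budget shows why a repeater is needed at roughly that distance. The 0.2 dB/km fiber loss and the power figures below are typical textbook values that I am assuming for illustration, not numbers from the post:

```python
# Rough optical link budget for single-mode fiber (assumed figures).
launch_power_dbm = 0.0            # assumed transmitter launch power
attenuation_db_per_km = 0.2       # typical single-mode loss at 1550 nm
receiver_sensitivity_dbm = -28.0  # assumed receiver limit

def received_power(distance_km: float) -> float:
    """Power arriving at the far end after fiber attenuation."""
    return launch_power_dbm - attenuation_db_per_km * distance_km

# At 130 km the signal has lost about 26 dB and is nearing the
# receiver's limit, hence repeaters at roughly that spacing.
print(f"at 130 km: {received_power(130):.1f} dBm")
print(f"at 150 km: {received_power(150):.1f} dBm (below sensitivity)")
```

With these assumed figures the margin runs out somewhere between 130 and 150 km, which is consistent with the repeater spacing quoted above.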

Over digital microwave links, repeater distance is subject to clutter (the presence of other radio signals that affect your own signal adversely), environment and terrain. I have seen a 48 km hop between two stations using 2 GHz and 8 GHz microwave.

In the older analog days too, when large coaxial cables were run across the length and breadth of the country, losses were very problematic. Another huge problem was frequency drift; one had to keep adjusting them. I don't recall how far apart the repeaters were. In my first job, I remember the Madras-Bombay coax link was very temperamental and used to occupy us for full shifts of rectification. Thankfully that was the last long-distance analog link that we maintained. On the other hand, the India-UAE analog undersea cable, 1,964 km long, never ever gave problems - except one time when a Ukrainian ship caught in a heavy storm off the coast of Mumbai snagged the undersea cable with its anchor and broke it (despite clear maritime maps that say a ship cannot drop anchor at certain points in the sea). Marconi UK did make good telecom equipment and links.

Fiber is really a blessing, as such problems don't exist. It's fairly digital: either the link is working, or it's cut and not working, or the losses are too high (when repeatedly spliced) for it to work properly.

Long story short: yes, there is loss in any form of transmission, analog or digital.
 
Since x and y are (analog) electrical signals, they follow the laws of physics and the voltage and current drops over distance, as does the phase which changes over the transmission line... Long story short: yes, there is loss in any form of transmission, analog or digital.

Fantastic. And to add to that, the medium and method of transmission will thus make a difference, right?

Cheers
 
Fantastic. And to add to that, the medium and method of transmission will thus make a difference, right?

Cheers

Yes ... Silver makes squarewave(s) shine and gold makes the squarewave(s) glow. :ohyeah: Wonder if anyone noticed though ... ;)

Honestly, if you want my unscientific opinion, I think the issue here is more one of jitter than of whether the digital signal exists in its original form at the receiving end. Remember that the DAC needs to read 2 × 16 bits (for stereo) at every sampling instant, and that Red Book sampling happens 44,100 times a second. Can a jitter of 30 microseconds result in one bit of the 16-bit signal being read incorrectly? Given that at 44,100 samples per second one sample takes 22.6 microseconds, can there be a single-bit read error once every 500,000 microseconds (1/2 second), changing the signed 16-bit value anywhere between −32768 and +32767 and ruining the harmonics and timbre of the resulting sound to the human ear-brain network? OK, how about once every second? Is a single-bit read error not possible (somewhat consistently) out of 88,200 (stereo) 16-bit samples per second due to jitter? I am sure a mathematical probability-of-error formula exists somewhere, taking an average copper wire and average jitter readings.

The square wave may still remain a square wave at the receiving end, given the great scientific advances in digital transmission, but what if the reader (DAC) is not interpreting (sampling) it correctly due to jitter? And we should also remember that every DAC has its own intrinsic jitter, adding to the jitter acquired during transmission. The total jitter in a system, which will be transport jitter + transmission jitter + intrinsic jitter, will add up to create artefacts that affect the listening experience.
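The arithmetic in the paragraph above can be checked directly. This only works out the time budget per sample and per bit; it says nothing by itself about whether any given jitter figure is audible:

```python
# Time budget for Red Book stereo audio, as discussed above.
SAMPLE_RATE = 44_100   # samples per second
BITS_PER_SAMPLE = 16
CHANNELS = 2

sample_period_us = 1e6 / SAMPLE_RATE  # time between samples
bits_per_second = SAMPLE_RATE * BITS_PER_SAMPLE * CHANNELS
bit_time_us = 1e6 / bits_per_second   # time per payload bit

print(f"sample period: {sample_period_us:.2f} us")   # ~22.68 us
print(f"raw bit rate : {bits_per_second} bit/s")     # 1,411,200
print(f"time per bit : {bit_time_us:.3f} us")        # ~0.709 us
```

So the 22.6 µs figure in the post is the sample period; the raw payload bit rate is about 1.41 Mbit/s, giving roughly 0.71 µs per bit before any framing overhead is added.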

A computer system may be able to employ CRC checksum error recovery and still lock on the original signal with a delay of another 50 microseconds (for example) - but the computer is not a psycho-acoustical system interpreting the signal and evaluating the end result for its aesthetic beauty or reacting with emotions.

I have very distinctly heard an optical s/pdif cable sound harsh and grainy with the Beresford DAC and the same DAC sound smooth with a co-axial.



--G0bble
 
Asit, Thad, HiFiVision and others - the fundamental question here is different. The question is whether digital data moves from Point A to Point B without any loss (of any kind - drop, collisions, etc).

I checked with a few people in my office and they have not even heard of the concept of data loss. There is, of course, a huge age difference between them and me. I wonder if colleges have stopped talking about data losses in digital transmission systems completely as a waste of precious semester time.
Maybe --- because it probably is a waste of time. And maybe because they didn't study computer science as a primary subject. Old technology made a certain familiarity with baud rates, parity, stop bits, CRC (it stands for Cyclic Redundancy Check, but hey, I never had any clue how it worked!) and so on necessary: mostly, these days, it is not. Nor do you need (and you never did) to know about collisions on [unswitched] Ethernet networks, because the system looks after them. (And you ignored my reply: these collisions do not cause data loss, a point enlarged upon by another poster who pointed out that congestion screws up LANs, not collisions.)

It is a question of risk assessment. It is sensible to take care on the roads because a tiny proportion of people crossing roads get run down and killed. It is not so sensible to obsess about USB (and many other kinds of) cables because an occasional bit gets lost or mis-timed unless the results are obvious.

it is an accepted fact that HDMI cables beyond 30 feet are problematic.
Then don't use them over 30 ft. There is a maximum defined length for most kinds of cable: for some it may be measured in thousands of km, or even tens of thousands; for others it may be in feet. Most of the cables that we use for PCs were probably designed to connect to stuff on the same desk; even cat5/6 Ethernet has a specified maximum run of 100 m. USB has a maximum of around 5 m, it seems (down-to-earth USB information). A quick look at the Wikipedia page for HDMI leaves me with the impression, "depends on the cable". Attenuation happens. Your maximum possible bandwidth from your ISP is determined by your distance from the exchange, because the signal loses strength. There is a mathematical relationship between the length of copper cable and attenuation, but it is made worse by the number of cable joins.

Work within the limits, and forget about data loss! Be conservative: halve the published limits, and work within that!

Then we can return to what should be the baseline for digital cables: if a cable meets the specs, and has no faults, it will work. If it doesn't, then it doesn't meet the specs, has intrinsic faults, or has been damaged in use. Throw it away, buy another one, but do not think that you need to spend ten or a hundred times as much (unless you bought really cheap junk in the first place) as a fix. All you need is a cable that works.

(I had another bad cat-5 lead only yesterday. Chucked it. Didn't order a new one from Monster, just picked another one out of the bag that did work.)

I acknowledge that you are on the quest for perfection, and that is fair enough: it is a great quest ... but the stuff happening at either end of a digital link, given the above-mentioned criteria are met, has got to be infinitely more important than the link itself. As you know, your Windows operating system does not even deliver the same bytes it reads from a CD to your sound card, hence you load drivers that avoid that interference.
the medium and method of transmission will thus make a difference, right?
Yep. Did you use the internet in pre-www, pre-adsl days, over dial-up connection? When an interesting Usenet post disintegrated suddenly into a stream of gibberish, sometimes followed by that famous +++NO CARRIER+++ message? Now, as someone mentioned previously, we can stream not only text, but pictures, audio, video, across continents and oceans.

But yes, I agree that it is still possible for a poor-quality toslink cable to introduce substantial jitter. Imagine the differences between cloudy plastic, clear plastic, and the sort of materials (glass?) that carry digital signals across oceans. Glass is going to cost more than plastic; the protection needed for glass is going to be greater than the protection needed for plastic --- but all should be strictly within computer-market pricing, and there is no need for the fear-mongering-among-audiophiles scum to get involved. Really, I regard them as on a par with e-mail scammers!

G0bble said:
...The total jitter in a system which will be transport jitter + transmission jitter + intrinsic jitter will add up ...
You'd have to include that the data on your CD might already be jittered by the ADC used to prepare the master digital source. Except that, apparently, jitter is not additive, and does not make any difference in the digital domain. It only matters at the DAC. Anyway, I've quoted this article before...
A lot of fuss is still made about jitter, but while it is potentially a serious issue it's rarely a practical problem these days simply because equipment designers and chip manufacturers have found very effective ways of both preventing it and dealing with it.

...

Another source of jitter (the strongest source these days) is cable-induced....

...

Fortunately, most modern D-As incorporate sophisticated jitter-removal systems to provide isolation between the inherently jittery incoming clocks embedded in the digital signal, and the converter stage's own reconstruction clock.

Sound on Sound
These days and modern were back in 2008. Have things got worse since then? Or have the manufacturers forgotten their effective ways of dealing with it? I don't think so: I think that certain unscrupulous companies sold us paranoia and misinformation.

So, despite my aesthetic liking for smoky-clear-over-silver USB cables, I continue to assert that that cheap grey thing (as long as it really is a USB 2.0 cable) can be expected to work pretty well with your DAC or external interface. I accept that this is just my view of what is reasonable.
 
You'd have to include that the data on your CD might already be jittered by the ADC used to prepare the master digital source. Except that, apparently, jitter is not additive, and does not make any difference in the digital domain. It only matters at the DAC. Anyway, I've quoted this article before...

So a random 50 ns delay in the transport, combined with a random 20 msec delay by the transmitter, and a further 40 picosec delay due to the intrinsic jitter variation in the clock receiver on the DAC side, will not all add up for that sample? I can't digest that. <vomits> :ohyeah:


I wouldn't hold that article as the sacrosanct, know-it-all final word on jitter and digital transmission for audio. If it were a perfect world of perfect clocks in the modern post-2008 era, there wouldn't be so many bad-sounding products out there, from DACs to transports, that incidentally also fail miserably at communicating the music compared to vinyl.

--G0bble
 
Before people start getting the wrong ideas about my stand on this, please re-read my post on page 3. For convenience I am repeating it here.

VenkatCR said:
This is an area where you really cannot conclude one way or the other, as of now.

I use external drives to store all my Blu-ray movies. At home these are connected to my HTPC using the ordinary USB cables that came with the drives. Till today, not even once have I seen any sputtering or anything in any movie. Once, when I tried the same trick in another place, the screen colours were screwed up. I had, just the previous day, played the same movie from the same HDD at home without any issues. A few repeated attempts did not change the situation. On a lark, I changed the USB cable, and the issues disappeared. When I compared the two cables, the only difference I could see was an EMI-suppressing ferrite core on both ends of the cable that worked. Since then I have ensured that all my USB cables have ferrite cores on both ends, and I have not seen any issues.

Though I have an audio PC as mainstay, I have never used a USB cable for audio.

Before blowing up 1000s of rupees on a USB cable, it may be worthwhile to try this. Get a decent USB cable, fix an EMI/RFI suppressor at both ends (for all of 260/-) and see what happens. I have a feeling that may be all that is needed. RFI/EMI suppressors are available from MX (MDR Electronics / MX Electronics).

By the way, I have tried Wireworld and Audioquest USB cables for video, and I have not seen any appreciable difference.
 
Before people start getting the wrong ideas about my stand on this, please re-read my post on page 3. For convenience I am repeating it here.

Yep, Wireworld has some proprietary winding method to reduce the effects of EMI on the signal. Maybe that's a very expensive way of adding another type of EMI/RF suppressor. Or an excuse to charge a bomb for what should be an average-priced cable.

--G0bble
 
Venkat, yes, thanks for the reminder of what you originally said :)

Neither of us actually uses USB for audio! Threads like this do make me wish I could buy stuff just to play with. Actually, I suppose I do, but only one in each category at a time, and just now I'm playing with (or stuck with, on a bad day!) Firewire.

Gobble: I hope the stomach is feeling better now ;). Try this on Jitter only mattering when going from D to A. Let me know what you think.

In the digital world, so long as the bit is correctly read, the timing does not matter. If it is not correctly read, then you have data corruption rather than jitter.
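That distinction (correct bit values versus their timing) can be made concrete with a toy model: give every received bit a value and an irregular arrival time, and note that a digital-to-digital copy keeps only the values. All the names and numbers here are invented for illustration:

```python
import random

random.seed(42)  # reproducible illustration

NOMINAL_PERIOD_NS = 709  # illustrative bit period, per the thread's rates
transmitted = [random.randint(0, 1) for _ in range(1000)]

# Model reception: every bit arrives with timing wobble (jitter),
# but each one is still sampled inside its bit cell, so the value
# read is correct even though the arrival instant varies.
received = [(bit, i * NOMINAL_PERIOD_NS + random.uniform(-30, 30))
            for i, bit in enumerate(transmitted)]

# A digital-to-digital copy keeps the values and discards the timing.
stored = [value for value, _arrival in received]
assert stored == transmitted  # bit-perfect despite the wobble
```

If the wobble ever grew large enough to push a sampling instant outside its bit cell, a value would flip, and then, as the post says, you would have data corruption rather than jitter.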
 
Sorry for breaking my promise to stay away, but I won't offer any opinions this time :)

I just want to show people that bit errors on cables are not as frequent as many on the forum would like to believe. I had a 10 meter Ethernet cable that I bought from SP road about five years ago and decided to put it under test. IIRC, I paid Rs 100 and some change for it.

Topology:

Laptop --> Short Ethernet Cable --> D-Link Gig Switch --> Cable under test --> Desktop (Linux)

I have a script running on the desktop which pulls in a 3 GB file from the laptop using FTP. It does this in a continuous loop. After every copy, it computes the md5 hash to check that the transfer was error free. However, I am more interested in the error stats from the Ethernet controller (Realtek RTL8111/8168B), where bit corruptions will be detected by the CRC check.
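The verification step of that script can be sketched like this. The function names are mine, and a local file copy stands in for the FTP pull the poster actually used:

```python
import hashlib
import shutil

def md5_of(path: str) -> str:
    """Stream the file through MD5 in chunks to keep memory use flat."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def transfer_and_verify(src: str, dst: str) -> bool:
    """One loop iteration: copy the file, then compare checksums."""
    shutil.copyfile(src, dst)  # stand-in for the FTP transfer
    return md5_of(src) == md5_of(dst)
```

Each iteration reports True for a bit-perfect transfer; the NIC's rx_errors counter, read separately via ethtool, catches frames the link-layer CRC rejected before they ever reached the file.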

The stats so far (this controller does not seem to have a separate counter for CRC errors, which would get incremented on a bit corruption; however, it has rx_errors, which will catch all kinds of frame errors):


root@andromeda:~# ethtool -S eth0
NIC statistics:
tx_packets: 21520329
rx_packets: 94576917
tx_errors: 0
rx_errors: 0 <<<<<<<<<<<<<<<<<<
rx_missed: 0
align_errors: 0
tx_single_collisions: 0
tx_multi_collisions: 0
unicast: 94576755
broadcast: 127
multicast: 35
tx_aborted: 0
tx_underrun: 0


root@andromeda:~# ifconfig eth0
eth0 Link encap:Ethernet HWaddr e0:69:95:9a:d7:d5
inet addr:10.0.0.2 Bcast:10.0.0.255 Mask:255.255.255.0
inet6 addr: fe80::e269:95ff:fe9a:d7d5/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:94691839 errors:0 dropped:0 overruns:0 frame:0
TX packets:21545593 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:129298793029 (129.2 GB) TX bytes:1422994282 (1.4 GB)
Interrupt:28 Base address:0x2000

root@andromeda:~#


The md5sum too has been perfect so far (~25 iterations). I am going to let it run overnight. I would love to see the error counters show some signs of life, but I am not very hopeful :)

If any one wants to see any other stats, feel free to ping me.

The cable under test is the one lying on top of the desktop on the right.
 
I wish I still had some thin ethernet cards, connectors, terminators and cable :D

Back to 10 Mbps, but I suspect the accuracy would be pretty good.

But...

We really should have stuck to USB cables. It is just not valid, in any way, to extend the conversation to each and every form of digital communication, reach a conclusion, and come back and say... that's USB sorted. But that is what we have all tried to do :o
 
Update after about 6 hours and 1.2 TB of data (305 iterations):

Not even a single errored frame. All 305 md5sum results identical (all data transfers 100% bit perfect).


root@andromeda:~# ethtool -S eth0
NIC statistics:
tx_packets: 205259946
rx_packets: 908083702
tx_errors: 0
rx_errors: 0 <<<<<<<<<<<<<<<<<<<<<<
rx_missed: 0
align_errors: 0
tx_single_collisions: 0
tx_multi_collisions: 0
unicast: 908083021
broadcast: 614
multicast: 67
tx_aborted: 0
tx_underrun: 0
root@andromeda:~# ifconfig eth0
eth0 Link encap:Ethernet HWaddr e0:69:95:9a:d7:d5
inet addr:10.0.0.2 Bcast:10.0.0.255 Mask:255.255.255.0
inet6 addr: fe80::e269:95ff:fe9a:d7d5/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:908286448 errors:0 dropped:0 overruns:0 frame:0
TX packets:205305765 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1240483499047 (1.2 TB) TX bytes:13551501114 (13.5 GB)
Interrupt:28 Base address:0x2000

root@andromeda:~#


I am shutting down my machines.
 
Update after about 6 hours and 1.2 TB of data (305 iterations):

Not even a single errored frame. All 305 md5sum results identical (all data transfers 100% bit perfect).

Firstly, this is packet data that can be buffered and reclocked with some delay. Try to make it a non-packet bitstream that is not cached/buffered anywhere from the HDD to the receiving application on the destination computer, and then prove to me that all the bits were received exactly within 20 microseconds of each other without any variation.

Edit: To quote the article on jitter Thad posted:
The only effect of timebase distortion is in the listening; as far as it can be proved, it has no effect on the dubbing of tapes or any digital to digital transfer (as long as the jitter is low enough to permit the data to be read).

http://www.digido.com/jitter.html

Likewise, your digital-to-digital transfer proves nothing about errors in the time domain and their audible effects on the human psycho-acoustical system.

@Thad, what does your article say that "proves" your point? ;)

I repeat: jitter does not affect D-D dubs, it only affects the D to A converter in the listening chain

Edit: Now where is the evidence that this article, written in 1993, does not apply in 2011? http://www.stereophile.com/reference/1093jitter/index.html

Thanks
--G0bble
:)
 
Firstly, this is packet data that can be buffered and reclocked with some delay. Try to make it a non-packet bitstream that is not cached/buffered anywhere from the HDD to the receiving application on the destination computer, and then prove to me that all the bits were received exactly within 20 microseconds of each other without any variation.

I thought this thread was about cables, not clocks. It wasn't about Ethernet either, but some folks were insisting that packets get corrupted in cables carrying digital data. Although not a proof, I was trying to give them something to think over.

Second, if you are using computers, your data will be buffered at multiple points between the HDD and the receiving entity. All HDDs have internal buffers. SCSI is a packet-oriented protocol. File systems always request N blocks of data from the storage device and buffer them before giving them to the application. All routers and switches have buffers (store-and-forward switching and QoS) if you are taking data over the network. So NO, you just can't create a stream from the HDD to the receiver. That is why entities receiving voice/video streams (just one step before the modem or DAC or the video equivalent) implement de-jittering mechanisms (I have worked with jitter buffers, but not in audio systems).
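A de-jittering (playout) buffer of the kind mentioned above can be sketched minimally. This is a generic illustration of the idea, not the implementation the poster worked on; the class name and depth parameter are mine:

```python
from collections import deque

class JitterBuffer:
    """Minimal playout buffer: absorb irregular packet arrivals,
    then release packets at a fixed output rate once a safety
    depth has been reached."""

    def __init__(self, depth: int):
        self.depth = depth    # packets to hold before playout starts
        self.queue = deque()
        self.playing = False

    def push(self, packet):
        """Called whenever a packet arrives, however irregularly."""
        self.queue.append(packet)
        if len(self.queue) >= self.depth:
            self.playing = True  # enough margin to start playout

    def pop(self):
        """Called once per fixed output tick."""
        if self.playing and self.queue:
            return self.queue.popleft()
        return None  # underrun or still filling: nothing to play yet

# Packets arrive in a burst, but playout only begins once the
# buffer holds `depth` packets, smoothing out the irregularity.
jb = JitterBuffer(depth=3)
jb.push("p1"); jb.push("p2")
assert jb.pop() is None   # still filling
jb.push("p3")
assert jb.pop() == "p1"   # playout has begun
```

The depth is the trade-off: a deeper buffer rides out larger arrival wobble but adds latency, which is why voice systems keep it as shallow as they can get away with.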

If you want to bash the computer clock, please go ahead. The beauty of a digital system is that it will work with a cheap clock (the results will be no different if you used a 1 ppm clock instead of 50 ppm), just as it does with a cheap cable. A few meters of cable, however, will have *no* effect on jitter. I would like to see the maths if someone believes otherwise.
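On that last point, a short back-of-the-envelope helps: a cable adds a fixed propagation delay set by its length, which shifts every signal edge by the same amount and therefore contributes nothing to edge-to-edge timing variation. The ~0.7c velocity factor below is a typical value for copper cable, assumed here for illustration:

```python
C = 299_792_458        # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.7  # typical for copper cable (assumed)

def propagation_delay_ns(length_m: float) -> float:
    """Constant delay a cable of this length adds to every edge."""
    return length_m / (C * VELOCITY_FACTOR) * 1e9

# Two metres of cable: roughly 9.5 ns of *fixed* delay. Since it
# is identical for every transition, it shifts the whole signal
# in time rather than adding jitter.
print(f"2 m cable delay: {propagation_delay_ns(2.0):.1f} ns")
```

Only length-varying or signal-dependent effects could turn a cable into a jitter source, which is not something a few metres of in-spec cable does to any meaningful degree.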
 