Let's talk digital!

I am also eager to hear about the memory management issues Thad faced and resolved, with the permission of the OP of course :o I know there is a separate thread Thad created, but if this can be shared here without derailing the thread, it would make this discussion much more valuable and complete IMHO.
Honestly, I think it belongs elsewhere: it is to do with Linux memory management and CPU frequency control. Whoops, I just mentioned it! But discussion here would not serve the purpose of the thread.

It's just that I have two main points about PC Audio....

1. It is usually effective, good and easy. Plug'n'play! :ohyeah:

2. Sometimes it isn't. :cool:

The vast majority of people will experience "1" --- but I am far from the only person to have had some bad experiences. The DPC Latency thing can render a particular hardware combination almost useless for audio.

But we do not sit and contemplate the world's serious diseases for fun, unless we have good reason to: see "1" :D
 
Sorry for jumping in before the chapters begin, but since latency is being discussed....

I suppose there are 3 factors that could affect the performance of PC-based playback (apart from the acoustic noise of the fan and HDD seeking).

1. Jitter
2. EMI noise
3. Latency

If you want to check the latency of your PC, you can download and run this (no installation needed)
http://www.thesycon.de/dpclat/dpclat.exe

[dpclat1.jpg: DPC Latency Checker screenshot]


Dropouts may occur only if the bars go red (mostly caused by bad drivers or demanding applications). Right now, on my laptop, the peak latency is showing less than 300 us, which cannot be of any concern in streaming audio. Beyond that, ASIO takes care of latency issues very efficiently by reducing the buffer size. The same Thesycon site has a good explanation of why dropouts occur.
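
As a rough illustration of why a few hundred microseconds is nothing to worry about, here is a tiny back-of-the-envelope sketch in Python (all numbers are just assumptions for the example): it compares a peak DPC latency reading against the time one small ASIO buffer gives the driver to refill it.

SAMPLE_RATE_HZ = 44_100      # assumed playback sample rate
BUFFER_FRAMES = 256          # assumed small ASIO buffer
DPC_LATENCY_US = 300         # example peak value as reported by dpclat.exe

# Time one buffer of audio lasts, i.e. the deadline for refilling it.
buffer_duration_us = BUFFER_FRAMES / SAMPLE_RATE_HZ * 1_000_000  # ~5805 us

print(f"One buffer lasts : {buffer_duration_us:.0f} us")
print(f"Peak DPC latency : {DPC_LATENCY_US} us")

if DPC_LATENCY_US < buffer_duration_us:
    print("Plenty of headroom: a stalled DPC cannot outlast the buffer.")
else:
    print("Dropout risk: a stalled DPC can eat the whole buffer.")

Even with a small 256-frame buffer there is plenty of headroom; it is when the peak DPC latency climbs into the milliseconds that the red bars, and the dropouts, appear.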
 
If you want to check the latency of your PC...

Actually, no. Or it depends on what you mean by "latency." Deferred Procedure Call Latency is a specific thing (problem) and it does not relate to the usual meaning of latency in audio.

Latency in audio is delay caused by the amount of time it takes data/signal to get from A to B within your system. How long is it before the signal being received by your sound card is output to your speakers? That's latency, and it matters very much in some studio/recording scenarios.
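
To put rough numbers on that (purely illustrative settings, nothing more):

SAMPLE_RATE_HZ = 48_000   # assumed interface sample rate
BUFFER_FRAMES = 256       # assumed ASIO buffer size

one_way_ms = BUFFER_FRAMES / SAMPLE_RATE_HZ * 1000   # one buffer of delay, in or out
round_trip_ms = 2 * one_way_ms                       # in and back out, ignoring converter delay

print(f"One-way buffer delay: {one_way_ms:.1f} ms")    # ~5.3 ms
print(f"Approx. round trip  : {round_trip_ms:.1f} ms") # ~10.7 ms

Keeping that round trip down to around ten milliseconds is why a musician monitoring themselves through the PC wants small buffers; for simply playing back a file, none of this matters.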

Contrary view (Ranjeet and others will show how to reduce latency): it doesn't matter for playback. The water going in one end of the pipe comes out at the other. It does not get degraded by the amount of time it spends in the pipe.

But it's tweakable. And it's a free world :)
 
Yep, DPC latency needs to be low only for recording/monitoring. For playback, even if it is 1 second, it is immaterial.
 
I thought latency also refers to the buffering issue that causes audio dropouts or glitches during playback.

Contrary view (Ranjeet and others will show how to reduce latency): it doesn't matter for playback. The water going in one end of the pipe comes out at the other. It does not get degraded by the amount of time it spends in the pipe.

OK, if it doesn't matter, then why are we discussing how to reduce it? :confused:

I have experienced dropouts on my PC, but that was several years back (Pentium III). Never had a dropout on my current PC. But then, the specs are quite decent.
 
Yep, DPC latency needs to be low only for recording/monitoring. For playback, even if it is 1 second, it is immaterial.
Again, delete "DPC" and you will be right. DPC latency is something else. And nasty.

I thought latency also refers to the buffering issue that causes audio dropouts or glitches during playback.
Hmmm... Possible, yes, but there should not be an issue with normal buffering.
OK, if it doesn't matter, then why are we discussing how to reduce it? :confused:
This is hifi --- you expect us all to agree about what matters and what doesn't? :lol:
 
But then you need to use extra CPU cycles to execute network/disk driver code to fetch more data, i.e. a larger file, so there is no way to say you always save on CPU usage. You may end up using more CPU :). In fact, looking at the big picture of how computers work, I can think of several more reasons why FLAC is far better than WAV.

That is not always what happens behind the scenes: you never know whether your FLAC files are stored on the disk in fragments in the first place, which costs more CPU cycles anyway to read all the fragments and seek from one fragment to another. All this unless you are using SSDs, or always ensure that the music is on a non-OS disk which you defrag properly and regularly if there are frequent add/delete operations.

Also, regarding your opinion on more disk reads (and hence more CPU cycles) for WAVs as compared to FLACs, you can try evaluating DMA, which relieves the CPU of this task of seeking and reading, assuming you are not using your HDD as a NAS (since DMA emulates IDE for SATA which has overall lower speeds but more than enough for music files). Otherwise FLAC files would have always sounded more jittery than VBR 320 kbps since more

I also think regarding jitter that unless there are truly separate processors (with no bus sharing either) executing audio playback, memory reads and other OS functions in isolation, we will end up running multi-threaded tasks in time-sharing mode anyway, resulting in the minutest of jitter, which may be impossible for our ears to detect.

Remember that Core 2 Duos, i3s and i5s, in spite of having multiple cores, are not exactly like two separate single-core processors in separate sockets with separate buses; that is the essential difference between multi-CPU and multi-core. This may be the reason why some players claim that there is more performance (hence better audio quality) in running two separate PCs, one serving as the streamer of the audio and the other entrusted only with the playback in a virtually hibernated mode, so that effective sharing of CPU cycles is minimal.
 
That is not always ... VBR 320 kbps since more

I think you are confusing CPU execution with disk latency. Yes, you may reduce disk rotational latency by placing data closer together on the disk, i.e. not fragmented, or remove it completely in the case of an SSD, but extra CPU will still be required to process larger file metadata to determine where the data lies on the disk and generate requests to read the correct disk blocks. These requests will then go through various other layers (IO scheduler, SCSI layer, adapter driver), all of which will be executed by the CPU. The disk controller will only DMA the requested data after it receives these read requests. All the extra memory allocation and interrupt processing for data transfer will also require extra CPU.

The NAS option will similarly require additional CPU processing in CIFS/NFS drivers and network drivers, which will also use DMA to transfer network data only after the transfer request has been initiated by the network driver, i.e. by the CPU.

since DMA emulates IDE for SATA which has overall lower speeds but more than enough for music files

Am not sure what you are trying to say. Just to clarify, DMA is how peripheral devices (disk, USB, sound card, graphics card, network etc.) work with the CPU, while IDE and SATA are storage protocols which work a few levels higher, and they do not mix. The emulation would happen at the driver layer, again with the help of the CPU. Interestingly, if your system uses the PCI bus for DMA (any modern PC), which is a shared bus, all the extra data you transfer from disk means the bus is locked and unavailable for the system to send data to the soundcard/DAC for that time. Yet another reason why less data is good ;).

Otherwise FLAC files would have always sounded more jittery than VBR 320 kbps since more

Maybe all the extra jitter is well compensated by the extra information that is missing in MP3 ;). Moreover, I am not saying FLAC > WAV and that it will necessarily have less jitter. I was just countering your claim that WAV uses less CPU to decode and is hence better, by pointing out how limited that view is.

I also think regarding ... CPU cycles is minimal.

I am not very knowledgeable about jitter and how it is affected by the running system, so I cannot comment on this. I have, however, read in a few places that modern systems have very good jitter-reduction technologies, so I am hopeful my DAC does a half-decent job of reducing jitter and I need not worry about having multiple CPUs :cool:.

Sorry for going OT.
 
See... too much worry!

Whenever the mind starts to play with this sort of stuff, remind it that a PC will play music from an optical disk --- and how slow is that?

If you feel your data is fragmented, well, defrag it! It can do no harm to do that, although I'm reminded of one or two colleagues who thought they knew more about PCs than the IT dept did (perfectly possible, but they didn't). They were always defragging their machines, and showing off how much faster they ran afterwards. But they didn't --- because all the data was on network drives, not on their machines at all!

Most people will have dedicated disks or file systems where they store data such as music, video, photos, etc. Given the nature of this data, it tends to be write-once/read-many. Starting with an empty file system, this means that the music is likely to be written in contiguous blocks and likely to remain that way. Despite my claiming that it wouldn't make a big difference anyway, I do think it is nice to think that one's data is neat and tidy. Defraggler is your friend :). Defrag away happily: I would too if I was MS-ing.

NB: if you have a swap file, there are special tools to defrag that. Again, at least in theory it is a good idea that it should not be fragmented.

Fragmented data does not cause jitter. PCs are not, in any way, real-time machines. Disc reads are just one of the ways in which they are not real-time machines. A continuous stream of data at the speed that the software wants to read it is just never going to happen, whatever the application. That's why ...buffers.
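
If you want to convince yourself, here is a toy simulation (made-up numbers, nothing more) of a playback buffer being drained at a steady rate by the sound card while the disk tops it up in irregular bursts:

import random

SAMPLE_RATE = 44_100                             # frames per second drained by the sound card
TICK_MS = 10                                     # simulate in 10 ms steps
DRAIN_PER_TICK = SAMPLE_RATE * TICK_MS // 1000   # 441 frames every tick
BUFFER_CAPACITY = SAMPLE_RATE                    # assume a one-second playback buffer
CHUNK = SAMPLE_RATE // 2                         # the disk delivers half a second at a time

buffer_level = BUFFER_CAPACITY
underruns = 0

for tick in range(6_000):                        # one minute of playback
    buffer_level -= DRAIN_PER_TICK               # steady drain towards the DAC
    if buffer_level < 0:
        underruns += 1                           # an empty buffer is the only audible failure
        buffer_level = 0
    # The disk is bursty and unpredictable: it refills only when there is room,
    # and even then at a random moment rather than on a neat schedule.
    if buffer_level <= BUFFER_CAPACITY - CHUNK and random.random() < 0.3:
        buffer_level += CHUNK

print(f"Buffer underruns in 60 s of simulated playback: {underruns}")

In run after run the count is zero: the data never arrives as a smooth real-time stream, and the output never notices.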
 
but extra CPU will still be required to process larger file metadata to determine where the data lies on the disk and generate requests to read the correct disk blocks. These requests will then go through various other layers (IO scheduler, SCSI layer, adapter driver), all of which will be executed by the CPU. The disk controller will only DMA the requested data after it receives these read requests. All the extra memory allocation and interrupt processing for data transfer will also require extra CPU.

The NAS option will similarly require additional CPU processing in CIFS/NFS drivers and network drivers, which will also use DMA to transfer network data only after the transfer request has been initiated by the network driver, i.e. by the CPU.

Am not sure what you are trying to say. Just to clarify, DMA is how peripheral devices (disk, USB, sound card, graphics card, network etc.) work with the CPU, while IDE and SATA are storage protocols which work a few levels higher, and they do not mix. The emulation would happen at the driver layer, again with the help of the CPU. Interestingly, if your system uses the PCI bus for DMA (any modern PC), which is a shared bus, all the extra data you transfer from disk means the bus is locked and unavailable for the system to send data to the soundcard/DAC for that time. Yet another reason why less data is good ;).

I meant that with DMA the CPU does not need to bother about file size since it is programmed I/O; the CPU only needs to initiate it via an interrupt to the DMA controller (a kind of delegation). In fact WAV files have smaller metadata than FLAC (absence of ID3 tagging), unless you mean metadata related to where on disk the file is located, which I think is immaterial for contiguous or defragged data.


The rest of the overhead (the different layers through which the data passes) remains the same for both FLAC and WAV, but on the other hand the audio player or codec does not have to dedicate a thread to decoding in the case of WAV, as it does for FLAC. All this only if you believe that fewer CPU cycles can bring an improvement in sound quality. To me, going from 5% to 1% should not be something to be bothered about.

You mentioned the PCI/shared bus; that (sharing of resources) is something I have already mentioned can affect true parallelism, unless you use dedicated PCs, which is a luxury a lot of us do not have.
 
I meant that with DMA the CPU does not need to bother about file size since it is programmed I/O; the CPU only needs to initiate it via an interrupt to the DMA controller (a kind of delegation). In fact WAV files have smaller metadata than FLAC (absence of ID3 tagging), unless you mean metadata related to where on disk the file is located, which I think is immaterial for contiguous or defragged data.

a) PIO is not the same as DMA.

b) I was talking about disk location metadata. For larger files there will be more metadata in almost all cases, even if the file is contiguous on disk, depending on the extent size (or the NTFS equivalent) and the file size.

The rest of the overhead (the different layers through which the data passes) remains the same for both FLAC and WAV, but on the other hand the audio player or codec does not have to dedicate a thread to decoding in the case of WAV, as it does for FLAC. All this only if you believe that fewer CPU cycles can bring an improvement in sound quality. To me, going from 5% to 1% should not be something to be bothered about.

You assume you will read the entire file's data (several megabytes) with a single request, which is never the case. You have multiple, much smaller IO requests. In simple terms, more data generally means more requests. And for each IO request, the entire stack has to be processed by the CPU.
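
A back-of-the-envelope sketch of that point (the file and request sizes below are just assumptions): count the fixed-size read requests needed to fetch a track stored as WAV versus the same track as FLAC.

WAV_SIZE_MB = 50       # hypothetical 5-minute 16/44.1 track stored as WAV
FLAC_SIZE_MB = 30      # the same track compressed to FLAC (roughly 60%)
REQUEST_SIZE_KB = 128  # assumed size of a single read request

def io_requests(size_mb: int, request_kb: int) -> int:
    """Fixed-size read requests needed to fetch the whole file."""
    size_kb = size_mb * 1024
    return -(-size_kb // request_kb)   # ceiling division

print("WAV read requests :", io_requests(WAV_SIZE_MB, REQUEST_SIZE_KB))   # 400
print("FLAC read requests:", io_requests(FLAC_SIZE_MB, REQUEST_SIZE_KB))  # 240

Every one of those requests has to walk the whole stack described above, so the smaller file simply exercises it fewer times.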

You mentioned the PCI/shared bus; that (sharing of resources) is something I have already mentioned can affect true parallelism, unless you use dedicated PCs, which is a luxury a lot of us do not have.

In the two-PC scenario, you would still have the network card and the sound card/DAC contending for the PCI bus. If you are receiving uncompressed data over the network, the network card will again need to lock the bus more often to receive the extra data, which means the system will not have the opportunity to send data to the SC/DAC for a longer time. As I said, one requires a far better understanding of how computers work to get the complete picture, and it is probably out of scope for most of us, including me.

This is getting very technical and does not serve the purpose of this thread, so I will refrain from adding more. We can discuss this offline via PM if you want to.
 
...Or in another thread. It is all good stuff.

I think we're suffering from jitter again. Ranjeet must be going to introduce this stuff: if we are going to talk about it on this thread, we really ought to wait :o
 
Probably we are failing to understand each other's specific areas, apart from the point about taking this offline :) which I believe we can both agree on for now :)

By the way, if someone could throw some light on the dual-PC setup (JPlay and JRiver) and its pros and cons too, that would be great as well, looking at the momentum of this thread.
 
:o Sorry I haven't added anything here in a long time. I started writing the next chapter and got diverted by other things.

I will come back with more soon. I am not gonna leave this here (though I do forget things, but not this one).

Thanks for your patience!
 
Can I ask a dumb question?

Why is it that CPU and DMA and bus locking and contention and jitter and interrupts and all the low-level problems cause issues with audio reproduction, yet video, which contains audio as well and arguably requires 10x the resources, seems to do a reasonably realistic job on even an average computer?

We can even watch videos reliably (with audio) on pint-sized phones and tablets. Why are jitter and IO contention not an issue when watching an entire movie on a cell phone or a device like a Roku or Popcorn Hour or Apple TV?

I am not trying to be snarky, really. But why dedicated audio servers? And why get this low level?
 