Below are my idle ramblings on the subject, so please take them with a big bucket of salt. Besides, I am not a subwoofer user, so I don't have any real experience with them.
I had severe bass problems in my small room, so I was using a DSPeaker Anti-Mode 2.0 to correct the bass only on the bass driver of my 3-way speakers. This bass driver handles frequencies below 300 Hz, while the mids and tweeter run directly without any DSP or processing. I was quite pleased with the improvement. Then @drkrack happened to come over for a listen, and he identified the delay in the bass notes immediately. So, what I would gather is that DSP should be applied over the entire range, so that the mids and treble can be delayed to match the processing time of the bass. If that is to be avoided, the second option is to place the subwoofer closer to the listening position than the main speakers, by a distance equivalent to the processing delay.
I also have a miniDSP SHD for subwoofer syncing; its claimed processing delay with maximum processing engaged was to the tune of 25 ms, if I remember right. When we convert that time into distance, it equates to 337 m/s x 0.025 s = 8.42 m. So for optimum time alignment between the mains and the subwoofer, the subwoofer would have to be closer to the listener than the main speakers by 8.42 m, which is impossible in a domestic environment. The only solution is to delay the main speakers by 25 ms to align with the sub, if the sub is placed at the same distance from the listener as the main speakers. This entails that the main speakers are also fed through the same DSP.
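For anyone who wants to sanity-check the arithmetic, here is a quick Python sketch. I've used the 337 m/s figure from above; the commonly quoted speed of sound at room temperature is closer to 343 m/s, which nudges the answer slightly but doesn't change the conclusion:

```python
# Convert a DSP processing delay into an equivalent acoustic distance.
# 337 m/s is the figure used in the post; ~343 m/s at 20 deg C is also common.

def delay_to_distance(delay_s, speed_of_sound=337.0):
    """Distance sound travels during the given delay, in metres."""
    return speed_of_sound * delay_s

offset = delay_to_distance(0.025)  # the SHD's claimed 25 ms delay
print(f"A 25 ms delay equals {offset:.1f} m of extra distance")  # ~8.4 m
```

Either way you slice it, the sub would need to sit roughly eight metres closer than the mains, which is why delaying the mains through the same DSP is the only practical fix.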
Just when the above seemed correct mathematically, we get a googly in the form of psychoacoustics. As per experiments conducted, it takes the brain 50 ms to process a 40 Hz sound wave, and it takes 40 ms for the 40 Hz wave to even form completely as it emanates from the driver. So, in relation to the 50 ms that the brain takes, the 25 ms that the miniDSP SHD takes pales in comparison and should be unnoticeable to the brain.
The reason @drkrack noticed the processing delay in my setup is that I was using correction up to 300 Hz. As we all know, the ear/brain becomes more sensitive to the timing/distance/placement of sounds as frequency increases. However, considering that the ear/brain loses the ability to localise sounds below about 80 Hz, I would hazard a guess that if a sub is used below 80 Hz, the resultant time delay might not be noticeable.
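One back-of-envelope way to see this (my own framing, not from any measurement): express the 25 ms delay as a number of waveform cycles at each frequency. At 300 Hz the delay spans several full cycles, while at 40 Hz it is only about one:

```python
# How many full waveform cycles does the 25 ms delay span at each frequency?
# The (assumed) intuition: the more cycles a delay covers, the easier
# it is for the ear/brain to notice it.

DELAY_S = 0.025  # the miniDSP SHD's claimed worst-case processing delay

def cycles_in_delay(freq_hz, delay_s=DELAY_S):
    """Number of full waveform cycles elapsed during the delay."""
    return delay_s * freq_hz

for freq_hz in (40, 80, 300):
    period_ms = 1000.0 / freq_hz  # duration of one full cycle
    print(f"{freq_hz:>3} Hz: one cycle = {period_ms:.1f} ms, "
          f"25 ms delay = {cycles_in_delay(freq_hz):.1f} cycles")
```

At 300 Hz the 25 ms delay smears the signal by seven and a half cycles, which is consistent with the delay being immediately audible in my correction-up-to-300-Hz setup.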
The above said, we still experience disjointed bass when a subwoofer is used. When we look at why REL insists on speaker-level inputs, the main idea is that the bass driver in the speaker and the sub receive the same signal, meaning a signal that is identical in phase: both drivers move in the same direction at all times. This removal of phase error probably helps a great deal. Second is their advice against using a high-pass filter for the speakers: run the speakers full range, and blend the sub in where the speakers naturally roll off to our ears (not to the microphone). This again avoids two different sources of sound at the same frequency, so that they are not distinguishable as separate sources of sound.
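To illustrate the phase point with a toy example (purely my own sketch, not REL's method): summing two identical tones in phase reinforces them, while a 180-degree offset cancels them completely, which is why getting the sub and the speaker's bass driver moving together matters so much:

```python
import math

# Two drivers producing the same unit-amplitude tone. In phase, the
# outputs reinforce (peak doubles); 180 deg out of phase, they cancel.

def summed_peak(freq_hz, phase_offset_rad, n=10000):
    """Peak amplitude of two unit sines summed, sampled over one cycle."""
    peak = 0.0
    for i in range(n):
        t = i / n / freq_hz  # sample times spanning one full period
        a = math.sin(2 * math.pi * freq_hz * t)
        b = math.sin(2 * math.pi * freq_hz * t + phase_offset_rad)
        peak = max(peak, abs(a + b))
    return peak

print(summed_peak(60, 0.0))      # in phase: peak ~2.0, full reinforcement
print(summed_peak(60, math.pi))  # out of phase: peak ~0.0, cancellation
```

In a real room the cancellation is never this total, of course, since the two drivers sit at different distances from the listener, but the principle is the same.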
Hence, in my opinion, if a subwoofer is driven at 60 Hz or lower, at the correct phase, it might not be identifiable despite the time delay. I would fondly look forward to the real-world experiences of subwoofer users and their own findings.