ASIO drivers allow latency to be set in samples (or in time or byte size). Recording demands ultra-low latencies (as low as 32 samples): there is an explicit time delay between when an instrument is played and when it is recorded, and real-time processing is desired, i.e. sound is recorded as it happens. The same holds true for playback, but the rationale is different.
Does size matter (during playback)?
Refined setups will readily reveal changes in sound quality as latency changes. It's best to use the lowest stable latency. A good programmer would argue that latency is a non-issue for playback: "just set it to the highest level, since fewer context switches are more efficient...". That is not correct if best sound quality is the goal.
At the software, firmware and hardware levels, PCI prefers small payloads.
"Latency jitter", i.e. variation in latency, was once thought to be the reason latency affects sound quality. That idea has been scrapped.
From a jitter viewpoint, when one of a soundcard's buffers is being filled (whilst the other buffer is converted to S/PDIF or whatever), there is a burst of electrical activity. The idea is to keep this burst as short as possible, thereby reducing interference with the soundcard's XO, i.e. reducing Jpp (peak-to-peak jitter). We achieve this by setting latency to the lowest possible level. Of course, such a low latency means more frequent buffer loads. This refill rate is the ASIO frequency (or ASIO Hz). At a 32-sample latency with 96 kHz output, ASIO Hz is 3 kHz. This activity is periodic in nature and digitally induced: we now have periodic jitter, the worst kind, which exists in all digital playback systems. ASIO gives us control over it.
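The ASIO Hz figure above is just the sample rate divided by the buffer size in samples. A minimal sketch of that arithmetic (the function name is my own, not part of any ASIO API):

```python
def asio_hz(sample_rate_hz: int, buffer_samples: int) -> float:
    """Rate at which the host must refill one ASIO buffer.

    Each buffer holds `buffer_samples` frames, so it must be
    refilled sample_rate / buffer_samples times per second.
    """
    return sample_rate_hz / buffer_samples

# The example from the text: 32-sample latency at 96 kHz output.
print(asio_hz(96_000, 32))   # 3000.0 -> a 3 kHz periodic refill burst
```

The same calculation shows why large buffers push the burst frequency down: 2048 samples at 96 kHz gives a refill rate of only about 47 Hz.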
A higher ASIO Hz is preferred, and you definitely want to avoid anything below 1 kHz. Why? The soundcard's PLL (or PLLs further down the chain) can then further attenuate this periodic jitter, as its frequency is likely to be above the PLL's cut-off.
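Turning that guideline around, one can compute the largest buffer size whose refill rate still stays above a chosen cut-off (1 kHz per the text). A small sketch, assuming the simple ASIO Hz = sample_rate / buffer_samples relation; the function name and default cut-off are illustrative:

```python
import math

def max_buffer_for_cutoff(sample_rate_hz: int, pll_cutoff_hz: float = 1000.0) -> int:
    """Largest buffer (in samples) whose refill rate stays above pll_cutoff_hz.

    We need sample_rate / buffer_samples > pll_cutoff_hz, so
    buffer_samples must be strictly below sample_rate / pll_cutoff_hz.
    """
    limit = sample_rate_hz / pll_cutoff_hz
    # Step one sample below the limit so the inequality is strict.
    return math.ceil(limit) - 1

# Largest buffers that keep ASIO Hz above 1 kHz at common sample rates.
for rate in (44_100, 96_000, 192_000):
    print(rate, max_buffer_for_cutoff(rate))
```

At 96 kHz, for example, this gives 95 samples (a 96-sample buffer would land exactly on 1 kHz); in practice you would pick the nearest latency the driver actually offers below that figure.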