rabski

Moderator
  • Content Count

    27,699
  • Joined

  • Last visited

  • Days Won

    205

rabski last won the day on October 1

rabski had the most liked content!

Community Reputation

6,785 Excellent

5 Followers

About rabski

  • Rank
    Everything in moderation
  • Birthday 17/06/1956

Personal Info

  • Location
    Kettering
  • Real Name
    Richard

Wigwam Info

  • Turn Table
    Well Tempered/GL75
  • Tone Arm & Cartridge
    Audio Technica 50ANV
  • SUT / Phono Stage
    Kairsound/DIY valve
  • Digital Source 1
    Stable platter/CD63
  • Digital Source 2
    Computer/HiFace/D10
  • DAC
    DIY AD1865/others
  • Integrated Amp
    Lost count
  • Pre-Amp
    Various, most valve
  • Power Amp/s
    845 SET/others
  • My Speakers
    Living Voice
  • Headphones
    AKG
  • Trade Status
    I am not in the Hi-Fi trade

Recent Profile Visitors

6,001 profile views
  1. As we're selectively quoting: "Both parts ignore the local crystal clock once locked onto the S/PDIF signal. Better (i.e. more expensive) outboard DACs use additional tighter PLLs after the receiver chip to further cleanup the clock. Generally there is a trade off between low jitter PLL and wide locking such that low jitter PLLs may result in a DAC being unable or slow to lock onto a high jitter S/PDIF input requiring use of a low jitter Class 1 source as defined in the 'Red Book' spec. from Sony/Philips. This lack of universality and the fact that parts budget spent on low jitter PLLs (like individual crystals for each sample rate) reduces the parts budget for everything else (like filters, DACs, analog circuits, power supplies, and case) leads many designers to leave well enough alone and use the clock straight out of the receiver chip investing the saved resources elsewhere in the design or lowering the cost. This allows the DACs master clock to be strictly a function of the source and interface." Please look up what a phase-locked loop actually entails and how it works, and you should then be able to understand why the clock signal transmitted via SPDIF from the source is critical, regardless of the DAC's internal clock. I'm finished here.
  2. The DAC does not reclock an SPDIF signal. The clock embedded in the SPDIF signal is used to synchronise the clock in the DAC, if the DAC uses an internal clock for SPDIF. With USB, there is always an internal clock and synchronisation is not required.
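That synchronisation can be illustrated with a toy first-order software PLL. This is a minimal sketch, not any real receiver chip's circuit: the function name `recover_clock`, the loop gain and the sample rates are all illustrative assumptions.

```python
# Toy first-order software PLL (hypothetical, for illustration only):
# the local oscillator's frequency estimate is nudged toward the rate
# recovered from incoming edges, so the output clock tracks the source.

def recover_clock(edge_times, f_local=44_100.0, gain=0.1):
    """Estimate the source clock frequency from edge timestamps.

    edge_times: timestamps (seconds) of successive clock edges in the
    incoming stream. Returns the loop's frequency estimate after locking.
    """
    f_est = f_local  # free-running local estimate before lock
    for prev, curr in zip(edge_times, edge_times[1:]):
        f_in = 1.0 / (curr - prev)      # instantaneous input frequency
        f_est += gain * (f_in - f_est)  # first-order loop: pull toward input
    return f_est

# A 48 kHz source: the loop converges to the source rate, not the
# 44.1 kHz the local oscillator started at.
edges = [n / 48_000.0 for n in range(2000)]
print(round(recover_clock(edges)))  # -> 48000
```

The point of the sketch is that the recovered frequency ends up tracking the incoming edges rather than the free-running local oscillator, which is why the timing quality of the source matters once the loop is locked.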
  3. The Node is a streamer and DAC, so obviously it has a clock, because it is a source as well as a DAC. I have no idea of the internal architecture, but the whole point here is that, whether or not it makes a difference (I haven't heard the combination so have no wish to comment), if the Node is used to feed an external DAC via SPDIF, then the Node is sending the clock signal. Flash is using a Metrum Pavane. Perhaps you should ask Cees Ruijtenberg how the Pavane implements the necessary PLL to synchronise the clock signals. As before, SPDIF is not asynchronous, so the sending clock signal matters. It cannot just be ignored, whether or not there is an internal clock in the DAC.
  4. Incidentally, I assume you are aware that even when using an internal clock, with an SPDIF input, the clock signal in the SPDIF is still used to regulate the internal clock in the DAC. It has to.
  5. I thought you were good at measuring, so please note that 'from' is not the same as 'since'. The ASR quote is from a month ago, incidentally. As you like to quote ASR, we have to assume it may be considered valid.
  6. Naim: "Since 1991 when the first Naim CD player – the CDS – was launched, Naim’s design philosophy has been that for best sonic performance from digital audio the master clock must be positioned close to the DAC chips. When the clock and DAC chips are closely coupled, timing errors are minimised. Whereas if a CD player is connected to an external DAC via S/PDIF, the master clock is in the CD player and the DAC chips are in the DAC, ie they are separated by the S/PDIF interface." Plenty of threads on your favourite other forum suggest similar: "A lot of pro equipment has the option of using the S/PDIF stream for clocking to maintain synchronization across multiple devices (e.g. recorders). But most will buffer it, or have the option to buffer it, to clean it up a bit especially if synchronization is not critical. For consumer systems, with almost zero research, again some do and some don't." More detailed, Norman Tracy: "Popular S/PDIF receiver chips like the Yamaha YM3623B and Crystal CS8412 are NOT crystal controlled but rather recover the necessary clock from internal Phase Locked Loops (PLL) locked onto the incoming data stream. The simple two pin can crystals often seen directly attached to '3623's and '8412's are optional. The 3623 uses the crystal clock to quickly lock onto the S/PDIF signal. The 8412 uses the crystal clock to determine and display the sample rate and jitter level of the S/PDIF signal. Both parts ignore the local crystal clock once locked onto the S/PDIF signal. Better (i.e. more expensive) outboard DACs use additional tighter PLLs after the receiver chip to further cleanup the clock. Generally there is a trade off between low jitter PLL and wide locking such that low jitter PLLs may result in a DAC being unable or slow to lock onto a high jitter S/PDIF input requiring use of a low jitter Class 1 source as defined in the 'Red Book' spec. from Sony/Philips. 
This lack of universality and the fact that parts budget spent on low jitter PLLs (like individual crystals for each sample rate) reduces the parts budget for everything else (like filters, DACs, analog circuits, power supplies, and case) leads many designers to leave well enough alone and use the clock straight out of the receiver chip investing the saved resources elsewhere in the design or lowering the cost. This allows the DACs master clock to be strictly a function of the source and interface. A few companies which make both transports and external DACs have implemented schemes in which the S/PDIF signal is supplemented with a second line carrying the master clock back from the external DAC to the transport. In this way the DAC's crystal becomes the master rather than the transport and the problems of recovering a spectrally pure clock are eliminated. No standards for this type of implementation exist. In reference #4 Dr. Hawksford calls for the clock signal to be transmitted on a second S/PDIF line, I know of no actual product which implements this scheme. Sony (in one product) and Arcam send the actual clock, Linn argues this leads to RFI problems and so they send a DC servo voltage which controls a VCXO in the transport.
  7. SPDIF/AES carries both the clock and audio signal. ESS chips reclock internally; many others do not. USB is asynchronous and requires an internal clock. Before USB came on the scene, I believe the vast majority of DACs used the SPDIF clock. That's the whole basis of SPDIF.
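The reason SPDIF can carry both clock and audio on one wire is its biphase-mark coding: every bit cell opens with a transition the receiver can lock onto, and a '1' adds a second mid-cell transition. A minimal encode/decode sketch (function names are mine, and real S/PDIF frames add preambles and channel-status bits that this ignores):

```python
def bmc_encode(bits, level=0):
    """Biphase-mark encode: every bit cell starts with a transition
    (this guaranteed edge is what lets the receiver recover the clock),
    and a '1' adds a second transition mid-cell."""
    out = []
    for b in bits:
        level ^= 1          # transition at the start of every cell
        out.append(level)   # first half-cell
        if b:
            level ^= 1      # extra mid-cell transition encodes a '1'
        out.append(level)   # second half-cell
    return out

def bmc_decode(halves):
    """Decode: a mid-cell transition means '1', no transition means '0'."""
    return [int(halves[i] != halves[i + 1]) for i in range(0, len(halves), 2)]

data = [1, 0, 1, 1, 0]
assert bmc_decode(bmc_encode(data)) == data
```

Because even a run of zeros still produces one edge per bit cell, the receiver always has transitions to feed its PLL, which is exactly why the source's clock quality travels along with the data.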
  8. No they don't, Keith. Not all of them, or always. They do if it's a USB input, but many DACs do not use an internal clock when fed by an SPDIF source. Some rely on the SPDIF clock signal. It depends on the DAC.
  9. Keith, you totally missed George's point, which is very much the one I have already made. A 'test' with one person proves (or disproves) absolutely nothing. It never has and never will.
  10. Personally I wouldn't want to get involved in this at any show, for two reasons. First, properly run DBT takes a lot of time, a large enough listening panel and, most importantly, a great deal of statistical analysis afterwards. Anything else is not blind testing, but more like a blind guess. People often forget the element of chance in testing, in that there is a significant possibility people will make a random choice that turns out to be 'correct' through pure luck. The second reason is that a show (ours anyway) ought to be fun. Carrying out a proper DBT isn't fun. It's laborious, time-consuming and anything but simple. Ideally, especially when differences are likely to be (very) small, you need about 30 listeners and two solid days, plus subsequent time for all the analyses. You can certainly do a 'can you hear a difference' test (blind), but assuming this is in any way 'proof' is plain wrong.
As for the measurement, I did think about this. A lab test would need some work, as it's only going to be valid if the cable under test is carrying a mains AC voltage. I could imagine you could then inject some spurious waveforms, with some (very) high-frequency signals, and run the input and output through spectrum analysis. I wouldn't want to do it, as I can see some issues with the basics. Most oscilloscopes won't have a problem with 'looking' at a mains voltage, but you've got to find a way to get some HF signals onto the waveform in the first place, which won't be straightforward. Or cheap. Plus the mains waveform you are using would have to be perfect, and you'd also have to do it under different simulated loads. Of course, ideally you wouldn't just be measuring any specific filtering effect of a cable. It's actually quite likely that ANY cable will have impedance effects at extremely high frequencies. The ideal would be to set up an entire signal chain and measure the amplifier output comparatively.
And then we're back to square one, because all you'll realistically be able to measure is any change to a simple sine-wave signal or a simple combination. As I've said countless times, music is anything but a simple signal, and it's effectively impossible to measure (potential) changes to all combinations of frequencies and levels, because they're close to infinite. Your ears can't be the judge, because listening is incredibly fallible and audio memory is hopelessly inaccurate. Like everything in this mad hobby, it's your money and your choice. I choose to use a mix of cables here, some of which are horribly costly and many of which are anything but. I would never, ever spend 'serious' money on wire, because even if it makes a difference, it makes far, far less of a difference than spending that money elsewhere. It's entirely up to anyone what they choose to do, but telling other people there are massive and obvious differences is no better than me telling everyone there aren't. Neither has any scientific basis.
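The "element of chance" point above is easy to put numbers on with a little binomial arithmetic. A hypothetical helper, assuming independent trials with a 50/50 guess rate:

```python
from math import comb

def p_by_chance(n, k, p=0.5):
    """P(at least k 'correct' answers out of n trials) under pure guessing.

    Sums the binomial tail: C(n, i) * p^i * (1-p)^(n-i) for i = k..n.
    """
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# One listener getting 7 of 10 trials 'right' happens by sheer luck
# roughly 17% of the time, nowhere near proof of an audible difference.
print(round(p_by_chance(10, 7), 3))  # -> 0.172
```

This is why a short single-listener demonstration proves nothing either way: with so few trials, the chance of a lucky-looking run is far too high, which is what drives the need for large panels and proper statistical analysis.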
  11. Leave some for her. I don't any longer. To return to the subject, I suspect the EMI and RFI is a red herring. In fact, it has to be. Assuming there is RFI and EMI present, then is it not rather likely that it will equally be picked up by the mass of unshielded 2.5mm twin and earth cable running all around your home and connected directly to one end of your costly mains cable? I use shielded mains cables in some places not to keep RFI and EMI out, but to keep it in. Especially near the turntable, SUT and phono stage, shielded cables help to keep the noise floor down. The signal from my cartridge to the SUT is incredibly low level, so every bit helps.
  12. I bet you can't say that sentence after three bottles.
  13. I use some fancy braided shielded stuff in assorted places where cables run near the turntable, SUT and phono stage. I have some old Kondo mains cable for most runs for the simple reason that I had some end of reel stuff in the parts pile. The rest is anything I had close to hand when I needed a cable. It makes a difference when it's unplugged. Beyond that I wouldn't want to comment. I would say, however, that if a mains cable makes a 'massive' difference, then the one it's replacing must have been faulty.
  14. I can see every attachment here except the ones Baz has posted. They're hosted on 'liveinternet.ru', so I suspect our firewall is blocking them.