- From: Carlos O'Donell <carlos_at_baldric.uwo.ca>
- Date: Wed, 29 Oct 2003 10:13:05 -0500
On Tue, Oct 28, 2003 at 07:35:12PM -0500, Kenneth Jacker wrote:
> We're using comedi_data_read_delayed() on an MCC 1602/16 board in our
> application with a recent CVS version of COMEDI.
>
> Two questions:
>
>  o Though the ADC doesn't begin converting until "nano_seconds"
>    time elapses, does the function return immediately? Or, will
>    the function only return when the conversion is completed?

That's for the COMEDI people to answer, but I'll take a pot shot at the
second question.

>  o What determines the minimal *usable* value of the "nano_seconds"
>    parameter? We want to delay for ~800ns on a 500MHz machine ...
>    should this work?

The kernel's nanosleep syscall. It's architecture dependent. On a
500MHz x86 system the 'rdtsc' instruction gets you the CPU cycle
counter, which the kernel uses internally in 'sys_nanosleep'. The
largest problem is that you burn hundreds of instructions entering the
kernel and hundreds more on the way out (which might be random, since
signals are examined and handled on the return path).

I believe that if you have specified a realtime priority for your
process, nanosleep will busy-wait, so you only pay the entry/exit
overhead. You'll have to do some timing yourself using 'rdtsc' from
userspace to see how many clock cycles have elapsed.

c.
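P.S. As an illustration of the userspace timing suggested above, here is a
minimal sketch (mine, not from the original thread): it requests SCHED_FIFO
realtime priority, then reads the TSC with 'rdtsc' before and after a short
nanosleep() call. The 800 ns delay and the priority value are assumptions
for the example; actual behaviour depends on your kernel and CPU.

/* Sketch: time a short nanosleep() with the TSC on x86 (Linux).
 * Assumes the TSC ticks at the CPU clock rate (500 MHz in the case
 * discussed).  Build with e.g.:  gcc -O2 -o tsc_test tsc_test.c */
#include <stdio.h>
#include <time.h>
#include <sched.h>

static inline unsigned long long rdtsc(void)
{
        unsigned int lo, hi;
        /* read the CPU's time stamp counter: low 32 bits in EAX, high in EDX */
        __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
        return ((unsigned long long)hi << 32) | lo;
}

int main(void)
{
        struct sched_param sp = { .sched_priority = 1 };
        struct timespec req = { .tv_sec = 0, .tv_nsec = 800 };  /* ~800 ns */
        unsigned long long before, after;

        /* realtime priority: with a short enough delay the kernel may
         * busy-wait instead of rescheduling (needs root) */
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
                perror("sched_setscheduler");

        before = rdtsc();
        nanosleep(&req, NULL);
        after = rdtsc();

        /* at 500 MHz one cycle is 2 ns, so 800 ns is only ~400 cycles;
         * anything much larger than that is syscall entry/exit overhead */
        printf("nanosleep(800ns) took %llu cycles\n", after - before);
        return 0;
}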
Received on 2003-10-29T15:13:05Z