Tcl Source Code

Ticket UUID: 625454
Title: new arg to 'time' - iterations in secs
Type: RFE
Version: None
Submitter: davidw
Created on: 2002-10-18 23:13:25
Subsystem: 18. Commands M-Z
Assigned To: dkf
Priority: 5 Medium
Severity:
Status: Open
Last Modified: 2009-07-14 17:53:19
Resolution: Later
Closed By:
Closed on:
Description:
[ I hate the brevity of the summary field ]

Hi,

While running some testing code that I wanted to time,
a requirement came up: users want to run it not for a
fixed number of iterations, but for a certain length of
time.

The idea occurred to me that [time] could take an
additional argument - a qualifier stating that the
iterations argument is a number of seconds (I suppose
you could also use 'minutes' or 'hours' for the
additional arg if you wanted).

The loop would then run for at least, but not
precisely, that amount of time, and return the number
of microseconds per iteration.

It would also be possible to script the whole thing, of
course.  Maybe that's a better course of action?  The
nice thing about [time] is that the loop overhead is
minimal.  Having it coded in C would mean that the
results would be comparable to regular [time] results.
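
A minimal sketch of what that script-level version might
look like, assuming Tcl 8.5's [clock microseconds]; the
name [timefor] and its interface are made up for
illustration, not part of any Tcl release:

  proc timefor {script seconds} {
      # Run until at least $seconds have elapsed; like the
      # proposal above, this overshoots rather than stopping
      # mid-iteration.
      set deadline \
          [expr {[clock microseconds] + $seconds * 1000000}]
      set total 0.0
      set count 0
      while {[clock microseconds] < $deadline} {
          # [time] with one iteration returns a string like
          # "12 microseconds per iteration"; keep the number.
          set total [expr {$total + [lindex [time $script] 0]}]
          incr count
      }
      return "[expr {$total / $count}] microseconds per iteration"
  }

For example, [timefor {string repeat x 1000} 2] runs the
body for roughly two seconds and reports the mean cost per
run.
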
User Comments:

dkf added on 2009-07-14 17:53:19:

I've attached a file that comes from the TclOO benchmarking suite. It tries *very* hard to get a stable set of performance figures, since it is aimed at timing very fast code (originally method calls).

If this (or simple adaptations of it) is suitable for the purposes requested, it's perhaps worth putting in tcllib.

dkf added on 2009-07-14 17:50:31:

File Added - 334999: cps.tcl

dkf added on 2003-03-10 18:12:48:

One possible technique is to grow the number of
iterations per clock measurement exponentially until the
uncertainty in the measurement drops low enough to make
a reasonable estimate of how many times to run the code
(or until the total allotted time is exceeded).  The
problem with this is that estimating the uncertainty
requires knowing the accuracy of the platform's
clock... :^/
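
A sketch of that idea, assuming Tcl 8.5+, where [time]
reports a fractional per-iteration average; the 10ms
threshold below merely stands in for a real uncertainty
estimate, which would need the platform clock's accuracy:

  proc calibrate {script {minBatchUs 10000}} {
      # Double the iteration count until one timed batch
      # lasts at least minBatchUs microseconds, so the
      # clock's granularity is a small fraction of the
      # measurement.
      for {set n 1} {1} {set n [expr {$n * 2}]} {
          set perIter [lindex [time $script $n] 0]
          if {$perIter * $n >= $minBatchUs} {
              # n iterations now span the whole batch budget;
              # return the count and the per-iteration average.
              return [list $n $perIter]
          }
      }
  }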

dkf added on 2003-03-10 04:53:17:

One problem with this: how to go about calibrating so
that the time-reading code is not called very often,
since reading the time is a fairly expensive operation
(i.e. at least one syscall).  Techniques that would
work well for slow chunks of code will be poor for fast
chunks, and vice versa...
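
A tiny illustration of that trade-off, assuming [clock
microseconds] (Tcl 8.5+); the name batchTime is
hypothetical:

  proc batchTime {script n} {
      # Only two clock reads bracket the whole batch, so
      # their syscall cost is amortised over n runs of the
      # body.
      set t0 [clock microseconds]
      for {set i 0} {$i < $n} {incr i} {
          uplevel 1 $script
      }
      set t1 [clock microseconds]
      expr {double($t1 - $t0) / $n}
  }

A large n amortises the clock reads for fast bodies but
wastes wall-clock time on slow ones; picking n well is
the calibration problem described above.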

Attachments:

cps.tcl (334999)