This is my (in)activity log. You might like to visit my employer
Novell which is an amazing company, and also
Dell who in days of yore provided me with a
free laptop for Gnome development / conferences.
Also if you have the time to read this sort of stuff you could enlighten
yourself by going to Unraveling Wittgenstein's net or if
you are feeling objectionable perhaps here.
Stuff Michael Meeks is doing
- Up early, gave the wife a lie-in (it has been
brought to my attention that there is some lack of clarity
here - it just means: I wrestled children quietly for some
hours so she could sleep). Took H. to & from pre-school.
- Lunch; to work. Mail. Poked at an a11y fix for
threaded / gtk+ / accessible apps.
- It seems Federico enthused a chap called Ashley
Pittman with my plan to create an I/O-grind type implementation:
think cachegrind but for I/O; like cachegrind there are two parts:
- First, separate all time into intervals of half a
seek time (cf. H. Nyquist), ie. ~5ms intervals.
- in each interval track & log program behavior:
- all explicit I/O syscalls: eg. read, write, fsync.
- track all implicit I/O: 1st time writes to
new pages, touches of mmapped files (code & data).
- generate a log-file with filename / offset / size &
stack-trace entries (elide lots of call traces
across this interval).
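To make the interval bucketing concrete, here is a minimal sketch of the data-collection side - all the structure and names here are hypothetical, just illustrating the idea, not real iogrind code:

```c
#include <stdint.h>
#include <stdio.h>

/* Half a typical disk seek time, in microseconds: the
 * Nyquist-style sampling interval described above (~5ms). */
#define INTERVAL_US 5000

/* One logged I/O event: which file, where, how much, when. */
typedef struct {
    const char *filename;
    uint64_t    offset;
    uint32_t    size;
    uint64_t    timestamp_us;  /* us since profiling started */
} io_event;

/* Map an event's timestamp to its ~5ms interval index. */
static uint64_t interval_of(const io_event *ev)
{
    return ev->timestamp_us / INTERVAL_US;
}

/* Emit one log line per event: interval, filename, offset, size.
 * A real tool would also attach the (elided) stack trace here. */
static void log_event(FILE *out, const io_event *ev)
{
    fprintf(out, "%llu %s %llu %u\n",
            (unsigned long long)interval_of(ev), ev->filename,
            (unsigned long long)ev->offset, ev->size);
}
```

Implicit I/O (first-time page dirties, mmap touches) would feed the same event record, just from a fault handler rather than a syscall wrapper.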
Processing / visualisation
Anyhow - the next part is turning this data into meaningful
performance numbers, in a deterministic fashion - ie.
we need to be able to run the same data-set twice and get the
same answer. That needs a few simulation pieces:
- a standard disk I/O latency simulator - apparently
there are a lot of these around the place.
- simple (deterministic, but statistical?) filesystem
model (to turn paths to block numbers).
- noddy swap / LRU page replacement algorithm.
- simple page-cache / inode / dentry cache simulation.
- mindless [ and deterministic ] process scheduler.
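As a flavour of how noddy these pieces can afford to be, here is a hedged sketch of the LRU page-replacement bit - frame count, names and layout are all made up for illustration, not from any existing simulator:

```c
#include <stdint.h>

/* A deliberately simple, deterministic LRU model: NFRAMES
 * physical frames, evict the least-recently-used on a miss. */
#define NFRAMES 4

typedef struct {
    uint64_t page;      /* which virtual page occupies this frame */
    uint64_t last_use;  /* logical clock of the last access */
    int      used;
} frame;

typedef struct {
    frame    frames[NFRAMES];
    uint64_t clock;
    uint64_t faults;    /* the number we ultimately care about */
} lru_cache;

/* Touch a page; returns 1 on a (simulated) fault, 0 on a hit. */
static int lru_touch(lru_cache *c, uint64_t page)
{
    int victim = 0;
    c->clock++;
    for (int i = 0; i < NFRAMES; i++) {
        if (c->frames[i].used && c->frames[i].page == page) {
            c->frames[i].last_use = c->clock;    /* hit */
            return 0;
        }
        if (!c->frames[i].used)
            victim = i;                          /* free frame */
        else if (c->frames[victim].used &&
                 c->frames[i].last_use < c->frames[victim].last_use)
            victim = i;                          /* older candidate */
    }
    c->frames[victim].page = page;               /* miss: evict LRU */
    c->frames[victim].last_use = c->clock;
    c->frames[victim].used = 1;
    c->faults++;
    return 1;
}
```

Being a pure function of the access stream, replaying the same log always yields the same fault count - which is the whole point of the deterministic approach.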
So - of course the simulations don't have to be particularly
advanced, or faithful to the O/S to give interesting results
I think: much in the way that callgrind's 'cycle estimation'
is a helpful guesstimate. That's important, as while the valgrind
piece is hard to write, creating super-accurate simulators of
kernel behavior is rather harder.
It is my contention though, that cold-start (and hence login
time, boot time etc.) remain such intractable problems due to a
lack of profiling tools for unexpected I/O behavior:
particularly excessive seeking, and/or just reading too much
data. Tools to measure this - and particularly to
repeatably (and ~instantly) profile different data sets
with different (kernel) algorithms - would make a huge difference.
Though it is indeed a KDE program, KCachegrind rocks my
world; integrating with that beasty, and allowing the profiling
view to be tweaked - eg. a drop-down selection of: "aggregate
I/O latency", "explicit I/O latency", "page dirties", "I/O bandwidth"
etc. - would be wonderful. Also some model parameters to tweak:
a "system memory" spin-button, a coarse disk characteristic:
"Laptop" vs. "Desktop", a filesystem button: "ext3" vs.
"ReiserFS" etc. etc.
- Unfortunately, can't hack on it myself until I've waded through
the swamp of work, and finished the warm-start gcc/binutils optimisations.
- Wrote LXF column, and got some internal review on it, finally
even remembered to send it off.
In case it's not painfully obvious: the reflections reflected here are my
own; mine, all mine ! and don't reflect the views of Novell, The
Lithuanian Gov't or Arnold Schwarzenegger. It's also important to
realise that I'm not in on the Swedish Conspiracy.
Occasionally people ask for formal photos for conferences.
Michael Meeks (firstname.lastname@example.org)