In some programming languages, asynchronous events can occur at any time.
For instance, in Ruby, there are subclasses of Exception that can be raised at any time; few lines of code are safe from interruption. Some of these exceptions, by their very nature, are not recoverable at all.
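A minimal sketch of such an asynchronous interruption, using Thread#raise to inject an exception into another thread at an arbitrary point in its execution (the thread and message here are illustrative):

```ruby
# The victim thread parks in sleep; it can be interrupted at any time.
victim = Thread.new do
  begin
    sleep # park indefinitely
    :finished
  rescue RuntimeError => e
    e.message # the asynchronous exception arrives here, mid-sleep
  end
end

Thread.pass until victim.stop?            # wait until the victim is parked
victim.raise(RuntimeError, "interrupted") # raised *into* the other thread
result = victim.value                     # joins; yields the block's value
```

The rescue clause happens to catch this one, but an exception raised between any two instructions — inside an allocation, inside an ensure block — may leave no safe point to recover from.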
Stumbled upon this curious syntax in a talk by Sussman:
(define ((f x) y) ...)   ; curried define: equivalent to (define (f x) (lambda (y) ...))
I’ve seen a lot of business statistics graphs over the years, and many of them are useless for communicating trends. Key Performance Indicators (KPIs) should be ubiquitous and understandable to everyone in a business.
Rules to follow
- Absolute numbers (e.g. page views per day, widgets sold per day) tell us how a business is doing.
- Ratios of X/unit tell us how systems are doing.
- Ratios of units/profit tell us how much they cost and bridge the gap between Biz metrics and Tech metrics.
- Simple KPIs with concrete units are easy to understand. KPIs without well-understood units require too much explanation.
- KPI plots must have the actual units specified in the legends.
- KPI plots with the same numerator should be in the same graphic.
- Plot Y axes should be appropriately scaled and must have a 0-origin.
- The collection of the KPI data does not need to be exact; it must only be agreed upon.
- KPIs must be versioned; if a KPI collection method is changed, the KPI must have a new version.
The KPIs below are relatively easy to compute and simple enough for anyone to craft given access to basic operational data.
Assume we are selling Widgets, for each Application and collectively for all Applications, sampled for each day:
- W = number of new widgets sold
- P = total money in-flow – out-flow (gross profit)
- WP = projected new widgets gross profit (estimated)
Applications (internally and externally facing)
- P = page views
- PE = page errors
- PEP = PE / P = errors / pages
- PT = total page time
- PTP = PT / P = page time / pages
- PW = P / W = pages / new widget sold
- PTW = PT / W = page time / new widgets sold
- PTWP = PT / WP = page time / new widgets sold gross profit
Offline Processing – Batches
- B = batch items
- BE = batch item errors
- BEB = BE / B = errors / items
- BT = total batch item time
- BTB = BT / B = item time / item
- BTP = BT / P = item time / gross profit (here P is the gross profit defined above, not page views)
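As a sketch, the Application KPIs above can be computed directly from one day’s raw counts; the numbers here are invented purely for illustration:

```ruby
# Invented one-day sample counts for the Application metrics above.
p_views = 120_000.0  # P  = page views
pe      = 240.0      # PE = page errors
pt      = 36_000.0   # PT = total page time, in seconds
w       = 300.0      # W  = new widgets sold

pep = pe / p_views   # PEP: errors per page
ptp = pt / p_views   # PTP: page time (seconds) per page
pw  = p_views / w    # PW:  pages per new widget sold
```

Each ratio keeps a concrete unit — errors/page, seconds/page, pages/widget — which is what makes the plot legends self-explanatory.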
The header comments of a shell script, written after spending an afternoon with Chef…
# Why did I write this as a bash script,
# ... instead of using chef, puppet or even cfengine?
# Because my requirements are small and well-understood,
# and because, frankly, as the fable goes:
# Everything that humans do can be replaced with a shell script.
# Chef, at first glance, seems to be an exercise in metaphors:
# "cookbooks", "recipes" which are simply collections of
# macros for shell commands, which are already well-understood.
# The simple act of creating a user in Chef
# requires a plugin; the plugin requires a librarian,
# etc. I can invoke "adduser" in a single shell script line,
# just like a human admin would at a command prompt.
# System administrators and their machines
# are the "audience" of system automation, and both understand "shell":
# Shell is the lingua franca of sys-adminstery.
# Chef, on the other hand, is not readable by those who are not
# Chef-literate.
# I find it terribly annoying when a blog post titled
# "How to set up XYZ" turns out to be a long narrative
# punctuated with shell commands -- the whole thing could
# have been a shell script with the *narrative* as comments.
# Read the comments like a blog or read the code as if you were typing it.
# ... or use this as an executable specification.
# -- kurt 2012/12/20
Should Combined-Object-Lambda-Architecture really be Combined-Lambda-Object-Architecture?
Ian Piumarta’s IDST bootstraps an object system, then a compiler, then a Lisp evaluator. Maru bootstraps a Lisp evaluator, then crafts an object system, then a compiler. Maru is much smaller and more elegant than IDST.
Are object systems necessarily more complex than lambda evaluators? Or is this just another demonstration of how Lisp code/data unification is more powerful?
If message sends and function calls are decomposed into lookup() and apply(), the only difference between basic OO message-passing and function calling is lookup(): the former is late-bound, the latter early-bound (in the link-editor, for example). Is OO lookup() the sole complicating factor? Is a lambda-oriented compiler fundamentally less complex than an OO compiler?
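A sketch of that decomposition (my illustration, not Piumarta’s code): Ruby exposes enough reflection to separate the two phases explicitly, which makes the late binding of lookup() visible:

```ruby
# lookup(): resolve a selector against the receiver's class *at send time*
# (late binding). An early-binding compiler would do this step once, ahead
# of time, and cache the resolved method.
def lookup(receiver, selector)
  receiver.class.instance_method(selector)
end

# apply(): invoke an already-resolved method on a receiver -- this half is
# identical to a plain function call.
def apply(method, receiver, *args)
  method.bind(receiver).call(*args)
end

class Greeter
  def hello(name)
    "hello, #{name}"
  end
end

g = Greeter.new
m = lookup(g, :hello)            # the only late-bound step
greeting = apply(m, g, "world")
```

Everything OO-specific lives in lookup(); swap it for a compile-time symbol resolution and the call site is indistinguishable from a function call.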
Use mmap() to allocate heaps by default.
Use mmap() MAP_ANON, instead of /dev/zero.
Align mmap() size to pagesize.
Align heap allocation size to pagesize.
Expand heap slotlimit to fit in aligned allocation.
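The alignment described above is the usual round-up-to-a-power-of-two trick; a sketch in Ruby, assuming a 4096-byte page size (a real implementation would query the system page size):

```ruby
PAGESIZE = 4096 # assumed here for illustration

# Round an allocation size up to the next multiple of PAGESIZE.
# The bit mask works because PAGESIZE is a power of two.
def align_to_pagesize(size)
  (size + PAGESIZE - 1) & ~(PAGESIZE - 1)
end
```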
New $RUBY_HEAP_* options:
- initial number of slots per heap; value is independent of RUBY_HEAP_INIT_SLOTS.
- max number of slots for a heap; defaults to PAGESIZE or 4096.
This patch reduces the stack buffer memory footprint of dead Threads as early as possible, rather than waiting until the Thread can be GCed.
This is applicable only to the zero-copy context switch patch.