Lizard Brain Web Design

Last winter, I saw a great talk by Scott Davis called
"Lizard Brain Web Design". The theme was to apply psychology and neuro sci ideas to web sites, and to explain why simplicity and good design can really work. For example, we want the site to stay "out of the way" so that the users stay in a primal, "lizard" mode of consciousness with respect to the site. In this way, they can concentrate on what matters.
During the talk, I remember thinking that all of the principles discussed apply to more than surfing content on the web. They also apply to surfing
code in an IDE. That is, topics such as:
- Whitespace is a critical aspect of design
- Group related items (locality of reference)
- Our minds can only stack N items (N = 7 ?)
- Principle of least surprise
apply just as well to our APIs, our code organization, and coding conventions.
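As a toy illustration of those principles applied to code (every name below is invented for the example), grouping related statements and separating the groups with whitespace lets a reader skim a method's shape before its details:

```scala
// A made-up sketch showing "group related items" and whitespace
// as design tools inside a method, not just on a web page.
case class Item(name: String, price: BigDecimal)
case class Cart(items: List[Item])
case class Receipt(total: BigDecimal)

def placeOrder(cart: Cart): Receipt = {
  // validate
  require(cart.items.nonEmpty, "cart is empty")

  // price
  val subtotal = cart.items.map(_.price).sum
  val shipping = if (subtotal > 50) BigDecimal(0) else BigDecimal(5)

  // summarize
  Receipt(subtotal + shipping)
}
```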
For months now, I've wondered whether there are studies that apply neuroscience to developers.
Subitizing

Let's play a game: as you add parameters to a method, what number triggers your sense of "this is too many -- I need to refactor this"?
Seriously, go ahead, think of a number, N, for your threshold.
You probably said N = 3 or 4. True, that's what everyone says, but here is one reason why. The delightful book
Mind Hacks discusses
subitizing (item #35): given a set of N objects, where N is 4 or less, we count them in a
much faster way. The book claims 250 ms for the first 4 items and a full second (!) for every 4 items after that.
There is debate as to how this works (see Mind Hacks for academic references), but one conjecture is that when N <= 4, the "counting" is a side effect of visual processing: i.e. it is done by the reptilian, "lizard" level of the brain. When N goes past 4, we have to do some real work.
Now, let's be clear: the book talks about counting shapes. Stars, circles, beads on an abacus. I have no idea if this applies to Java or C# parameters.
But I'm willing to bet money that it does.
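For what it's worth, here is a made-up sketch of that itch in code (all of the names below are invented for illustration):

```scala
// Before: six positional parameters are hard to subitize at the call site.
def createUser(name: String, email: String, street: String,
               city: String, zip: String, newsletter: Boolean): Unit = ???

// After: grouping the related parameters into a small object brings the
// count back under the threshold, so the signature reads in small chunks.
case class Address(street: String, city: String, zip: String)

def createUserRefactored(name: String, email: String,
                         address: Address, newsletter: Boolean): Unit = ???
```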
Eye-Tracking and Variable Names

This article came across the transom recently, and it dovetails with the above ideas. The gist is that the researchers used scientific techniques (e.g. eye-tracking) to evaluate the productivity of different programming styles.
The claim in the paper is that the Scala style of using comprehensions is more productive than Java's iterative loops. Also, for small code blocks, well-named intermediate variables may not matter.
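I haven't seen the study's actual snippets, so here is my own made-up version of the contrast, written entirely in Scala, with the second half transliterating a Java-style loop:

```scala
val words = List("lizard", "brain", "web", "design")

// Comprehension style: declare what you want.
val shouty = for (w <- words if w.length > 3) yield w.toUpperCase

// Java-style iteration, transliterated into Scala: spell out how to get it.
val shoutyLoop = scala.collection.mutable.ListBuffer[String]()
for (w <- words) {
  if (w.length > 3) {
    shoutyLoop += w.toUpperCase
  }
}
```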
I haven't read the paper, and I can't vouch for the validity of the science. However, I find the approach fascinating. I'm sure scientific methods have long been applied to lines of code and productivity, but I wonder whether neuroscience will shape language design in the future.
It would be fascinating to see researchers hook developers up to functional MRI machines to observe how the brain works while they code. (I know that my amygdala lights up when I see an 80-line method!)
The Upshot

Imagine a geek conference where a new language is unveiled: instead of being driven by a sense of tradition or aesthetics, what if its design were modeled on hard evidence from a neuroscience lab?
Neat stuff.