
Design Eye for the Dev Guy slides from YOW! Nights

Thursday, 2 Aug 2012

The YOW! crew certainly know how to organise things fast and keep things running smoothly. One week’s notice, three cities, almost 500 people. Thanks to Lisa and the crew for their great support.

And thanks to all the fine guys and gals who showed up over the last week and contributed to some great discussions.

For those asking, I’m making available the latest slide deck used in Melbourne, Brisbane and Sydney.

DOWNLOAD (4.5MB)

Pixel Count: 32 Pixels. 27% More Signups.

Monday, 30 Jul 2012

At Agile Australia, Doug Blue from seek.com.au (Australia’s premier job search site) gave a great talk explaining elements of their product strategy and A/B feature testing. Doug included an example that perfectly illustrates just how much impact a few pixels can have.

Seek wanted to increase the number of people signing up for email alerts. This is not a primary task for a user looking for a job; it’s a second- or third-order goal. That means a balancing act between over-emphasising an action that’s not critical and having users miss a useful feature entirely.

The existing UI for the job search results page is pretty complex, providing the results themselves, refinement options for the search criteria and leaving some space for sponsored ads. I don’t have screenshots, but a sketch gives you some idea of the screen’s layout and the position of the link for email notifications:

Sketch of Seek results screen with A/B test.

Blue, pink and white are the colours of Seek’s brand. While there’s often a risk of branding overpowering the information hierarchy on a page, Seek keeps it under control: blue for highlighted regions, pink for key information and calls to action. Even if you’re not a fan of the colours, the consistent use throughout the site works reasonably well.

In Option ‘B’ you can see just one small change. Taking the pink “call to action” colour, they added a 2-pixel stripe across the top of the envelope icon:

Closeup sketch of email icon A/B test.

Such a small change! Just a 2-by-16-pixel region, a total of 32 pixels. On a typical laptop with a 13-inch screen at 1440×900 resolution, 32 pixels is an almost insignificant 0.0025% of the screen!

So, what was the measured impact? A massive 27% increase in signups. That’s almost 1% per pixel. If your UI is crunched for space, maybe you should consider measuring conversion impact per pixel and see how you compare.
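To put numbers on that, here’s the back-of-the-envelope arithmetic, my own working using only the figures quoted above:

```python
# Back-of-the-envelope arithmetic for the Seek example, using only the figures
# quoted in the post (2x16 pixel stripe, 1440x900 screen, 27% uplift).
changed_pixels = 2 * 16            # the pink stripe: 2 pixels tall, 16 wide
screen_pixels = 1440 * 900         # a typical 13-inch laptop resolution

fraction_of_screen = changed_pixels / screen_pixels
uplift_per_pixel = 27 / changed_pixels   # percentage points of uplift per pixel

print(f"Changed area: {fraction_of_screen:.4%} of the screen")       # ~0.0025%
print(f"Uplift: {uplift_per_pixel:.2f} percentage points per pixel")  # ~0.84
```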

One of the key attributes of visual processing is our uncanny ability to detect tiny amounts of contrasting colour. One theory of the origin of this ability relates to plants and animals evolving in tandem for mutual benefit. As vision evolved and became able to detect colour, one key use was spotting the few ripe pieces of fruit in a forest of green. Plants evolved at the same time to paint their fruit brightly to entice animals to eat it, encouraging them to transport the seeds. For the animal, improved colour vision became directly linked to how efficiently it could scan the jungle for a hint of blushing colour.

It may just be a theory, but it’s one that goes some way to explaining why we are so good at detecting small contrasts in colour:

it is to the plants’ advantage to have its fruit remain un-eaten until the seeds are ready for dispersal, so the color change in ripening fruit is a signal that the plants are sending to the animals. In turn the animal gets the greatest benefit from eating the ripened fruit, so it is to the animals’ advantage to recognize when the fruit is ripe.

While the origins of “ripe fruit detection” may be hypothetical, there are many simple experiments that show its power today in many animals. Even lizards.

Whether or not Seek knew it, their A/B test was testing visual processing that’s evolved over millions of years.

Design Eye on YOW! Nights

Wednesday, 18 Jul 2012

Just got the dates confirmed from the fine people at YOW! Nights. It’s all go for later this month: Melbourne, Brisbane and Sydney.

If you missed Design Eye for a Dev Guy at Agile Australia ’12, now’s the chance to catch up. See how everyone on the team can understand why visual perception is critical for a great user interface, learn simple tricks to assess what you’ve already built, and walk away with a list of techniques to apply immediately.

I’m sure I’ll see the regulars in Melbourne, and I’m looking forward to catching up with the latest in Brisbane and Sydney. Book now. Cya there!

Visualisation without Analysis is just fine

Friday, 6 Jul 2012

When we’re faced with large amounts of data, ofttimes we scurry to aggregate, summarise, tally and total so we can glean meaning from the mass of numbers. The trouble is we don’t always know what the data is concealing. We don’t know which analysis will reveal insight. One computing device that is extraordinarily good at finding patterns in large amounts of data is the human brain, together with its sophisticated peripheral, our eyes. All we need to do to activate this capability is visualise the raw data. Here are a couple of examples.

Last week I came across some fun visualisations by Vijay Pandurangan showing the historical change in movie poster colours. Starting back in 1914 and running through to this year, Vijay collected a stack of movie posters for each year. He then wrote a pretty simple program to chart the different colours used in each poster he could find:

I first made a unified view of colour trends in movie posters since 1914. Ignoring black and white colours, I generated a horizontal strip of hues in HSL. The width of each hue represents the amount of that hue across all images for that year, and the saturation and lighting were the weighted average for all matching pixels. Since HSL has a fixed order, comparisons can be made between years visually.
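This isn’t Vijay’s actual code, but a rough sketch of the idea as he describes it might look like the following. It uses Pillow, fixes a single saturation and lightness per hue bucket rather than computing weighted averages, and assumes a directory of poster images per year (the `posters/1914` path is just a placeholder):

```python
# Rough reconstruction of the hue-strip idea described above -- not Vijay's
# actual code. Bucket the hues across all posters for one year, then draw a
# strip whose segment widths are proportional to how often each hue appears.
import colorsys
from pathlib import Path
from PIL import Image, ImageDraw

BUCKETS = 360  # one bucket per degree of hue

def hue_histogram(poster_dir: str) -> list[int]:
    counts = [0] * BUCKETS
    for path in Path(poster_dir).glob("*.jpg"):
        img = Image.open(path).convert("RGB").resize((100, 150))
        for r, g, b in img.getdata():
            h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
            if s < 0.15 or l < 0.1 or l > 0.9:
                continue  # skip near-black, near-white and grey pixels
            counts[int(h * (BUCKETS - 1))] += 1
    return counts

def hue_strip(counts: list[int], width: int = 800, height: int = 20) -> Image.Image:
    total = sum(counts) or 1
    strip = Image.new("RGB", (width, height), "white")
    draw = ImageDraw.Draw(strip)
    x = 0.0
    for bucket, count in enumerate(counts):
        if not count:
            continue
        seg = width * count / total
        r, g, b = colorsys.hls_to_rgb(bucket / BUCKETS, 0.5, 0.9)
        draw.rectangle([int(x), 0, int(x + seg), height],
                       fill=(int(r * 255), int(g * 255), int(b * 255)))
        x += seg
    return strip

hue_strip(hue_histogram("posters/1914")).save("strip_1914.png")
```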

The results show an obvious trend.

Movie Poster colours by year

It’s pretty clear that over time, there’s a trend towards a 50/50 split between orange and blue. And sure enough, this is a known fad in movie posters. The high contrast of orange-blue has been road tested to death and can be seen on billboards everywhere.

3 movie posters

A large number of posters we see today are dominated by the complementary colour mix, juxtaposing warm and cool colours. The great thing about Vijay’s visualisation is that it makes no presumption about the underlying data. It just presents the frequencies as proportionally sized areas in the graphic. From this, we instantly see two interesting facts: today’s dominance of orange-blue, and the shift away from predominantly orange over the last century.

Another favourite example, from 160 years ago, is the often-referenced map of London used by John Snow when tracking the 1854 cholera outbreak. Dr Snow marked a map with black bars for each death, each bar drawn parallel to the street and stacked so that locations with more deaths build taller stacks. Here is the full map, drawn a year after the outbreak, and a detailed view of the highest concentration of deaths.

Visual distribution of Cholera deaths in London, 1854

Snow theorised that cholera was transmitted by water, not by foul air (miasma) as the accepted theory of the day held. By plotting incidents of cholera on a map of London, a cluster around the Broad Street water pump became obvious. The visualisation may have come too late to stop this particular outbreak, but it provided evidence strong enough to convince ardent skeptics that cholera was borne by contaminated water.

There’s a subtle detail to Snow’s visualisation in the care taken to draw each death-mark. Each mark is the same size, in thickness of stroke and length, and each is neatly aligned to the street the house faced onto. More than a question of tidiness, this consistency keeps the black marks in proportion to the toll of cholera at each location. The graphic would be nowhere near as powerful if the number of deaths was written as a numeral: the reader would have to interpret each number and scan the map for the largest, which is cognitively much harder (and more error prone) than looking for the biggest grouping of black lines.
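You can see the power of one-mark-per-death even in a toy example. Everything below is invented purely to illustrate the technique and has nothing to do with Snow’s actual data:

```python
# Toy illustration of Snow's one-mark-per-death technique: draw an identical
# short bar for every death, stacked away from the street, so the amount of
# black ink at each location is proportional to the count. The coordinates
# and counts are invented for illustration only.
import matplotlib.pyplot as plt

# (x, y) of a house front along an imaginary street, and deaths recorded there
incidents = [(1, 5, 1), (2, 5, 3), (3, 5, 9), (4, 5, 4), (3, 2, 1)]

fig, ax = plt.subplots()
for x, y, deaths in incidents:
    for i in range(deaths):
        # every mark the same length and thickness, aligned with the street
        ax.plot([x - 0.3, x + 0.3], [y + 0.15 * i] * 2, color="black", lw=2)

ax.set_aspect("equal")
ax.axis("off")
plt.show()
```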

Both these examples demonstrate data visualisation, not analysis. Snow did not need an algorithm to take the geo-coordinates and find a likely centre. By rendering the data on the map, the association with the Broad Street pump is instantly obvious. When working with large data sets with unknown trends, we can apply the same principle.

Even just a few pixels per data point, like in the movie posters, can be enough for human visual processing to detect trends. Use the computer to visualise as much data as possible on one screen and let the humans do the visual analysis.

Windows Product? Read the Innovator’s Dilemma now!

Thursday, 5 Jul 2012

If your company or product is in any way tied to Windows, you need to understand Clayton Christensen’s The Innovator’s Dilemma. Get it via this link, and I get a couple of cents.

While PC sales have been in decline recently, some have considered this more of a cyclical blip. Looking at the entire history of PC sales, Horace Dediu uses compelling numbers to starkly illustrate the rise and fall of Windows vs Apple. When looking at the gross data, it becomes clear the decline in PC sales is more structural. While this is of immediate concern for Microsoft and the PC manufacturers, the impact may be felt by your company sooner than you think. Dediu theorises:

Considering the near future, it’s safe to expect a “parity” of iOS+OS X vs. Windows within one or two years. The install base may remain larger for some time longer but the sales rate of alternatives will swamp it in due course.

The consequences are dire for Microsoft.

The numbers are pretty convincing. And dire for your own products. iOS and OS X are no longer platforms to consider merely for some alternate revenue. If you derive the bulk of your revenue from Windows, you are on a path to a niche product in decline. If Apple exceeds Windows in new sales within one or two years, the installed base of Windows will be in dramatic decline a couple of years after that. As PCs reach retirement age, more and more will be replaced by Macs and iPads, possibly even Android and Surface tablets if they generate enough traction.

The biggest impact the iPad has had on the PC industry is finally achieving what Linux tried and failed at for two decades: convincing everyone you don’t need a Windows PC to access the Internet and read your work email. Microsoft’s delay in responding to the iPhone and the iPad has let too many people experience computing without Windows and Office. People are beginning to realise en masse that it’s not only possible to function without Microsoft’s two cash cows, but desirable. The recent decline of Windows vs Apple is faster than the rise of Windows during the Internet boom of the late nineties.

If your company’s fortune is tied to the installed base of Windows, you are being disrupted right now. The next few years will be critical. You need to take what you currently have in the bank and innovate new revenue streams as quickly as possible. Don’t be afraid to save money in the short term by reducing investment in additional Windows-only initiatives.

Via Daring Fireball.

Update • Microsoft just announced its first ever quarterly loss since going public. From the New York Times, 19th July, 2012

Sales in the PC market have stagnated as consumers have diverted their technology spending to more exciting technologies, including smartphones and tablet computers like the iPad.

Gesture Bio-Mechanics and Common Tasks

Friday, 29 Jun 2012

Developers writing software for tablets and phones are still finding new idioms and experimenting to find the best use of gestures. Some apps, like Paper, use subtle gestures that take a few moments to get your head around, then become pretty comfortable. Some patterns are emerging as new, easily discovered interactions that are missed when other apps don’t use them, like pull-to-refresh. Some developers are sweating the details of brand new interactions that may influence interfaces for the generation of software to come. It’s a new frontier in software, which makes it exciting. But there are going to be a few mistakes.

One of the most used apps on my iPhone is Reeder. It allows me to skim dozens of RSS feeds and read articles in depth when needed. When I’m on the train or in a long queue, this is the app that competes with my Kindle for attention.

The latest version borrows from the current trend to have more screen space dedicated to content and moves some controls to gestures. Reeder has used gestures for a while, and overall the interactions are pretty good. The latest release, however, has added one little change that has become a major annoyance.

Reeder has a basic hierarchy of Groups, Feeds and Posts and uses a traditional master-detail interaction style to drill down into each article. As you drill down, you can go back a level using the Back button in the top left corner of each screen. Except on the screen where you spend the most time!

Example screens from Reeder

The Article screen uses a more “immersive” style where UI controls are subtle and the content takes the focus on the screen. A subdued Back arrow is there at the top-left, consistent at least in position with all other screens and easy enough to get to. There is nothing really wrong with how this screen looks when you first land on it.

The trouble begins when you start to scroll down and read the article. All the other views in Reeder leave controls at the top of the screen when scrolling. On the Article view, the controls move with the content. When scrolling to read the article, the controls scroll off the screen and are no longer visible.

When skimming news, I’ll read the first paragraph or two, then either continue reading or go back to the list view and look for something else interesting. But once I’ve scrolled down, how do I go back? I’m stuck! I have to scroll to the top to find the Back button. It would be nice if there was a shortcut. And there is…

I can go back to the list view by using a new gesture: swiping from left to right. A nice, easy finger move when using the phone with two hands. And a frustratingly tricky thumb move when using the phone with one hand! Can an interaction be that much harder based on which digit you’re using? Of course!

Bio-mechanics: Thumb vs Finger

When using a finger to swipe from left to right, it’s mostly the elbow joint doing the work, moving your whole forearm in a broad arc. As the radius of motion from fingertip to elbow is large compared to the phone’s screen, the result is a short, slightly curved arc that the software on the phone can detect as left-to-right motion.

When using a thumb, the easy, natural motion when holding a phone in your left hand is also an arc, from the left side down to the bottom. This is an easy gesture. However, in Reeder, the software is not looking for an arc. To draw a straight left-to-right line with a thumb, your brain needs to coordinate multiple joints: at the same time as the lower joint is rotating down, the top two joints need to extend the thumb. This extension and rotation must be carefully combined to make the line horizontal. It’s a feat of dexterity.
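A bit of rough geometry shows why the pivot radius matters so much. If the swipe is an arc of a circle, the vertical bulge above a straight line (the sagitta) is roughly c²/8r for a chord of length c and pivot radius r. The radii below are my own rough guesses, not measurements:

```python
# Rough geometry of finger vs thumb swipes. For an arc with chord length c
# and radius r, the sagitta (how far the arc bulges from a straight line)
# is approximately c^2 / (8 * r). Radii are guessed, not measured.

def sagitta(chord_cm: float, radius_cm: float) -> float:
    return chord_cm ** 2 / (8 * radius_cm)

swipe = 6.0  # cm of horizontal travel across a phone screen

print(f"Fingertip pivoting at the elbow (r ~ 40 cm): {sagitta(swipe, 40):.2f} cm")
print(f"Thumb pivoting at its base joint (r ~ 6 cm): {sagitta(swipe, 6):.2f} cm")
# ~0.11 cm vs ~0.75 cm of vertical drift: software can easily read the first
# as "horizontal" while treating the second as a scroll.
```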

Making a straight line with your thumb is tricky.

For me, my left thumb gets this awkward move right only 30% of the time! Most of the time, Reeder thinks I am trying to scroll up or down again. Compare that to my elbow moving my finger to perform the same task, which scores over 90%.

For Common Tasks use Simple Gestures

Reeder is not the only app that uses the same left-right swipe. Apple’s own Mail application uses the same move to archive a message from your inbox. I’ve never found it as annoying as I am finding the same gesture in Reeder. One reason for this is I don’t delete mail that often. I keep all real correspondence, and spam filtering is working pretty well. When I need to clean out a few mails, I’ll switch to using two hands and zip through the inbox.

Generally, I use my iPhone with one hand, so a very common task with only a one-in-three hit rate is driving me nuts. Reeder seems to be optimised for going to the next article (with both gestures and extra controls), which is something I almost never do: I skim the index and only read a small percentage of posts for most feeds. Reeder is also less forgiving of slight vertical movement. In Mail, as long as the gesture is generally horizontal rather than vertical, it’s treated as a “delete”. In Reeder, it feels as though you need to get the first fraction of a millimetre just right, or it’s a scroll.
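I have no idea how either app actually classifies the touch, but a toy classifier makes the point about tolerance. The thresholds and touch samples below are invented for illustration:

```python
# Toy swipe classifier -- purely illustrative, not how Reeder or Mail work.
# The only difference between the two policies is how much vertical drift
# is tolerated before the touch counts as a scroll instead of a swipe.

def classify(points, max_drift, min_travel=50):
    """points: list of (x, y) touch samples in pixels, in time order."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    travel = xs[-1] - xs[0]          # horizontal distance covered
    drift = max(ys) - min(ys)        # vertical wobble along the way
    if travel > min_travel and drift <= max_drift:
        return "back-swipe"
    return "scroll"

# A slightly curved thumb arc: moves right but sags ~40px on the way.
thumb_arc = [(10, 200), (40, 215), (80, 230), (120, 240)]

print(classify(thumb_arc, max_drift=10))   # strict policy   -> scroll
print(classify(thumb_arc, max_drift=60))   # tolerant policy -> back-swipe
```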

With buttons on the screen, the number of ways a user interacts is fairly limited. There are only so many ways we can touch a screen or move a cursor. Gesture interactions require us to think hard about the different scenarios our users will be in when using our software.

Gesture Algorithms are easy, but the human body has limitations

We may also need to factor in a deeper understanding of joint movement. Here is an example from Clinical Biomechanics: Musculoskeletal Actions and Reactions by R. C. Schafer:

Simple translatory motions of a body part involve movements of more than one joint. This requires reciprocating actions of three or more segments at two or more joints if parallel lines are to be followed. For example, a fingertip cannot be made to follow the straight edge of a ruler placed in front if the wrist and elbow joints are locked. If the wrist and elbow are fixed, the fingertip must follow an arc and not a straight line.

The movements described are not identical to what’s in play when using Reeder, but the message is clear: some movements are harder than others, and some are just impossible. When developing your application, if you need to use gestures for very common tasks, make sure they are bio-mechanically simple. And remember: what is algorithmically simple may not be bio-mechanically simple!

As for Reeder, the “Mark as Unread” toggle seems to be a very low-use feature compared to going back to the list view. I’d love to see this control swapped for my missing Back button.

Product churn or product charm?

Monday, 18 Jun 2012

Thinking further about the new MacBook Pro and the discussion around its unserviceability, there is a big difference between the commodity PC manufacturers and the great product innovators like Ford, Sony and Apple. These companies derive value for their customers using a pattern of break-through new product followed by years of incremental improvements and production efficiencies. Consumers see great new products that start a little pricey and are grabbed by early adopters, then are gifted with affordable luxury thanks to a steady decrease in price (often with an increase in quality as well).

Other companies launch products differently. They grab the latest-spec components and bundle them together in a fashionable-for-today package. The top-end components are expensive, hiding the fact that there’s also a higher-than-average margin applied to arrive at the final early-adopter price. Over time, the prices tend to come down, as with the innovators. But the price reduction is based on the cost of the components becoming cheaper. Rather than investing in production efficiencies, Dell can benefit from innovation in the supply chain (higher chip yields, etc.) that produces cheaper parts. By holding the retail price high as long as possible, an incremental increase in margin flows from every supplier innovation. Eventually, competition puts pressure on the fat margin and retail prices are dropped.

The “high launch price, price drop over time” cycle appears, at least from the consumer perspective, to be similar to, say, the Model-T Ford ($850 to $260 over twenty years). The difference is the point in the product life cycle where the company makes the most money. Is it at the launch, or in the tail?

The key to commodity PC prices coming down is production innovation from the component manufacturers, not production efficiencies from the PC makers. How does this affect the product strategy? It drives the need to get to market with the latest, fastest machine as quickly as possible, using the latest, fastest components to beef up the spec sheet. Not so many fast parts as to make a great device, only enough to get the tick on the box and ensure a margin that’s fatter than average. As soon as the margin is eaten away, it’s time to look for a new bundle of gadgets and this season’s fashionable colour.

If this is your product strategy, what does your marketing department do when briefed on the company’s vision for the new product (latest components = high margin)? It talks up the specs, of course! Sound familiar?

The key difference between the spec-sheet strategy and the blockbuster-then-refine strategy lies in where the bulk of the margin comes from. For innovation companies, paying off the investment in plant setup gives a lower effective operating margin at the start of production. The longer the same product is made, the more profitable each additional item off the line becomes. Two different models:

  • Quick to market, low average margin, high product churn
  • Slow to market, high average margin, low product churn

They are exact opposites. Both, if executed well, can be profitable. One demonstrates belief in design, trust in getting it right, respect for customers and simplicity. The other does not.

Apple’s Long Tail

Sunday, 17 Jun 2012

This week Apple launched its next generation laptop, the MacBook Pro with Retina Display. A few days later, Kyle Wiens wrote a piece for Wired that examined the internals after a teardown. The article and its conclusion, widely circulated among the technocrats, have created a bit of a stir.

The Retina MacBook is the least repairable laptop we’ve ever taken apart.

Some potential customers are unhappy that these amazing new top of the line machines are virtually unserviceable and non-upgradable. Why not use standard components? Why not use commodity parts that can be upgraded and easily replaced?

Jony Ive, his team and all of Apple are focused on building the best laptop conceivable. The design choices being questioned have resulted in:

  • Long battery life (thanks to glued, bespoke cells)
  • Thin case (thanks to soldered RAM and custom flash storage)
  • Low-reflection display (thanks to bonded LCD)
  • Light weight (thanks to no bulky standard slots)

In his opinion piece Wiens suggests Apple has crossed some line that was not meant to be crossed. It’s a common refrain from the technocrats. In more than name, Apple has transitioned from a computer company to a consumer electronics company, selling to many different types of customer. This has been a long-running tension in Apple.

Steve Wozniak and Steve Jobs, co-founders of Apple, always had different views on opening a computer’s case. Wozniak, the engineer, wanted expandability and flexibility. Jobs believed expandability to be a bug, not a feature. I can’t imagine Wozniak is a fan of the new machine.

The new MacBook Pro is a clear signal that Jobs’s vision for Apple is doing just fine under Tim Cook. Why does Apple make these tradeoffs? Here are a couple of explanations.

A Different Target Customer

Apple’s increasing mass appeal over the last 5-10 years has changed its customer mix, even for its top-shelf products. Today, Apple’s pro customers are people with large movie libraries, small movie studios, professional photographers and game artists. Developers and others who understand the difference between a memory bus and a universal serial bus are a much smaller percentage of sales than they used to be.

The reality is the new “prosumers” are a much larger segment than the traditional Mac developer, and Apple knows this. Sometimes they misstep, like the Final Cut Pro kerfuffle last year. But in general Apple will be happy to lose a few sales if it means a better long-term position. The new MacBook Pro is the first step in a new long-term game.

The bargain hunters who can find computer components online, crack open a laptop and install upgrades matter less than the other advantages Apple gets from soldered RAM and bespoke flash storage.

In the short term, Apple’s target customers for the new MacBook Pro with Retina Display are the prosumers who want the fastest, best-built laptop. It’s a small group compared to the entire customer base: the “early adopters” willing to pay a premium for the latest and greatest. The bargain hunters looking to buy a shell and fill it with cheap RAM need not apply.

What about the bargain hunters? Just wait.

The Early Adopters Pay for Those with Patience

When the first MacBook Air came out in 2008, I was an early adopter. For a long time I had used a 12-inch PowerBook G4, and I loved it. But the G4 processor was struggling, and when it came time to switch, I replaced it with a 15-inch Intel MacBook Pro. When the Air was announced, I’d been lugging those 15 inches around for a few years and was over it! The screen was good, but not so much bigger that it made up for the bulk and the weight.

So I pre-ordered one of the new Airs, paid a premium, loved its small size and lived with its many faults. The current-generation Air fixes virtually everything that was wrong with my machine. The 11-inch is even more portable, and they’re about half the price I paid!

This style of product innovation, a quantum shift followed by incremental refinements and price reductions, is not unique to the MacBook Air, or even to Apple. For another example, consider the product and price evolution of the first Sony PlayStation.

The main boards of all these devices are essentially the same. As the PlayStation evolved, more of the components were merged into more complex chips. Once designed, high-yield wafers with multiple chips are exceptionally cheap! The more functionality put into one chip, the cheaper the overall cost of manufacture. Ditch the CD-ROM and add an LCD, and a new line of portable devices defines itself with very little development cost.

This price and product evolution is repeated often, especially in the game console industry. But it’s not limited to electronics. Henry Ford launched the Model-T with a massive price advantage over the competition. Not content with that, he passed cost savings from production improvements on to customers.

Fifteen million Model-Ts were produced, spanning almost two decades.

There are two key elements a company needs to follow this product trajectory. One is sufficient control over manufacturing to allow every possible efficiency. The second is long product cycles. Apple has both.

Companies like Dell and HP, using industry-standard components, limit the areas where they can make manufacturing improvements. RAM and SSD drives that come in standard electronic packaging must fit into industry-standard slots. While these standards reduce the initial design effort, they also limit how far a production line can innovate. Innovation on a production line equates to lower costs; standard components set a floor below which it’s hard to get cost savings from production-line improvements.

There is an additional benefit to soldering RAM directly. Certainly Jony Ive gets to wield his magic over the internal design, but Jeff Williams (SVP Operations) also gets a massive reward. By avoiding pre-packaged RAM modules, Apple can go direct to the RAM manufacturers and get cheaper prices. They skip the middleman. They get larger margins. Instead of paying to have RAM chips packaged, and paying to have slots soldered onto the main board, Apple pays just for the RAM chips and solders them directly. Less space, but also less waste and more opportunity for streamlined production. The same goes for custom flash packaging.

A new production line that includes more component work, like placing RAM modules and SSD chips, is naturally going to be more expensive to build. But this is an investment. With Apple’s massive cash reserves, the new MacBook Pro is an inevitable design: a fully integrated custom build on a complicated production line that was expensive to set up, but easily justified if Apple plans to produce the same device, with minor improvements, for years to come.

Prices of the MacBook Pro this week reflect that setup cost. Early adopters pay a premium. But after a few short years the plant has paid for itself, and been refined and optimised. Pricing can be lowered to just component costs plus margin. With components cheaper than industry-standard parts, a long product life ends with low prices while maintaining high margins.

Tick, Tock

The clock is already ticking for Apple’s competitors. Apple is paying off its expensive production line with every early-adopter purchase. Soon, Apple will start passing on cost savings to customers. When this happens, the competitors will scratch their heads wondering how Apple can produce products of that quality at that price. Unable to move past this quarter’s earnings call, unable to invest heavily in a future product line, they’ll inevitably decide to compete in a different space: cheaper, lower quality, more flexible options. All with less margin. Low margin and lower sales, hmmm.

If you need a new high-end laptop and don’t care about price, grab the state of the art from Apple now. If you can wait 6-12 months, you can expect a modest price drop. Either way, forget about trying to fix the machine yourself. You’re going to have to add AppleCare to your budget. How long before this is included in the price, I wonder?

Form over Function over Form

Thursday, 14 Jun 2012

Great design is defined not by how nice something looks, but by how well it works. My favourite things, those contraptions where the design is beautiful, have an intrinsic elegance based on how they perform when used. The elegance of the single speed, fixed wheel bicycles I restore – the bare minimum required to make the most efficient human powered transport. The simplicity of the Converse Chuck Taylor shoes I’ve been wearing each day for 15 years – long lasting, renewable rubber and canvas, built with a minimum of fuss.

Recently I came across a story by Ralf Herrmann about designing a typeface for street signs.

The story begins in 2006 with a trip down Route 66. Day in, day out, I looked at U.S. traffic signs that were either set in the old, somewhat clumsy “FHWA font series” or the new Clearview HWY typeface. Approaching the signs, I would often test myself: which typeface works best from a distance, and which of its features or details might be responsible for its performance.

Herrmann started with the human-based outcome of the most legible typeface possible. It’s a great illustration of applying rigour to the design process, and three things stood out while reading.

Distinguished Letter Forms

First was the application of visual principles to the form of the letters, a very practical application of the “design eye” ideas I’ve been talking about for a while. Helvetica is a great font: simple in its design, stripped of all clutter, with a grid-based form that’s symmetrical to a fault. When introduced in 1957, Helvetica was a demonstration of elegance that designers of the time (and since) embraced enthusiastically. It’s now such a universal presence it has its own movie, and a wave of detractors keen to show their design independence.

Helvetica is not the only font used for signage, but its perfect grid-based geometry makes it a good baseline for highlighting the distinguishing marks used to improve legibility. Herrmann’s font is not based on Helvetica; it’s built from the ground up. But contrasting letter forms between the two typefaces highlights how Herrmann arrived at the shapes for his perfectly legible type.

Here you see how the grid base of Helvetica, clean and regular as it is, actually inhibits the ability to quickly discern between letters like ‘R’ and ‘B’. Helvetica is naked of serifs; serifs add a degree of visual clutter, but aid quick reading. The elegance in Herrmann’s work is in modifying the letter form of every character just enough to make it distinguishable at a glance while still keeping an overall visual harmony when the letters are combined on a sign.

Legibility Tool

Second was the use of visual tools to refine and test the character forms. For as long as I can remember, I’ve used a little trick to assess designs: shrink the screen down to about a quarter size and see how the visual hierarchy holds up when details are too small to digest. It’s a crude attempt to get a sense of the pre-attentive view of a screen. Herrmann went to the effort of building a Legibility Test Tool. Awesome!

For street signs, different weather, lighting and distances are key to legibility. Simulating these conditions, like the “glow” of a reflective sign above, provides a test tool to help refine each letter. Before you give up on this idea as not worth the effort, remember you might not need such a sophisticated approach for what you’re building; e.g., if you’re building a web app, try hitting ⌘- (Command-Minus) a few times in your browser. How well can you access your core functionality this way?
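If you’re building a native app or working from mockups, the same squint test takes a few lines of scripting. This is my own quick hack, nowhere near Herrmann’s tool, and the file names are placeholders:

```python
# A crude "squint test" in the spirit of the shrink-the-screen trick above:
# shrink a screenshot, blur it, then scale it back up. Whatever still reads
# is what the pre-attentive view of the UI is actually carrying.
# My own quick hack; file names are placeholders.
from PIL import Image, ImageFilter

def squint(screenshot_path: str, factor: int = 4) -> Image.Image:
    img = Image.open(screenshot_path)
    small = img.resize((img.width // factor, img.height // factor))
    return small.filter(ImageFilter.GaussianBlur(radius=1)).resize(img.size)

squint("my_app_screenshot.png").save("my_app_squinted.png")
```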

The Form of Function

Since Donald Norman raised awareness of affordances in design, there’s been an expectation that in great products, form and function are inextricably linked.

Comparing each of Herrmann’s characters one-on-one against a grid will reveal tails sticking out, a mixture of angles and even different line widths for light-on-dark vs dark-on-light contrasts. The driving function of legibility forces each character to be sufficiently different to be read at a glance. Homogenous forms may be visually more balanced, but they fail at the core task of being understood at a glance while hurtling down the road.

Individual letters are not where a sign typeface needs to excel. The subtle elegance of Wayfinding Sans Pro is in the clarity of phrases rendered next to each other on a sign. The detail in each character is there, but hidden in how effective and legible words are when rendered. It’s a great example of a carefully crafted contraption that gets its beauty from its high performance.