BENEFITS OF ORDERED INFO
Why does it matter if we have clean layout?
Our brains evolved to create order from chaos.
Chaotic page layouts are a debt on the brain's concentration budget. Instead of reading your page with full attention, the reader's brain is also trying, valiantly but subconsciously, to find a hidden pattern that isn't there.
Indirect Costs of Information Chaos
Cognitive psychologists such as Jean Piaget in Switzerland began serious and methodical studies of children's cognitive development in the 1950s. Their theory was that physical and cognitive brain structures developed in childhood affect our perception and thinking throughout our lives. This has been repeatedly confirmed.
The core findings and their effects:
Visual: Scientists estimate that as much as 80% of the brain's processing is devoted to visual stimuli. As an infant develops, the first sense to come under control is vision; the visual structures are the deepest, fastest, and most sophisticated in our brains.
Static images are powerful, but moving images are completely gripping. Think of the pressures on vision in the environment where we evolved: the search for food and for threats made vision critically important. That is why most people, flipping channels late at night, can tell within about five seconds of footage whether they have seen an old movie before. That is an incredible ability. It also explains why such a large share of the population is susceptible to video advertising, which speaks directly to many of these deepest brain structures.
Aural: The sense of hearing develops next, adding sound to the infant's world. This sense builds on the visual sense, and the infant begins to associate sounds with visual objects. Outside of gaming and notifications, we use little sound in interaction design.
Sense of Self: This comes with kinesthetic (body movement) awareness and a new level of higher-order thinking. The infant begins to control their own body, then to know their own limits, and to realize which objects around them they can and cannot control. So the sense of self is a two-edged cognitive tool, balanced by the sense of not-self, or otherness. The sense of self builds on visual and aural cues; again, higher-order functions are built on earlier, simpler structures.
Abstraction: The final key cognitive structure builds on all of the above. Abstraction is the growing ability in childhood to isolate elements and manipulate them as symbols. Spoken language is the principal method for manipulating abstract concepts, but we all use other systems as well, such as currency, mathematics, and body language. We continue to build and use systems of abstraction throughout our lives.
In philosophy you can easily argue that we live in a sea of energy. Photons strike molecules in our eyes and some of their energy is absorbed. Air moves in waves and pushes on our eardrums, which we interpret as sound. Molecules of our fingers interact with other molecules, and we feel touch. Of course, a long chain of chemical, physical, and organic phenomena is required to get the effects of that energy into our brains.
But we don't think about energy absorption every time we see the color red. It took thousands of years to devise the physics model that describes absorption.
The point is, without our brain's ability to take all this energy and assign meaning to it, and then to create metaphorical handles for manipulating those meanings, we could not complete the simplest tasks; we would be overwhelmed with detail. Instead, as we learn, we develop internal hierarchies of meaning and symbol that let us move through our day-to-day tasks without spending too much attention on our environment -- we have imposed our own vision of order onto our universe.
Ordering concepts by labeling and rejection, like any useful tool, has good and bad uses. When we walk down the aisle of the grocery store and are able to find the correct brand of tomato paste without reading every can or bottle or box or bag, that is a good use.
When we refuse to re-evaluate past values and assumptions in the face of new and contradictory information, that can hurt us.
Information chaos competes for our mental resources. When we're faced with a new terrain of seemingly chaotic organization (like some web pages), the brain begins searching for meaning and organization. This occurs on many levels, with some processes running beneath the level of consciousness. The brain wastes processing power searching for relationships that aren't there; experiments with complex visual environments show that reaction times increase (responses slow down) as the brain is occupied processing a complex (read: chaotic) environment. The solution is to simplify the interface design: align tables clearly, create consistent navigation, and use whitespace and lines to create clear groupings of like information.
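As one concrete illustration of the alignment advice above, here is a minimal Python sketch (the data and helper name are my own, purely illustrative): padding every cell in a column to a shared width lets the eye scan down one column without re-parsing each row.

```python
def format_table(rows):
    """Left-align each column to the width of its longest cell."""
    # zip(*rows) transposes rows into columns so we can measure each column.
    widths = [max(len(str(cell)) for cell in col) for col in zip(*rows)]
    return [
        "  ".join(str(cell).ljust(w) for cell, w in zip(row, widths))
        for row in rows
    ]

# Illustrative data only.
files = [
    ("Name", "Size", "Type"),
    ("report.pdf", "120 KB", "Document"),
    ("logo.png", "8 KB", "Image"),
]
for line in format_table(files):
    print(line)
```

Because every cell in a column starts at the same character position, the reader's visual system can treat the column as a single group instead of hunting for the boundary on each row.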
An example of the power of the visual structures of the brain and layers of abstraction is the "Rooms of Animals" exercise, which I first heard Alan Kay discuss in 1990.
If you have two rooms, one with walls filled with pictures of animals, the other with walls filled with names of animals, and you ask a volunteer to enter a room and find a specific animal, that subject will always find the picture faster, usually by a factor of 2 to 3.
This is because the user dealing with names must 1) read the word, 2) recall its associations (a reverse abstraction to recover the meaning), 3) determine whether there is a fit, and 4) accept or reject the term.
On the other hand, a volunteer in the picture room can let their faster visual brain structure accept or reject the representations directly (while abstractions, the pictures are sufficiently visual to allow the low level, nonverbal visual brain to make comparisons).
Use of slang, mnemonics, TV or literary metaphors, poetic descriptions, riddles, etc., all slow down an interaction even further since the user is now dealing with layers of abstraction (but it can also make the experience richer and more enjoyable).
Visual, Verbal, and Kinesthetic
This comes from a separate branch of behavioral psychology and was hijacked as Neuro-Linguistic Programming, or NLP, but it remains useful for UX design and for any presentation to an audience.
We can all use these three structural modes to interpret our world; they are rooted in the earliest cognitive structures of the brain. But humans tend to select and reinforce one of these modes over time, becoming more sensitive and attuned to information in their dominant mode.
Graphical User Interfaces (GUI) are successful because they communicate in all three modes:
Visual: The GUI is graphic, using visual representations of data and programs; this appeals to people with a visual bias. The majority of the human population is visually dominant.
Language cues for a visual thinker:
"I see what you're saying..." or "This doesn't look right..."
Verbal: The use of language and some sounds appeals to users with a dominant verbal mode. The menu structures and written instructions in GUIs work on this level, and the hyperlinking the Web adds to language is very powerful. Note that verbal users treat language (an abstraction layer) as a working skill.
Language cues for a verbal thinker:
"I'm listening..." or "What did they say?"
Kinesthetic: Users who are sensitive to body position, emotion, and movement through space find the mouse movements and apparent spatial relationships of GUIs appealing.
Language cues for a kinesthetic thinker:
"What did you do?" or "Where is it?"
Regardless of an individual user's biases, it is important to communicate in multiple modes whenever possible. This ensures that the interface will be effective for different groups, and that most users (who actually use all the modes regularly) will have redundant information to help them better understand the interface.
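The multi-mode advice can be made concrete with a small sketch. Assuming a hypothetical Action record (the names and fields here are my own, not from any real toolkit), a single control can carry redundant cues in all three modes:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """One UI control that communicates in all three modes (illustrative)."""
    icon: str        # visual: a glyph recognized faster than the label is read
    label: str       # verbal: text for the menu, tooltip, and screen reader
    shortcut: str    # verbal: a typed abstraction of the same command
    draggable: bool  # kinesthetic: supports direct manipulation with the mouse

# A visually dominant user spots the icon, a verbal user reads the label,
# and a kinesthetic user drags the control onto a target.
save = Action(icon="💾", label="Save", shortcut="Ctrl+S", draggable=True)
```

Dropping any one field would not break the control, but it would strip the cue that some group of users relies on most; the redundancy is the point.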