
Attention problems

Why good readers might have reading comprehension difficulties and how to deal with them

The limitations of working memory have implications for all of us. The challenges that come from having a low working memory capacity are not only relevant for particular individuals, but for almost all of us at some point in our lives, because working memory capacity has a natural cycle: it grows through childhood and begins to shrink in old age. So the problems that come with a low working memory capacity, and the strategies for dealing with them, are ones that all of us need to be aware of.

Today, I want to talk a little about the effect of low working memory capacity on reading comprehension.

A recent study involving 400 University of Alberta students found that 5% of them had reading comprehension difficulties. Now the interesting thing about this is that these were not conventionally poor readers; they could read perfectly well. Their problem lay in making sense of what they were reading: not because they didn’t understand the words or the meaning of the text, but because they had trouble remembering what they had read earlier.

Now these were good students — they had at least managed to get through high school sufficiently well to go to university — and many of them had developed useful strategies for helping them with this task: highlighting, making annotations in the margins of the text, and so on. But it was still very difficult for them to get hold of the big picture — seeing and understanding the text as a whole.

This is more precisely demonstrated in a very recent study that required 62 undergraduates to read a website on the taxonomy of plants. Now this represents a situation that is much more like a real-world study scenario, and one that has, as far as I know, been little studied: namely, drawing together information from multiple documents.

In this experiment, the multiple documents were represented by 24 web pages. Each page discussed a different part of the plant taxonomy. The website as a whole was organized according to a four-level hierarchical tree structure, where the highest level covered the broadest classes of plants (“Plants”), and the lowest, individual species. However — and this is the important point — there was no explicit mention of this organization, and you could navigate only one link up or down the tree, not sideways. Participants entered the site at the top level.
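To make that structure concrete, here is a minimal sketch in Python of a four-level, up-or-down-only hierarchy of this kind. The plant names below the top level are my own placeholders, not the study’s materials; the point is simply that no single page reveals the tree as a whole.

```python
# A minimal sketch (not the study's actual materials) of a four-level
# hierarchy in which each page links only one step up or down.
# Names below the top level are illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class Page:
    title: str
    parent: "Page | None" = None
    children: list["Page"] = field(default_factory=list)

    def add_child(self, title: str) -> "Page":
        child = Page(title, parent=self)
        self.children.append(child)
        return child

# Level 1 is the entry point; levels 2-4 narrow down to individual species.
plants = Page("Plants")
flowering = plants.add_child("Flowering plants")   # level 2 (placeholder)
roses = flowering.add_child("Rose family")         # level 3 (placeholder)
roses.add_child("Dog rose")                        # level 4 (placeholder)

# Navigation is restricted to one step up or down; there are no sideways
# links, so the overall tree is never shown to the reader.
def links_from(page: Page) -> list[str]:
    up = [page.parent.title] if page.parent else []
    down = [child.title for child in page.children]
    return up + down

print(links_from(roses))  # ['Flowering plants', 'Dog rose']
```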

After pretesting to assess WMC and prior plant knowledge, the students were given 18 search questions. Participants were asked both to read the site and to answer the questions. They were given 25 minutes to do so, after which they completed a post-test similar to their pre-test of prior knowledge: (1) placing the eight terms found in the first three levels on the hierarchical tree (tree construction task); (2) selecting, from a list of five, the two items that were subordinate to a given item (matching task).

Neither WMC nor prior knowledge affected performance on the search task. Neither WMC nor prior knowledge (nor indeed performance on the search task) directly affected performance on the post-test matching task, indicating that learning simple factual knowledge is not affected by your working memory capacity or by how much relevant knowledge you have (remember, though, that this was a very simple and limited amount of new knowledge).

But, WMC did significantly affect understanding of the hierarchical structure (assessed by the tree construction task). Prior knowledge did not.

These findings don’t only tell us about the importance of WMC for seeing the big picture; they also provide some evidence of what underlies that, or at least what doesn’t. The finding that WMC didn’t affect the other tasks argues against the idea that high-WMC individuals are simply benefiting from a faster reading speed, or are better at making local connections, or can cope better with doing multiple tasks. WMC didn’t affect performance on the search questions, and it didn’t affect performance on the matching task, which tested understanding of local connections. No, the only benefit of a high WMC was in seeing global connections that had not been made explicit.

Let’s go back to the first study for a moment. Many of the students having difficulties apparently did use strategies to help them deal with their problem, but their strategy use obviously wasn’t enough. I suspect part of the problem here is that they didn’t really realize what their problem was (and you can’t employ the best strategies if you don’t properly understand the situation you’re dealing with!).

This isn’t just an issue for people who lack the cognitive knowledge and the self-knowledge (“metacognition”) to understand their intrinsic problem. It’s also an issue for adults whose working memory capacity has been reduced, either through age or potentially temporary causes such as sleep deprivation or poor health. In these cases, it’s easy to keep on believing that ways of doing things that used to work will continue to be effective, not realizing that something fundamental (WMC) has changed, necessitating new strategies.

So, let’s get to the burning question: how do you read / study effectively when your WMC is low?

The first thing is to be aware of how little you can hold in your mind at one time. This is where paragraphs are so useful, and why readability is affected by length of paragraphs. Theoretically (according to ‘best practice’), there should be no more than one idea per paragraph. The trick to successfully negotiating the hurdle of lengthy texts lies in encapsulation, and like most effective strategies, it becomes easier with practice.

Rule 1: Reduce each paragraph to as concise a label as you can.

Remember: “concise” means not simply brief, but rather, as brief as it can be while still reminding you of all the relevant information that is encompassed in the text. This is about capturing the essence.

Yes, it’s an art, and to do it well takes a lot of practice. But you don’t have to be a master of it to benefit from the strategy.

The next step is to connect your labels. This, of course, is a situation where a mind map-type strategy is very useful.

Rule 2: Connect your labels.

If you are one of those who are intimidated by mind maps, don’t be alarmed. I said, “mind map-type”. All you have to do is write your labels (I call them labels to emphasize the need for brevity, but of course they may be as long as a shortish sentence) on a sheet of paper, preferably in a loose circle so that you can easily draw lines between them. You should also try to write something beside these lines, to express your idea of the connection. These connection notes in turn provide a more condensed label for the ideas being connected, and you can then make connections between these labels and the others.

The trick is to move in small steps, but not to stay small. Think of the process as a snowball, gathering ideas and facts as it goes, getting (slowly) bigger and bigger. Basically, it’s about condensing and connecting: the information becomes more and more condensed, and more and more densely connected, until you see the whole picture and understand the essence of it.
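If it helps to see the method laid out mechanically, here is a rough sketch of the condense-and-connect process in Python. The label and connection texts are invented examples; what matters is the structure: concise labels, annotated links, and a single higher-level label that sums up a cluster.

```python
# A rough sketch of the condense-and-connect strategy: paragraph labels
# become nodes, annotated lines become labelled connections, and a whole
# cluster is itself condensed into one higher-level label.
# All label text here is invented, purely for illustration.

labels = {
    "p1": "WMC limits how much we hold at once",
    "p2": "One idea per paragraph aids readability",
    "p3": "Reduce each paragraph to a concise label",
}

# Each connection records *why* two labels are linked.
connections = [
    ("p1", "p3", "labels work around the WMC limit"),
    ("p2", "p3", "short paragraphs are easier to label"),
]

# The cluster as a whole gets one summary label, which can later be
# connected to the summary labels of other pages or chapters.
cluster = {
    "label": "Label paragraphs to beat WMC limits",
    "members": list(labels),
}

for a, b, note in connections:
    print(f"{labels[a]}  <--[{note}]-->  {labels[b]}")
print("Summary label:", cluster["label"])
```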

Another advantage of this method is that you will have greatly increased your chances of remembering the material in the long term!

In a situation similar to that of the second study — assorted web pages — you want to end up with a tight cluster of labels for each page, the whole of which is summed up by one single label.

What all this means for teachers, writers of textbooks, and designers of instructional environments is that they should put greater effort into making global connections explicit: the ‘big picture’.

A final comment about background knowledge. Notwithstanding the finding of the second study that there was no particular benefit to prior knowledge, the other part of this process is to make connections with knowledge you already have. I’d remind you again that that study was only testing an extremely limited knowledge set, and this greatly limits its implications for real-world learning.

I have spoken before of how long-term memory can effectively increase our limited WMC (regardless of whether your WMC is low or high). Because long-term memory is essentially limitless. But information in it varies in its accessibility. It is only the readily accessible information that can bolster working memory.

So, there are two aspects to this when it comes to reading comprehension. The first is that you want any relevant information you have in LTM to be ‘primed’, i.e. ready and waiting. The second is that you are obviously going to do better if you actually have some relevant information, and the more the better!

This is where the educational movement to ‘dig deep not broad’ falls down. Now, I am certainly not arguing against this approach; I think it has a lot of positive aspects. But let’s not throw out the baby with the bathwater. A certain amount of breadth is necessary, and this of course is where reading truly comes into its own. Reading widely garners the wide background knowledge that we need — and those with WMC problems need in particular — to comprehend text and counteract the limitations of working memory. Because reading widely — if you choose wisely — builds a rich database in LTM.

We say: you are what you eat. Another statement is at least as true: we are what we read.

References

Press release on the first study (pdf, cached by Google)

Second study: Banas, S., & Sanchez, C. A. (2012). Working memory capacity and learning underlying conceptual relationships across multiple documents. Applied Cognitive Psychology. doi:10.1002/acp.2834

Is multitasking really a modern-day evil?

In A Prehistory of Ordinary People, anthropologist Monica Smith argues that rather than deploring multitasking, we should celebrate it as the human ability that separates us from other animals.

Her thesis that we owe our success to our ability to juggle multiple competing demands and to pick up and put down the same project until completion certainly makes a good point. Yes, memory and imagination (our ability to project into the future) enable us to remember the tasks we’re in the middle of, and allow us to switch between tasks. And this is undeniably a good thing.

I agree (and I don’t think I have ever denied) that multitasking is not in itself ‘bad’. I don’t think it’s new, either. These are, I would suggest, straw men, but I’m not decrying her raising them. Reports in the media are prone to talking about multitasking as if it is evil and novel, and a symptom of all that is wrong in modern life. It is right to challenge those assumptions.

The problem with multitasking is not that it is inherently evil. The point is to know when to stop.

There are two main dangers with multitasking, which we might term the acute and the chronic. The acute danger is when we multitask while doing something that has the potential to risk our own and others’ safety. Driving a vehicle is the obvious example, and I have reported on many studies over the past few years that demonstrate the relative dangers of different tasks (such as talking on a cellphone) while driving a car. Similarly, interruptions in hospitals increase the probability of clinical errors, some of which can have dire consequences. And of course on a daily level, acute problems can arise when we fail to do one task adequately because we are trying to do other tasks at the same time.

A chronic danger of multitasking that has produced endless articles in recent years is the suggestion that all this technology-driven multitasking is making us incapable of deep thought or focused attention.

But Smith argues that we do not, in fact, engage in levels of multitasking that are that much different from those exhibited in prehistoric times. ‘That much’ is of course the get-out phrase. How much difference is too much? Is there a point at which multitasking is too much, and have we reached it?

These are the real questions, and I don’t think the answer is a simple line we can draw. Research on driver multitasking has revealed significant differences between drivers: as a function of age, of personal attributes, of emotional or physical state. It has revealed differences between tasks; for example, talking that involves emotions or decisions is more distracting than less engaging conversation, and half-overheard conversations are surprisingly distracting (suggesting that having a passenger in the car talking on a phone may be more distracting than doing it yourself!). These are the sorts of things we need to know: not that multitasking is bad, but when it is bad.

This approach applies to the chronic problem also, although it is much more difficult to study. But these are some of the questions we need to know the answers to:

  • Does chronic multitasking affect our long-term ability to concentrate, or only our ability to concentrate while in the multitasking environment?
  • If it does affect our long-term ability to concentrate, can we reverse the effect? If so, how?
  • Is the effect on children and adolescents different from that of adults?
  • Does chronic multitasking produce beneficial cognitive effects? If so, is this of greater benefit for some people rather than others? (For example, multitasking training may benefit older adults)
  • What are the variables in multitasking that affect our cognition in these ways? (For example, the number of tasks being performed simultaneously; the length of time spent on each one before switching; the number of times switching occurs within a defined period; the complexity of the tasks; the ways in which these and other factors might interact with temporary personal variables, such as mood, fatigue, alcohol, and more durable personal variables such as age and personality)

We need to be thinking in terms of multitasking contexts rather than multitasking as one uniform (and negative) behavior. I would be interested to hear your views on multitasking contexts you find beneficial, pleasant or useful, and contexts you find difficult, unpleasant or damaging.

Shaping your cognitive environment for optimal cognition

Humans are the animals that manipulate their cognitive environment.

I reported recently on an intriguing study involving an African people, the Himba. The study found that the Himba, while displaying an admirable amount of focus (in a visual perception task) if they were living a traditional life, showed the more diffuse, distractible attention typical of urban dwellers once they moved to town. On the other hand, digit span (a measure of working memory capacity) was smaller in the traditional Himba than it was in the urbanized Himba.

This is fascinating, because working memory capacity has proved remarkably resistant to training. Yes, we can improve performance on specific tasks, but it has proven more difficult to improve the general, more fundamental, working memory capacity.

However, there have been two areas where more success has been found. One is ADHD, where training has appeared to be more successful. The other is an area no one thinks of in this connection, because no one thinks of it in terms of training, but rather in terms of development: the increase in WMC with age. So, for example, average WMC increases from 4 chunks at age 4, to 5 at age 7, 6 at age 10, and 7 at age 16. It starts to decrease again in old age. (Readers familiar with my work will note that these numbers are higher than the numbers we now tend to quote for WMC; these numbers reflect the ‘magic number 7’, i.e. the number of chunks we can hold when we are given the opportunity to actively maintain them.)

Relatedly, there is the Flynn effect. The Flynn effect is ostensibly about IQ (specifically, the rise in average IQ over time), but IQ has a large WM component. Having said that, when you break IQ tests into their sub-components and look at their change over time, you find that the Digit Span subtest is one component that has made almost no gain since 1972.

But of course 1972 is still very modern! There is no doubt that there are severe constraints on how much WMC can increase, so it’s reasonable to assume we hit that ceiling long ago (speaking of urbanized Western society as a group, not individuals).

It’s also reasonable to assume that WMC is affected by purely physiological factors involving connectivity, processing speed and white matter integrity — hence at least some of the age effect. But does it account for all of it?

What the Himba study suggests (and I do acknowledge that we need more and extended studies before taking these results as gospel), is that urbanization provides an environment that encourages us to use our working memory to its capacity. Urbanization provides a cognitively challenging environment. Our focus is diffused for that same reason — new information is the norm, rather than the exception; we cannot focus on one bit unless it is of such threat or interest that it justifies the risk.

ADHD shows us, perhaps, what can happen when this process is taken to the extreme. So we might take these three groups (traditional Himba, urbanized Himba, individuals with ADHD) as points on the same continuum. The continuum reflects degree of focus, and the groups reflect environmental effects. This is not to say that there are not physiological factors predisposing some individuals to react in such a way to the environment! But the putative effects of training on ADHD individuals points, surely, to the influence of the environment.

Age provides an intriguing paradox, because as we get older, two things tend to happen: we have a much wider knowledge base, meaning that less information is new, and we usually shrink our environment, meaning again that less information is new. All things being equal, you would think that would mean our focus could afford to draw in. However, as my attentive readers will know, declining cognitive capacity in old age is marked by increasing difficulties in ignoring distraction. In other words, it’s the urbanization effect writ larger.

How to account for this paradox?

Perhaps it simply reflects the fact that the modern environment is so cognitively demanding that these factors aren’t sufficient on their own to enable us to relax our alertness and tighten our focus, in the face of the slowdown in processing speed that typically occurs with age (there’s some evidence that it is this slowdown that makes it harder for older adults to suppress distracting information). Perhaps the problem is not simply, or even principally, the complexity of our environment, but the speed of it. You only have to compare a modern TV drama or sit-com with one from the 70s to see how much faster everything now moves!

I do wonder whether, in a less cognitively demanding environment, say a traditional Himba village, WMC shows the same early rise and late decline. In an environment where change is uncommon, it is natural for elders to be respected for their accumulated wisdom (experience is all), but perhaps this respect also reflects a constancy in WMC (and thus ‘intelligence’), so that elders are not disadvantaged in the way they may be in our society. Just a thought.

Here’s another thought: it’s always seemed to me (this is not in any way a research-based conclusion!) that musicians and composers, and writers and professors, often age very well. I’ve assumed this was because they are keeping mentally active, and certainly that must be part of it. But perhaps there’s another reason, possibly even a more important one: these are areas of expertise in which the practitioner spends a good deal of time focused on one thing. Rather than allowing their attention to be diffused throughout the environment all the time, they deliberately shut off their awareness of the environment to concentrate on their music, their writing, their art.

Perhaps, indeed, this is the shared factor that distinguishes the activities that help fight age-related cognitive decline from those that don’t.

I began by saying that humans are the animals that manipulate their cognitive environment. I think this is the key to fighting age-related cognitive decline, or ADHD if it comes to that. We need to be aware of how much our brains try to operate in a way that is optimal for our environment, which means that, by controlling our environment, we can change the way our brain operates.

If you are worried about your ‘scattiness’, or if you want to prevent or fight age-related cognitive decline, I suggest you find an activity that truly absorbs and challenges you, and engage in it regularly.

The increase in WMC in Himba who moved to town also suggests something else. Perhaps the reason that WM training programs have had such little success is because they are ‘programs’. What you do in a specific environment (the bounds of a computer and the program running on it) does not necessarily, or even usually, transfer to the wider environment. We are contextual creatures, used to behaving in different ways with different people and in different places. If we want to improve our WMC, we need to incorporate experiences that challenge and extend it into our daily life.

This, of course, emphasizes my previous advice: find something that absorbs you, something that becomes part of your life, not something you 'do' for an hour some days. Learn to look at the world in a different way, through music or art or another language or a passion (Civil War history; Caribbean stamps; whatever).

You can either let your cognitive environment shape you, or shape your cognitive environment.

Do you agree? What's your cognitive environment, and do you think it has affected your cognitive well-being?

Improving attention through nature

Until recent times, attention has always been quite a mysterious faculty. We’ve never doubted attention mattered, but it’s only in the past few years that we’ve appreciated how absolutely central it is for all aspects of cognition, from perception to memory. The rise in our awareness of its importance has come in the wake of, and in parallel with, our understanding of working memory, for the two work hand-in-hand.

In December 2008, I reported on an intriguing study that demonstrated the value of a walk in the fresh air for a weary brain. The study involved two experiments in which researchers found memory performance and attention spans improved by 20% after people spent an hour interacting with nature. There are two important aspects to this finding: the first is that this effect was achieved by walking in the botanical gardens, but not by walking along main streets; the second, far less predictable and far more astonishing, was that this benefit was also achieved by looking at photos of nature (versus looking at photos of urban settings).

Now, most of us can appreciate that a walk in a natural setting will clear a foggy brain, and that this is better than walking busy streets — even if we have no clear understanding of why that should be. But the idea that the same benefit can accrue merely from sitting in a room and looking at pictures of natural settings seems bizarre. Why on earth should that help?

Well, there’s a theory. Attention, as we all know, even if we haven’t articulated it, has two components (three if you count general arousal). These two components, or aspects, of attention are involuntary or captured attention, and voluntary or directed attention. The first of these is exemplified by the situation when you hear a loud noise, or someone claps you on the shoulder. These are events that grab your attention. The second is the sort you have control over, the attention you focus on your environment, your work, your book. This is the type of attention we need, and find so much more elusive as we get older.

Directed attention has two components to it: the direct control you exert, and the inhibition you apply to distracting events, to block them out. As I’ve said on a number of occasions, it is this ability to block out distraction that is particularly affected by age, and is now thought to be one of the major reasons for age-related cognitive impairment.

Now, this study managed to isolate the particular aspects of attention that benefited from interacting with nature. The participants were tested on three aspects: alerting, orienting, and executive control. Alerting is about being sensitive to incoming stimuli, and was tested by comparing performance on trials in which the participant was warned by a cue that a trial was about to begin, and trials where no warning was given. Alerting, then, is related to arousal — it’s general, not specifically helpful about directing your attention.

Orienting, on the other hand, is selective. To test this, some trials were initiated by a spatial cue directing the participant’s attention to the part of the screen in which the stimulus (an arrow indicating direction) would appear.

Executive control also has something to do with directed attention, but it is about resolving conflict between stimuli. It was tested through trials in which three arrows were displayed, sometimes all pointing in the same direction, other times having the distracter arrows pointing in the opposite direction to the target arrow. So this measures how well you can ignore distraction.
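For readers who like to see a paradigm spelled out, here is a toy sketch (my own illustration, not the study’s procedure or code) of how those three measures each boil down to a contrast between trial types; the trial fields, timings and scoring details are assumptions made only for the illustration.

```python
# A toy sketch of the three attention measures described above. Trial
# fields, timings, and comparison details are illustrative only; the
# point is that each measure is a contrast between trial types.
import random

def make_trial():
    return {
        "warning_cue": random.choice([True, False]),               # alerting contrast
        "spatial_cue": random.choice([True, False]),               # orienting contrast
        "flankers": random.choice(["congruent", "incongruent"]),   # executive contrast
    }

def mean_rt(trials, rts, **conditions):
    """Mean reaction time over trials matching the given conditions."""
    sel = [rt for t, rt in zip(trials, rts)
           if all(t[k] == v for k, v in conditions.items())]
    return sum(sel) / len(sel)

# Fake data, just to show the contrasts being computed.
trials = [make_trial() for _ in range(400)]
rts = [0.45 + 0.2 * random.random() for _ in trials]

alerting = mean_rt(trials, rts, warning_cue=False) - mean_rt(trials, rts, warning_cue=True)
orienting = mean_rt(trials, rts, spatial_cue=False) - mean_rt(trials, rts, spatial_cue=True)
executive = mean_rt(trials, rts, flankers="incongruent") - mean_rt(trials, rts, flankers="congruent")
print(f"alerting={alerting:+.3f}s  orienting={orienting:+.3f}s  executive={executive:+.3f}s")
```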

So this is where the findings get particularly interesting: it seems that looking at pictures of nature benefited executive control, but not alerting or orienting.

Why? Well, attention restoration theory posits that a natural environment gives your attentional abilities a chance to rest and restore themselves, because there are few elements that capture your attention and few requirements for directed attention. This is more obvious when you are actually present in these environments; it’s obvious that on a busy city street there will be far more things demanding your attention.

The fact that the same effect is evident even when you’re looking at pictures echoes, perhaps, recent findings that the same parts of the brain are activated when we’re reading about something or watching it or doing it ourselves. It’s another reminder that we live in our brains, not the world. (It does conjure up another intriguing notion: does the extent to which pictures are effective correlate with how imaginative the person is?)

It’s worth noting that mood also improved when the study participants walked in the park rather than along the streets, but this didn’t appear to be a factor in their improved cognitive performance; however, the degree to which they felt mentally refreshed did correlate with their performance. Confirming these results, mood wasn’t affected by viewing pictures of nature, but participants did report that such pictures were significantly more refreshing and enjoyable.

Now, I’ve just reported on a new study that seems to me to bear on this issue. The study compared brain activity when participants looked at images of the beach and the motorway. The researchers chose these contrasting images because they are associated with very similar sounds (the roar of waves is acoustically very similar to the roar of traffic), while varying markedly in the feelings evoked. The beach scenes evoke a feeling of tranquility; the motorway scenes do not.

I should note that the purpose of the researchers was to look at how a feeling (a sense of tranquility) could be evoked by visual and auditory features of the environment. They do not refer to the earlier work that I have been discussing, and the connection I am making between the two is entirely my own speculation.

But it seems to me that the findings of this study do provide some confirmation for the findings of the earlier study, and furthermore suggest that such natural scenes, whether because of the tranquility they evoke or their relatively low attention-demanding nature or some other reason, may improve attention by increasing synchronization between relevant brain regions.

I’d like to see these studies extended to older adults (both of them were small, and both involved young adults), and also to personality variables (do some individuals benefit more from such a strategy than others? does that reflect particular personality attributes?). I note that another study found reduced connectivity in the default mode network in older adults. The default mode network may be thought of as where your mind goes when it’s not thinking of anything in particular; the medial prefrontal cortex is part of the default mode network, and this is one of the reasons it was a focus of the most recent study.

In other words, perhaps natural scenes refresh the brain by activating the default mode network, in a particularly effective way, allowing your brain to subsequently return to action (“task-positive network”) with renewed vigor (i.e. nicely synchronized brainwaves).

Interestingly, another study has found a genetic component to default-mode connectivity (aberrant DMN connectivity is implicated in a number of disorders). It would be nice to see some research into the effect of natural scenes on attention in people who vary in this attribute.

Meditation is of course another restorative strategy, and I’d also like to see a head-to-head comparison of these two strategies. But in any case, bottom-line, these results do suggest an easy way of restoring fading attention, and because of the specific aspect of attention that is being helped, it suggests that the strategy may be of particular benefit to older adults. I would be interested to hear from any older adults who try it out.

[Note that part of this article first appeared in the December 2008 newsletter]

Seeing without words

I was listening on my walk today to an interview with Edward Tufte, the celebrated guru of data visualization. He said something I took particular note of, concerning the benefits of concentrating on what you’re seeing, without any other distractions, external or internal. He spoke of his experience of being out walking one day with a friend, in a natural environment, and what it was like to just sit down for some minutes, not talking, in a very quiet place, just looking at the scene. (Ironically, I was also walking in a natural environment, amidst bush, beside a stream - but I was busily occupied listening to this podcast!)

Tufte talked of how we so often let words get between us and what we see. He spoke of a friend who was diagnosed with Alzheimer’s, and how whenever he saw her after that, he couldn’t help but be watchful for symptoms, couldn’t help interpreting everything she said and did through that perspective.

There are two important lessons here. The first is a reminder of how most of us are always rushing to absorb as much information as we can, as quickly as we can. There is, of course, an ocean of information out there in the world, and if we want to ‘keep up’ (a vain hope, I fear!), we do need to optimize our information processing. But we don’t have to do that all the time, and we need to be aware that there are downsides to that attitude.

There is, perhaps, an echo here of Kahneman’s fast and slow thinking, and another of the idea that quiet moments of reflection during the day can bring cognitive benefits.

In similar vein, then, we’d probably all find a surprising amount of benefit from sometimes taking the time to see something familiar as if it was new — to sit and stare at it, free from preconceptions about what it’s supposed to be or supposed to tell us. A difficult task at times, but if you try and empty your mind of words, and just see, you may achieve it.

The second lesson is more specific, and applies to all of us, but perhaps especially to teachers and caregivers. Sometimes you need to be analytical when observing a person, but if you are interacting with someone who has a label (‘learning-disabled’, ‘autistic’, ‘Alzheimer’s’, etc), you will both benefit if you can sometimes see them without thinking of that label. Perhaps, without the preconception of that label, you will see something unexpected.

Retraining the brain

A fascinating article recently appeared in the Guardian, about a woman who found a way to overcome a very particular type of learning disability and has apparently helped a great many children since.

As a child, Barbara Arrowsmith-Young had a brilliant, almost photographic, memory for information she read or heard, but she had no understanding. She managed to progress through school and university through a great deal of very hard work, but she always knew (although it wasn’t recognized) that there was something very wrong with her brain. It wasn’t until she read a book (The Man with a Shattered World: The History of a Brain Wound - Amazon affiliate link) by the famous psychologist Luria that she realized what the problem was. Luria’s case study concerned a soldier who developed mental disabilities after being shot in the head. His disabilities were the same as hers: “he couldn't tell the time from a clock, he couldn't understand bigger and smaller without drawing pictures, he couldn't tell the difference between the sentences ‘The boy chases the dog’ and ‘The dog chases the boy’.”

On the basis of enriched-environment research, she started an intensive program to retrain her brain — 8-10 hours a day. She found it incredibly exhausting, but after 3-4 months, she suddenly ‘got it’. Something had shifted in her brain, and now she could understand verbal information in a way she hadn’t before.

The ‘Arrowsmith Program’ is now available in 35 schools in Canada and the US, and the children who attend these schools have often, she claims, been misdiagnosed with ADD or ADHD, dyslexia or dysgraphia. She has just published a book about her experience (The Woman Who Changed Her Brain: And Other Inspiring Stories of Pioneering Brain Transformation - Amazon affiliate link).

I can’t, I’m afraid, speak to the effectiveness of her program, because I can’t find any independent research in peer-reviewed journals (this is not to say it doesn’t exist), although there are reports on her own website. But I have no doubt that intensive training in specific skills can produce improvement in specific skills in those with learning disabilities.

There are two specific things that I found interesting. The first is the particular disability that Barbara Arrowsmith-Young suffered from — essentially, it seems, a dysfunction in integrating information.

This disjunct between ‘photographic memory’ and understanding is one I have spoken of before, but it bears repeating, because so many people think that a photographic memory is a desirable ambition, and that any failure to remember exactly is a memory failure. But it’s not a failure; the system is operating exactly as it is meant to. Remembering every detail is counter-productive.

I was reminded of this recently when I read about something quite different: an “inexact” computer chip that’s 15 times more efficient, “challenging the industry’s 50-year pursuit of accuracy”. The design improves efficiency by allowing for occasional errors. One way it achieved this was by pruning some of the rarely used portions of digital circuits. Pruning is of course exactly what our brain does as it develops (infancy and childhood are a time of making huge numbers of connections; then, as the brain matures, it starts viciously pruning), and to a lesser extent what it does every night as we sleep (only some of the day’s events and new information are consolidated; many more are discarded).

The moral is: forgetting isn’t bad in itself. Memory failure comes rather when we forget what we want or need to remember. Our brain has a number of rules and guidelines to help it work out what to forget and what to remember. But here’s the thing: we can’t expect an automatic system to get it right all the time. We need to provide some direct (conscious) management.

The second thing I was taken with was this list of ‘learning dysfunctions’. I believe this is a much more useful approach than category labels. Of course we like labels, but it has become increasingly obvious that many disorders are umbrella concepts. Those with dyslexia, for example, don’t all have the same dysfunctions, and accordingly, the appropriate treatment shouldn’t be the same. The same is true for ADHD and Alzheimer’s disease, to take two very different examples.

Many of those with dyslexia and ADHD have shown improvement as a result of specific skills training, but at the moment we’re still muddling around, not sure of the training needed (a side-note for those who are interested: Scientific American has a nice article on how ADHD behavioral therapy may be more effective than drugs in the long run). So, because there are several different problems all being lumped into a single disorder, research finds it hard to predict who will benefit from what training.

But the day will come, I have no doubt, when we will be able to specify precisely what isn’t working properly in a brain, and match it with an appropriate program that will retrain the brain to compensate for whatever is damaged.

Or — to return to my point about choosing what to forget or remember — the individual (or parent) may choose not to attempt retraining. Not all differences are dysfunctional; some differences have value. When we can specify exactly what is happening in the brain, perhaps we will get a better handle on that too.

In the meantime, there is one important message, and it is, when it comes down to it, my core message, underlying all my books and articles: if you (or a loved one, or someone in your care) has any sort of learning or memory problem, whatever the cause, think very hard about the precise difficulties experienced. Then reflect on how important each one is. Then try and discover the specific skills needed to deal with those difficulties that matter. That will require not only finding suggested exercises to practice, but also some experimentation to find what works for you (because we haven’t yet got to the point where we can work this out, except by trial and error). And then, of course, you need to practice them. A lot.

I’m not saying that this is the answer to everyone’s problems. Sometimes the damage is too extensive, or in just the wrong place (there are hubs in the brain, and obviously damage to a hub is going to be more difficult to work around than damage elsewhere). But even if you can’t fully compensate for damage, there are few instances where specific skills training won’t improve performance.

Sharing what works is one way to help us develop the database needed. So if you have any memory or learning problems, and if you have experienced any improvement for whatever reason, tell us about it!

Multitasking

  • Doing more than one task at a time requires us to switch our attention rapidly between the tasks.
  • This is easier if the tasks don't need much attention.
  • Although we think we're saving time, time is lost when switching between tasks; these time costs increase for complex or unfamiliar tasks.
  • Both alcohol and aging affect our ability to switch attention rapidly.

A very common situation today, and one that is probably responsible for a great deal of modern anxiety about failing memory, is being required to “multitask”, that trendy modern word for trying to do more than one thing at a time. It is a situation for which both the normal consequences of aging and a low working memory capacity have serious implications.

There’s an old insult along the lines of “he can’t walk and chew gum”. The insult is a tacit acknowledgment that doing two things at the same time can put a strain on mental resources, and also recognizes (this is the insult part!) that well-practiced activities do not place as much demand on our cognitive resources. We can, indeed, do more than one task at a time, as long as only one of the tasks requires our attention. It is attention that can’t be split.

You may feel that you can, in fact, do two tasks requiring attention simultaneously. For example, talking on a cellphone and driving!

Not true.

What you are in fact doing, is switching your attention rapidly between the two tasks, and you are doing it at some cost.

How big a cost depends on a number of factors. If you are driving a familiar route, with no unexpected events (such as the car in front of you braking hard, or a dog running out on the road), you may not notice the deterioration in your performance. It also helps if the conversation you are having is routine, with little emotional engagement. But if the conversation is stressful, or provokes strong emotion, or requires you to think … well, any of these factors will impact on your ability to drive.

The ability to switch attention between tasks depends on executive functions governed by a brain region called the prefrontal cortex. This region appears to be particularly affected by aging, and also by alcohol. Thus, talking on a cellphone while driving drunk is a recipe for disaster! Nor do you have to actually be under the influence to be affected in this way by alcohol; impaired executive control is characteristic of alcoholics.

More commonly, though, we simply get older, and as we get older we become less able to switch attention quickly.

The ability to switch attention is also related to working memory capacity.

But multitasking is not only a problem for older adults, or those with a low working memory capacity. A study [1] using young adults found that for all types of tasks, time was lost when switching between tasks, and time costs increased with the complexity of the tasks, so it took significantly longer to switch between more complex tasks. Time costs also were greater when subjects switched to tasks that were relatively unfamiliar.

Part of the problem in switching attention is that we have to change “rules”. Rule activation takes significant amounts of time, several tenths of a second — which may not sound much, but can mean the difference between life and death in some situations (such as driving a car), and which even in less dramatic circumstances, adds appreciably to the time it takes to do tasks, if you are switching back and forth repeatedly.

To take an example close to home, people required to write a report while repeatedly checking their email took half again as long to finish the report compared to those who didn't switch between tasks!
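To put rough numbers on that, here’s a back-of-the-envelope sketch. The per-switch cost and switching rate are made-up figures chosen only to show how sub-second costs accumulate; the report-writing result implies much larger costs per interruption, because each switch also means re-reading and re-orienting.

```python
# Back-of-the-envelope arithmetic: how small per-switch costs add up.
# All figures here are illustrative assumptions, not data from the studies cited.

switch_cost_s = 0.4       # rule activation: "several tenths of a second"
switches_per_hour = 120   # e.g. glancing at email roughly every 30 seconds

lost_per_hour = switch_cost_s * switches_per_hour
print(f"{lost_per_hour:.0f} seconds per hour lost to rule activation alone")

# The report-writing example implies a much larger total cost, since each
# interruption also means re-reading and re-orienting, not just rule activation:
focused_hours = 2.0                     # hypothetical time to write the report undisturbed
with_email_hours = focused_hours * 1.5  # "half again as long"
print(f"Report: {focused_hours:.1f} h focused vs {with_email_hours:.1f} h with email checking")
```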

In other words, while multitasking may seem more efficient, it may not actually BE more efficient. It may in fact take more time in the end, and the tasks may of course be performed more poorly. And then there is the stress; switching between tasks places demands on your mental resources, and that is stressful. (And not only are we poorer at such task-switching as we age, we also tend to be less able to handle stress).

There is another aspect to multitasking that deserves mention. It has been speculated that rapid switching between tasks may impede long-term memory encoding. I don’t know of any research on this, but it is certainly plausible.

So, what can we do about it?

Well, the main thing is to be aware of the problems. Accept that multitasking is not a particularly desirable situation; that it costs you time and quality of performance; that your ability to multitask will be impeded by fatigue, alcohol, stress, emotion, and distraction (e.g., don’t add to your problems by having music on as well); and that your ability will also be impaired by age. Understand that multitasking involves switching attention between tasks, not simultaneous performance, and that it will therefore be successful to the extent that the tasks are familiar and well-practiced.

This article originally appeared in the February 2005 newsletter.


References

Rubinstein, J. S., Meyer, D. E., & Evans, J. E. (2001). Executive control of cognitive processes in task switching. Journal of Experimental Psychology: Human Perception and Performance, 27(4), 763-797.

Short-Term Memory Problems

  • Short-term memory problems are, by and large, attention problems.
  • Attention involves both the ability to keep focused on the information you want to keep active, and the ability to not be distracted by competing and irrelevant stimuli.
  • You need to actively attend to keep information active, particularly as you get older.
  • Many of us over-estimate how much information we can keep active at one time.

Many people, particularly as they get older, have concerns about short-term memory problems: going to another room to do something and then forgetting why you’re there; deciding to do something, becoming distracted by another task, and then forgetting the original intention; uncertainty about whether you have just performed a routine task; forgetting things you’ve said or done seconds after having said or done them; thinking of something you want to say during a conversation, then forgetting what it was by the time it’s your turn to speak, and so on.

This is clearly an issue for many of us. Part of the reason, I believe, is simply that we expect too much from ourselves. For example, research has shown that even a very, very short delay between recalling an intention and being able to carry it out is sufficient to dramatically reduce the likelihood that you will remember to do the intended action — we are talking about a delay of only 10 seconds!

The problem is exacerbated by age (I’m not talking about advanced age — I’m afraid certain aspects of cognitive processing begin to decline as early as the 30s).

Part of the problem is also that we tend to believe that we don’t need to do anything to maintain a thought, particularly when it has “popped” into our minds easily. But current estimates are that unrehearsed information lingers in working memory for less than two seconds!

Some of these problems are dealt with in my article on action slips (these problems are not, strictly speaking, a failure of memory, but a failure in attention), and in my book on Remembering intentions.

But in this article I want to talk about another aspect: the relationship between working memory, and attention (and, as it happens, intelligence!).

In my article on working memory and intelligence I talk about the difference between crystallized and fluid intelligence — that fluid intelligence is probably a better measure of what we think of as “intelligence”, and that working memory capacity is often used synonymously with fluid intelligence. A new theory is that the relationship between working memory and fluid intelligence is due to the ability to control attention.

This theory emphasizes the role of attention in keeping information active (i.e. in working memory), and argues that working memory capacity is not, as usually thought, about the number of items or amount of information that can be held at one time. Instead, it reflects the extent to which a person can control attention, particularly in situations where there is competing information / demands.

I have to say that this makes an awful lot of sense to me. I can’t, in the space I have here, go into all the evidence for and against the theory, but here’s one situation which is interesting. The “cocktail party phenomenon” is studied using a well-known method in psychology, whereby people are given two streams of audio, one for each ear, and instructed to listen only to one. At some point, the person’s name is spoken into the unattended stream, and about a third of people pick that up. In a recent take on that classic study, researchers compared people’s performance as a function of their working memory capacity. Only 20% of those with a high capacity heard their name in the unattended channel, compared to 65% of low-capacity people. The point being that a critical aspect of good attentional control is the ability to block out irrelevant information.

This ability is one that we already know is worsened by increasing age.

The message from all this, I guess, is that:

  • short-term memory problems are, by and large, attention problems.
  • attention involves both the ability to keep focused on the information you want to keep active, and the ability to not be distracted by competing and irrelevant stimuli.
  • you need to actively attend to keep information active, particularly as you get older.
  • many of us over-estimate how much information we can keep active at one time.

And if you want strategies to help you keep more information active, I suggest you look at improving your ability to chunk, condense and label information. If you can reduce a chunk of information to a single label quickly, all you need to do is remember the label. (I explain all this at length in my book The Memory Key, but I’m afraid it needs far too much explanation to go into here).

Anyway, I hope this helps those of you (most of us!) with short-term memory problems.

This article originally appeared in the April 2005 newsletter.


References

Heitz, R. P., Unsworth, N., & Engle, R. W. (2004). Working memory capacity, attention control, and fluid intelligence. In O. Wilhelm & R. W. Engle (Eds.), Handbook of Understanding and Measuring Intelligence. London: Sage Publications.