This graph shows recent results from the test I use to track my brain function. The test is a choice reaction task done on my laptop: see a digit (e.g., "2"), press the corresponding key as fast as possible. The x axis shows the time of each test. The ticks ("Sat", etc.) mark the beginning of the associated days. The y axis shows the average percentile of the reaction times. Higher percentile = faster. (Let me explain what "percentile" means: each reaction time is compared to earlier reaction times with the same stimulus, and its percentile is computed. For example, a percentile of 60 means that 60% of previous responses were slower.) An average of 60 is quite good and 40 is quite bad. I usually do two tests per day, one right after the other, in the late afternoon (e.g., 4:30 pm).
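The percentile scoring described above can be sketched as follows. This is an illustrative Python sketch, not the author's actual R code; the function name and the example times are invented:

```python
def percentile_vs_history(rt, history):
    """Percentage of earlier reaction times (same stimulus) that were slower.

    Higher is better: a value of 60 means 60% of past responses were slower
    than this one. Returns None when there is no history to compare against.
    """
    if not history:
        return None
    slower = sum(1 for past in history if past > rt)
    return 100.0 * slower / len(history)

# Example: a 450 ms response compared with five earlier times for the same digit.
past_times = [520, 480, 440, 610, 505]   # milliseconds (hypothetical)
score = percentile_vs_history(450, past_times)
print(score)  # 80.0 -- 4 of the 5 earlier responses were slower
```

Averaging these per-trial percentiles across a session gives the kind of number plotted on the y axis.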
Archive for the 'brain tracking' Category
This graph shows my brain test reaction times over roughly one year. Each point is a different test; I usually do two tests per day back to back. I assume faster = better. In February 2013 I returned to Berkeley from Beijing. In August 2013 I went back to Beijing. When I returned to Berkeley, my scores got worse (slower). I was shocked. Surely Berkeley is healthier than Beijing. At first I thought it was jet lag, but the scores stayed worse long after that made sense. Then I thought it might be some difference in diet, even though I eat similar food in the two places. I tried to make my Berkeley diet closer to my Beijing diet. This might have helped. I noticed accidentally that chocolate improved my score and started eating chocolate frequently. This artificially reduced the difference since in Beijing I had not been eating chocolate. In Berkeley I started doing two things I hadn’t done in Beijing: alternate-day fasting and whole-body vibration. I don’t know if they made a difference. When I returned to Beijing in September, my scores got better, even though I was not eating chocolate. Eventually I improved my sleep in Beijing but that seemed to make little difference. The comparison is far from perfect — many things varied — but by and large my scores got worse when I went from Beijing to Berkeley and improved when I went from Berkeley to Beijing.
What might have caused this? There are a hundred possibilities but one stands out. In both places, I brew and drink several cups of tea every day. In Beijing, everyone, including me, drinks water from big plastic bottles that are delivered to your house. You can choose pure water or “mineral” water, which has added magnesium and potassium. In Berkeley I use tap water (Brita filtered). I don’t think potassium affects brain function — for example, eating bananas makes no difference — but there is plenty of evidence that magnesium improves brain function. In Beijing I had tested a magnesium supplement and found no effect, consistent with the idea that I was already getting enough. Magnesium is also believed to improve sleep. In Beijing I seemed to sleep better than in Berkeley. Again, this is consistent with a difference in magnesium levels (more in Beijing). If ordinary magnesium-enriched water improves brain function, it would be significant because it is so easy, in contrast to other ways of increasing magnesium levels.
Six years ago I started using a reaction-time (RT) test (a test where you press a key in response to something as fast as possible) to track my brain function. I took the test daily. It must use only a small part of the brain but I assumed that something that made me faster would probably improve overall brain function. Behind this belief, which I call better RT, better brain, were countless studies of brain anatomy and physiology, which had shown that neurons and glial cells all over the brain share many features. Cells in different parts of the brain are much more alike than different. More support for this assumption was that certain doses of flaxseed oil improved both RT and other measures of brain function, such as balance.
I also assumed that changes that improved RT would probably improve overall health — what I call the better RT, better body assumption. It was less plausible than the better RT, better brain assumption because the cells in different organs of the body differ so much. They have many similarities but also many differences. I believed it for two reasons. (a) Flaxseed oil not only improved several measures of brain function, it improved my gums, no doubt because it reduced inflammation. It had been far from obvious that improving gums was so easy or that flaxseed oil (in the right dosage) would do so. The assumption better RT, better body had made a surprising prediction, you could say, that turned out to be true. (b) The brain gets much the same blood as the rest of the body. (Not exactly the same, because of the blood-brain barrier.) In the same way, all plug-in electrical appliances use the same house current. Just as all appliances have been designed to work well with that current, all our organs should have been shaped by evolution to work well with the same mix of nutrients. You can't feed your brain differently than your heart.
When I discovered that butter improved RT, the better RT, better body assumption made a second, even more surprising prediction: eating more butter improved my health. This contradicted the claims of all mainstream health experts, who say saturated fats cause heart disease. I stuck with my assumption — I still eat a lot of butter. The data I've seen since then have supported my conclusion. For example, my Agatston score got better, not worse, after a year of eating lots of butter. The Agatston score is currently the best predictor of heart disease.
I recently found more support for the better RT, better body assumption. Several studies have found that RT is a good predictor of health (better RT, better health). Even more impressive, it is a better predictor than many of the predictors we already know of. The RT test used in these studies is close to the test I now use, which I developed independently. The RT test in these studies involves showing a digit (0-4), after which the subject presses one of five keys (labelled 0-4) as fast as possible. My current RT test is very similar but uses 7 digits instead of 5.
A 2005 study looked at the oft-reported correlation between higher IQ and lower mortality. The IQs and RTs of about 900 people were measured in 1988, and deaths through 2002 were noted. Faster RT was associated with lower mortality, even after adjusting for smoking, education and social class. RT and IQ are correlated (better RT, higher IQ). When the RT-death association was statistically removed, IQ no longer predicted death. So RT does a good job of capturing whatever it is about IQ that predicts mortality.
A 2009 study compared RT to more conventional health predictors (“risk factors”). About 7,000 subjects were followed from 1984 to 2005. RT in 1984 was a good predictor of all-cause mortality compared to classic risk factors. Smoking was by far the best predictor, followed by RT. RT was a better predictor than physical activity, blood pressure, a questionnaire measuring “psychological distress”, resting heart rate, waist/hip ratio, alcohol intake, and body mass index.
A third study, based on the same subjects as the 2009 study, found that amount of decline (slowing) in RT (from one test to a second test seven years later) predicted death. People with more decline were more likely to die.
All this supports studying how your RT is controlled by your environment, especially what you eat. You have to choose wisely what to study. The point is not to be as fast as possible regardless of everything else. Lots of drugs (stimulants, such as caffeine) decrease RT for short periods of time. I doubt they improve health. (If they harm sleep, they probably worsen health.) What makes sense is to look for two things:

1. Poisons. Things that slow you down. I discovered that tofu did so. I gave several reasons for thinking that tofu affects many people this way, not just me. Billions of people eat tofu, unaware of this possibility.

2. Deficiencies. Study things that are missing from your life now but were likely to be present when we evolved. It is quite plausible that our ancient ancestors ate more omega-3 (in fish, but also in flaxseed) and more animal fat (from big animals, but also in butter) than we do now. My data suggest omega-3 and animal fat are nutrients necessary for health whose importance mainstream nutrition researchers have not fully appreciated.
My RT data have shown me there’s a lot I didn’t/don’t know about how my food affects me. Maybe everyone can say that. Unlike almost anyone else, however, I can reduce my ignorance myself. I don’t need to rely on experts.
I’ve been testing my brain function daily for the last six years. I use a reaction-time test (see digit, type digit as fast as possible) that takes about five minutes. I have gradually improved the test over the years — this is about version 8. One reason for this testing is that I might observe a sudden change. That could suggest a new factor that affects brain function — whatever was unusual before the change (e.g., a new food). This is how I discovered the effect of butter: my score suddenly improved, and I investigated. Another sudden change (improvement) happened soon after I switched from Chinese flaxseed oil to American flaxseed oil. I hadn’t realized that something was wrong with the Chinese flaxseed oil. I started brain tracking after I noticed a sudden improvement in balance the morning after I swallowed about five flaxseed oil capsules. Millions of people had taken flaxseed oil capsules, but no one, it seemed, had noticed the balance improvement. Maybe other big changes in brain function go unnoticed, I thought.
In 2010, Doctor’s Data, an Illinois clinical lab, sued Stephen Barrett, who runs Quackwatch, for making false and misleading statements about them. The lawsuit is still in progress. I am glad they sued. As far as I can tell, Quackwatch does contain false and misleading statements.
After I discovered that butter made me faster at arithmetic, I started eating half a stick (66 g) of butter per day. After I gave a talk about it, a cardiologist in the audience said I was killing myself. I said that the evidence that butter improved my brain function was much clearer than the evidence that butter causes heart disease. The cardiologist couldn’t debate this; he seemed to have no idea of the evidence.
Shortly before I discovered the butter/arithmetic connection, I had a heart scan (a tomographic x-ray) from which an Agatston score is computed, a measure of calcification of your blood vessels. The Agatston score is a good predictor of whether you will have a heart attack: the higher your score, the greater the probability. My score put me close to the median for my age. A year later — after eating lots of butter every day during that year — I got a second scan. Most people's scores get about 25% worse each year. My second scan showed regression (= improvement): my score was about 40% lower than what the usual 25% annual increase would have predicted. A big increase in butter consumption was the only aspect of my diet that I consciously changed between Scan 1 and Scan 2.
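The arithmetic of that comparison works out roughly like this. The 25% annual increase and the 40% shortfall come from the text; the baseline score of 100 is purely hypothetical, since the actual scores are not given:

```python
# Back-of-envelope sketch of the Agatston comparison described above.
baseline = 100.0                  # hypothetical first-scan score
expected = baseline * 1.25        # typical course: about 25% worse per year
observed = expected * (1 - 0.40)  # a second scan about 40% below expectation
print(expected, observed)         # 125.0 75.0 -- below baseline, i.e. regression
```

On these assumptions the second score ends up below the first, which is why a result 40% under expectation counts as regression rather than merely slower progression.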
The improvement I observed, however surprising, was consistent with a 2004 study that measured narrowing of the arteries as a function of diet. About 200 women were studied for three years. There were three main findings. 1. The more saturated fat, the less narrowing. Women in the highest quartile of saturated fat intake didn’t have, on average, any narrowing. 2. The more polyunsaturated fat, the more narrowing. 3. The more carbohydrate, the more narrowing. Of all the nutrients examined, only saturated fat clearly reduced narrowing. Exactly the opposite of what we’ve been told.
As this article explains, the original idea that fat causes heart disease came from Ancel Keys, who omitted most of the available data from his data set. When all the data were considered, there was no connection between fat intake and heart disease. There has never been convincing evidence that saturated fat causes heart disease, but somehow this hasn’t stopped the vast majority of doctors and nutrition experts from repeating what they’ve been told.
Recently the Berkeley City Council heard testimony about a proposed ban on mercury amalgam dental fillings. A young man named D— M—, shown in the video, told the Council that he had grown up in Berkeley and had gotten mercury amalgam fillings from local dentists. They did not tell him the fillings were dangerous. He attended Berkeley High, Harvard, and finally the clinical psychology program at UC Berkeley — which I know is extremely hard to get into, as he says. They accept about 1 in 500 applicants.
In 2007, three years into the program, he started clenching his teeth. He began to have problems resembling mercury poisoning, such as fatigue and poor concentration. He had to leave the psychology program. Hair tests showed large amounts of mercury. He did not eat unusual amounts of fish, so it’s likely that his fillings were the source of the mercury. By 2012, he could no longer work and pay rent.
I had no idea that teeth clenching and mercury fillings were so dangerous together. A few years ago, I found, to my surprise, that removal of mercury fillings improved my score on the reaction time test I use to measure brain function. At first, I had thought the improvement had other causes. Only when I tested these causes and found no supporting evidence did I look further and discover the improvement had started exactly when I got my fillings removed. After I discovered this, I looked around for other evidence that mercury fillings were dangerous. To my surprise (again), my evidence seemed more persuasive than anything I found. M—’s story is much scarier than mine and supports my conclusion that mercury fillings are dangerous.
Had M— been using my reaction-time test day after day, he might have discovered deterioration on that test before he noticed other problems. The test might have provided early warning. I hadn’t noticed problems with concentration or fatigue, yet when my fillings were removed I got better on my test. Had M— noticed the problem earlier, he might have figured out the cause earlier.
If you don’t monitor yourself as I do — and almost no one does — you are trusting your dentist, your doctor, your food providers, and so on, to be well-informed and truthful about the safety of their products. If the problems aren’t obvious, there is plenty of reason for them to put their hands over their eyes and say “I don’t want to know” about problems with their products. Drug companies have often hidden the dangers of their products and surgeons have hidden the dangers of their procedures. Few people grasp that “evidence-based medicine”, with its disregard of bad side effects, is biased in favor of doctors. (Ben “Bad Science” Goldacre is a prominent example of someone who fails to understand this.) If you monitor yourself you are less at the mercy of other people’s poor science, lies, and motivations that conflict with finding and telling the truth.
Brain tracking — frequent measurement of how well your brain is working — will become common, I believe, because brain function is important and because the brain is more sensitive to the environment (especially food) than the rest of the body. You will find it easier to decide what to eat if you measure your brain than if you measure other parts of your body. For example, I have used it to decide how much flaxseed and butter to eat. I have used R and the methodological wisdom of cognitive psychologists to make brain tracking tests. Alex Chernavsky, who lives in upstate New York, recently tried the most recent version:
In August, Seth solicited readers to help him test a new brain-tracking program. I said I was interested. I had a number of reasons for volunteering:
- My job involves working a lot with computers, so I thought I had a decent shot at ferreting out any bugs or usability issues.
- I have been tracking my weight daily for over eleven years, so I was confident that I would have enough motivation to do the test on a regular basis.
- I have a long-standing interest in neuroscience, so I was eager to help advance the field, even if in a very small way.
- I’m in my late 40s, and I’ve noticed a distinct increase in my forgetfulness. There are probably other, less-noticeable decreases in my cognitive function. Thus I have an interest in finding ways to boost the performance of my brain. Hacking brain function is obviously much easier if you can assay it via a quick, reliable proxy (i.e., reaction time).
The program itself was relatively easy to set up. The code is written in a free, open-source scripting language called R, so you have to install R on your Windows computer in order to run the program. Upon downloading the script (which is contained within an R workspace), you have to edit a function to specify the Windows folder that contains the workspace file. After that, you’re ready to go.
The three-month pilot study did not involve testing any hypotheses with regard to the effectiveness of interventions (for example, measuring reaction times before and after flaxseed oil). My task was simply to perform the test once or twice a day.
Taking the test involves hitting a number key (2 through 8, inclusive) to match a random target number that is displayed on the screen. The program measures the latency of your response. If you hit the wrong key, the program forces you to repeat the same trial until you press the right key. Reaction-times from these “correction trials” are not used in any subsequent data analysis. A session consists of 32 individual trials and takes about four minutes to complete.
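The handling of correction trials might look something like the following. This is a hedged Python sketch of the bookkeeping only; the actual program is written in R, and the trial-record format here is invented:

```python
# Each trial record: (target_digit, key_pressed, latency_ms).
# After a wrong key, the same target repeats until answered correctly;
# latencies from those repeated "correction trials" are discarded.
def analyzable_latencies(trials):
    kept = []
    in_correction = False
    for target, pressed, latency in trials:
        if in_correction:
            # Still correcting a previous error: discard this latency.
            if pressed == target:
                in_correction = False  # finally correct; next trial is fresh
            continue
        if pressed == target:
            kept.append(latency)       # first-attempt correct: keep it
        else:
            in_correction = True       # error: the repeats that follow are discarded
    return kept

# Hypothetical session fragment: the second trial is an error on target 7,
# so the forced repeat of 7 is a correction trial and its latency is dropped.
session = [(3, 3, 512), (7, 4, 490), (7, 7, 455), (2, 2, 530)]
print(analyzable_latencies(session))  # [512, 530]
```

Only the first-attempt-correct latencies survive into the analysis, which is what keeps the error-handling from contaminating the reaction-time averages.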
I performed the test daily for three months, although I did miss two days. The test stopped short of being fun, but it was certainly not onerous. The biggest hassle was having to wait for my laptop to boot into Windows. If I had to do the pilot study over again, I would install R on both my home and my work desktop computers, so I could perform the test more easily (perhaps as a way to take a short break from whatever other task I happened to be working on).
The original plan was for me to email the R workspace to Seth once a week or so. However, I suggested to Seth that we could improve efficiency by using a shared DropBox folder. He agreed, and that is the method we adopted. Using this system, Seth had ongoing access to the latest data, and he could also easily make any bug-fixes or other edits that would take effect the next time I ran the script.
I did identify one bug in the script. After each trial, the script briefly displays some feedback in the form of your reaction-time (in milliseconds) for that trial, your cumulative average for that session, and a percentile figure that compares your latest speed with past trials for that same target key. I noticed that the percentile scores didn’t seem to make sense for some of the keys. Seth examined his code and agreed that this was indeed a bug. He made some adjustments and the bug was fixed.
I found that over time, as expected, my scores improved substantially. They seemed to plateau after six weeks. However, my accuracy suffered. During the third month of the pilot study, I made a conscious effort to reduce my error rate. I had some success, but I also found myself frustrated by my inability to reduce the errors as much as I would have liked. Making errors, despite my best efforts, was the only vexing part of taking the test.