Lumping all of the medications in question into a single study is disingenuous and seems biased toward concluding that they're ineffective.
All three of the drugs in question have different mechanisms of action, which means they will affect people with different brain chemistry differently. My read is that when every participant takes all three drugs plus a placebo, performance may be genuinely enhanced under one condition, but averaging it against the other three can make the study look conclusive that, on the whole, these drugs are ineffective.
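To make that dilution concrete, here's a quick back-of-the-envelope simulation -- my own toy numbers, not anything from the study. Suppose one of the three drugs gives a real 5-point boost, but only to the roughly 25% of people whose brain chemistry it happens to suit:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40                                # matches the study's sample size
responders = rng.random(n) < 0.25     # hypothetical drug-A responders

def noise():
    # test-retest noise per session (sd of 5 points is an assumption)
    return rng.normal(0, 5, n)

placebo = noise()
drug_a = np.where(responders, 5.0, 0.0) + noise()  # real effect, responders only
drug_b = noise()                                   # no effect
drug_c = noise()                                   # no effect

# Within-person differences vs placebo, pooled across all three drugs --
# the kind of headline comparison that washes the real effect out.
pooled = np.concatenate([drug_a, drug_b, drug_c]) - np.tile(placebo, 3)
print(f"drug A effect among responders: {(drug_a - placebo)[responders].mean():+.1f}")
print(f"pooled 'drug vs placebo' effect: {pooled.mean():+.1f}")
```

Under these made-up assumptions the responders see their full 5-point gain, but the pooled comparison hovers well under a point -- easily lost in the noise.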
There's not a lot of detail on the placebo or why it performs so much better, but I'm curious about a possible "nocebo" effect and about subjects' prior exposure to the active drugs. In other words, is the feeling of being on a new "smart" drug too distracting to take the test in question successfully for the first time?
The sample size is incredibly small at only 40 people, and the population is skewed: these were self-selected individuals who responded to campus advertisements. This has been the standard critique of academic psychology experiments for decades, and I find it a little dispiriting that we still have to suffer through it. I wish the HRB would ban things like campus advertisements as a means of recruiting test subjects -- we all really ought to know better by now.
While sequencing the conditions within each individual helps avoid some order-based confounds, and the seven days between sessions allows a pretty decent return to homeostasis, the number of participants is still too low to be conclusive. I'd need to see this study repeated numerous times, at a larger scale, on individuals with a more consistent neurological history before considering it conclusive.
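For a sense of scale -- my rough numbers, not the paper's -- here's a sketch of the statistical power of a paired comparison with 40 participants, assuming a small within-subject effect size of d = 0.3:

```python
from scipy import stats

n = 40          # participants in a paired (crossover) comparison
d = 0.3         # assumed small within-subject effect size (my assumption)
alpha = 0.05

df = n - 1
nc = d * n**0.5                       # noncentrality parameter, paired t-test
t_crit = stats.t.ppf(1 - alpha / 2, df)
power = (1 - stats.nct.cdf(t_crit, df, nc)) + stats.nct.cdf(-t_crit, df, nc)
print(f"power to detect d={d} with n={n}: {power:.2f}")  # roughly 0.45
```

Under those assumptions you'd miss a real small effect more often than you'd catch it, which is well below the conventional 0.8 target.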
I also don't entirely buy the validity of restricting the study to individuals without any history of psychotropic medication or illicit drug use. Remember that this excludes anyone who has taken something like fluoxetine -- nearly 20% of adults in many countries -- or who has even slightly skirted Australia's drug laws. These are people answering an ad on a college campus, likely for compensation; I don't think it's likely that every subject self-reported truthfully.
If anything, I'd expect the neurological variability of an effectively unscreened (or at least undiagnosed) population to make these drugs' effectiveness genuinely hard to measure. Given the mechanisms of action, the perhaps 20% of subjects with undiagnosed depression and a further 5% or so with undiagnosed ADHD would really throw a wrench in any conclusive numbers.
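To illustrate how much that hidden heterogeneity muddies things (again, made-up proportions): suppose roughly 20% of an unscreened sample has undiagnosed depression and responds to one of these drugs while everyone else doesn't, and rerun a 40-person study many times over:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 40, 2000
estimates = []
for _ in range(reps):
    # Hypothetical: ~20% of the sample has undiagnosed depression and gets
    # a 4-point boost from the drug; everyone else gets nothing.
    hidden = rng.random(n) < 0.20
    diff = np.where(hidden, 4.0, 0.0) + rng.normal(0, 5, n)  # drug minus placebo
    estimates.append(diff.mean())

estimates = np.array(estimates)
print(f"mean estimated effect: {estimates.mean():.2f}")
print(f"spread (sd) across studies: {estimates.std():.2f}")
print(f"share of studies estimating a negative effect: {(estimates < 0).mean():.0%}")
```

Under these toy assumptions, a sizable fraction of 40-person studies would come back showing the drug *hurts* -- the sampling noise from the hidden subgroup swamps the true average effect.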
I'm not trying to say that smart drugs are the answer to performing better at work; in fact, I'd expect a good night of sleep to produce a bigger bump in these kinds of test scores. But the BPS really seems to be jumping to conclusions based on what looks to me like very flimsy evidence.
Testing on people is hard, but publishing summaries of articles that don't entirely stand on their own -- and drawing authoritative conclusions from them -- is incredibly easy.