Planck ran the first RCT of Seth Roberts’ appetite theory. You can read more about this here.
First, we will describe the design we actually ran; afterwards, we will describe the idealized design for which we are funding a replication. The design we actually ran: we recruited volunteers via Reddit, Twitter, and a major stats blog. Participants then used the following protocol, published openly for review and comment before the study was run:
The measure of hunger we provide is a simple scale taken from Blundell et al.'s (2010) review of validated appetite measures. In our recommended design, we have two experimental conditions: extra-light olive oil (ELOO) taken with flavor and ELOO taken without flavor. Self-experimenters are randomized between the two conditions. If participants don't want to be randomized, we also let them choose either condition or simply track their hunger.
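To make the assignment step concrete, here is a minimal sketch of it in Python. The condition labels, the opt-out behavior, and the function name are ours for illustration; this is not the study's actual code.

```python
import random

CONDITIONS = ["ELOO with flavor", "ELOO without flavor"]

def assign(wants_randomization, chosen=None, rng=random):
    """Assign a self-experimenter to a condition.

    Volunteers who agree to randomize get one of the two conditions
    at random; opt-outs may pick a condition themselves or fall back
    to simply tracking their hunger.
    """
    if wants_randomization:
        return rng.choice(CONDITIONS)
    if chosen in CONDITIONS:
        return chosen
    return "tracking only"
```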
Beginning on a day of their choosing (Monday is recommended), participants are contacted three times a day at regular intervals and asked to track their hunger. This goes on for five days, at which point they are given a questionnaire asking them to describe what they did, so we can check that the protocol was followed. We also invite them to track the alternate condition for another five days to establish a comparison. That is, if they tested ELOO we suggest they track their hunger without it, so they can better learn whether it affected them. This also helps ensure baseline symmetry between the comparison groups.
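As a sketch, the contact schedule can be laid out like this; the specific hours (9:00, 13:00, 17:00) are our own placeholder for "regular intervals" and are not part of the protocol.

```python
from datetime import datetime, timedelta

def ping_schedule(start, days=5, pings_per_day=3, first_hour=9, spacing_hours=4):
    """Return the times a participant is asked to rate their hunger.

    Three evenly spaced pings per day for five days (15 check-ins).
    The optional second five-day block for the alternate condition
    would be generated the same way, starting after the questionnaire.
    """
    day_one = start.replace(hour=first_hour, minute=0, second=0, microsecond=0)
    return [day_one + timedelta(days=d, hours=p * spacing_hours)
            for d in range(days) for p in range(pings_per_day)]

schedule = ping_schedule(datetime(2024, 1, 1))
# 15 check-in times spread over five days
```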
We have pre-registered with aspredicted.org, and our proposed data analysis is just graphic visualization and a simple regression. We intend to run other interesting tests, models, and analyses, but won't emphasize confidence intervals or p-values in those (or much at all, really). Note that our design analysis relied on results from Arechar et al. (2017) and Kirkmeyer & Mattes (2000) to get plausible priors for attrition and appetite effect size, respectively.
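For flavor, here is roughly what a "simple regression" could look like as a one-predictor least-squares fit on a 0/1 treatment indicator. The data below are made up purely to illustrate the computation; with a binary predictor the slope is just the difference in group means.

```python
def simple_regression(x, y):
    """Ordinary least squares for y = a + b*x with a single predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b  # intercept, slope

# 0 = control (ELOO with flavor), 1 = treatment (ELOO without flavor);
# y-values are fabricated per-participant mean hunger ratings.
x = [0, 0, 0, 0, 1, 1, 1, 1]
y = [6.0, 5.5, 6.5, 6.0, 5.0, 4.5, 5.5, 5.0]
intercept, slope = simple_regression(x, y)
# a negative slope would mean lower reported hunger under treatment
```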
Here’s what’s cool about this design:
- It is biased against finding a result in favor of Seth’s theory. If you’re in the treatment condition you have to go two hours without eating or drinking anything, so you should actually be hungrier than if you’re in the control condition (those participants can eat whenever they like). If we nonetheless find that the treatment makes people less hungry, that would be extra surprising.
- It is semi-blinded, in the sense that those who haven’t read carefully about Seth’s actual theory might not know which condition they are in. For example, if some in the control condition think the effect is just about the olive oil itself, they will believe they may be in the treatment.
- A commitment to graphic visualization and simple regression helps alleviate “garden of forking paths” problems in the analysis. The actual study is just graphs of the data and a handful of very basic regressions that are effectively exploratory.
Now let’s talk about the “replication” design. As noted, we are going to improve the test rather than try to run the same test, with all its requisite compromises, again. Along with our original design, we listed a kind of “dream design,” the best test we could come up with. We did not get to run it. Here it is:
Randomly selected non-“W.E.I.R.D.” individuals eat their normal, familiar breakfasts in a controlled environment. Each person takes a handful of opaque, liquid-filled capsules during breakfast. Randomized each day, half of the group is administered capsules filled with 200 calories of extra-light olive oil while the other half takes capsules filled with water. Neither participants nor experimenters can tell which capsule is which.
Following breakfast, all participants observe a two-hour flavorless window. In the middle of this window, participants who got the water capsules earlier now get the oil capsules, and vice versa. Hunger levels (along with other measures such as the hormones ghrelin and leptin) are recorded at regular intervals throughout the period. The analysis is pre-registered, and the data can only be accessed through an R portal that continuously logs commands to GitHub.
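Here is a sketch of one day's randomized crossover assignment in the dream design. The participant IDs and capsule labels are placeholders; a real implementation would expose only opaque capsule codes and keep the code-to-contents mapping hidden from experimenters as well.

```python
import random

def daily_plan(participants, rng=random):
    """One day's blinded crossover assignment.

    Half the group gets oil at breakfast and water mid-window;
    the other half gets the reverse. In practice only opaque
    capsule codes would be visible, never the labels used here.
    """
    order = list(participants)
    rng.shuffle(order)
    half = len(order) // 2
    plan = {p: ("oil", "water") for p in order[:half]}
    plan.update({p: ("water", "oil") for p in order[half:]})
    return plan  # participant -> (breakfast capsule, mid-window capsule)

plan = daily_plan(["p1", "p2", "p3", "p4"])
```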
We will run a competition to improve on this design (if that’s even possible ;) ) and *that* will be the new study. If you’ve made it this far, I might as well tell you that a viable version may cost around 25 ETH, based on estimates from a specialist. I’ll make sure we get there eventually, but the journey starts here.