I'm not sure about the reason for this experiment. This experiment, along with a massive number like it (high/low gravity, yeast viability, generational mutations, etc.), has already been done, and not just done but done in a properly controlled lab setting. Kai and I have chatted about things like this quite a bit. It's a big pet peeve of mine when people blindly do experiments without doing any basic research to see whether anything like it has been done before. Kai does a good job of finding original sources and then replicating the experiment to the best of his abilities. I'm not crapping on the effort here, boys; what I'm still wondering is how this data is supposed to hold water. The vast majority of studies are based on objective analysis, not subjective. Samples are pulled and tested for numerous things, including being run on an HPLC to determine exactly what the differences are: levels of ethanol, isoamyl alcohol, ethyl hexanoate, etc. There are tables of sensory analysis by the human palate correlated with the concentration of a given chemical, meaning they can correlate the amount of X in a beer with the average human's perception of it. My case in point: http://www.mbaa.com/TechQuarterly/Abstracts/1996/tq96ab09.htm
Patino does some very good work.
Recently Chris White did a good job of summarizing high gravity fermentations, in which pitch rate was discussed at length: http://www.ahaconference.org/presentations/2008/ChrisWhite_HighGravityFermentation.pdf
To this specific experiment: in my professional opinion, here are the biggest problems I see. This is not meant to bring down the person doing the experiment but to help them, and everyone else, see where small changes can have very large effects:
- Vernacular - if one is going to do research, use the terms everyone in the industry uses. Pitching rate is always discussed as millions of cells/mL, not billions/L.
- Yeast age - 52 days is very, very old for a slurry, even under the best conditions. CO2 toxicity is a big deal.
- Yeast count - assuming the number of cells in a 'starter' is an absolute no-no. If one doesn't count the yeast, the experiment can't be done.
- Yeast starters - the starters need to be made exactly the same way: same stirring speed, etc. Regardless of anything else, they should at least have been made together and then split at the very end.
- Yeast viability - regardless of the actual number you are pitching, you have no idea how viable the cells are (e.g., via a methylene blue stain). Are you sending in old grannies or soldiers? Very important. Additionally, decanting starters is very hairy: how much is too much to decant, how much did you lose, etc.?
- Experimental controls - three beers are needed: an underpitch, an overpitch, and a 'correct' pitch. Two beers don't give enough points of comparison.
- OG - it's just too high. What would have been a yeast pitch rate experiment has instantly changed into a pitch rate experiment for high-gravity beers... unless you wanted to do a high-gravity experiment, but I didn't read that.
- Open fermentation and headspace - it wasn't clear to me whether this experiment was fermented 'open' in buckets or in buckets with a lid. If they were closed, the headspace was absolutely massive, which could skew the experiment. Books have been written on fermenter headspace specifics.
- Yeast choice - The yeast type makes a massive difference in the outcome of the experiment.
- Sensory evaluation - should have been done as a double-blind test, not a triangle test. Double-blinding takes all of the bias out.
- Format of sensory form - it's much easier to get good data by using a polar-type plot, also called a 'spider plot', for assessing people's subjective perceptions. http://www.appellationbeer.com/images/20091217-spider.jpg
- Data presentation - the data should be presented as a histogram with the average indicated, so one can actually see where each individual lands. A simple SD and t-test would be very easy to do if you used the spider plot.
- Summary of the summary - 'using a starter makes better beer': this conclusion had absolutely nothing to do with the actual experiment.
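On the vernacular point, here's a small sketch (invented numbers, not from the experiment) of working a standard ale pitch rate into a total cell count for a typical 19 L batch. Note that the two unit conventions happen to be numerically identical, which is all the more reason to just use the standard one:

```python
# Illustrative numbers only -- not taken from the experiment.
rate_millions_per_ml = 0.75      # a common ale pitch rate, million cells/mL
batch_volume_l = 19.0            # typical 5 gal batch

# Total cells to pitch: (cells/mL) * (mL in the batch)
total_cells = rate_millions_per_ml * 1e6 * batch_volume_l * 1000
print(f"total pitch: {total_cells:.3e} cells")

# 1 million/mL = 1e6 cells/mL = 1e9 cells/L = 1 billion/L,
# so the two conventions are numerically equal:
rate_billions_per_l = rate_millions_per_ml * 1e6 * 1000 / 1e9
print(f"same rate in billions/L: {rate_billions_per_l}")
```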
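And on viability, a sketch of the arithmetic (again, made-up numbers; a real methylene blue count under a hemocytometer would supply the stained fraction):

```python
# Made-up numbers for illustration; a real methylene blue count
# would supply the stained (assumed dead) fraction.
counted_cells = 2.0e11        # total cells counted in the slurry
stained_fraction = 0.35       # fraction taking up the stain

viable_cells = counted_cells * (1 - stained_fraction)
target_pitch = 1.0e11         # hypothetical target pitch for the batch

# Fraction of the slurry actually needed once viability is accounted for;
# ignoring viability here would mean overpitching dead mass and
# underpitching live cells.
slurry_fraction = target_pitch / viable_cells
print(f"viable cells: {viable_cells:.2e}")
print(f"fraction of slurry to pitch: {slurry_fraction:.2f}")
```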
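On the data presentation point, a minimal sketch of the kind of summary I mean: per-taster scores for two beers (invented here), mean and SD per beer, and a simple two-sample t statistic (Welch's form, which allows unequal variances):

```python
from statistics import mean, stdev

# Hypothetical 1-5 "fruitiness" scores from eight tasters per beer.
underpitch = [3, 4, 2, 5, 3, 4, 3, 2]
overpitch = [2, 2, 3, 1, 2, 3, 2, 2]

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances)."""
    var_a, var_b = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / ((var_a / len(a) + var_b / len(b)) ** 0.5)

print(f"underpitch: mean={mean(underpitch):.2f} sd={stdev(underpitch):.2f}")
print(f"overpitch:  mean={mean(overpitch):.2f} sd={stdev(overpitch):.2f}")
print(f"t = {welch_t(underpitch, overpitch):.2f}")
```

With real scores off a spider plot, the same few lines would tell you whether the perceived difference between the beers is bigger than taster-to-taster noise.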
Yes, this list is extensive, but it's still not exhaustive. These points need to be addressed in all experiments, not just this one; that's why data is always peer reviewed. Long story short, there is nothing I can ascertain from the data presented. There are too many holes for even the smallest conclusion to be drawn.
This is the world I work in. When data is presented, it's up to the researcher to be able to support it. If someone doesn't show what's needed to draw an actual conclusion, then we are all working 'blind', and falsehoods and hearsay will continue.