Richardt, from your last post it sounds like you don’t take any measurements at elevated temperatures that are subsequently verified against a cooled post-boil sample, or a mash gravity reading that is checked against the laboratory yield of the mash. So if you don’t verify any of your measurements, why are you arguing that you don’t need to cool the sample in a sealed container?
This is a forum to discuss differences in home brewing techniques and methodology, and to learn more about the principles of brewing science. This discussion is proof of that.
Nyakavt, my reason is simple: IMO, there is no need to take a HIGH TEMP sample during the boil, so I don’t. Your statement that I don’t “verify any of my measurements” is misleading. I calibrate my instruments and use them appropriately—therefore I have high confidence in their results. If I use the instruments outside of normal operating conditions, I temper/adjust my interpretation of the results accordingly. IMO, the cover plate on my refractometer is my “sealed container” [see additional discussion below]. OTOH, given the observations of others who do this, if you’re taking a larger sample from boiling wort, you’ll need a larger container and should seal it while it chills to minimize any evaporation.
Whether you take the sample at the beginning of the boil or when you’re done chilling the wort, you still need to know two things to do your efficiency calculations: wort volume and specific gravity. A refractometer is just a quicker way to get a SG reading. See “Efficiency Calculations” on
http://en.wikipedia.org/wiki/Plato_scale.
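For what it’s worth, those two numbers feed the standard “points per pound per gallon” (PPG) efficiency calculation. Here’s a rough Python sketch; the grain bill and potential-extract figures below are made-up examples for illustration, not anyone’s actual batch:

```python
# Brewhouse efficiency from wort volume and specific gravity,
# using the common "points per pound per gallon" (PPG) method.

def efficiency(volume_gal, sg, grain_bill):
    """grain_bill: list of (pounds, potential_ppg) tuples.
    Potential PPG is the maximum gravity points that 1 lb of a
    given grain can contribute to 1 gallon of wort (e.g. ~37 for
    2-row pale malt, per typical published tables)."""
    actual_points = (sg - 1.0) * 1000 * volume_gal
    max_points = sum(lbs * ppg for lbs, ppg in grain_bill)
    return actual_points / max_points

# Hypothetical example: 10 lb of 2-row (37 PPG) yielding
# 6 gal of 1.046 wort.
eff = efficiency(6.0, 1.046, [(10.0, 37)])
print(f"{eff:.0%}")  # -> 75%
```

Whether you measure volume and gravity hot or chilled, the arithmetic is the same; only the accuracy of the two inputs changes.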
I’m suggesting that taking refractometer (or hydrometer) readings after the wort has already been chilled may be a more sensible point at which to gather the data for efficiency calculations, namely wort volume and specific gravity. Markings on the brew stick or on the fermenter indicate the wort volume. At that point, sample evaporation and temperature extremes are less of a concern with regard to measurement accuracy. Some inaccuracy still remains because refractometers are used on mixtures other than the pure sucrose solutions for which they were originally designed. Nonetheless, the BYO article shows how we can still use the refractometer with rather good accuracy if certain adjustments are made.
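For anyone curious, the kind of adjustment the BYO article describes is usually implemented as a “wort correction factor” divided out of the raw Brix reading before converting to SG. A minimal sketch, assuming a typical correction factor of 1.04 (your own instrument may differ, so calibrate against a hydrometer):

```python
# Refractometers are calibrated for sucrose; wort refracts light
# slightly differently, so a "wort correction factor" (often quoted
# around 1.04, but instrument-specific) is divided out of the raw
# Brix reading before converting to specific gravity.

WORT_CORRECTION = 1.04  # assumed typical value; calibrate your own

def corrected_brix(measured_brix, wcf=WORT_CORRECTION):
    return measured_brix / wcf

def brix_to_sg(brix):
    # Standard cubic-fit approximation for degrees Brix/Plato -> SG
    return 1.0 + brix / (258.6 - (brix / 258.2) * 227.1)

raw = 12.0  # hypothetical pre-fermentation reading
print(f"SG = {brix_to_sg(corrected_brix(raw)):.3f}")  # -> SG = 1.046
```

Note that without the correction the same 12.0 Brix reading would convert to about 1.048, so the adjustment is worth a couple of gravity points on a typical wort.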
Why take a sample at the time you begin your boil (i.e., when the sample is at 212F and must be chilled to get a more accurate reading)? Why make more work for yourself?
My highest-temp sample (<170F) is taken at lautering to make sure I remain >3 Brix (>1.012 SG), not to calculate efficiency. For purposes of estimating the SG of final runnings, I’m comfortable with a “fuzzier” result or ballpark number. However, a sample at 212F is nearly 45F hotter. A10t2 (Sean) says he does it at full boil because he wants to adjust his boil time, if necessary.
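If anyone wants to check the 3 Brix ≈ 1.012 SG equivalence, the standard Brix-to-SG approximation bears it out (a quick arithmetic sketch, not anyone’s brewing method):

```python
def brix_to_sg(brix):
    # Standard cubic-fit approximation for degrees Brix/Plato -> SG
    return 1.0 + brix / (258.6 - (brix / 258.2) * 227.1)

# Final-runnings cutoff mentioned above: 3 Brix
print(round(brix_to_sg(3.0), 3))  # -> 1.012
```

At these low gravities the rule of thumb that one degree Brix is roughly four gravity points works fine for a ballpark number.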
I did read on a Fresno, California Extension Service site that excessively warm ambient temps (e.g., above the 68-86F / 20-30C range; California Valley conditions) can warm up the refractometer enough to cause falsely elevated Brix readings (Source:
http://cefresno.ucdavis.edu/files/43066.pdf ). According to the site, taking a reading at 50F (10C) can throw off your Brix reading by 0.89 (nearly 4 gravity points higher). I suspect this is due to the effect of excessively high or low ambient temperature on the bimetal strip and the optical wedge it holds (the “ATC,” or automatic temperature compensator) inside the refractometer, combined with the effect of temperature on the refractive index of the sample (RI is inversely related to temperature, i.e., RI decreases as temp increases). Having said that, one or two drops of hot or warm liquid on the refractometer (and a 30-second wait) probably does not make as much of a difference, as long as the refractometer temperature is within the ambient range (and preferably on the lower end of it). I do know the sample is at ambient temp (cool) when I lick it off the refractometer surface after 30 seconds.
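To put that 0.89 Brix error in gravity-point terms, you can run it through the same Brix-to-SG approximation. The 12.0 Brix baseline below is just a made-up example (the exact point error shifts slightly with the baseline gravity):

```python
def brix_to_sg(brix):
    # Standard cubic-fit approximation for degrees Brix/Plato -> SG
    return 1.0 + brix / (258.6 - (brix / 258.2) * 227.1)

# Error cited in the extension-service PDF: +0.89 Brix at 50F (10C).
# Compare a hypothetical true 12.0 Brix wort with a 12.89 misreading:
true_sg = brix_to_sg(12.0)
high_sg = brix_to_sg(12.89)
points = (high_sg - true_sg) * 1000
print(round(points, 1))  # -> 3.8, i.e. "nearly 4" gravity points
```

So the “nearly 4 gravity points” figure is consistent with the roughly-4-points-per-Brix rule of thumb.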
IMO, since the dropperette and the refractometer surface act as heat sinks (pulling the sample closer to ambient temps), since the temperature of my sample is well below the boiling point (reducing the amount of evaporation), and since I take two seconds or less to sample/smear/cover (very little opportunity for evaporation, as I’ve created my own “sealed container” with the cover plate), I’m less likely to experience temperature error. That may not be the case if 212F wort were sampled, or if the ambient temps during the brew day were really cold or really hot. Stratification may be a concern if I’m sampling from the boil kettle after chilling, and it may play a bigger role with darker, more dextrinous, or more proteinaceous beers (e.g., higher mash temps or wheat beers). Stratification errors can be minimized by mixing prior to sampling (i.e., during a rolling boil, or immediately after transfer to the fermenter). I may be making a mountain out of a molehill, but I think it can and does play a role; whether it is negligible or not, I just don’t know. Some sites I’ve looked at suggest that it does matter.
I’m a homebrewer, not a commercial brewer, so I don’t care if I miss a calculated target by a small amount. I also don’t care much about my “efficiency,” which runs around 75% and sometimes higher. I do care about following the recipe accurately with regard to the amount of water used, the temps of the rest(s), the times of the various hop additions, etc., so that I can reproduce it again. I also have no doubts about the accuracy of my measurements within the limitations of my instruments and the manner in which I use them.