Hi all,
I have been continuing someone else’s work on our evolver unit. When they left, they documented that several of the sleeves either do not work at all or have significant issues taking OD measurements. Curious whether there were other, less obvious issues, or whether anything had changed since the last experiments were run on our unit, I carried out a calibration of my own (using E. coli in LB) and compared it to a calibration done by Brandon during an on-site visit. I verified both calibrations against 4 standards that I prepared from E. coli culture in LB, with OD600 values of 0.12, 0.23, 0.50, and 0.92. The ODs of the standards were measured on a Nanodrop 2000 in a cuvette with a 10 mm path length. Both my calibration and the measurements of the standards were taken at a stir rate of 8 (in an effort to balance fan halting against stir bar jumping).
These graphs show the measured OD values in each sleeve (averaged over at least 3 evolver readings), and the lines show the actual ODs of the standards. The first graph corresponds to Brandon’s calibration and the second to my own. The generated curves for each calibration are provided for reference. Please note, data for certain standards in certain sleeves were omitted due to extreme variance or noise, or because the sleeve recorded “nan” as if no vial were present. In the graphs, each color corresponds to the evolver-reported OD of a particular standard, and the line sharing that color shows that standard’s true OD. The readings are grouped by sleeve.
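For concreteness, the omission rule I applied can be sketched roughly as follows (the function name and the variance cutoff are my own choices, not anything from the evolver code):

```python
import math

def summarize_readings(readings, max_cv=0.2):
    """Average the evolver OD readings for one sleeve/standard pair,
    returning None when the point should be omitted.

    Omission cases mirror the ones described above:
      - the sleeve reported "nan" (too few valid readings), or
      - the readings were extremely noisy (coefficient of variation
        above an arbitrary cutoff, here 20%).
    """
    valid = [r for r in readings if not math.isnan(r)]
    if len(valid) < 3:  # want at least 3 real readings to average
        return None
    mean = sum(valid) / len(valid)
    var = sum((r - mean) ** 2 for r in valid) / (len(valid) - 1)
    cv = math.sqrt(var) / mean if mean else float("inf")
    if cv > max_cv:  # extreme variance or noise -> omit the point
        return None
    return mean
```

For example, three tight readings around 0.5 are averaged, while an all-"nan" sleeve or a wildly scattered set of readings comes back as None and is dropped from the graphs.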
Looking first at the calibration curves, a couple of things stand out:
- A number of the curves in my self-run calibration appear poorly fitted or show some kind of non-physical behavior. For several vials, this matches the poor data acquisition reported by the previous operator of our unit (see vials 12 and 14).
- When I ran the calibration myself, I ran duplicates of most of the OD values. In vials 12 and 14 this is clearly visible: the duplicates often produced quite different sensor values. In other vials, too, disparities in the raw output can be seen between duplicate ODs. Perhaps this is due to optical variation between the glass vials themselves, or to measurement artifacts like jumping stir bars.
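As a rough way to quantify that disparity, one could flag reference ODs whose duplicate raw readings disagree by more than some fraction of their mean. This helper and its 5% threshold are hypothetical, just to make the idea concrete:

```python
def flag_disparate_duplicates(raw_by_od, rel_tol=0.05):
    """raw_by_od: dict mapping a reference OD to the list of raw sensor
    values recorded for its duplicate calibration points.

    Returns {od: relative_spread} for the ODs whose duplicates disagree
    by more than rel_tol (an arbitrary 5% default) of their mean.
    """
    flagged = {}
    for od, raws in raw_by_od.items():
        mean = sum(raws) / len(raws)
        spread = max(raws) - min(raws)
        if mean and spread / mean > rel_tol:
            flagged[od] = spread / mean
    return flagged
```

Running this over the duplicate pairs would give a quick per-vial picture of how bad the vial-to-vial optical variation actually is.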
Moving on to the accuracy testing:
- In Brandon’s calibration, the ordering of the readings from each sleeve is correct overall, but they are unfortunately neither accurate nor consistent between sleeves. One apparent trend is that most of the sleeves cannot report an OD value higher than about 0.5. Relatedly, the evolver readings look very compressed, spanning a much smaller range than the true ODs of the standards. Could this be due to a lack of sensitivity in the extreme high and low OD regimes?
- In my calibration, new issues appear. Again, the inability to report ODs above 0.5 shows up, along with an across-the-board underestimation of the ODs. There is also a large amount of variation in the low OD regime. Admittedly, I don’t know how the calibration function actually generates a calibration curve, but the reported negative values are obviously non-physical and not representative of the samples. Interestingly, the magnitude of the steps between OD readings in the same vial often looks reasonable, so the curve fitting may be what leads to the odd reported values.
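For intuition only: if the calibration fits something sigmoid-shaped to raw signal vs. OD and then inverts it (an assumption on my part — I have not read the fitting code), that would explain both the compression and the negative values. A toy illustration, with invented parameter values:

```python
import math

# hypothetical sigmoid calibration: raw photodiode counts vs. true OD
# (A, B, C, D are made-up numbers purely for illustration)
A, B, C, D = 20000.0, 40000.0, 8.0, 0.4

def raw_signal(od):
    """Raw sensor counts predicted for a given true OD."""
    return A + B / (1 + math.exp(C * (od - D)))

def reported_od(raw):
    """Invert the sigmoid; only defined for A < raw < A + B."""
    return D + math.log(B / (raw - A) - 1) / C

# Near the low-OD plateau the curve is nearly flat, so modest noise on
# the raw counts pushes the inverted value below zero:
clean = raw_signal(0.05)       # what a 0.05-OD sample "should" read
noisy = clean + 1000           # small upward noise on the raw counts
print(reported_od(clean))      # ≈ 0.05
print(reported_od(noisy))      # negative, non-physical "OD"

# Near the high-OD plateau, large OD changes barely move the signal,
# which would compress readings above ~0.5 toward a ceiling:
print(raw_signal(0.5) - raw_signal(0.7))   # sizable step in counts
print(raw_signal(0.7) - raw_signal(0.9))   # much smaller step
```

If something like this is what is happening, it would also be consistent with the step sizes between readings looking reasonable while the absolute values come out wrong.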
Moving forward, I have a few questions:
- Is it possible that by decreasing the range of ODs over which we make a calibration, we could improve accuracy in that range and avoid the apparent “compression” of reported OD values? For example, if instead of attempting to calibrate over a range of 0 – 0.8 (operating range reported in Wong et al. 2018) we performed a calibration spanning 0.1 – 0.5, could we reduce skewing of the calibration curve by OD measurements outside of the sensitive range?
- What do you think the implications are (if any) of duplicate vials reading differently in the calibration method, and could this lead to poor curve-fitting? Likewise, could the lack of duplicates when doing a calibration lead to a less accurate calibration curve?
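On the first question, the restriction itself could be as simple as filtering the calibration points before fitting. A sketch of what I mean (the function name is mine, not part of the evolver API):

```python
def restrict_calibration(points, lo=0.1, hi=0.5):
    """points: list of (reference_od, raw_signal) calibration pairs.

    Keep only the pairs whose reference OD lies in the sensitive
    window [lo, hi], so points outside that window cannot skew the
    fitted curve.  The 0.1-0.5 default matches the range proposed
    above.
    """
    return [(od, raw) for od, raw in points if lo <= od <= hi]
```

The fit would then run only on the retained pairs, at the cost of the curve being undefined (or extrapolated) outside the window.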
I hope this is useful to others, and I look forward to anyone’s thoughts or ideas.