Verifying Calibrations - Poor Accuracy

Hi all,

I have been continuing someone else's work using our eVOLVER unit. When they left, they documented that several of the sleeves either do not work at all or have significant issues taking OD measurements. Curious whether there were other, less obvious issues, or whether anything had changed since the last experiments were run on our unit, I carried out a calibration of my own (using E. coli in LB) and compared it to a calibration done by Brandon during an on-site visit. I verified both calibrations against 4 standards that I prepared (E. coli culture in LB), with OD600 values of 0.12, 0.23, 0.50, and 0.92, measured on a Nanodrop 2000 in a cuvette with a 10 mm path length. Both my calibration and the measurements of the standards were taken at a stir rate of 8 (chosen as a compromise between fan halting and stir bar jumping).

These graphs show the OD values measured in each sleeve (averaged over at least 3 readings from the eVOLVER), with lines marking the actual ODs of the standards. The first graph corresponds to Brandon's calibration and the second to my own; the generated curves for each calibration are provided for reference. Please note that data for certain standards in certain sleeves were omitted, either due to extreme variance or noise, or because the sleeve recorded "nan" as if no vial were present. In the graphs, each color corresponds to the eVOLVER-reported OD of a particular standard, and the line sharing that color shows the true OD of that standard. The readings are grouped by sleeve.

[image: 2019_04_17_calibration]

[image: 2019_06_28_calibration]
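In case it is useful to others, here is a minimal sketch of how readings like these can be reduced to the plotted averages. The `readings` structure, the placeholder values, and the 0.1 standard deviation cutoff are hypothetical illustrations, not my actual script:

```python
import numpy as np

# True OD600 of the four standards (Nanodrop 2000, 10 mm path).
TRUE_OD = [0.12, 0.23, 0.50, 0.92]

# Hypothetical layout of the raw data: for each sleeve, a list of at least
# 3 eVOLVER-reported ODs per standard. Values here are placeholders.
readings = {
    12: {0.12: [0.03, 0.11, float("nan")], 0.50: [0.41, 0.44, 0.43]},
    # ... one entry per sleeve/standard actually measured
}

for sleeve, per_standard in sorted(readings.items()):
    for true_od in TRUE_OD:
        vals = np.array(per_standard.get(true_od, []), dtype=float)
        vals = vals[~np.isnan(vals)]           # drop "nan" (no-vial) readings
        if vals.size < 3 or vals.std() > 0.1:  # omit noisy/empty data, as above
            continue
        print(f"sleeve {sleeve:2d}: true {true_od:.2f}, "
              f"measured {vals.mean():.3f} +/- {vals.std():.3f}")
```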

Looking first at the calibration curves, a couple things stand out:

  1. A number of the curves in my self-run calibration appear poorly fitted or show some kind of non-physical behavior. For several of the vials, this matches the poor data acquisition reported by the previous operator of our unit (see vials 12 and 14).

  2. When I ran the calibration method myself, I ran duplicates of most of the OD values. In vials 12 and 14 this is clearly visible: duplicate ODs often produced rather different values from the sensor. Disparities in the raw output between duplicate ODs can be seen in other vials too. Perhaps this is due to optical variation between the glass vials themselves, or to measurement artifacts like jumping stir bars.

Moving on to the accuracy testing:

  1. Looking at Brandon's calibration, the ordering of the readings from each sleeve is correct overall, but they are unfortunately neither accurate nor consistent between different sleeves. One apparent trend is that the majority of the sleeves cannot report an OD value higher than about 0.5. Relatedly, readings from the eVOLVER seem very compressed, spanning a much smaller range than the true OD values of the standards. Could this be due to a lack of sensitivity in the extreme high and low OD regimes?

  2. My calibration shows new issues. Again, the inability to report an OD higher than 0.5 is apparent, as is an across-the-board lowballing of the ODs. A large amount of variation can also be seen in the low OD regime. Admittedly, I don't know how the calibration function actually generates a calibration curve, but the reporting of negative values is obviously non-physical and not representative of the samples. Interestingly, the magnitude of the steps between OD readings in the same vial often appears reasonable; the curve fitting may be what leads to the odd reported values.

Moving forward, I have a few questions:

  1. Is it possible that by decreasing the range of ODs over which we make a calibration, we could improve accuracy in that range and avoid the apparent "compression" of reported OD values? For example, if, instead of attempting to calibrate over a range of 0–0.8 (the operating range reported in Wong et al. 2018), we performed a calibration spanning 0.1–0.5, could we reduce skewing of the calibration curve by OD measurements outside of the sensitive range?
  2. What do you think the implications are (if any) of duplicate vials reading differently in the calibration method, and could this lead to poor curve-fitting? Likewise, could the lack of duplicates when doing a calibration lead to a less accurate calibration curve?

I hope this can be useful to others and I am looking forward to anyone’s thoughts or ideas.

Thanks for all your detailed documentation of this issue; I hope we can all help resolve it going forward. I think what is going on is a combination of a few things, all related to the modern iteration of the Smart Sleeve, which is slightly different from what we published on.

Known Issues

1. Angle of the Photodiode-LED Pair Changed
We changed the angle of the LED-photodiode pair to try to get a better dynamic range on the OD sensors. If you look at the supplementary section of the eVOLVER manuscript, we have two modes of operation for the OD sensor: a high-density range and a low-density range. The electronic board design isn't the same in this hardware iteration, so the absolute values of the resistor packs aren't relevant; the main point is that there was some limitation on dynamic range that we tried to extend in the next design. We can change this by altering the LED-photodiode angle, the resistor pack, or the LED power.

The most dramatic way of improving the sensor is playing with the LED-photodiode angle. In our NBT publication, we published with 135 degree offset IR LED-photodiode measurements. Unfortunately, that offset resulted in a low dynamic range, usually limiting measurements to below 0.8 OD600, varying with exactly how the sleeves were constructed. For your system, I used a 90 degree offset to get a higher dynamic range and a cleaner signal, which is reflected in the calibration curves seen on your eVOLVER. Unfortunately, it also seems that the lower measurements were more sensitive to how the sleeve was constructed, leading to several sleeves with poor OD readings at the lower densities. This also made the measurement more sensitive to other factors, like different cell types, that we didn't fully characterize and that may impact what you see in your traces.

2. Software Updates
We are currently retooling the software to be more robust and to have more functionality moving forward. Unfortunately, this has led to some growing pains (e.g. bugs that we might not have caught) that might be impacting the calibration process. We are ironing these out, and @heinsz and @mgalardini from the Khalil Lab have been amazing in getting that up and running.


Solutions Currently Under Testing

We are currently patching the software for issues and updating the system to have an additional LED-photodiode pair (at a different angle) in order to get the best of both worlds, in both dynamic range and accuracy. We hope to post some updates here soon on this.

Here is some trial data with milk to show the difference in sensor readings between the two offset angles.

[image: milk trial data, sensor readings at both offset angles]

We hope to fit a 3D curve to the data, like the following:

[image: 3D curve fit to the two-sensor data]

The X axis is the 135 deg diode sensor values (100 kΩ resistor pack), the Y axis is the 90 deg diode sensor values (1 MΩ resistor pack), and the Z axis is the OD values. Again, this is with milk, so we still need to characterize with different cell types.
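As a rough illustration of what fitting such a surface could look like (a minimal sketch; the bilinear model and all numbers below are placeholders, not our actual fitting code):

```python
import numpy as np
from scipy.optimize import curve_fit

def surface(X, a, b, c, d):
    # Placeholder bilinear model: OD as a function of both raw signals.
    x, y = X  # x: 135 deg diode signal, y: 90 deg diode signal
    return a + b * x + c * y + d * x * y

# Hypothetical normalized readings from both diodes for standards of known OD.
x135 = np.array([0.62, 0.55, 0.47, 0.40, 0.35, 0.31])
y90 = np.array([0.58, 0.49, 0.38, 0.27, 0.20, 0.15])
od = np.array([0.05, 0.15, 0.30, 0.55, 0.85, 1.20])

params, _ = curve_fit(surface, (x135, y90), od)

def od_from_raw(x, y):
    # Convert a pair of raw diode readings into an OD estimate.
    return surface((x, y), *params)
```

The idea is that where one angle saturates, the other still carries signal, so a combined fit can stay informative over a wider OD range than either diode alone.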

Regarding your first question (narrowing the calibration range): I would not recommend this, and it may even exacerbate the problem. The calibration function being fit tends to flatten out at either end (it is basically sigmoidal), and oftentimes the reason for "compression" of OD values is that the function is flat above a certain OD. I usually try to calibrate over as wide a range as feasible (even if that means concentrating cells by centrifugation) to make sure any sigmoid shapes are real and measured, not falsely imposed during the fit. Calibrating too narrowly can also lead to NaNs showing up for OD during experiments, if high cell densities produce photodiode readings beyond what was explored during calibration, past the sigmoid asymptote.
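To make that failure mode concrete, here is a minimal sketch of a sigmoid calibration and its inverse. The 4-parameter logistic and all the numbers are illustrative assumptions; the exact functional form used by the eVOLVER calibration code may differ:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(od, a, b, c, d):
    # Generic 4-parameter logistic mapping OD -> raw photodiode signal.
    return a + b / (1.0 + np.exp((c - od) / d))

# Hypothetical calibration points: known OD600 vs. raw photodiode signal.
od_std = np.array([0.0, 0.12, 0.23, 0.50, 0.80, 1.10])
raw = np.array([62000.0, 58000.0, 50000.0, 41000.0, 35000.0, 33000.0])

(a, b, c, d), _ = curve_fit(sigmoid, od_std, raw,
                            p0=[62000.0, -30000.0, 0.4, 0.2], maxfev=10000)

def od_from_raw(x):
    # Analytic inverse of the logistic. For raw values beyond the fitted
    # asymptotes, the log argument goes negative and this returns NaN,
    # like the "nan" ODs seen when cultures outgrow the calibrated range.
    with np.errstate(invalid="ignore", divide="ignore"):
        return c - d * np.log(b / (x - a) - 1.0)
```

Near the flat ends of the fitted curve, widely different ODs produce nearly the same raw signal, so the inversion pins them all to roughly the same reported OD: the "compression" described above.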

You can track the progress of this via this thread: