The 2021 symposium was held remotely, and videos from roughly half of the talks are posted on YouTube, at the CMAP_CEGS channel. This includes Kenny’s talk from the workshop portion this year. In contrast to his talk from last year (which really focused on the basics with few specifics), this year’s talk was much more about a specific example fitting into some of those more general considerations (with unpublished data to boot!).
Quantitating Western blots
Almost everybody hates doing Western blots since they’re so labor intensive (and somewhat finicky), but they are undeniably useful and will forever have a place in molecular biology / biomedical research. We’re currently putting together a manuscript where we express and test variants of ACE2, which requires some Western blot quantitation. Since I’m about to do some of that quantitation now, I figured I’d record the steps I take so trainees in the lab have a basic set of instructions they can follow in the future.
This already assumes you have a “.tif” file of your Western blot. While I someday hope to do fluorescent westerns, this will likely be a chemiluminescent image. Hopefully this isn’t a scan of a chemiluminescent image captured on film, b/c film sux. So you presumably took it on a ChemiDoc or an equivalent piece of equipment. Who knows; maybe I finally found the time to finish / standardize my procedure for chemiluminescent capture with a standard mirrorless camera (unlikely). I digress; all that matters right now is that you already have such an image file ready to quantitate.
My favorite method for Western blot quantitation is to use “Image Lab” from BioRad. You know, I like BioRad. And I definitely like the fact that they provide this software for free. Anyway, download and install it as we’ll be using it.
Once you have it installed, start it up and open your image file of interest. The screen will probably look something like this:

First off, stop lying to yourself. By default, the image is going to be auto-scaled so that the darkest grey values in your image get turned to black, while the lightest grey values get turned to white. But in actuality, that is not the “raw” range of your image, so you may as well turn off the auto-scaling and see your image for what it really is. Thus, press that “Image Transform” button…

… and get a screen that looks like below:

See where those low and high values are? Awful. Set the low value to 0 and the high value to the max (65535, since it’s presumably a 16-bit image).

And now the image looks like this, which is great, since this reflects the greyscale values actually in your image.

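As an aside, if you want to convince yourself of what that transform is doing, it’s easy to check the raw values outside of Image Lab. Here’s a minimal sketch in Python (the filename “blot.tif” is a hypothetical stand-in for your image; requires numpy and Pillow):

import numpy as np
from PIL import Image

img = np.array(Image.open("blot.tif"))  # hypothetical 16-bit blot image
print(img.dtype, img.min(), img.max())  # e.g. uint16, with a narrow raw range

# What auto-scaling shows: observed min mapped to black, observed max to white.
autoscaled = (img.astype(float) - img.min()) / (img.max() - img.min()) * 65535

# What setting low = 0 and high = 65535 shows: the raw values, as they are.
raw_view = img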
OK, now we can actually start quantitating the bands in the blot. First off, don’t expect your Western blot to look 100% clean. Lysates are messy, and you can’t always get pure, discrete bands. Sure, some of the lower-sized bands may be degradation products that happened when you accidentally left the lysates off ice (you should avoid that, of course). Then again, you may have done everything perfectly bench-wise, and the lower-sized bands may be there because you’re overexpressing a transgene, proteins naturally get degraded, and overexpressed proteins may tax the normal machinery and get degraded more obviously. I say the best thing is to acknowledge that this happens, show all your results / work, and make the most educated interpretations you can. Regardless, in the above blot, we’re going to try to quantitate the density of the highest molecular weight band, since that should be the full-length protein. To do that, first select the “Lane and bands” button on the left.

I then press the “Manual” lane annotation button.

In this case, I know I have 11 lanes, so I enter that and get something that looks like this:

Clearly that’s wrong, so grab the handles on the edges and move the grid so that it falls more-or-less on the actual lanes.

Sure, the overall grid is pretty good, but maybe it’s not perfect for every single lane. The program lets you adjust those individually too. To do that, click off the “Resize frame” button on the left so it’s no longer blue…

And then adjust the individual lanes so each one fits the entire band as your human eyes see it, resulting in an adjusted grid that looks like this:

Nice. Now go to the left and select the tab at the top that says “Bands”, and then click on the button that says “Add”.

Once you do that, start clicking on the bands you want to quantitate across all of the lanes. You may have to grab the dotted magenta lines in each lane and adjust them so that the actual band falls within them (and presumably near the solid magenta line, which should sit somewhere in between them). This is what it looks like after I do that:

It’s good to check how well the program is actually detecting the bands. Go to the top and press the “Lane profile” button. It should give you a density plot. This is also the window where you can do background subtraction. Find a number that seems sensible (in this case, a disk size of 20 mm seems reasonable), and make sure you hit the “apply to all lanes” button so it propagates across lanes. While I’m only showing the picture for lane 5, it’s probably worth scanning across the lanes to make sure the settings are sensible.

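As another aside, this style of background subtraction is conceptually a rolling-ball/disk filter, in case you ever want to replicate it outside of Image Lab. A minimal sketch using scikit-image (note the radius here is in pixels and is just a guess, whereas Image Lab’s disk size is in mm, so the numbers aren’t interchangeable):

import numpy as np
from PIL import Image
from skimage import restoration

img = np.array(Image.open("blot.tif")).astype(float)  # hypothetical image

# Chemiluminescent blots are dark bands on a light background; invert so the
# bands are bright, as the rolling-ball algorithm expects.
inverted = img.max() - img
background = restoration.rolling_ball(inverted, radius=50)  # radius is a guess
subtracted = inverted - background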
Now that those settings are in order, close out, and then click on “Analysis table” at the top. Once that is open, go to the bottom and click on “lane statistics”. These should be the numbers you’re looking for.

Now export the statistics (either by pressing the “copy analysis table to clipboard” button and pasting into a spreadsheet, or by using “export analysis table to spreadsheet”). The numbers you’ll be looking to analyze will be those in the “Adj. Total Band Vol” column.
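Once those numbers are in a spreadsheet or CSV, the downstream math is pretty simple. Here’s a minimal sketch in Python, assuming a hypothetical export file called “analysis_table.csv” with a “Lane” column (normalizing to a control lane is just one common choice, not the only one):

import pandas as pd

# Column names are assumptions about the export; adjust to match your file.
df = pd.read_csv("analysis_table.csv")          # hypothetical export file
vols = df.set_index("Lane")["Adj. Total Band Vol"]

control_lane = 1                                # e.g. a WT or loading control
normalized = vols / vols[control_lane]
print(normalized)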
Note: Now that I’m doing this, the “standard curve” button is ringing a bell. I’m fairly certain that in my PhD work, when I ran a ton of Western blots or just straight-up protein gels stained with Coomassie, I would run dilutions of lysates / proteins to make a standard curve of known protein amounts that I could calibrate the densitometry against. We obviously didn’t do that here, since we didn’t have the space, so these numbers aren’t going to be quite as accurate as they would have been otherwise. Still, getting some actual numbers we can compare across replicates is a major step up from not quantitating and having everything be even more subjective.
Terrific for lentivector growth?
1/2/2025 edit: FYI, we do all of our lentivector transformations in NEB Stable cells now (NEB’s equivalent of STBL2- / STBL3-like cells); we still grow in TB, although perhaps we should systematically test this at some point too (or maybe I told someone to do this and we already have, and I’ve simply forgotten the results…)
At some point, I was chatting with Melissa Chiasson about plasmid DNA yields, and she mentioned that her current boss had suggested using terrific broth instead of Luria broth for growing transformed bacteria. I think both of us were skeptical at first, but she later shared data with me showing that DNA from E. coli grown in TB had actually given her better yields. I thus decided it was worth trying myself to see if I could reproduce it in my lab.
There are two general types of plasmids we tend to propagate a lot in my lab: attB recombination vectors, for expressing transgenes within the landing pad, and lentiviral vectors of the “pLenti” variety, which play a number of different roles, including new landing pad cell line generation and pseudovirus reporter assays.
I first did side-by-side preps of the same attB plasmids grown in TB or LB, and TB-grown cultures yielded attB plasmid DNA concentrations that were slightly, albeit consistently, worse. But eventually I tested some lentiviral vector plasmids and finally saw the increase in yield from TB that I had been hoping for. Relaying this to Melissa, she noted she had been doing her transformations with (presumably unrelated sets of) lentiviral vectors, so our observations had been consistent after all.
Thus, if you get any attB or pLenti plasmids from me, you should probably grow them in LB (attB plasmids) or TB (pLenti plasmids), respectively, to maximize the DNA yield you get back for your efforts.

COVID testing at CWRU
As a PI, I feel it’s important to know how safe my employees are when coming to campus to work during the pandemic. While CWRU was rather slow in getting on-campus testing set up, they did establish a surveillance testing program and a public website for posting the results, which has largely been reassuring. I’ve been keeping track of the results every week for the last few months and will continue to do so for the foreseeable future. This is what things currently look like:

As of writing this (the first week of February), the absolute numbers of infected students / faculty / staff in a given week are firmly in the double digits, but thankfully the test percent positivity has been at or under 1%, unlike in November & December. Now that the students are back for the new semester, we will see how the pattern changes, but so far the pandemic has felt largely under control here, at least in the broader context of the conflagration of viral spread we’ve seen in this country over the past year.
Using SpliceAI for synthetic constructs
I try to be as deliberate as I can about designing my synthetic protein-coding constructs. While I’ve largely viewed splicing as an unnecessary complication and have thus left it out of my constructs (though who knows; maybe transgenes would express better with splicing, as supposedly happens with the chicken β-actin intron present in the pCAGGS promoter/5’UTR combo), there’s still a very real possibility that some of my constructs encode cryptic splice sites that could be affecting expression. In a recent conversation with Melissa Chiasson (perhaps my favorite person to talk syn-bio shop with), she noted that there is actually a way to use SpliceAI to predict splicing signals in synthetic constructs, with a nice correspondence here showing what the outputs mean. Below is my attempt to get this to work in my hands:
First is installing SpliceAI, following the instructions here.
I started by making a new virtual environment in anaconda:
$ conda create --name spliceai
I then activated the environment with:
$ conda activate spliceai
Cool, next is installing spliceai. Installing tensorflow the way I have written below got me around some errors that came up.
$ conda install -c bioconda spliceai
$ conda install tensorflow -c anaconda
OK, while trying to run the SpliceAI custom sequence script, I got an error along the lines of “Error #15: Initializing libiomp5.dylib, but found libiomp5.dylib already initialized…”. I followed the instructions here, and typed this into my conda environment:
$ conda install nomkl
Alright, so that was fixed, but there was another error about no config or something (“UserWarning: No training configuration found in save file: the model was not compiled. Compile it manually….”). I got around that by adding a compile=False flag to the load_model() call, based on this response here.
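For reference, here’s roughly what the working script ends up looking like. This is essentially the custom-sequence usage example from the SpliceAI README, with the compile=False workaround folded in, plus some matplotlib plotting of my own; the input sequence below is just a placeholder:

import numpy as np
from keras.models import load_model
from pkg_resources import resource_filename
from spliceai.utils import one_hot_encode

input_sequence = 'CGATCTGACGTGGGTGTCATCGCATTATCGATATTGCAT'  # placeholder

context = 10000  # pad the sequence so every position has full flanking context
paths = ('models/spliceai{}.h5'.format(x) for x in range(1, 6))
models = [load_model(resource_filename('spliceai', x), compile=False)
          for x in paths]

x = one_hot_encode('N' * (context // 2) + input_sequence +
                   'N' * (context // 2))[None, :]
y = np.mean([models[m].predict(x) for m in range(5)], axis=0)

acceptor_prob = y[0, :, 1]  # per-position splice acceptor scores
donor_prob = y[0, :, 2]     # per-position splice donor scores

# Plot the scores along the construct (my own addition, not from the README).
import matplotlib.pyplot as plt
plt.plot(acceptor_prob, label='acceptor')
plt.plot(donor_prob, label='donor')
plt.xlabel('position (nt)')
plt.ylabel('SpliceAI score')
plt.legend()
plt.show()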
OK, so after that (not so uncommon) struggle with dependencies and flags, I got things to work. Here’s the result when I fed it a construct from the days when I was messing around with consensus splicing signals early in my postdoc. In this case, it’s a transcript that encodes the human beta actin cDNA with its third intron added back in, fused to mCherry on the C-terminal end.

And well, that checks out! The known intron is clearly observed in the plot. The rest of actin looks pretty clean, while there seem to be some low-level splicing signals within mCherry. That said, the fact that they’re in the wrong order (not donor-then-acceptor) probably means it isn’t real splicing, and I’m guessing the signals are weak enough and far enough away that there isn’t much cross-splicing with the actin intron.
Oh, and for good measure, here’s the intron found in transcripts made from the pCAGGS vectors, with this transcript belonging to this plasmid from Addgene encoding a codon-optimized T7 polymerase.

Nice. Now to start incorporating it into analyzing some of the constructs I commonly use in my research…
Shuttling one ORF into another
I’ve gone over how to make mutations in a plasmid before (since that’s the simplest molecular cloning process I could think of), but there will be many times we need to make a new construct (via 2-part Gibson) by shuttling a DNA sequence from one plasmid into another. Here’s a tutorial describing how I do that.
12/2/24 update: Much like my SDM primer design post, I’ve since updated this strategy as well. See the relevant update at the bottom of the post.
First, open the maps for the two relevant plasmids on Benchling. Today it will be “G871B_pcDNA3-ACE2-SunTag-His6” serving as the backbone, and “A49172_pOPINE_GFP_nanobody” serving as the insert. The goal here will be to replace the ACE2 ectodomain in G871B with the GFP nanobody encoded by the latter construct.
Next, duplicate the map for the backbone vector. To do this, find the plasmid under the “Projects” tab in Benchling, right click on it to open up more options, and select “Copy to…”.

Then, make a copy into whatever directory you use to keep things organized. Today, I’ll be copying it into a directory called “Cell_surface_labeling”. A very annoying thing here is that Benchling doesn’t automatically open the copy, so if you start making edits to the plasmid map you already have open on the screen, you’ll be making edits to the *original*. Thus, make sure you go to the directory you just chose, and open the duplicated file (it should start with “Copy of …”). Once open, rename the file. At this point, the plasmid won’t have a unique identifier yet, so I typically just temporarily prefix the previous identifier with “X” until I’m ready to give it its actual identifier. To rename the file, go to the encircled “i” icon on the far right of the screen, second icon from the bottom (just above the graduation cap). After you enter the new name (“XG871B_pcDNA3-Nanobody[GFP-pOPINE]-SunTag-His6” in this case), make sure you hit “Update Information” or else it will not save. Hurrah, you’ve successfully made your starting canvas for designing your future construct in silico.

Cool, now go to the plasmid map with your insert, select the part you want, and hit “command C” to copy it.

Now select the part of the DNA you want to replace. For this construct, we’re replacing the ACE2 ectodomain, but we want this nanobody to be secreted, so we need to keep the signal peptide. I didn’t already have the signal peptide annotated, so I’m going to have to do that now. When in doubt, one of the easiest things to do is consult Uniprot, which usually has features like the signal peptide already annotated. Apparently it’s the first 17 amino acids, which *should* correspond to “MSSSSWLLLSLVAVTAA”. For whatever reason, the first 17 amino acids of the existing ACE2 sequence are “MSSSSWLLLSLVAVTTA”, so there’s an A>T mutation. It’s probably not a big deal, so I’ll leave it as is. That said, to make my construct, I now want to select the sequence I want to replace, like so:

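(As an aside, that kind of discrepancy is easy to pinpoint with a couple lines of Python, using the two sequences above:)

uniprot   = "MSSSSWLLLSLVAVTAA"  # first 17 aa per Uniprot
construct = "MSSSSWLLLSLVAVTTA"  # first 17 aa of the existing ACE2 ORF

for i, (a, b) in enumerate(zip(uniprot, construct), start=1):
    if a != b:
        print(f"position {i}: {a} -> {b}")  # prints: position 16: A -> T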
As long as you still have the nanobody sequence copied, you can now “Command V” paste it in place of this selection. The end result should look like so:

Great, so now I have the nanobody behind the ACE2 signal peptide, but before the GCN4 peptide repeats that are part of the SunTag. Everything looks like it’s still in frame, so no obvious screw-ups. Now it’s time to plan the primers. I generally start by designing the primers to amplify the insert, shooting for enough nts to give me a Tm slightly under 60°C. I also generally include the ~17 nt of homology appended onto these primers. Thus, the fwd primer would look like this:

Same thing for the reverse primer, and it would look like this (make sure to take the reverse complement).

To recap, the fwd primer seq should be “ttgctgtTactactgctgttcaactggtggaaagcggc” and the reverse should be “ccgttggatccggtaccagagctcaccgtcacctgagt”. Cool. Next, we need to come up with the primers for amplifying the vector. It could be worth checking to see if we have any existing primers that sit in the *perfect* spot, but for most constructs, that’s likely not the case. Thus, design forward and reverse primers that overlap the homology segments and have Tm’s slightly under 60°C. The forward primer will look like this:

And the reverse primer like this:

Hmmm. I just realized the Kozak sequence isn’t a consensus one. I should probably fix that at some point, but that’s beyond the scope of this post. So again, to recap, the fwd and rev primers for the vector are “ggtaccggatccaacggtcc” and “agcagtagtAacagcaacaaggctg”. Now you’re ready to order the oligos, so go ahead and do so. As a pro-tip, I like to organize my primer order work-areas like this:

Why? Well, the left part can be copy-pasted into the “MatreyekLab_Primer_Inventory” google sheet, while the middle section can be copy-pasted into the template for ordering primers from ThermoFisher. The right part can be copy-pasted into the “MatreyekLab_Hifi_Reactions” google sheet in anticipation of setting up the physical reaction tubes once the primers come in.
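One extra sanity check I’d suggest before ordering: confirming the Tm’s in silico. Below is a minimal sketch using Biopython’s MeltingTemp module, fed with the four primers recapped above. Tm_NN’s default nearest-neighbor parameters won’t exactly match any particular vendor’s calculator, so treat the values as ballpark; also, for the insert primers, the appended homology doesn’t bind the template, so the relevant Tm is really that of the 3′ matching portion.

from Bio.Seq import Seq
from Bio.SeqUtils import MeltingTemp as mt

# The four primers designed above (insert primers include homology tails).
primers = {
    "insert_fwd": "ttgctgtTactactgctgttcaactggtggaaagcggc",
    "insert_rev": "ccgttggatccggtaccagagctcaccgtcacctgagt",
    "vector_fwd": "ggtaccggatccaacggtcc",
    "vector_rev": "agcagtagtAacagcaacaaggctg",
}

for name, seq in primers.items():
    print(f"{name}: full-oligo Tm ~ {mt.Tm_NN(seq):.1f} C")

# Double-checking a reverse complement (e.g. for the insert's rev primer):
print(Seq(primers["insert_rev"]).reverse_complement())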
_________________________
12/2/24 update: So the above instructions work fine, but I have since (as in, like 3/4 years ago, haha) adopted a slightly different strategy. The original strategy prioritized having one primer (in the above example, the reverse primer, KAM3686) perfectly match the WT sequence, so that it could double as a Sanger sequencing primer in other, non-cloning circumstances. Well, we barely ever need those anymore, especially with the existence of whole-plasmid nanopore sequencing via Plasmidsaurus. Thus, I now:
1) Just design the primer pairs so that both the forward AND reverse primers each encode ~9 nt of “non-template-matching” sequence at their 5′ ends. These “non-template-matching” sequences won’t contribute to primer binding on the original template, but will presumably contribute to primer binding on any amplicons being further amplified during the PCR; their biggest importance, though, is adding to the 18+ nt of homology necessary for Gibson.
2) I still shoot for the template-matching sequence on the 3′ end to be somewhere in the 16 to 30 nt range, to get to the ~58°C Tm intended for the PCR reaction. Notably, as alluded to above, while this means the Tm should be ~58°C when binding to and amplifying from the original template, there will be 25 to 39 nt of matching sequence when the primer binds any further-amplified amplicons present in the PCR reaction (so not in cycle 1 of the PCR, but presumably an increasing proportion of binding events in cycles 2+). See the sketch below.
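To make the Tm bookkeeping concrete, here’s a small sketch with made-up sequences (not primers from an actual design), showing the two Tm’s at play: the template-binding 3′ portion in cycle 1 versus the full primer in later cycles.

from Bio.SeqUtils import MeltingTemp as mt

tail = "GGCGGTTCA"                  # ~9 nt non-template-matching 5' addition (made up)
matching = "ATGGTGAGCAAGGGCGAGGAG"  # 3' portion that binds the template (made up)
primer = tail + matching

# Cycle 1: only the matching portion binds the original template.
print(f"template-binding Tm: {mt.Tm_NN(matching):.1f} C")
# Cycles 2+: the full primer matches the newly made amplicons.
print(f"full-primer Tm:      {mt.Tm_NN(primer):.1f} C")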
Thus, my updated primer strategy for the above reactions would look like these:


Illumina sequencing of barcoded amplicons from the landing pad
Illumina sequencing of barcoded amplicons is going to be a large factor in the work we’ll be doing. Having moved to a completely new place, I now need to explain to people how it works. To keep this post simpler, I’m skipping all of the landing pad details (hopefully you already understand them). Let’s just start with what was genomically integrated after the recombination reaction. In the case of the PTEN library, it looks like this:

As you can see, the barcode is the blue “NNNNNNNNNNNNNNNNNN” region at the bottom of the above image. We can’t just sequence it with primers directly flanking it, since those sequences will also be present in the unexpressed plasmid DNA likely contaminating our genomic DNA. Thus, we first have to create an amplicon containing our barcode of interest that also spans the recombination junction (“Recomb jxn” above).
For the PTEN library, the barcode is located behind the EGFP-PTEN ORF, so we have to amplify across the whole thing. We did this using a forward primer located in the part of the landing pad present prior to recombination (such as KAM499, found in the Tet-inducible promoter and shown in red as the “Forward Primer”), as well as a reverse primer located behind the barcode associated with the PTEN coding region (KAM501 in this case).
The above is a simplistic representation. The forward primer / KAM499 is indeed just the sequence needed for hybridization, since we aren’t going to do anything else with this end of the amplicon. On the other hand, we’ll add some extra sequence to the reverse primer to help us with the next steps: in this case, a nucleotide sequence that wasn’t present before, so that we can specifically amplify this amplicon in the next step. The actual amplicon will thus look something like below:

OK, so the above amplicon is huge; too big for efficient cluster generation. Thus, we’ll now use that amplicon to make a much smaller, Illumina-compatible second amplicon. We’ll also include an index of degenerate nucleotides “nnnnnnnn” that can be used to distinguish different amplicons from each other when we mix multiple samples together before doing the actual sequencing step. This second amplicon looks something like this:

This time, the forward primer has both the hybridizing portion (in blue-purple) as well as one of the cluster generator sequences, shown in light blue. At the complete other end, you can see the “nnnnnnnn” index sequence in indigo, followed by the other cluster generator sequence in orange. Please note: here, the index is a KNOWN sequence, like GTAGCTAC or GATCGAGC. It’s just that each sample will get a different KNOWN sequence, so it was simpler to denote it with “n”s for the purpose of this explanation.
There’s more in the above map, though. We’ll likely want to do paired-end sequencing of the barcode, so we’ll give the Illumina sequencer two read primers: read 1 (in red), which reads through the barcode in the forward direction, and read 2 (in green), which goes through the barcode in the reverse direction. We will also need to sequence the index, though we’ve tended to only do that with one primer. This primer is Index 1, colored in yellow.
For the actual primer sequences and everything, look at Supplementary Table 7 of my 2018 Nature Genetics paper.
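And for a sense of what happens after the sequencing comes back, here’s a minimal sketch of tallying barcodes from a read 1 FASTQ file. The flanking constant sequence and filename below are made-up placeholders; the real constant sequence depends on the primers in that table.

import gzip
from collections import Counter

FLANK5 = "ACGCGT"  # placeholder constant sequence directly 5' of the barcode
BC_LEN = 18        # the barcode is 18 degenerate nucleotides

counts = Counter()
with gzip.open("sample_R1.fastq.gz", "rt") as handle:  # hypothetical file
    for i, line in enumerate(handle):
        if i % 4 != 1:   # in FASTQ, the sequence is line 2 of every 4-line record
            continue
        seq = line.strip()
        pos = seq.find(FLANK5)
        if pos == -1:
            continue     # flank not found; skip the read
        barcode = seq[pos + len(FLANK5): pos + len(FLANK5) + BC_LEN]
        if len(barcode) == BC_LEN:
            counts[barcode] += 1

for bc, n in counts.most_common(10):
    print(bc, n)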
Firmware flaw in recent Stirling SU780XLE -80°C freezers
[This post is a follow-up to my previous post on this subject]
Wow, I never thought I’d learn so much about a freezer company, but here we are. I took a deep dive on this issue with the Stirling SU780XLE ULT freezers. It’s still second-hand information (through company reps and people on social media), and I don’t know if I believe everything about the explanations I’ve received (for example, I count roughly 8 instances of freezer firmware getting stuck across my various contacts, while I vaguely remember a company rep saying this has happened <= 10 times), but this is my understanding of the situation:
The issue is indeed a firmware problem, and it affects all units produced between ~Aug 2019 and ~Sep / Oct 2020. Aug 2019 is when they switched one of their key electronic components to a Beagle Bone (apparently a circuit board akin to a Raspberry Pi). Part of its job is to relay messages from one part of the circuitry to another. The firmware they wrote for it had a flaw where, in certain circumstances that the company still does not understand, one part of the relay no longer works, and the other part of the relay just keeps piling up commands that go unexecuted. So that’s the initial issue. There is also supposed to be “watchdog” code that recognizes these types of instances, but this was not working either. Thus, the freezer becomes stuck in the last state it was in before the relay broke. If it was in a “run the engine to cool down the freezer” mode, then it would have been stuck in a state that kept things cold. If it was in a “stay on but don’t do anything b/c it’s cool enough” mode, then it would have been stuck in a state where it didn’t cool the freezer at all. This is the state my freezer was stuck in**.
[** I’m actually not 100% convinced of this. My freezer stopped logging temperatures / door openings, etc. at the end of August. If I look at the number of freezer hours, it says ~8,000 hrs (consistent with Oct ’19 through Aug ’20) rather than the ~10,000 hrs expected for Oct ’19 to Nov ’20. It is definitely within the realm of possibility that my Stirling has been a zombie for the last 70+ days, and either slowly reached 5°C over time or had a second event over the last weekend that triggered the thaw in its susceptible state.]
It sounded like they had seen numerous freezers get stuck in the former mode, which was less devastating since it didn’t result in freezer thawing and product loss. They had seen one freezer get stuck in the catastrophic mode before mine, back on Aug 20th. They brought it back to their workspace and couldn’t recreate the failure. They could, however, artificially break the relay to reproduce the condition, allowing them to create additional firmware that actually triggers the “watchdog” (and other failsafes) to reset the system when it senses that things have gone wrong, even though they still don’t know the original cause of the issue. The reason the freezers produced after Sep / Oct 2020 are unaffected is that these were already programmed with the new firmware. The firmware I had when it encountered the problem was 1.2.2, while it became 1.2.7 after the update.
Freezers made / distributed(?) within the last month were pre-programmed with the updated firmware, and are supposedly not susceptible to the GUI freezes. Apparently they’re having trouble updating the firmware in the existing units b/c the update requires a special 4-pin programming unit that is in short supply due to the pandemic.
I won’t get into the details of my experience with Stirling (it apparently even includes a local rep who contracted COVID). They completely dropped the ball in responding, and they know it (and I’m sure they regret it). What will remain a major stain on this situation is that THEY HAVE KNOWN ABOUT THIS FLAW FOR MONTHS AND DID NOT WARN ANY OF THEIR CUSTOMERS. I received an email ~8 days ago saying they were going to schedule firmware updates to “improve engine performance at warmer set points, enhance inverter performance and augment existing functionality to autonomously monitor and maintain freezer operation”. Other customers with susceptible units did not even receive this vague and rather misleading email. My guess is that they chose to maintain an untarnished public perception of their company over the well-being of the samples stored by their customers. My suspicion is that their decisions may have been exacerbated by the current demand for -80°C freezers for the SARS-CoV-2 mRNA vaccine cold-chain distribution (Stirling has a major deal with UPS, for example), though there is no way I will ever confirm that.
After my catastrophic experience, they bungled their response, and only jumped to action after I tweeted about my experience. I really wanted to like this company, as they are local and not one of the science supply mega-companies (e.g. ThermoFisher). My fledgling lab is still out almost $3k in commercial reagents, and many of my non-commercial reagents and samples were compromised. They did make a special effort to update my firmware today and answer my questions, but I still can’t help but feel like a victim of poor manufacturing and service. All of the effort I’ve put in over the last few days has been to get some answers and help others avoid the situation I was put in.
I’ll post any updates to this page if I learn any more, but I’m now satisfied with my understanding of what happened. Now back to some actual science.
Stirling -80°C Freezer Failure
I’m getting really tired of wasting time and brain-power on this, but unlike buying regular consumer goods (like the items on Amazon with hundreds to thousands of reviews), buying and dealing with research equipment is subject to really small sample sizes, so the more information that’s out there, the better. Thus, I’ll keep this page as a running log of my experience with Stirling’s XLE Ultra Low Temperature (aka -80°C) freezer.
TL;DR -> My 1-year-old freezer failed in the most catastrophic way: the firmware froze and displayed -80°C while the contents slowly thawed; it had reached 5°C by the time I noticed it wasn’t working. No alarms went off, as the firmware had crashed and was frozen (again, displaying -80°C the whole time). While I’ve had no issue with their mechanics, I suspect their firmware is potentially critically flawed.
Part 1) Discovering that the freezer had failed: I purchased a Stirling Ultracold SU780XLE a little over a year ago (~October 2019), shortly after I started up my lab at CWRU. I’ve been in labs that had poor experiences with the ThermoFisher TSU series freezers, and the reviews for the Stirling seemed pretty good on twitter. Furthermore, CWRU has a rebate program with Stirling due to their energy efficiency, and probably also because they are local (they are based in Ohio).
I went into the lab last Sunday evening (Nov 8) to do some work. I went to retrieve something from the Stirling -80°C, and saw that the usual ice on the front of the inner doors was gone. I opened up the inner doors and looked at the shelves, and there was water pooled on every shelf. I looked at some of the most recently preserved cryovials of cells we had temporarily stored on one of the shelves, and they were all liquid. Things had clearly thawed inside the freezer. I closed the outer door and looked at the screen at the top, and it was displaying -80°C. The screen is actually a touchscreen, so I tried to flip through its settings, but it was completely unresponsive to my touch. It became pretty clear to me in that moment that the freezer firmware had crashed with the screen displaying -80°C. Ooof.

I pulled the freezer out from the wall, found the on/off switch, and switched it to OFF. The first time, I actually flipped it back to ON too soon, as the screen never reset. I’m guessing there must be some short-term battery / capacitor that allows the freezer to keep running through momentary interruptions in power. So I then set it to OFF, waited for the screen to go blank, and then set it back to ON. After booting up, the screen displayed 5°C. So there we go. It had indeed been stuck on that screen, and rebooting the firmware got it to show the real temperature again. Which is a VERY BAD real temperature.

I immediately emailed Stirling (email timestamped Sun, Nov 8, 7:37 PM). I received a response from a customer service representative Mon, Nov 9, 8:01 AM saying “I’m sorry to hear that you are having issues.” and that they were referring me to the service dept. Got an email from the Stirling service department Mon, Nov 9, 8:39 AM asking for more information and a picture of the device’s service screen. I replied to this email with all requested information Mon, Nov 9, 10:43 AM. I got an email telling me I was “Incident-7576” on Mon, Nov 9, 11:00 AM. Complete radio silence from them since, as of writing this section of the post ~72 hours later (Thurs, Nov 12, ~11:00 AM), even after I sent them a pretty strongly worded email yesterday at 6:00 AM. I’ll follow up on my continued experience interacting with the company in section 3 of this post.
Otherwise, the mechanics of the freezer seemed to be fine. It took me about an hour to mop up all of the water and look through my boxes to see what had thawed (which was everything except the 15 mL conicals, which seemed to have enough mass to them to not have fully thawed). I was still very aggravated and in a bit of shock at having to deal with this, but still went about my work. Two hours later, the freezer was back down to -30°C. The next morning, it was back at -80°C. So the reset was clearly sufficient to make the freezer operational* again. (*since it presumably still encodes the same firmware glitch that caused the problem in the first place).
Part 2) Taking stock of my lost items and forming my interpretation of what happened: Over the next couple of days, I had a chance to take stock of everything I had lost in the thaw. Being a new lab (and thus with a ~1-year-old freezer), we didn’t have a ton of items in there, but they were not inconsequential. The commercial reagents were largely competent bacterial cells, which amounted to ~$2,110 of lost material. There were also ~$720 worth of chemicals, which, having gone through a freeze-thaw cycle, are of somewhat questionable potency and will likely need to be purchased again before use in a publication. There were also dozens of cryovials of cell lines made in house, as well as a few cryovials of cells, dozens of tubes of patient serum, and viral stocks for SARS-CoV-2 research either given by other labs or provided by BEI Resources, which will need to be replaced as we have no backups. While there is no monetary value associated with these reagents, the amount of work-time that went into creating them, and that will now go into replacing them, is a major loss.
As a scientist, I think it’s natural for me to try to synthesize all the information I have to piece together what happened. There was no power loss (it was a sunny weekend without any storms, and no other equipment in the lab showed any aberrant behavior). Nobody had gone into the freezer for any extended amount of time, especially since it was over the weekend. The last time I had gone into it was Friday afternoon, when it seemed fine. That said, it is entirely possible it had already crashed by that time. I don’t think I can visually tell the difference between a freezer at -80°C, -40°C, or maybe even -10°C. Frozen looks frozen. In lieu of any alarms or temperature readings provided by the freezer itself, the only visual clue was going to be water from the thawed ice in the freezer, which by that point was going to be too late.
To see if I could figure out when the freezer may have crashed / failed, I tried going back into the freezer log. This is all the information I could glean from the freezer:

So, uh, that history feature wasn’t all that informative, but there were still a couple of points I could glean from looking at it.
1) It goes from -80°C in the data points directly preceding the event to being > 0°C when I restarted it. So it completely stopped logging during the event. This is entirely consistent with the software having crashed, and with it still showing -80°C on the screen while the contents thawed.
2) Uhhhh. I can’t actually figure out what day and time it failed b/c it had apparently logged its most recent operation as August 26th. Clearly it wasn’t August 26th when it failed, since August 26th was 72 days before Fri, Nov 6, which was the last time I had looked in the freezer before the event, when it was clearly still completely frozen. Weirdly, I didn’t have to tell it what day it was after I reset it, so it must have had an internal clock that knew it was Nov 8th upon the reset. So here’s another indication of something glitchy with their firmware.
Ironically, I had a separate low-temperature thermometer plugged into it (a TraceableLIVE® ULT Thermometer, Item#: LABC3-6510), which really isn’t a bad thermometer, but it eats up batteries, and I had run out of disposable AAA batteries (I don’t think it takes a wall plug, which it should, so that it would only need batteries during power outages!), so I was waiting for some rechargeable AAAs to come in from Amazon. TBH, they had already come in a week or two prior, but the freezer had been operating perfectly fine until this point, so charging the batteries and getting the secondary thermometer up and running again wasn’t high on my to-do list. In hindsight, a very naive and critical mistake!
Part 3) Stirling’s response to this:
Thurs, Nov 12, 11:00 AM: So far, it’s been pretty nonexistent. I wrote them an email yesterday (Nov 11) asking: 1) everything I’ve seen tells me this is a catastrophic failure of the freezer itself, so are you going to take responsibility for it? And 2) I’m still quite worried about the freezer’s operation, since the glitch that caused this has not been addressed. I’ve yet to get any non-automated response from them past the most recent email on Nov 9, 11 AM.
Thurs, Nov 12, ~5:00 PM: Tweeting about my experience seemed to have escalated things, as I got two phone calls. The first was from the technician handling my case (“Incident-7576”), who asked if anyone had been in touch with me about scheduling the fix on the previous Monday and Tuesday. I said no, this was the first response I had gotten. I also pointed out that I had emailed him yesterday with some questions. Apparently he had not seen the email. So, a rather poorly managed customer and technical service response.
As soon as I got off the phone, the VP of Global Services called me (this is where I think the tweets likely made a difference). He provided apologies (as expected), but I also got to ask for answers to my specific questions. Here are the things I learned:
1) “We’re not responsible for sample loss”. So they won’t cover anything you lose if the freezer fails and thaws, even if it happened in the most spectacularly bad way, completely due to flaws in freezer design or production that torpedoed its operation.
2) The mechanics are covered for 7 yrs, but the material and labor warranty is only for 2 yrs. This includes things like “door handles and electronics”, with the electronics clearly being the most relevant item here. They offered to extend this warranty to 3 yrs. I don’t think I’m unreasonable to feel like that is a pretty weak gesture, given the way the freezer failed.
3) I’ve had people tell me I should ask for a refund to get it replaced. Well, they don’t do that.
4) Apparently there are three parts to their firmware. One of them runs on the “Beagle Bone”, which they said is responsible for making the real-time connection between the freezer settings and the parts. A quick google search suggests it’s something like this.
The saga continues. Let’s see what the technicians say tomorrow.
Fri, Nov 13th: Causing a stir on twitter apparently kicked things into action. I also put my detective hat on and I think I figured out what was going on. Too much to bury way down here, so I made a new post.