Amplifying from the landing pad

Someone recently asked which primers I use to amplify from the landing pad for sequencing. Answering requires digging back through our published papers, so rather than do this as a one-off email reply only to repeat the exercise later, I figured I’d write a blog post that I can just point people to in the future.

Why is primer design important here? Mainly, you want to avoid amplifying your unintegrated plasmid, since counts of those sequences will be completely unrelated to the genotype-phenotype relationship you’re trying to measure; only the single integrated plasmid that is expressed and driving the phenotype matters, and it should be the only thing being sequenced. We normally achieve this by designing a PCR amplicon across an attR or attL recombination junction. Here’s an example amplicon schematic from Nisha’s Methods in Molecular Biology manuscript.

Amplifying across the attR junction (the near side):

This is relevant if you’re trying to do direct sequencing of your transgene of interest (much like Nisha is doing above), or if you have a barcode in the 3′ UTR of your transgene of interest (or, well, the 5′ UTR, but that’s pretty uncommon). If the amplicon size becomes larger than the limit for Illumina sequencing, then you may have to do a pair of nested PCRs. In this case, the first PCR is performed with a primer recognizing the Tet-inducible promoter in the landing pad (like KAM499). The amplicon, derived only from recombined DNA, is then used as the second-round template for amplifying and subsequently sequencing the barcode. Here’s a screenshot of Fig 4 from the original 2017 NAR paper.

The primer we normally use to bind the Tet-inducible promoter for amplifying across the attR junction is KAM499: GAGAACGTATGTCGAGGTAGGC. If doing direct Illumina amplicon amplification (much like Nisha did, since her amplicon is sufficiently small), then the primer will presumably look something like KAM3748, which hybridizes in a pretty similar location (GCCTGGAGCAATTCCACAACAC) and has the full sequence AATGATACGGCGACCACCGAGATCTACACCGTGGACGGCGCCTGGAGCAATTCCACAACAC. We probably moved the binding site in KAM3748 closer to the attR junction just to make the amplicon ~75 nucleotides smaller than with KAM499, but it’s no big deal either way (especially when doing the nested approach).

Amplifying across the attL junction (the far side):

While the concept of amplifying across the attR junction is universally applicable, the details are not, since the amplicon size will change drastically depending on the size of the transgene of interest. Furthermore, as mentioned above, the barcode typically ends up in the 3′ UTR, which is not a terrible place but is certainly capable of having unintended consequences, like altering steady-state RNA abundance or translation rates. Thus, pretty early on, we switched over to barcoding to the left of the attB site in the recombination plasmid. The idea is that the barcode remains in close proximity to the transgene of interest in the plasmid itself, but lets you amplify across the attL junction in recombined genomic DNA, which ends up being far more universal between experiments. Here’s a screenshot demonstrating this in Fig 1 from our 2024 PLoS Pathogens paper, where we use it to identify samples within a library of barcoded ACE2 variants.

What primers do we use here for these amplifications?

The reverse primer tends to be something like the primer-binding / hybridizing sequence in KAM4362 (ATGTGCTGCAAGGCGATTAA). In actuality, since this strategy allows you to forgo the nested PCR amplification, KAM4362 and related primers tend to carry an index (allowing for multiplexed custom Illumina sequencing) and the P5 adapter, with a full sequence like this: AATGATACGGCGACCACCGAGATCTACACCCTCGCAATTatgtgctgcaaggcgattaa

So that’s a binding site that’s likely already in your attB plasmid (although you should certainly check). How about amplifying from the landing pad itself? Well, this will change a bit depending on the landing pad, since we typically use a sequence present in the most 5′ transgene encoded by the landing pad prior to recombination, such as mTagBFP2 in the original 293T AAVS1 Clone4s and the popular LLP-iCasp9-Blast Clone 12s, or Bxb1 integrase in the G542A Clone3s we typically use in the lab. For binding mTagBFP2, we typically hybridize with this sequence (CTCGACCACCTTGATTCTCATGG), for a full primer sequence like KAM2162 (CAAGCAGAAGACGGCATACGAGATGGATCACGTctcgaccaccttgattctcatgg). For binding Bxb1 integrase, we typically hybridize with this sequence (GGCCTCCTCTTTCTGTCGAA), for a full primer sequence like KAM4364 (CAAGCAGAAGACGGCATACGAGATAGATTGCGAGggcctcctctttctgtcgaa).
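
To make the anatomy of these full-length primers explicit, here’s a minimal R sketch that splits them into the standard Illumina flow-cell adapter, the sample index, and the 3′ hybridizing sequence. The P5/P7 adapter sequences are the standard Illumina ones; the index boundaries are simply inferred from the primer sequences above.

# Standard Illumina flow-cell adapters: P5 (as in KAM3748 and KAM4362) and
# P7 (as in KAM2162 and KAM4364).
p5 <- "AATGATACGGCGACCACCGAGATCTACAC"
p7 <- "CAAGCAGAAGACGGCATACGAGAT"

# Whatever sits between the adapter and the hybridizing region is the index.
decompose_primer <- function(full, adapter, binding) {
  full <- toupper(full); binding <- toupper(binding)
  stopifnot(startsWith(full, adapter), endsWith(full, binding))
  substr(full, nchar(adapter) + 1, nchar(full) - nchar(binding))
}

# KAM2162: P7 adapter + index + mTagBFP2-binding sequence.
decompose_primer("CAAGCAGAAGACGGCATACGAGATGGATCACGTctcgaccaccttgattctcatgg",
                 p7, "ctcgaccaccttgattctcatgg")
# Returns "GGATCACGT", the sample index.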


Qubit dsDNA broad range

I’ve been spending more time in the lab recently, which has allowed me to do some hands-on things that I previously had to advise people on without ever having done them myself. This includes something as mundane as using the Qubit dsDNA broad range kit to quantify some plasmid DNA prep concentrations. Here are some notes for people in my lab to keep in mind in the future.

  1. Standard 1 is just buffer. And it essentially gives the same amount of background signal, regardless of the volume of buffer or working solution (WS) added (within reason). For example:
    • 10uL of S1 in 190uL WS: 41.55 RFU
    • 10uL of S1 in 90uL WS: 44.72 RFU
    • 5uL of S1 in 95uL WS: 45.32 RFU
  2. The RFU (relative fluorescence units) that are reported are based on DNA concentration, rather than total DNA input.
    For example, with the S2 standard (at 100 ng/uL):
    • 10uL of S2 in 190uL WS = 1000 ng total, 5 ng/uL = 2421.87 RFU
    • 10uL of S2 in 90uL WS = 1000 ng total, 10 ng/uL = 4914.62 RFU
    • 5uL of S2 in 95uL WS = 500 ng total, 5 ng/uL = 2546.59 RFU
  3. A 200uL final volume in the tube is overkill. Looking at those standards, we get nearly the exact same RFU numbers when scaled down to 100uL. I mostly view this as ThermoFisher wanting you to burn through your reagent faster so you spend more $$$.
  4. I don’t have any data on hand to show this, but I swear that a year or two ago, somebody in my lab showed that you don’t have to use ThermoFisher-branded Qubit tubes to use the Qubit. I don’t quite remember what they used (whether it was 200uL PCR tubes or some off-brand 0.5mL tubes more akin to the Qubit tubes). This makes perfect sense, since all the tube needs to do is let light pass through. So ya, I imagine that most clear tubes are fine; you’ll of course want to make sure you’re using the same tubes for both the standards calibration and for reading your own unknown samples.
  5. If you do want to scale down your WS volumes to conserve reagent, it gets a little tricky, since the Qubit assumes you’re doing the standards calibration step as recommended by ThermoFisher (ie. for S2, it assumes you’re doing 10uL of S2 in 190uL of WS, a 20x dilution of the 100 ng/uL stock yielding an in-tube concentration of 5 ng/uL). So while it lets you adjust the volume of sample you’re putting in (ie. you can tell it a volume between 1uL and 20uL), it doesn’t let you do the same for the standards, even if the concentrations end up being the same (like in the 5uL S2 in 95uL WS case). If doing the 5:95 route, I suggest doing the same thing for both the standards and your samples, and just “telling” the Qubit machine that you’re doing 10uL. Then, if you need to use a smaller amount b/c the DNA is too concentrated (eg. 2uL instead of 5uL), apply that dilution factor to the “told volume” (eg. tell it 4uL, since you told it 10uL for the 5uL baseline). See the short sketch after this list for the arithmetic.
  6. Probably the most important one: I saw a pretty huge discrepancy between what the spec told me (measured with a Take3 plate on the BioTek) and what the Qubit was saying. For most samples, it was roughly a 2-3 fold difference (eg. 53 ng/uL by spec, 24 ng/uL by Qubit), although there were also some samples that differed by 4-5 fold (eg. 57 ng/uL by spec, 12.8 ng/uL by Qubit). I have no clue why this discrepancy exists. Some of it can probably be explained by contaminants affecting the spec readings, but that definitely doesn’t explain the whole thing.
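
Since the “told volume” trick in point 5 is easy to get backwards, here’s a minimal R sketch of the arithmetic (the function name is mine, nothing official):

# If standards were run as 5uL into 95uL WS but "told" to the Qubit as 10uL,
# any sample volume should be scaled by that same 2x factor before telling.
told_volume <- function(actual_uL, actual_standard_uL = 5, told_standard_uL = 10) {
  actual_uL * told_standard_uL / actual_standard_uL
}

told_volume(5)  # 10uL: the baseline case
told_volume(2)  # 4uL: the "2uL instead of 5uL" example above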

BCA protein quantitation

I guess the jury is still out on exactly how well the Qubit Protein Assay works. Now that I’m looking into the details, it looks like the Qubit Protein Assay (Q33211) that we have is incompatible with RIPA buffer, which is likely the buffer people had tried using it with in the past. The BCA assay, however, is perfectly compatible with RIPA buffer, so we still use it reasonably often. To make sure everyone in the lab is on the same page, we may as well look at some real data and walk through how I view it. This will be based on some data that Olivia S generated a month or so ago.

Link to the code below:
https://colab.research.google.com/drive/16jIz0Qq_jvb7R1s-fIJW804ffahZDHPG?usp=sharing

Here’s what the raw values of the standard curve looked like:

The value of the background is shown as the datapoint on the y-axis. Clearly, the lowest standard dilution deviates from the linearity of the rest of the values, likely b/c a larger fraction of its signal is background. Accordingly, if we subtract out the background value, we might be able to salvage that dilution. That is indeed the case.

Using those values, I’ve now fit a linear model, which allows us to predict the concentration of a protein sample from its absorbance, relative to the standard curve. The points and line in red denote the values calculated from this linear model. The real standard curve values straddle it pretty well, which is good.
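
For anyone who wants to see the shape of that analysis without opening the notebook, here’s a minimal R sketch, with hypothetical column names and placeholder numbers standing in for the real standard curve values (which live in the linked Colab):

# Hypothetical BSA standard curve: concentrations in ng/uL, with placeholder
# A562 readings; the real values are in the linked notebook.
standards <- data.frame(
  conc = c(0, 125, 250, 500, 1000, 2000),
  a562 = c(0.09, 0.15, 0.22, 0.37, 0.66, 1.25)
)

# Background-subtract using the zero-protein standard, then fit a line.
standards$a562_corr <- standards$a562 - standards$a562[standards$conc == 0]
fit <- lm(a562_corr ~ conc, data = standards)

# Invert the model: predict an unknown's concentration from its absorbance.
predict_conc <- function(a562_corr, fit) {
  (a562_corr - coef(fit)[["(Intercept)"]]) / coef(fit)[["conc"]]
}
predict_conc(0.5, fit)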

Cool, so what do the experimental samples of unknown concentration look like based on this model? Since Olivia did half-log dilutions of these, we can actually see how the predicted values change based on the dilution tested and its resulting absorbance value.

So for all of the samples, the 33-fold dilution gave erroneous values. Otherwise, 10 uL of undiluted sample in 190 uL BCA working solution, or 10 uL of 3- or 10-fold dilutions of that lysate in 190 uL BCA working solution, gave values that were relatively similar. Taking the geometric mean of those, we calculated values of 1184, 1894, and 2339 ng/uL for these lysates. Seems reasonable enough.
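
The geometric mean step is a one-liner in R; here’s a sketch with made-up dilution-corrected estimates for a single lysate:

# Hypothetical per-dilution concentration estimates (ng/uL) for one lysate,
# each already multiplied by its dilution factor.
est <- c(undiluted = 1150, dil3x = 1250, dil10x = 1160)
exp(mean(log(est)))  # geometric mean across dilutions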

There we go. A primer on the data analysis portion of the BCA assay, or anything else that requires using dilutions of a standard of known concentration to determine the likely concentrations of unknown samples.

Example Barcoded Variant Library Counts

As part of the AVE-ETS, we had been discussing barcoded variant libraries. I don’t quite remember the context, but I think I suggested we look at some real sequencing data of a barcoded variant library, and I offered to dig up the PTEN VAMP-Seq library data. Of course, I had other things to do in the month following this statement, so I didn’t look for those files until the long weekend immediately prior to the next meeting. My original plan was to just find this blog post [https://www.matreyeklab.com/simulating-sampling-during-recombination/1175/] and say we use that, but then I realized that those data tables were for variant frequencies, not barcode counts. All of my old data from my postdoc was on flash drives in the office at work, and I didn’t feel like making the commute in just for that, so I decided to try to find and re-process the raw data uploaded to GEO / SRA.

First, a number of steps that aren’t explicitly coded in this markdown file.

  1. I downloaded the files from the relevant GEO sites. The first dataset from the NatGenet paper can be found here [https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE108727]. The second dataset from the Genome Med paper where we published on a “fill-in” library we created to reintroduce some missing variants can be found here [https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE159469].
  2. I counted the barcodes using the original method we had used, which was just having Enrich2 do it, with a minimum quality score filter of 30.
  3. I imported the data into R to do the analyses below.
library(tidyverse)
theme_set(theme_bw())
theme_update(panel.grid.minor = element_blank())
first_1 <- read.delim(file = "Data/SRR6437841.tsv.gz", sep = "\t")
first_2 <- read.delim(file = "Data/SRR6437842.tsv.gz", sep = "\t")

# Merge the two sequencing runs by barcode, then average the counts
first <- merge(first_1, first_2, by = "X")
first$count = rowMeans(first[,c("count.x","count.y")])

ggplot() + scale_x_log10() + #scale_y_log10() +
  geom_histogram(data = first %>% filter(count > 3), aes(x = count)) + geom_vline(xintercept = 100)
As a rough but effective approach, I like to look at the histogram of read counts to find the relative minimum between the population containing counts of 1 (largely erroneous barcodes from sequencing error) and the next non-zero population (assuming the sample was sequenced to enough depth). For this sample, since it was sequenced so deeply, that minimum is around a count of 100.

first_filtered <- first %>% filter(count > 100)

#first_key <- read.delim(file = "Data/GSE108727_PTEN_barcodeInsertAssignments.tsv", sep = "\t", header = F)
first_key <- read.delim(file = "Data/first_key.tsv", sep = "\t", header = F)
colnames(first_key) <- c("X","variant")

first_df <- merge(first_filtered, first_key, by = "X", all.x = T)

First_lib_histogram <- ggplot() + scale_x_log10() + #scale_y_log10() +
  geom_histogram(data = first_df, aes(x = count), fill = "grey90", bins = 50) +
  geom_histogram(data = first_df %>% filter(!is.na(variant)), aes(x = count), bins = 50) + 
  geom_vline(xintercept = 100) + 
  labs(x = "Num of reads", y = "Count", title = "Lib1: Grey, all barcodes; Black, subassembled variants") +
  theme(panel.grid.minor = element_blank()) + 
  NULL; First_lib_histogram
ggsave(file = "Output/First_lib_histogram.pdf", First_lib_histogram, height = 4, width = 5)

Some notes for the above plot. The grey bars of the histogram denote counts for all barcodes that were observed. The black bars are barcodes that were linked to a particular PTEN coding variant via PacBio subassembly.

Now, let’s look at the next library.

second_1 <- read.delim(file = "Data/SRR12818211.tsv.gz", sep = "\t")
second_2 <- read.delim(file = "Data/SRR12818212.tsv.gz", sep = "\t")

second <- merge(second_1, second_2, by = "X")
second$count = rowMeans(second[,c("count.x","count.y")])

ggplot() + scale_x_log10() + #scale_y_log10() +
  geom_histogram(data = second %>% filter(count > 1), aes(x = count)) + geom_vline(xintercept = 10)
As before, I look for the relative minimum between the count-1 population and the next non-zero population; for this sample, it's around 10.

second_filtered <- second %>% filter(count > 10)

second_key <- read.delim(file = "Data/Second_key.tsv", sep = "\t", header = T)
colnames(second_key)[2] <- "variant"

second_df <- merge(second_filtered, second_key, by = "X", all.x = T)

Second_lib_histogram <- ggplot() + scale_x_log10() + #scale_y_log10() +
  geom_histogram(data = second_df, aes(x = count), fill = "grey90", bins = 50) +
  geom_histogram(data = second_df %>% filter(!is.na(variant)), aes(x = count), bins = 50) + 
  geom_vline(xintercept = 10) +
  labs(x = "Num of reads", y = "Count", title = "Lib2: Grey, all barcodes; Black, subassembled variants") +
  theme(panel.grid.minor = element_blank()) + 
  NULL; Second_lib_histogram
ggsave(file = "Output/Second_lib_histogram.pdf", Second_lib_histogram, height = 4, width = 5)

I have no idea why it looks bimodal, btw; probably a problem with library mixing.
write.table(file = "Output/First_df.tsv", first_df, quote = F, row.names = F)
write.table(file = "Output/Second_df.tsv", second_df, quote = F, row.names = F)

For anyone who wants to test this, the GitHub repo for the above scripts and data can be found here: https://github.com/MatreyekLab/Barcodes

Barcoding the epilepsy vector

We have a redesigned attB vector purposefully made to carry and barcode a bunch of epilepsy-associated membrane protein genes (and, to a lesser extent, cytoplasmic and secreted protein genes). We’ll eventually need to make a number of barcoded libraries from it, so we’ve been working out the kinks of barcoding at a rare-cutter site. I’m realizing that if I want things to scale (whether across labs, or even within the lab), it probably makes sense to make things as easy as possible for the next person to pick up. So, I’m showing my work in terms of how barcoding can be analyzed with this vector.

Alright, so to QC the barcoding vector, we’re trying two things: an initial Plasmidsaurus run (quick turnaround; hundreds to low thousands of reads; sufficient to estimate unbarcoded contamination), and a submission to Genewiz / Azenta (formerly Brooks Life Sciences) for their 2x250nt MiSeq Amp-EZ service (2-week turnaround; hundreds of thousands of reads; likely enough to fully analyze small barcoded libraries). I’ll talk about the Plasmidsaurus reads another day.

Today’s task is figuring out how many barcodes seem to exist per prep, using the Amp-EZ data.

First was pairing the reads.

% pear -f BC-L062-G2_R1_001.fastq.gz -r BC-L062-G2_R2_001.fastq.gz -o L062_G2
Next was treating the sequence next to the barcode as adapters to identify the barcodes themselves.
% cutadapt -a ATAAGATCTGGTCCTCTGATCCGA...CTATCGGTAACGCATTCGCC -o G2_linked.fastq L062_G2.assembled.fastq

% sh Tally_sequences.sh G2_linked.fastq G2_linked_tally.csv

At this point, I have a csv file, which I imported into R since I’m more nimble there (obviously one could do something similar in Python). If the adapters were indeed found, then the resulting read is returned as the 20nt barcode sequence. If the adapters weren’t found (eg. if the plasmid was unbarcoded, or if the read had so many errors that the adapter wasn’t identified), then the full-length sequence is returned. Thus, I can subset for reads that were 20nt (barcoded) vs those that were not (unbarcoded), resulting in the following histogram based on that designation.
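
Here’s a minimal R sketch of that subsetting step, assuming the tally csv is simply sequence,count pairs (the column names are mine):

# Tally file from Tally_sequences.sh, assumed to be sequence,count pairs.
tally <- read.csv("G2_linked_tally.csv", header = FALSE,
                  col.names = c("sequence", "count"))

# Reads trimmed to exactly 20nt had both adapters found, so are treated as
# barcodes; everything else falls into the unbarcoded / unidentified bin.
tally$barcoded <- nchar(tally$sequence) == 20

# Fraction of reads (not unique sequences) that carried a barcode.
sum(tally$count[tally$barcoded]) / sum(tally$count)

# Apply the minimum-count threshold to call likely true barcodes.
barcodes <- subset(tally, barcoded & count >= 7)
nrow(barcodes)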

For AEZ035_G1, 83% of the reads were barcoded, whereas 17% were not (including 11.5% of reads that were clearly the unbarcoded template). While we could get a bit fancier with things like error-correcting the barcodes (eg. to accommodate sequencing error, which presumably yields low-count sequences a small Hamming distance away from true barcodes with much higher counts), for today’s purpose I’m just going to use a minimum threshold value, such as 7, to distinguish likely true barcodes from likely erroneous non-barcodes in the “barcoded” subset. This yields a plot like so:

Great, so this relatively simple analysis scheme seems to indicate that there are ~5,400 unique barcodes in this sample. We also repeated the process on another independently derived sample. What does that look like?

For this independent barcoding attempt, 92% of all reads were barcoded, and the remaining 8% were something else (with 3.5% being clearly unbarcoded plasmid).

Based on the same analysis scheme, there are ~5,100 unique barcodes in this sample. So despite the slight difference in the fraction of unbarcoded reads using Nidhi’s Gibson barcoding protocol, the total number of unique barcodes was similar.

Finally, since we have two independent barcoding attempts with a highly diverse oligo (N20, so a potential diversity of ~1.1 trillion barcodes), we would expect very little overlap in barcodes between the two datasets. Does that actually play out? Well, here are the actual results: G1 barcoded library only, 5439; G2 barcoded library only, 5095; barcodes found in both, 34. So, pretty non-overlapping… that’s good! The vast majority of the overlapping barcodes had reasonably high counts in the G2 library, but counts barely above the threshold filter (9, 10, 11, 12) in the G1 library. Thus, these are likely due to sequencing errors, which could be removed either by being more stringent or by taking a more involved barcode error-correcting approach (perhaps to come in a future blog post).
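
The overlap check itself is just set operations in R; a sketch, assuming the thresholded tallies from the two preps were kept as data frames like the one above (g1_barcodes and g2_barcodes are hypothetical names):

# Thresholded barcode tables from the two independent preps.
g1 <- g1_barcodes$sequence
g2 <- g2_barcodes$sequence

length(setdiff(g1, g2))    # barcodes unique to G1
length(setdiff(g2, g1))    # barcodes unique to G2
length(intersect(g1, g2))  # barcodes found in both preps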

Plasmidsaurus decision tree

Plasmidsaurus whole-plasmid nanopore sequencing is a fantastic service. As of today (3/25/25), we’ve sent them 1,357 samples(!!), and they’ve been worth essentially every penny. But there are a bunch of different reasons to use the service, and I wanted to make sure everyone in the lab was on the same page about which reasons are well justified vs which are more arguable, so I made the following decision tree (it also codifies our lab policy of sequencing plasmids from unverified sources before working with them). For anyone in the lab: let me know if you think we should make any specific changes to it (I tried to remember what we discussed in lab meeting, but I may have forgotten something).

Pymol figures

I end up having to Google the relevant commands every time I need to make publication-quality figures in PyMOL, so I’m just going to note them here to save myself some time.

  1. set bg_rgb, white

    This sets the background to white. Usually a safe bet for any image meant for a normal presentation (ie. slides with white backgrounds) or a publication (where it’s text and images against a white page).

  2. set ray_trace_mode, 1

    This enables the “black outline” representation, which I think looks a little nicer than the default.

  3. ray [Some number, usually between 900 and 1500]

    This makes a static fully-rendered image that is much higher quality than the fast render in the interactive interface.

See above for an example of what a resulting image looks like.
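
Put together, a minimal .pml script for a typical figure might look like the following (the png command, which writes the rendered image to disk, is my addition here):

# white background for slides and publications
set bg_rgb, white
# enable the black-outline render style
set ray_trace_mode, 1
# ray-trace the current view at 1200 px wide
ray 1200
# save the rendered image
png figure.png, dpi=300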

When iCasp9 doesn’t kill

iCasp9 as a negative selection cassette is amazing. Targeted protein dimerization with AP1903 / Rimiducid is super clean and potent, and the speed of its effect is a cell culturist’s dream (cells floating off the plate in 2 hrs!). It really works.

But with enough datapoints, sometimes it doesn’t. I have three recorded instances of email discussions with people who mentioned it not working in their cells. First was Jeff in Nov 2020 with MEFs. Then Ben in June 2021 with K562s. And Vahid in July 2021 with different MEFs. It’s very possible there are one or two more in there that I missed with my search terms.

Reading those emails, it’s clear that I had already put some thought into this (even if I can’t remember doing so), so I may as well copy-paste what some of them were:

1) Could iCasp9 not work in murine cells, due to potential species-based sequence differences in downstream targets? The answer seems to be no, as a quick Google search yields a paper saying “Moreover, recent studies demonstrated that iPSCs of various origin including murine, non-human primate and human cells, effectively undergo apoptosis upon the induction of iCasp9 by [Rimiducid].” https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7177583/

Separately, after the K562s (human-derived cells) came into the picture:

This is actually the second time this subject has come up for me this week; earlier, I had a collaborator working in MEF cells note that they were seeing slightly increased but still quite incomplete cell death. That really made me start thinking about the mechanism of iCasp9-based killing: chemical dimerization and activation of caspase 9, which presumably cleaves caspases 3 and 7, which in turn cleave the targets that actually cause apoptosis. So this is really starting to make me think that perhaps those downstream switches aren’t always available to be turned on, depending on the cellular context. In their case, I wondered whether the human caspase 9 may not recognize the binding / substrate motif in murine caspase 3 or 7. In yours, perhaps K562s are deficient in one (or both?) of those downstream caspases?

Now for the most recent instance, which happened in the lab rather than by email: it was recently brought up that a particular landing pad line we sometimes use (HEK293T G417A1) apparently has poor negative selection. John and another student each noticed it separately. Just so I could see it in a controlled, side-by-side experiment, I asked John if he’d be willing to do that experiment, and the effect was convincing.

So after enough attempts and inadvertently collecting datapoints, we see the cases where things did not go the way we expected. Perhaps all of these cases share a common underlying mechanism, or perhaps they all have unique ones; we probably won’t ever know. But there are also some potentially interesting perspective shifts (eg. a tool existing only for a singular practical purpose morphing into a potential biological readout), along with the practical implications (ie. if you are having issues with negative selection, you are not alone).

This is the post I will refer people to when they ask about this phenomenon (or what cell types they may wish to avoid if they want to use this feature).

Spec comparisons

Well, I was going to talk about some of these experiments during lab meeting, but why make a PowerPoint or Google Colab link people won’t follow when I can write it up as a blog post?

Regardless, we’ve recently been looking at how our various possible methods of spectrophotometry compare.

  1. The “Amazon Spec” purchased for $235 back in November 2020.
  2. A ThermoFisher NanoDrop in a departmental common room (I don’t actually know which model, as I’ve never used it).
  3. The BioTek Synergy plate reader, either… A) with 200uL of bacteria pipetted into a flat-bottom 96-well plate, or B) using their “BioCell”, which is a $290 cuvette that fits onto one of their adapter plates. I mistakenly labeled this one as “BioCube” in the plots, but they probably should have just named it that in the first place, so I don’t feel too bad.

To test the methods, Olivia sampled bacterial optical densities while a batch of E. coli was growing out to make competent cells. Thus, the different densities in the subsequent data correspond to different timepoints of the same growing culture. Each timepoint was measured with all four methods.

Well, all of the methods correlated pretty well, so no method was intrinsically problematic. I’m not sure if it was the settings of some automated absorbance calculation, but the BioCell numbers were off by an order of magnitude (the BioCell data also had a clear outlier). The Amazon spec and NanoDrop generally gave similar values, although the NanoDrop’s were comparatively slightly higher.

The plate reader option was also perfectly fine, although it required more back-end math to convert absorbance values to actual optical density. The graph below is also not raw data, as a media-only absorbance had to be collected and subtracted to yield it.

Rather than try to figure out the path length and derive the conversion from first principles, I just used the above dataset to create a calibration for “nanodrop-esque optical density”. (Note: there was a second, independently collected set of data I added for this analysis.) Here, the goal was to use the raw values from the plate reader output, so people can do the conversion calculation on the fly.

Say you have a particular nanodrop-esque A600 of 0.5 in mind. The formula to convert to plate reader units is 0.524 * [nanodrop value] + 0.123, which in this case gives 0.385. That checks out with the linear model line shown above.

Or, if you already have raw plate reader values and want to convert to nanodrop-esque values, the formula is 1.79 * [plate reader value] - 0.2. Here, let’s pretend we have an absorbance value of 0.3, which converts to a nanodrop-esque value of 0.338. So perhaps that’s a decent raw plate reader value to aim for with nearly grown bacterial cultures during the chemically competent cell generation process.
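
Here are those two conversions as R helpers (coefficients copied from the fits above; they’re rounded, so the third decimal may differ slightly from the numbers quoted):

# Convert a nanodrop-esque OD600 into the equivalent raw plate reader value.
nanodrop_to_platereader <- function(od) 0.524 * od + 0.123

# Convert a raw plate reader absorbance into a nanodrop-esque OD600.
platereader_to_nanodrop <- function(a) 1.79 * a - 0.2

nanodrop_to_platereader(0.5)  # ~0.385
platereader_to_nanodrop(0.3)  # ~0.337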

Lastly, it’s worth noting how surprisingly large the dynamic range of spec readings of bacterial cultures seems to be. Since we mostly handle mid-to-late log phase or saturated / stationary cultures, we’re used to values in the 0.2 to 1.2 range, but the log-scale plots above suggest we can also detect cultures reasonably well down in the 0.01 to 0.1 range.

HEK cell small molecule toxicities

I’ve now done a *bunch* of kill curves with HEK cells in various forms (WT HEK cells, and single or double landing pad cells). Here’s a compendium of the observed toxicities of serial dilutions of various small molecules in HEK cells not engineered to be resistant in any way. (This is mostly for my own reference, for when I’m in the TC room and need to check on optimal concentrations.)