I still intend to get around to posting most, if not all, landing pad plasmids to Addgene, but it’s taken me forever to do so. Once those are there, the plasmid sequences will obviously be publicly available. In the meantime, I figured I’d post some of the most common plasmid maps here, so other people can benefit from them (and I don’t have to send specific emails to each person who asks for them). So here are some of the most popular, published plasmids, in GenBank (.gb) format.
We do a lot of molecular cloning in the lab. Standard practice in the workflow is to make your own home-made chemically competent NEB 10-beta bacteria, to be used fresh the day of the transformation. This has worked surprisingly well: we have made ~ 350 different plasmid constructs in the first ~ 600 days. Each time you do a transformation, it’s important to include a positive control (we use 40 ng of attB-mCherry plasmid) to make sure the transformation step performed properly (this helps you interpret / troubleshoot what may have gone wrong if you get few / zero colonies on your actual molecular cloning transformation plates). I’ve done this enough times now to know what is “normal”. Thus, especially for new members of the lab, please reference this plot to see how your own transformation results compare to mine, where I typically get slightly more than 10,000 transformants (note: you may get numbers better than mine, which is a good thing!).
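If you want to turn your colony counts into a number you can compare across days, the arithmetic is just colonies per microgram of plasmid transformed. A minimal sketch (assuming the whole transformation gets plated; you'd add a dilution factor if you only plate part of it):

```python
def transformation_efficiency(colonies, ng_dna):
    """Colony-forming units per microgram of transformed plasmid DNA."""
    return colonies / (ng_dna / 1000.0)

# e.g. ~10,000 colonies from the 40 ng attB-mCherry positive control
print(f"{transformation_efficiency(10_000, 40):.1e} CFU/ug")  # 2.5e+05 CFU/ug
```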
Positive selection for recombined cells is a pretty useful technique when working with the HEK 293T landing pads, but people might not know what concentrations are best. Well, here is that info, all in one place. In each case, the antibiotic resistance gene had been placed after an IRES element in an attB recombination plasmid, and was linked to mCherry using a 2A stop-start sequence. Thus, the Y axis is showing how well the mCherry+ cells were enriched at various concentrations of antibiotic.
Consistent with the above plots, I generally suggest people use 1 ug/mL Puro, 10 ug/mL Blast, 100 ug/mL Hygro, and 100 ug/mL Zeocin for selections.
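To keep those numbers handy, here’s a small sketch of a lookup table for back-of-the-envelope dosing. The dictionary just restates the suggested concentrations above; the helper function is my own addition for illustration:

```python
# Suggested working concentrations (ug/mL) for selections in the
# HEK 293T landing pad cells, taken from the titrations above.
SELECTION_UG_PER_ML = {
    "puromycin": 1,
    "blasticidin": 10,
    "hygromycin": 100,
    "zeocin": 100,
}

def selection_dose_ug(antibiotic, culture_ml):
    """Total micrograms of antibiotic to add to a culture of the given volume."""
    return SELECTION_UG_PER_ML[antibiotic] * culture_ml

print(selection_dose_ug("blasticidin", 10))  # 100 ug for a 10 mL culture
```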
Almost everybody hates doing Western blots since they’re so labor intensive (and somewhat finicky), but they are undeniably useful and will forever have a place in molecular biology / biomedical research. We’re currently putting together a manuscript where we express and test variants of ACE2, which requires some Western blot quantitation. Since I’m about to do some of that quantitation now, I figured I’d record the steps as I do them, so trainees in the lab have a basic set of instructions they can follow in the future.
This already assumes you have a “.tif” file of your western blot. While I someday hope to do fluorescent westerns, this will likely be a chemiluminescent image. Hopefully this isn’t a scan of a chemiluminescent image captured on film, b/c film sux. So you presumably took it on a Chemidoc or an equivalent piece of equipment. Who knows; maybe I finally found the time to finish / standardize my procedure for chemiluminescent capture using a standard mirrorless camera (unlikely). I digress; all that matters right now is that you already have such an image file ready to quantitate.
My favorite method for Western blot quantitation is to use “Image Lab” from BioRad. You know, I like BioRad. And I definitely like the fact that they provide this software for free. Anyway, download and install it as we’ll be using it.
Once you have it installed, start it up and open your image file of interest. The screen will probably look something like this:
First off, stop lying to yourself. By default, the image is going to be auto-scaled so that the darkest grey values in your image get turned to black, while the lightest grey values get turned to white. But in actuality, the image you see is not going to show the “raw” range of your image, so you may as well turn off the auto-scaling so you see your image for what it really is. Thus, press that “Image Transform” button…
… and get a screen that looks like below:
See where those low and high values are? Awful. Set the low value to 0 and the high value to the max (65535).
And now the image looks like this, which is great, since this is what the greyscale values in your image actually look like.
OK, now we can actually start quantitating the bands in the blot. First off, don’t expect your western blot to look 100% clean. Lysates are messy, and you can’t always get pure, discrete bands. Sure, some of the lower-sized bands may be degradation products that happened when you accidentally left the lysates off of ice (you should avoid that, of course). Then again, you may have done everything perfectly bench-wise, and the lower-sized bands may be because you’re overexpressing a transgene; proteins naturally get degraded, and overexpressed proteins may tax the normal machinery and get degraded more obviously. I say the best thing is to acknowledge that this happens, show all your results / work, and make the most educated interpretations that you can. Regardless, in the above blot, we’re going to try to quantitate the density of the highest molecular weight band, since that should be the full-length protein. To do that, first select the “Lane and bands” button on the left.
I then press the “Manual” lane annotation button.
In this case, I know I have 11 lanes, so I enter that and get something that looks like this:
Clearly that’s wrong, so grab the handles on the edges and move the grid such that it actually falls more-or-less on the actual lanes.
Sure the overall grid is pretty good, but maybe it’s not perfect for every single individual lane. The program also lets you adjust those. To do that, click off the “Resize frame” button on the left so it’s no longer blue…
And then adjust the individual lanes so they fit the entire band as your human eyes see them, resulting in an adjusted grid that looks like this:
Nice. Now go to the left and select the tab at the top that says “Bands”, and then click on the button that says “Add”.
Once you do that, start clicking on the bands you want to quantitate across all of the lanes. You may have to grab the dotted magenta lines in each lane to adjust them so that the actual band is within them (and presumably somewhere near the solid magenta line which should be somewhere in between them). This is what it looks like after I do that:
It’s good to check how well the bands are being seen by the program. Go to the top and press the “Lane profile” button. It should give you a density plot. This is also the window where you can do background subtraction. Find a number that seems sensible (in this case, a disk size of 20 mm seems reasonable), and make sure you hit the “apply to all lanes” button so it propagates this across lanes. While I’m only showing the picture for lane 5, it’s probably worth scanning across the lanes to make sure the settings are sensible.
Now, with those settings squared away, close out of that window, and then click on “Analysis table” at the top. Once that is open, go to the bottom and click on “lane statistics”. These should be the numbers you’re looking for.
Now export the statistics (either by pressing the “copy analysis table to clipboard” button and pasting into whatever spreadsheet you want to use, or by pressing “export analysis table to spreadsheet”). The numbers you’ll be looking to analyze will be those in the “Adj. Total Band Vol” column.
Note: Now that I’m doing this, the “standard curve” button is ringing a bell. I’m fairly certain that in my PhD work, when I ran a ton of Western blots or just straight-up protein gels stained with coomassie, I would run dilutions of lysates / proteins to make a standard curve of known protein amounts that I could calibrate the densitometry against. We obviously didn’t do that here, since we didn’t have the space, so these numbers aren’t going to be quite as accurate as they would have been had we done so. Still, getting some actual numbers we can compare across replicates is a major step up from not quantitating and having everything be even more subjective.
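As for what to do with those exported numbers, below is a sketch of one common normalization scheme. All the values are made up, and normalizing to a loading control (e.g. beta-actin from the same lanes) is an assumption about your experimental design, not something Image Lab does for you:

```python
# Sketch of normalizing exported "Adj. Total Band Vol" numbers.
# Values are made up for illustration.
target_vols  = [120_000, 95_000, 150_000]   # band of interest, lanes 1-3
loading_vols = [60_000, 50_000, 75_000]     # loading control, same lanes

# Normalize each lane to its loading control, then express relative to lane 1.
ratios   = [t / l for t, l in zip(target_vols, loading_vols)]
relative = [round(r / ratios[0], 3) for r in ratios]
print(relative)  # [1.0, 0.95, 1.0]
```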
At some point, I was chatting with Melissa Chiasson about plasmid DNA yields, and she mentioned that her current boss had suggested using terrific broth instead of Luria broth for growing transformed bacteria. I think both of us were skeptical at first, but she later shared data with me showing that DNA from E. coli grown in TB had actually given her better yield. I thus decided it was worth trying myself to see if I could reproduce it in my lab.
There are two general types of plasmids we tend to propagate a lot in my lab: attB recombination vectors, for expressing transgenes within the landing pad, and lentiviral vectors of the “pLenti” variety, which play a number of different roles, including new landing pad cell line generation and pseudovirus reporter assays.
I first did side-by-side preps of the same attB plasmids grown in TB or LB, and TB-grown cultures yielded attB plasmid DNA concentrations that were slightly, albeit consistently, worse. But eventually I tested some lentiviral vector plasmids and finally saw the increase in yield from TB that I had been hoping for. When I relayed this to Melissa, she noted that she had been doing her transformations with (presumably unrelated sets of) lentiviral vectors, so our observations had been consistent after all.
Thus, if you get any attB or pLenti plasmids from me, you should probably grow them in LB (attB plasmids) or TB (pLenti plasmids), respectively, to maximize the DNA yield you get back for your efforts.
I try to be as deliberate as I can be about designing my synthetic protein-coding constructs. While I’ve largely viewed splicing as an unnecessary complication and have thus left it out of my constructs (though, who knows; maybe transgenes would express better with splicing, such as supposedly happens with the chicken ß-actin intron present in the pCAGGS promoter/5’UTR combo), there’s still a very real possibility that some of my constructs encode cryptic splice sites that could be affecting expression. In a recent conversation with Melissa Chiasson (perhaps my favorite person to talk syn-bio shop with), she noted that there is actually a way to use SpliceAI to predict splicing signals in synthetic constructs, with a nice correspondence here showing what the outputs mean. Below is my attempt to get this to work in my hands:
First is installing Splice AI, following the instructions here.
I started by making a new virtual environment in anaconda:
$ conda create --name spliceai
I then activated the environment with:
$ conda activate spliceai
Cool, next is installing spliceai. Installing tensorflow the way I have written below got me around some errors that came up.
$ conda install -c bioconda spliceai
$ conda install tensorflow -c anaconda
OK, while trying to run the SpliceAI custom sequence script, I got an error along the lines of “Error #15: Initializing libiomp5.dylib, but found libiomp5.dylib already initialized…”. I followed the instructions here, and typed this out into my virtualenv:
$ conda install nomkl
Alright, so that was fixed, but there was another error about a missing config (“UserWarning: No training configuration found in save file: the model was not compiled. Compile it manually….”). So I got around that by passing a new flag to the load_model() function based on this response here.
OK, so after that (not so uncommon) struggling with dependencies and flags, I’ve gotten things to work. Here’s the result when I feed it a construct from the days when I was messing around with consensus splicing signals early during my postdoc. In this case, it’s a transcript that encodes the human beta actin cDNA, with its third intron added back in. It’s also fused to mCherry, found on the C-terminal end.
And well, that checks out! The known intron is clearly observed in the plot. The rest of actin looks pretty clean, while there seem to be some low-level splicing signals within mCherry. That said, the fact that they’re in the wrong order probably means it isn’t really splicing, and I’m guessing the signals are weak and far enough away that there isn’t much cross-splicing with the actin intron.
Oh, and now for good measure, here’s the intron found in transcripts made from the pCAGGS vectors, with this transcript belonging to this plasmid from Addgene encoding codon optimized T7 polymerase.
Nice. Now to start incorporating it into analyzing some of the constructs I commonly use in my research…
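For reference, my understanding is that the custom-sequence runs above boil down to one-hot encoding the construct sequence, padding it with N’s to give the models their 10 kb of context, and averaging the predictions from the five SpliceAI models. Here’s a minimal sketch of just the encoding step (the model-loading / prediction parts are omitted since they need the downloaded .h5 model files):

```python
import numpy as np

# One-hot encode a DNA sequence the way the SpliceAI custom-sequence
# examples do: each base becomes a length-4 vector, with N (including
# the padding flanks) encoded as an all-zero row.
BASE_MAP = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot_encode(seq):
    arr = np.zeros((len(seq), 4), dtype=np.float32)
    for i, base in enumerate(seq.upper()):
        if base in BASE_MAP:
            arr[i, BASE_MAP[base]] = 1.0
    return arr

x = one_hot_encode("NACGTN")
print(x.shape)        # (6, 4)
print(x.sum(axis=1))  # the N positions show up as all-zero rows
```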
I’ve gone over how to make mutations of a plasmid before (since that’s the simplest molecular cloning process I could think of), but there will be many times we will need to make a new construct (via 2-part Gibson) by shuttling a DNA sequence from one plasmid to another. Here’s a tutorial describing how I do that.
First, open the maps for the two relevant plasmids on Benchling. Today it will be “G871B_pcDNA3-ACE2-SunTag-His6” serving as the backbone, and “A49172_pOPINE_GFP_nanobody” serving as the insert. The goal here will be to replace the ACE2 ectodomain in G871B with the GFP nanobody encoded by the latter construct.
Next, duplicate the map for the backbone vector. Thus, find the plasmid under the “Projects” tab in Benchling, right click on it to open up more options, and select “Copy to…”.
Then, make a copy into whatever directory you want to keep things organized. Today, I’ll be copying it into a directory called “Cell_surface_labeling”. A very annoying thing here is that Benchling doesn’t automatically open the copy, so if you start making edits to the plasmid map you already have open on the screen, you’ll be making edits to the *original*. Thus, make sure you go to the directory you had just selected, and open the duplicated file (it should start with “Copy of …”). Once open, rename the file. At this point, the plasmid won’t have a unique identifier yet, so I typically just temporarily prefix the previous identifier with “X” until I’m ready to give it its actual identifier. To rename the file, go to the encircled “i” icon on the far right of the screen, second from the bottom icon (the graduation cap). After you enter the new name (“XG871B_pcDNA3-Nanobody[GFP-pOPINE]-SunTag-His6” in this case), make sure you hit “Update Information” or else it will not save. Hurrah, you’ve successfully made your starting canvas for in-silico designing your future construct.
Cool, now go to the plasmid map with your insert, select the part you want, and hit “command C” to copy it.
Now select the part of the DNA you want to replace. For this construct, we’re replacing the ACE2 ectodomain, but we want this nanobody to be secreted, so we need to keep the signal peptide. I didn’t already have the signal peptide annotated, so I’m going to have to do it now. When in doubt, one of the easiest things to do is to consult Uniprot, which usually has things like the signal peptide already annotated. Apparently it’s the first 17 amino acids, which *should* correspond to “MSSSSWLLLSLVAVTAA”. For whatever reason, the first 17 amino acids of the existing ACE2 sequence are “MSSSSWLLLSLVAVTTA”, so there’s an A>T mutation. It’s probably not a big deal, so I’ll leave it as it is. That said, to make my construct, I now want to select the sequence I want to replace, like so:
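(As an aside, that Uniprot-versus-construct comparison is the kind of sanity check that’s easy to script; here’s a quick sketch that finds the mismatch position:)

```python
# Compare the Uniprot-annotated ACE2 signal peptide against the first
# 17 residues of the construct, reporting 1-indexed mismatch positions.
uniprot_sp   = "MSSSSWLLLSLVAVTAA"
construct_sp = "MSSSSWLLLSLVAVTTA"

mismatches = [
    (i + 1, u, c)
    for i, (u, c) in enumerate(zip(uniprot_sp, construct_sp))
    if u != c
]
print(mismatches)  # [(16, 'A', 'T')]
```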
As long as you still have the nanobody sequence copied, you can now “Command V” paste it in place of this selection. The end result should look like so:
Great, so now I have the nanobody behind the ACE2 signal peptide, but before the GCN4 peptide repeats that are part of the SunTag. Everything looks like it’s still in frame, so no obvious screw-ups. Now time to plan the primers. I generally start by designing the primers to amplify the insert, shooting for enough nts to give me a Tm of slightly under 60°C. I also generally append the ~ 17 nt of homology onto these primers at this point. The fwd primer would look like this:
Same thing for the reverse primer, and it would look like this (make sure to take the reverse complement).
To recap, the fwd primer seq should be “ttgctgtTactactgctgttcaactggtggaaagcggc” and the reverse should be “ccgttggatccggtaccagagctcaccgtcacctgagt”. Cool. Next, we need to come up with the primers for amplifying the vector. It could be worth checking to see if we have any existing primers that sit in the *perfect* spot, but for most constructs, that’s likely not the case. Thus, design forward and reverse primers that overlap the homology segments, and have Tm’s slightly under 60°C. The forward primer will look like this:
And the reverse primer like this:
Hmmm. I just realized the Kozak sequence isn’t a consensus one. I should probably fix that at some point. But, that’s beyond the scope of this post. So again, to recap, the fwd and rev primers for the vector are “ggtaccggatccaacggtcc” and “agcagtagtAacagcaacaaggctg”. But now you’re ready to order the oligos, so go ahead and do this. As a pro-tip, I like to organize my primer order work-areas like this:
Why? Well, the left part can be copy-pasted into the “MatreyekLab_Primer_Inventory” google sheet, while the middle section can be copy-pasted into the template for ordering primers from ThermoFisher. The right part can be copy-pasted into the “MatreyekLab_Hifi_Reactions” google sheet in anticipation of setting up the physical reaction tubes once the primers come in.
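If you want a quick sanity check on primer Tm’s without opening Benchling, the simple GC-content formula gets you in the right ballpark. This is cruder than the nearest-neighbor method Benchling presumably uses, so treat the output as approximate:

```python
# Rough Tm estimate for a primer's annealing portion, using the simple
# GC-content formula: Tm = 64.9 + 41 * (GC - 16.4) / N.
def rough_tm(seq):
    seq = seq.upper()
    gc = seq.count("G") + seq.count("C")
    return 64.9 + 41.0 * (gc - 16.4) / len(seq)

# e.g. the vector fwd primer from above
print(round(rough_tm("ggtaccggatccaacggtcc"), 1))  # 57.9, slightly under 60
```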
Illumina sequencing of barcoded amplicons is going to be a large factor in the work we’ll be doing. Going to a completely new place, I now need to explain to people how it works. To keep this post simpler, I’m skipping all of the landing pad details (hopefully you already understand it). Let’s just start with what was genomically integrated after the recombination reaction. In the case of the PTEN library, it looks like this:
As you can see, the barcode is the blue “NNNNNNNNNNNNNNNNNN” region at the bottom of the above image. We can’t sequence it with primers directly flanking it, since those sequences will also be present in the unexpressed plasmid sequences likely contaminating our genomic DNA. Thus, we first have to create an amplicon containing our barcode of interest, but spanning the recombination junction (“Recomb jxn” above).
For the PTEN library, the barcode is located in back of the EGFP-PTEN ORF, so we have to amplify across the whole thing. We did this by using a forward primer located in the landing pad prior to recombination (such as KAM499, found in the Tet-inducible promoter and shown in red as the “Forward Primer”), as well as a reverse primer located behind the barcode associated with the PTEN coding region (KAM501 in this case).
The above is a simplistic representation. The forward primer / KAM499 is indeed just the sequence needed for hybridization, since we aren’t going to do anything else with this end of the amplicon. On the other hand, we’ll add some more sequence to the reverse primer to help us with the next steps. In this case, this is a nucleotide sequence that wasn’t present before, so that we can amplify this specific amplicon in the next step. The actual amplicon will thus look something like below:
OK, so the above amplicon is huge; too big for efficient cluster generation. Thus, we’ll now use that amplicon to make a much smaller, Illumina-compatible second amplicon. We’ll also include an index of degenerate nucleotides “nnnnnnnn” that can be used to distinguish different amplicons from each other when we mix multiple samples together before doing the actual sequencing step. This second amplicon looks something like this:
This time, the forward primer has both the hybridizing portion (in the blue-purple) as well as one of the cluster generators, shown as that light blue. At the complete other end, you can see the “nnnnnnnn” index sequence in indigo, followed by the other cluster generator sequence in orange. Please note: Here, the index is a KNOWN sequence, like GTAGCTAC, or GATCGAGC. It’s just that each sample will have a different KNOWN sequence, so it was simpler just to denote it with “n”s for the purpose of this explanation.
There’s more in the above map though. We’ll likely want to do paired sequencing of the barcode, so we’ll give the Illumina sequencer two read primers: read 1 (in red) which reads through the barcode in the forward direction, as well as read 2 (in green), which goes through the barcode in the reverse direction. We will also need to sequence the index, though we’ve tended to only do that with one primer. This primer is Index 1, and is colored in yellow.
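To make the layout concrete, here’s a toy string model of that second amplicon. Every sequence below is a made-up placeholder, not a real primer or adapter sequence:

```python
# Toy string model of the second, Illumina-ready amplicon laid out above.
# All sequences are placeholders for the purposes of illustration.
cluster_1 = "acgtacgtacgt"    # placeholder first cluster-generator sequence
fwd_hyb   = "ttaaggccttaagg"  # placeholder hybridizing portion of the fwd primer
barcode   = "N" * 18          # the degenerate barcode
index_seq = "GTAGCTAC"        # one sample's KNOWN index
cluster_2 = "tgcatgcatgca"    # placeholder second cluster-generator sequence

amplicon = cluster_1 + fwd_hyb + barcode + index_seq + cluster_2
print(len(amplicon))  # 64 nt in this toy example
```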
For the actual primer sequences and everything, look at Supplementary Table 7 of my 2018 Nature Genetics paper.
I work with proteins, so I’ve done Western blots throughout my career. Originally that meant using film and developers, and later with imagers. Imagers are way better than having to deal with film, so as soon as I knew I was going to start up a lab, I started looking at various imagers and getting quotes. Even the most basic imagers with chemiluminescent capabilities came in at the $24-27k range. But then it dawned on me….. are these imagers nothing but kind of old cameras with a light-proof chassis and dedicated acquisition and analysis software? During my stint in Seattle, I dabbled with taking some long-exposure photography of stars in my parents’ back yard. Perhaps I could do something similar for taking images of blots?
I had bought an Olympus E-PM2 16.1MP mirrorless camera for $320 back in 2014. While I used it a decent amount at first, I eventually stopped using it as often, as I started using my smartphone for quicker snaps, while using Anna’s Nikon DSLR with an old telephoto lens for more long-distance pictures. So, with the E-PM2 now not doing much at home, I figured I’d bring it in and try it for this. I cut a hole in the top of a cardboard box that I could stick the camera into. I dug up the intervalometer I had used for those long-exposure photos of the sky. Nidhi had been doing some western blots recently, and had kept her initial attempts in the fridge, which was good since I could just grab one of those membranes instead of running and transferring a gel just for this. I incubated it in some anti-beta-actin HRP antibody, washed it, and exposed it.
Above is something like a 5-minute exposure. My cardboard box wasn’t perfectly light-tight around the sides, so there’s a decent amount of light bleeding in. However, I had the blot lifted up within the box on a metal pedestal (some heat blocks that weren’t being used), so the blot itself is actually pretty free from being affected by the bleed-over light. Notably, the beta-actin bands are blue! Which makes sense, as if you’ve ever mixed bleach with luminol, you’ve seen a flash of blue light. Furthermore, if you google “hrp luminol nm”, you see that the reaction should emit 425 nm light (which is in the indigo / violet range). Notably, this is a difference between my regular-use Olympus camera, which is a color camera, and the cameras you’d normally encounter on equipment like fluorescence imagers, which are normally black-and-white.
I had actually been playing around a bit with image analysis in python over the last week or so (to potentially boot up an automated image analysis pipeline). That work reminded me that color images are a mixture of red-green-blue. Thus, I figured I could isolate the actual signal I cared about (the chemiluminescent bands) from the rest of the image by keeping the signal in the blue channel but not the others. So I wrote a short python script using the scikit-image, matplotlib, and numpy libraries, and ran code to isolate only the blue channel, convert it to greyscale, and invert it so the bands would appear dark against a white background.
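The core of that script is short enough to sketch here. In the real version the image would come from something like skimage.io.imread on the .tif file; a tiny synthetic RGB array stands in below so the logic is self-contained:

```python
import numpy as np

# Sketch of the blue-channel isolation described above, using a tiny
# synthetic RGB image in place of the real blot .tif.
rgb = np.zeros((4, 4, 3), dtype=np.uint8)
rgb[1:3, 1:3, 2] = 200  # a fake blue "band" in the middle

blue = rgb[:, :, 2].astype(float) / 255.0  # keep only the blue channel
inverted = 1.0 - blue                      # bands become dark on a white background

print(inverted[0, 0], round(inverted[2, 2], 3))  # background is 1.0, band is darker
```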
To be honest, the above picture isn’t the first ~ 5-minute exposure I mentioned and showed earlier. Knowing this seemed to be working, I started playing around with another aspect that I thought should be possible: combining the values from multiple exposures to make an ensemble composition. The reasoning being that a single long exposure might saturate the detector, making you lose quantitation at the darkest parts of the band. So why couldn’t one just take a bunch of shorter exposures and add them up in silico? I took five one-minute exposures. The above image is the inverted first image (with an exposure of one minute).
And the above image here is what it looks like if I make an ensemble plot from 5 separate 1-minute exposures. With it now effectively being a “longer exposure” (due to the combining of data in silico), the signal over the background has been improved, with no risk of over-saturating any of the detectors.
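The combining step itself is trivial; here’s a sketch with synthetic frames standing in for the real 1-minute captures. The key detail is accumulating in a wider dtype than the individual exposures:

```python
import numpy as np

rng = np.random.default_rng(0)

# Five short "exposures", each safely below an 8-bit detector's ceiling.
# Poisson noise is a stand-in for real photon counts.
frames = [rng.poisson(lam=30.0, size=(8, 8)).astype(np.uint8) for _ in range(5)]

# Sum them in uint32 so the ensemble "longer exposure" can exceed the
# 8-bit ceiling of any single frame without overflowing.
ensemble = np.zeros((8, 8), dtype=np.uint32)
for frame in frames:
    ensemble += frame

print(ensemble.max(), max(int(f.max()) for f in frames))
```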
So while I’m sure there are many suboptimal parts of what I did (for example, the color camera may have less sensitivity for capturing chemiluminescent signals), it still seemed to have worked pretty well. And it was essentially free, since I already had all of the equipment sitting around unused (it would have cost < $400 if I had to buy it just for this). It also gave me a chance to look under the hood a bit, practice some python-based image analysis, and prove to myself that I was right.
Anna was using the Nikon microscope a couple of days ago, and noticed some level of bleed-over of mCherry fluorescence into the CY5 filter-set channel. I previously played around with making a bespoke script for figuring out the optimal fluorescent protein : laser : detector combinations on the two flow cytometers we routinely use at the core here, and figured I could modify the script to understand how our existing Nikon filter sets work with the various fluorescent proteins we use here. It took me a couple hours, but I made a script posted on my GitHub. Here’s a figure that script produces:
Based on the filter sets we currently have, it looks like it may be near impossible to avoid bleedover of near infra-red FP fluorescence into the red channel. Also, this seems to reproduce / explain the effect Anna was seeing with mCherry fluorescence bleedover into the NIR channel. It seems this problem doesn’t happen with mScarlet-I, so maybe I’ll slowly shift toward using that FP more.
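For anyone curious what that script is doing conceptually, here’s a toy version of the bleed-over calculation. The mCherry emission peak (~610 nm) is real, but the Gaussian spectrum shape and the filter window edges below are made-up stand-ins for our actual filter sets:

```python
import numpy as np

# Toy bleed-over calculation: model an FP emission spectrum as a Gaussian
# and integrate it against a bandpass filter window.
wavelengths = np.arange(400, 800)

def emission(peak_nm, width_nm=30.0):
    spec = np.exp(-0.5 * ((wavelengths - peak_nm) / width_nm) ** 2)
    return spec / spec.sum()  # normalize so overlaps come out as fractions

def bandpass(lo_nm, hi_nm):
    return ((wavelengths >= lo_nm) & (wavelengths <= hi_nm)).astype(float)

mcherry = emission(610)          # mCherry emission peaks around 610 nm
cy5_window = bandpass(663, 738)  # hypothetical CY5 emission filter window

bleed = float((mcherry * cy5_window).sum())
print(f"fraction of mCherry emission landing in the CY5 window: {bleed:.3f}")
```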