Designing Primers for Targeted Mutagenesis

Now that my lab is fully equipped, I’m taking on rotation students. Unfortunately, with the pandemic, it’s harder to have one-on-one meetings where I can sit down and walk the new students through every method. Furthermore, why repeat teaching the same thing to multiple students when I can just make an initial written record that everyone can reference, and then ask me questions about? Thus, here’s my tutorial on how I design primers in the lab.

First, it’s good to start out by making a new Benchling file for whatever you’re trying to engineer. If you’re just making a missense mutation, then you can start out by copying the map for the plasmid you’re going to use as a template. Today, we’ll be mutating a plasmid called “G619C_AttB_hTrim-hCPSF6(301-358)-IRES-mCherry-P2A-PuroR” to encode the F321N mutation in the CPSF6 region. This should abrogate the binding of this peptide to the HIV capsid protein. Eventually every plasmid in the lab gets a unique identifier based on the order in which it was created (this is the GXXXX name). Since we haven’t actually started making this plasmid yet, I usually just stick an “X” in front of the name of the new file to signify that it’s *planned* to be a new plasmid, with G619C being used as the template. I also write in the mutation that I’m planning to make. Thus, this new plasmid map is temporarily called “XG619C_AttB_hTrim-hCPSF6(301-358)-F321N-IRES-mCherry-P2A-PuroR”.

That’s what the overall plasmid looks like. We’ll be mutating a few nucleotides in the 4,000 nt area of the plasmid.

I’ve now zoomed into the part of the plasmid we actually want to mutate. The residue is Phe321 in the full length CPSF6 protein, but in the case of this Trim-fusion, it’s actually residue 344.

I next like to “write in” the mutation I want to make, as this 1) makes everything easier, and 2) is part of the goal of making a new map that incorporates that mutation. Thus, I’ve now replaced the first two T’s of the Phe codon “TTT” with two A’s, making the “AAT” codon, which encodes Asn (see the image above).

Next is planning the primers. There are a few ways one could design primers to make the mutation. I like to create a pair of overlapping (~17 nt) inverse primers, where one of the primers encodes the new mutation. PCR amplification with these primers should result in a single “around-the-circle” amplicon with ~17 nt of homology at the terminal ends. These ends can then be brought together and closed using Gibson assembly.

So first to design the forward primer. This is the primer that will go [5′ end] - [17 nt homology] - [mutated codon] - [primer binding region] - [3′ end]. The first step is to figure out the primer binding region.

In a cloning scheme like this, I like to start selecting the nucleotides directly 3′ of the codon to be mutated, adding enough nucleotides that the melting temperature is ~55°C. In actuality, the melting temperature will be slightly higher, since 1) we will end up having 17 nt of matching sequence 5′ of the mutated codon, and 2) the 3rd nt in the codon, T, will actually be matching as well.
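If you’d rather not eyeball this step, here’s a minimal R sketch of the idea: extend the 3′ binding region one nucleotide at a time until a rough Tm estimate crosses ~55°C. The Tm formula here is the simple GC-count approximation (not a proper nearest-neighbor calculation), and the downstream sequence is a made-up placeholder rather than the actual G619C sequence.

```r
# Rough Tm estimate: Wallace rule (2*AT + 4*GC) for very short oligos,
# otherwise the common 64.9 + 41 * (GC - 16.4) / length approximation.
estimate_tm <- function(seq) {
  bases <- strsplit(toupper(seq), "")[[1]]
  n  <- length(bases)
  gc <- sum(bases %in% c("G", "C"))
  at <- sum(bases %in% c("A", "T"))
  if (n < 14) 2 * at + 4 * gc else 64.9 + 41 * (gc - 16.4) / n
}

# Placeholder stretch of sequence directly 3' of the codon being mutated.
downstream <- "GGTCAGCAACAGCCAATGATGCCGCAACCGGGTCTG"  # not the real plasmid sequence

# Extend the binding region one nucleotide at a time until Tm reaches ~55 C.
pick_binding_region <- function(downstream, target_tm = 55) {
  for (len in 10:nchar(downstream)) {
    candidate <- substr(downstream, 1, len)
    if (estimate_tm(candidate) >= target_tm) return(candidate)
  }
  downstream  # fall back to the whole stretch if the target Tm is never reached
}

pick_binding_region(downstream)
```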

Now that I’ve determined how long I need that 3′ binding region to be, I select the entire set of nucleotides I want in my full primer. In this case, this ended up being a primer 36 nt in length (see below).

Since this is the forward primer, I can just copy the “sense” version of this sequence of nucleotides.

OK, so next to design the reverse primer. This is simpler, since it’s just a series of nucleotides in the antisense orientation directly 5′ of the codon (as shown on the sense strand of the plasmid map). I shoot for ~55°C to 60°C, usually just a little bit under 60°C.

Since this is the reverse primer, we want the REVERSE COMPLEMENT of what we see on the plasmid map.
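If you want to sanity-check the reverse complement programmatically rather than by hand, a few lines of base R will do it (Biostrings::reverseComplement() does the same for DNAString objects, if you already use Bioconductor). The sequence below is again just a placeholder, not the real plasmid sequence.

```r
# Reverse complement in base R.
reverse_complement <- function(seq) {
  comp  <- c(A = "T", T = "A", G = "C", C = "G")
  bases <- strsplit(toupper(seq), "")[[1]]
  paste(rev(comp[bases]), collapse = "")
}

# Placeholder: the sense-strand stretch directly 5' of the mutated codon.
sense_5prime <- "CTGCCGATGGAAGGTCAGCAACAG"  # not the real plasmid sequence
reverse_complement(sense_5prime)
#> "CTGTTGCTGACCTTCCATCGGCAG"
```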

Voila, we now have the two primers we need. We just need to order these oligos (we order from ThermoFisher, since it’s the cheapest option at CWRU) and can then perform the standard MatreyekLab cloning workflow.

Using prior data to optimize the future

As of this posting, we’ve cloned 176 constructs in the lab. I’ve kept pretty meticulous notes about which standard protocol we used each time, how many clones we screened, and how many clones had DNA with the intended insertions / deletions / mutations present. With this data, I took a quick retrospective look at my observed success / failure rates to see whether my basic workflow / pipeline was optimized to maximize benefit (i.e. getting the recombinant DNA we want) while limiting cost (i.e. time, effort, and money for reagents and services). I particularly focused on 2-part Gibsons, since that’s the workhorse approach for most molecular cloning in the lab.

First, here’s a density distribution reflecting reaction-based success rates (X correct clones out of Y total screened clones, or X / Y = success rate).

I then repeatedly sampled N times at random from that distribution, with N ranging from 1 through 5, effectively pretending that I was screening 1 clone, 2 clones … up to 5 clones for each PCR + Gibson reaction we were performing. Since 1 good clone is really all you need, for each sampling of N clones I checked whether any of them were a success (giving that reaction a value of “1”) or whether all of them failed (giving that reaction a value of “0”). I repeated this process 100 times, summed the “1” and “0” values, and divided by 100 to get an overall success rate. I then repeated this whole process 50 times to get a sense of the variability of outcome for each condition (a rough code sketch of this resampling is included a bit further below). Here are the results:

We screen 3 clones per reaction in our standard protocol, and I think that’s a pretty good number. We capture at least 1 successful clone 3/4 of the time. Sure, we might increase how often we get the correct clone on the first pass if we instead screened 4 or 5 clones at a time, but the extra effort / time / cost doesn’t really seem worth it, especially since it’s totally possible to screen a larger number on a second pass for those tough-but-worth-it clones. Some of those reactions are also going to be ones that are just bad, period, and need to be re-started from the beginning (perhaps even by designing new primers), which is a screening hill that certainly isn’t worth dying on.
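For anyone who wants to fiddle with the numbers themselves, here’s a rough R sketch of the resampling described above. The success-rate vector is a stand-in rather than the real lab data, and treating each sampled reaction’s success rate as the per-clone probability of pulling a correct clone is my simplification of the procedure.

```r
set.seed(1)

# Stand-in for the observed per-reaction success rates (correct clones / screened clones);
# the real values come from the lab cloning records.
success_rates <- c(0, 0, 0.25, 0.33, 0.5, 0.67, 0.75, 1, 1, 1)

# For a given number of clones screened per reaction, estimate how often
# at least one correct clone is recovered across 100 simulated reactions.
simulate_screening <- function(n_clones, rates, n_reactions = 100) {
  hits <- replicate(n_reactions, {
    p <- sample(rates, 1)       # success rate of a randomly chosen reaction
    any(runif(n_clones) < p)    # did any of the screened clones come up correct?
  })
  mean(hits)
}

# 50 replicate estimates for each of 1 through 5 clones screened per reaction.
results <- sapply(1:5, function(n) replicate(50, simulate_screening(n, success_rates)))
colnames(results) <- paste0("screen_", 1:5)
apply(results, 2, mean)
```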

Miniprep efficiency

The research ramp-down period caused by the SARS-CoV-2 pandemic was a weird time for me / the lab. I sent Sarah to work from home for 10 or so weeks, meaning I had to do the lab work myself if I wanted to make any progress on the existing grant work, or on any of the SARS-CoV-2 research I was trying to boot up. This resulted in some VERY long weeks over the last few months, as I was really trying to do everything at that point. Cognizant of this, I even started timing myself doing some of the more routine / mundane tasks, to see if I could maximize my efficiency. Perhaps the most consistent / predictable of the tasks were minipreps. In particular, I was curious whether doing more minipreps simultaneously saved me time in the long run.

So the short answer was yes. 24 is a very comfortable / logical number for me (it just fills up my mini-centrifuge, and divides evenly into three complete 8-strip PCR tubes for Sanger later on), and I consistently processed those in about an hour. Doing fewer is somewhat less efficient, though sometimes you have to if you’re in a rush to get some particular recombinant plasmid clone. Then again, doing more than 24, while somewhat exhausting, does save me some time overall. Thus, I found that planning for larger batches was a worthwhile strategy during that period.

That said, I’m very glad to have Sarah back in the lab helping me with some of the wet-lab work again. Not only does it save me time, but it also saves me focus; I’ve gotten pretty good at multi-tasking, but I still hit a limit in terms of the number of DIFFERENT things I can do / think about at the same time.

Modeling bacterial growth

I do a lot of molecular cloning, which means a lot of transformations of chemically competent E. coli. Using 50 µL of purchased competent bacteria would cost about $10 per transformation, which would be an AWFUL waste of money, especially with this being a highly recurring expense in the lab. I had never made my own competent cells before, so I had to figure this out shortly after starting my lab. It took a couple of days of dedicated effort, but it ended up being quite simple (I’ll link to my protocol a bit later on). Though my frozen stocks ended up working fine, I became quite used to creating fresh cells every time I need to do a transformation. The critical step here is taking a saturated overnight starter culture and diluting it so you can harvest a larger volume of log-phase bacteria a short time later. A range of ODs [optical density, here defined as absorbance at 600 nm] works, though I like to use bacteria at an OD around 0.2. I had gotten pretty good at eyeballing when a culture was ready for harvesting (for LB in a 250 mL flask, I found this was right when I started seeing turbidity), but I figured there was a better way to know when it’s worth sampling and harvesting.

I started keeping good notes about 1) the starting density of my prep culture (OD of the overnight culture divided by the dilution factor), 2) the amount of time I left the prep culture growing, and 3) the final OD of the prep culture. I converted everything into cell density, which is a bit more intuitive than OD (I found 1 OD[A600] of my bacteria roughly corresponded to 5e5 bacteria per mL), and worked in those units from there on out. Knowing bacteria exhibit exponential growth, I log10-transformed the counts. Much like the increasing number of COVID-19 deaths experienced by the US from early March through early April, exponential growth becomes linear in log-transformed space. I figured I could thus estimate the growth of my prep culture of competent cells by making a multivariate linear model, where the final density of the bacteria depends on the starting bacterial density and how long I left it growing. I figured the lag phase from taking the saturated culture and sticking it into cold LB would end up being a constant in the model. Here’s my dataset, and here’s my R Markdown analysis script. My linear model seemed to perform pretty well, as you can see in the plot below. As of writing this, the Pearson’s r was 0.98.
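For the curious, the model itself boils down to a two-predictor lm() call. Here’s a toy version in R, with made-up numbers standing in for the real records (the actual values are in the linked dataset).

```r
# Made-up stand-in for the real culture records; see the linked dataset for the real thing.
growth <- data.frame(
  start_density = c(5e2, 1e3, 5e2, 2e3, 1e3, 5e2),  # bacteria per mL at inoculation
  hours         = c(2.0, 2.5, 3.0, 2.0, 3.0, 3.5),  # time the prep culture was left growing
  final_density = c(4e3, 2e4, 2e4, 3e4, 6e4, 6e4)   # bacteria per mL at harvest
)

# Exponential growth is linear in log space, so model log10(final density) as a
# function of log10(starting density) and growth time; the intercept absorbs the lag phase.
fit <- lm(log10(final_density) ~ log10(start_density) + hours, data = growth)
summary(fit)

# Predicted vs. observed, analogous to the plot described above.
cor(predict(fit), log10(growth$final_density))  # Pearson's r
```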

The aforementioned analysis script has a final chunk that lets you input the starting OD of your starter culture and, assuming a 1000-fold dilution, tells you how long you likely need to wait to hit the right OD for your prep culture. Then again, I don’t think anyone really wants to enter this info into a computer every time they set up a culture, so I made a handy little “look-up plot”, shown below, where a lab member can just find their starter culture OD on the x-axis, choose the dilution they want to do (staying within 2x of a 1000-fold dilution, since I don’t know if smaller dilutions can affect bacterial competency), and figure out when they need to be back to harvest (or at least stick the culture on ice). I’ve now printed this plot out and left it by my bacterial shaker-incubator.
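The look-up itself is just the growth model solved for time. Here’s a standalone sketch; the coefficient values are hypothetical placeholders (in practice they would come from coef(fit) of the real model), and the OD-to-density conversion uses the rough factor mentioned above.

```r
# Hypothetical placeholder coefficients; in practice, pull these from coef(fit).
intercept <- 0.8    # absorbs the lag phase
b_start   <- 0.9    # coefficient on log10(starting density)
b_hours   <- 0.45   # log10 increase in density per hour of growth

# Given a starter-culture OD and a dilution factor, estimate the hours until
# the prep culture reaches the target harvest OD (default 0.2).
hours_to_harvest <- function(starter_od, dilution = 1000, target_od = 0.2,
                             cells_per_od = 5e5) {
  start_density  <- starter_od * cells_per_od / dilution
  target_density <- target_od * cells_per_od
  (log10(target_density) - intercept - b_start * log10(start_density)) / b_hours
}

hours_to_harvest(starter_od = 4)  # e.g. an overnight culture at OD 4, diluted 1000-fold
```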

I’m still much more of a wet-lab scientist than a computational one. That said, god damn, I still think the moderate amount of computational work I can do is empowering.

Gibson / IVA success rates

I only learned about Gibson assembly when I started my postdoc, and it completely changed how I approached science. In some experiments with Ethan when I was in the lab, I was blown away when we realized that you don’t even need Gibson mix to piece a plasmid back together; this is something we were exploring to try to figure out whether we could come up with an easier and more economical library generation workflow. I was disappointed but equally blown away when I realized numerous people had repeatedly “discovered” this fact in the literature already; the most memorable of the names given to it was IVA, or In Vitro Assembly. Ethan had tried some experiments and said it worked roughly as well as with Gibson. Of course, I can’t recall exactly what his experiment was at this point (although probably a 1-piece DNA recircularization reaction, since this was in the context of inverse PCR-based library building, after all). So the takeaway I had was that it was a possible avenue for molecular cloning in the future.

We’ve done a fair amount of molecular cloning in the lab already, creating ~60 constructs in the first 4 months since Sarah joined. I forget the exact circumstances, but at some point it made sense to try some cloning where we didn’t add in Gibson mix. I was still able to get a number of intended constructs on that first try, so I stuck to not adding Gibson mix for a few more panels of constructs. I’ve been trying to keep very organized with my molecular cloning pipelines and inventories, which includes keeping track of how often each set of mol cloning reactions yielded correctly pieced-together constructs. I’ve taken this data and broken it down based on two variables: whether it was a 1- or 2-part DNA combination (I hardly ever try more than 2 in a single reaction, for simplicity’s sake, and also because properly combined cloning intermediates may still be useful down the line anyway), and whether Gibson mix was added or not. Here are the current results:

Note: This is a *stacked* smoothed histogram. Essentially, the only real way to look at this data is to consider the thickness of a given color at each point along the x-axis, relative to its thickness in other portions.
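(For anyone wanting to make a similar plot: it’s essentially a stacked density plot, something along the lines of the ggplot2 sketch below, shown here with made-up data rather than the actual lab records.)

```r
library(ggplot2)
set.seed(1)

# Made-up per-reaction success rates, split by whether Gibson mix was added.
cloning <- data.frame(
  success_rate = c(runif(40, 0, 1), runif(40, 0.2, 1)),
  gibson       = rep(c("No Gibson mix", "Gibson mix"), each = 40)
)

# Stacked, smoothed densities; what matters is the thickness of each color
# at a given x-value relative to its thickness elsewhere.
ggplot(cloning, aes(x = success_rate, fill = gibson)) +
  geom_density(position = "stack", alpha = 0.8) +
  geom_vline(xintercept = 0.25, linetype = "dotted", colour = "red")
```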

So this was extremely informative. Some points:
1) I’m willing to screen at least 4 colonies for a construct I really want. Thus, I’m counting a success rate > 0.25 as a “successful” attempt at cloning a construct. In the above plot, that means any area above the dotted red line. Thus, 1-part DNA recircularizations have pretty decent success rates, since the area of the colored curve above the red dotted line >> the area below it. Sure, Gibson mix helps, but it’s not a night-and-day difference.
2) 2-part DNA combinations are a completely different story. Lack of Gibson mix means that I have just as many failed attempts at cloning something as successful attempts. Those are not great odds. Adding Gibson mix makes a big difference here, since it definitely pushes things in favor of a good outcome. Thus, I will ALWAYS be adding Gibson mix before attempting any 2-part DNA combinations.

Other notes: I’m using home-grown NEB 10-beta cells, which give me pretty decent transformation rates (high-efficiency 1-part recircularization reactions can definitely yield many hundreds of colonies on the plate from a successful attempt), so there have been relatively few plates where I literally have ZERO colonies; instead, I’m more likely to have a few colonies that are just hard-to-remove residual template DNA.

Plasmid Lineages

Recombinant DNA work is integral to what we’re doing here, so I’ve become extremely organized with keeping track of the constructs we are building. This includes having a record of how sequences from two constructs were stitched together to create a new construct. Here’s a network map showing how one or more different plasmid sequences were combined to create each new construct.
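Under the hood, a map like this is just a directed graph built from (parent plasmid, new construct) pairs pulled from the cloning records. A minimal igraph sketch, with made-up identifiers aside from G619C, might look like this:

```r
library(igraph)

# Made-up (parent, child) pairs; the real records note which plasmid(s)
# contributed sequence to each new construct.
lineage <- data.frame(
  parent = c("A0012", "R0003", "G619C", "G619C", "G0701"),
  child  = c("R0003", "G619C", "G0700", "G0701", "G0755")
)

# Directed graph: edges point from template plasmid to the construct built from it.
g <- graph_from_data_frame(lineage, directed = TRUE)
plot(g, vertex.size = 25, edge.arrow.size = 0.5)
```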

[The series of letters and numbers prefixed with G (for Gibson) are unique identifiers I started giving new constructs when it became clear, partway through my postdoc, that I was going to need a better way of tracking everything I was building. Those prefixed with A are constructs obtained through Addgene. Those prefixed with R are important constructs I had built before this tracking system, which I had to start giving identifiers retroactively.]

HEK293Ts with melanin

I think synthetic biology is really cool, and I like playing around with recombinant DNA elements so I can see how well they work in my own hands. If they work OK, then I just let that knowledge stew in the back of my brain until I can eventually figure out a use for it. Reading this paper by Martin Fussenegger made me realize just how easy it is to make cultured cells express melanin. Here was my first foray into creating melanin in HEK cells by overexpressing tyrosinase.

Cells pelleted in the tubes on the left are expressing tyrosinase. The cells pelleted in the tubes on the right are not.

Doesn’t quite work well enough to use as a general reporter (it’s really hard to tell in a cell monolayer, and only becomes noticeable as colonies of cells or in a pellet, like above), but still kind of fun to see. Let’s see if I find an eventual use for this in some future work.