Trimming and tabulating fastq reads

Anna has been testing her transposon sequencing pipeline, and needed some help processing some of her Illumina reads. In short, she needed to remove the sequenced, invariant transposon region (essentially a 5′ adapter sequence), trim the remaining (hopefully genomic) sequence down to a reasonable 40 nt, and then tabulate the reads, since there were likely going to be duplicates in there that don’t need to be considered independently. Here is what I did.

# For removing the adapter and trimming the reads down, I used a program called cutadapt. Links with more information are below, along with how I installed and used it.
# https://cutadapt.readthedocs.io/en/stable/installation.html
# https://bioconda.github.io/

## Run the commands below in Bash (they tell conda where else to look for the program)
$ conda config --add channels defaults
$ conda config --add channels bioconda
$ conda config --add channels conda-forge
$ conda config --set channel_priority strict

## Since my laptop uses an M1 (arm64) processor, this tells conda to install the Intel (osx-64) build, which runs via Rosetta
$ CONDA_SUBDIR=osx-64 conda create -n cutadaptenv cutadapt

## Activate the conda environment
$ conda activate cutadaptenv

## Now trying this for the actual transposon sequencing files
$ cutadapt -g AGAATGCATGCGTCAATTTTACGCAGACTATCTTTGTAGGGTTAA -l 40 -o sample1_trimmed.fastq sample1.assembled.fastq
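## For reference: "-g" gives the 5′ adapter sequence to trim off the front of each read, "-l 40" shortens what remains (after adapter removal) to at most 40 nt, and "-o" names the output file.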

This should have created a file called “sample1_trimmed.fastq”. OK, next up is tabulating the reads that are there. I used a program called 2fast2q for this.

## I like to do this in the same cutadaptenv environment, so in case it was deactivated, here I am activating it again.
$ conda activate cutadaptenv

## Installing with pip, which is easy.
$ pip install fast2q

## Now running it on the actual file. I think you have to already be in the directory with the file you want (since you don't specify the file in the command).
$ python -m fast2q -c --mo EC --m 2

## Note: "python -m fast2q -c" runs it on the command line rather than through the graphical interface. "--mo EC" runs it in Extract and Count mode. "--m 2" allows 2 nucleotides of mismatch.
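By the way, if you want a feel for what the tabulation step is conceptually doing, here’s a minimal Python sketch of the core idea (this is not 2FAST2Q itself, just counting identical trimmed reads):

from collections import Counter

# Sequences sit on every 4th line of a fastq file, starting with line 2.
with open("sample1_trimmed.fastq") as f:
    reads = [line.strip() for i, line in enumerate(f) if i % 4 == 1]

# Tabulate identical reads and show the 10 most common ones.
counts = Counter(reads)
for read, n in counts.most_common(10):
    print(read, n)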

Nanopore de novo assembly

Plasmidsaurus is great, but it looks like some additional companies aside from Primordium are trying to develop a “nanopore for plasmid sequencing” service. We just tried the Plasmid-EZ service from Genewiz / Azenta(?), partially b/c there’s daily pickup from a dropbox on our campus. At first glance, the results were rather mixed. Read numbers seemed decent, but 4 of the 8 plasmid submissions didn’t yield a consensus sequence; instead, all we were given were the “raw” fastq files. To even see whether these reads were useful or not, I had to figure out how to derive my own consensus sequence from the raw fastq files.

After some googling and a failed attempt or two, I ended up using a program called “flye”. Here’s what I did to install and use it, following the instructions here.

## Make a conda environment for this purpose
$ conda create -n flye

## Pretty easy to install with the below commands (activate the environment first, so flye actually installs into it)
$ conda activate flye
$ conda install flye

## It didn't like my file being in a path that had spaces (darn Google Drive default naming), so I ended up dragging my files into a folder on my desktop. With just the above commands, you should already be able to run it on your file, like so:
$ flye --nano-raw J202B.fastq --out-dir assembled
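## When it finishes, the consensus sequence ends up in the output directory as "assembly.fasta" (alongside an "assembly_info.txt" summary).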

This worked for all four of the fastq files that had been returned without consensus sequences. Two worked perfectly and gave sequences of the expected size. The remaining two returned consensus sequences that were twice as large as the expected plasmid. Looking at the read lengths, these plasmids did show a few reads that were twice as long as expected. That said, those reads weren’t what made the program return the doubly-long consensus sequence, as it still made that consensus seq even after the long reads were filtered out (I did this with fastq-filter; “pip install fastq-filter”, then “fastq-filter -l 500 -L 9000 -o J203G_filtered.fastq J203G.fastq.gz”). So ya, I still haven’t figured out why this happened and whether it’s real or not, but even a potentially incorrectly assembled consensus read was helpful, as I could import it into Benchling, align it with my expected sequence, and see if there were any errors.

After this experience, I’ve come to better appreciate how well Plasmidsaurus is run (and how good their pipeline for returning data is). We’ll probably try the Genewiz Plasmid-EZ another couple of times, but so far, in terms of quality of service, it doesn’t seem as good.

Plasmidsaurus fasta standardizer

I really like Plasmidsaurus, and it’s an integral part of our molecular cloning pipeline. That said, I’ve found analyzing the resulting consensus fasta file to be somewhat cumbersome, since where they start their sequence string in the fasta file is inevitably rather arbitrary (which I don’t blame them for at all, since these are circular plasmids with no particular starting nucleotide, and every plasmid they’re getting is unique), and obviously doesn’t match where my sequence starts on my plasmid map in Benchling.

For the longest time (the past year?), I dealt with each file / analysis individually, where I would either 1) reindex my plasmid map on Benchling to match up with how the Plasmidsaurus fasta file was aligning, or 2) manually copy-paste sequence within the Plasmidsaurus fasta file after seeing how things matched up in the alignment.

Anyway, I got tired of doing this, so I wrote a Python script that standardizes things. This will still require some up-front work in 1) Running the script on each plasmidsaurus file, and 2) Making sure all of our plasmid maps in Benchling start at the “right” location, but I still think it will be easier than what I’ve been doing.

1) Reordering the plasmid map.

I wrote the script so that it reorders the Plasmidsaurus fasta file based on the junction between the stop codon of the AmpR gene and the sequence directly after it. Thus, you’ll have to reindex your Benchling plasmid map so it exhibits that same break at that junction point. If your plasmid has AmpR in the forward direction, it should look like so on the 5′ end of your sequence:

And like this on the 3′ end of your sequence:

While if AmpR is in the reverse direction, it should look like this on the 5′ end of your sequence:

And like this on the 3′ end of your map:

Easy ‘nuf.

2) Running the Python script on the Plasmidsaurus fasta file.

The python script can be found at this GitHub link: https://github.com/MatreyekLab/Sequence_design_and_analysis/tree/main/Plasmidsaurus_fasta_reordering

If you’re in my lab (and have access to the lab Google Drive), you don’t have to go to the GitHub repo. Instead, it will already be in the “[…additional_text_here…]/_MatreyekLab/Data/Plasmidsaurus” directory.

Open up Terminal and go to that directory. Then type in “python3 Plasmidsaurus_fasta_standardizer.py”. Before hitting return to run it, you’ll have to tell it which file to operate on. Because of the highly nested structure of how the actual data is stored, it will probably be easiest to navigate to the relevant folder in Finder and then drag the intended file into the Terminal window. The file’s absolute path will get pasted into the command line, so the command will now look something like “python3 Plasmidsaurus_fasta_standardizer.py /[…additional_text_here…]/_MatreyekLab/Data/Plasmidsaurus/PSRS033/Matreyek_f6f_results/Matreyek_f6f_5_G1131C.fasta”

It will make a new fasta file suffixed with “_reordered” (such as “Matreyek_f6f_1_G1118A_reordered.fasta”), which you can now easily use for alignment in Benchling.
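For the curious, the heart of the script is just a string rotation. Below is a minimal Python sketch of the idea, not the actual lab script: the junction motif here is a made-up stand-in for the real AmpR stop codon junction, and unlike the real script, this sketch doesn’t handle AmpR in the reverse orientation.

JUNCTION = "TAATGA"  # hypothetical stand-in; the real script keys off the AmpR stop codon junction

def rotate_at_junction(seq, junction=JUNCTION):
    # Find the junction in the circular sequence, then rotate the string
    # so it starts immediately after that junction.
    i = seq.upper().find(junction.upper())
    if i == -1:
        raise ValueError("Junction not found; AmpR missing or in the other orientation?")
    cut = (i + len(junction)) % len(seq)
    return seq[cut:] + seq[:cut]

# Read a single-record fasta, rotate it, and write the "_reordered" version.
path = "Matreyek_f6f_5_G1131C.fasta"
with open(path) as f:
    header = f.readline().rstrip()
    seq = "".join(line.strip() for line in f)
with open(path.replace(".fasta", "_reordered.fasta"), "w") as f:
    f.write(header + "\n" + rotate_at_junction(seq) + "\n")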

Note: Currently, the script only works for ampicillin-resistant plasmids, since those are somewhere between 95 and 99% of all of the plasmids we use in the lab. This means that Plasmidsaurus sequencing of the rare KanR plasmid won’t work with this method. Perhaps one day I’ll update the script to also work with KanR plasmids (ie. the first time I need to run Plasmidsaurus data analysis on a KanR plasmid, haha).

Network drive for file storage

I’m so tired of being jerked around by various cloud storage services. Institutional Google Drive used to be unlimited storage, but now the University has capped it at 100GB (how uselessly small…). I believe the institutional Box account was also unlimited, but apparently it is now 1 TB. Which, I mean, is better than 100GB, but the Box interface is painfully awful and IMO not useful for anything other than “cold storage” of files, where 1 TB isn’t going to cut it. Regardless, I solved the problem of actively used shared file storage for my lab by purchasing 2TB of Google Drive cloud storage for $100 a year (out of my personal bank account). That still doesn’t solve the “cold storage” problem for microscopy data, which can easily run much more than a terabyte.

Well, I have an iMac at work that is hooked up to a landline, and I have plenty of external hard drives, so I’m going to try letting one of those external hard drives be discoverable on the network, and see if that works as a way to store our microscopy data. To access this network drive (on a Mac, at least), follow these instructions:

  1. If you’re not directly hooked up to the landline on campus, you’ll need to VPN in. This is done through FortiClient. This will require DUO authentication (so check your phone to accept).
  2. Once you’re connected to the VPN, you can try to load that external network drive. To do this, hit Command + K while in the MacOS Finder (or go to the top menu and hit Go > Connect to Server…) and then enter the following:
    smb://[See the IP address in the Matreyeklab_Overview_Googledoc]/MLab_5TB
  3. You might have to put in a username and password. This will be the standard lab username and password (refer to the Matreyeklab_Overview_Googledoc if you’ve forgotten, but you must have memorized this by now…)
  4. Voila! You should now have a Finder window for the external hard drive, which will (at the least) have a folder called “Microscopy”.
  5. Now what if you want to do things in the Terminal, since you’re a power user (or perhaps an aspiring power user)? In that case, load up a new Terminal instance and go to the “Volumes” directory at the root of your filesystem (ie. do “cd /Volumes”). If you’ve mounted the network drive as described above, you should be able to go into the external hard drive’s directory (ie. do “cd MLab_5TB”) and start doing what you need to do there.

Yay, problem solved. Obviously the above instructions are only for lab members. If you run into problems, LMK, and I’ll try to help troubleshoot.

Various links / references, mostly for my own use:
https://support.apple.com/guide/mac-help/set-up-file-sharing-on-mac-mh17131/13.0/mac/13.0
https://support.apple.com/guide/mac-help/servers-shared-computers-connect-mac-mchlp3015/13.0/mac/13.0
https://support.apple.com/guide/mac-help/connect-mac-shared-computers-servers-mchlp1140/mac
https://www.makeuseof.com/how-to-set-up-access-network-drive-mac/

NIH grant expenditures

At least in my SOM-based department, one’s general status is supposedly correlated with the amount of indirect costs they bring in. It looks like I’m currently capped at 4 desks for my personnel, and it’s not clear if it’s worth trying to expand here. Regardless, this is apparently the directs / indirects space I’m operating in with my current setup.

What does this come out to in terms of indirect costs generated per month? Here’s the plot based on my budget reports, below:

So, essentially at my steady state (which I’ll probably be at for at least the next 2.5 years), the lab is generating ~ $20k a month in indirects for the department, which comes out to ~ $5k per desk per month.

Ah, the business of SOM-based academic research. Well, I’m nothing if not transparent.

Submitted DNA amounts and reads returned

In this previous post, I showed how many reads we’ve gotten from our Plasmidsaurus and AMP-EZ submissions. Well, now it’s also time to see whether the amount of DNA we submitted correlated with the number of reads we got back.

Submissions to Plasmidsaurus. Red vertical line denotes the minimum amount asked for at submission (>= 10uL at 30 ng/uL). Blue line is a linear model based on the datapoints.

As you can see above, since this is miniprepped DNA, it’s usually quite easy to reach the 300 ng needed for submission (ie. 10 uL at 30 ng/uL). One time, when we submitted closer to 200 ng, it worked perfectly fine. Another time, when we submitted ~100 ng, it did not, albeit that submission was not plasmid DNA but a PCR product, so it’s an outlier for that reason as well.

Submissions to Genewiz / Azenta AMP-EZ. Red vertical line is the minimum amount of DNA asked for, while the horizontal red line is the number of reads they “guarantee” returned. Blue line is a linear model based on the data.

This is the more important graph, though, since all of our AMP-EZ submissions are gel-extracted PCR amplifications, and it can be quite difficult to do the extraction in such a way that we have the requested 500 ng of total (Qubit-quantified) DNA available for submission. Well, it turns out it’s probably not all that important for us to hit 500 ng, since our attempts between 200 and 500 ng have worked perfectly fine. I imagine people in my lab will be simultaneously happy (knowing they don’t have to hit 500 ng) and sad (knowing they spent a bunch of extra effort in the past unnecessarily trying to reach that number) upon seeing the above data, but hey, it’s good to finally know this, and better late than never!

Flow cytometry compensation

So I tend to use fluorescent protein combinations that are not spectrally overlapping (eg. BFP, GFP, mCherry, miRFP670), so that circumvents the need for any compensation (at least on the flow cytometer configurations that we normally use). That being said, I apparently started using mScarlet-I in some of our vectors, and there is some bleedover into the green channel if it’s bright enough.

These cells only express mScarlet-I, and yet green signal is seen when red MFI values > 10^4… booo……

Well, that’s annoying, but the concept of compensation seems pretty straightforward to me, so I figured we could also do it post hoc if necessary. The idea is to take the value of red fluorescence, multiply it by a constant fraction (< 1) representing the amount of bleedover happening into the second channel, and then subtract this product from the original amount of green fluorescence to make the compensated green measurement. In other words: compensated green = raw green − (k × red), where k is the bleedover constant.

To actually work with the data, I exported the cell measurements shown in the FlowJo plot above as a csv and imported it into R. The formula above is very easy to execute, but how does one figure out the relevant constant to use for this particular type of bleedover? Well, I wrote a for-loop testing values from 0.0010 to 0.1, and checked whether the adjusted values now resulted in a straight horizontal line with ~zero slope (since then, regardless of red fluorescence, green fluorescence would be unchanged).

As that value becomes too large, more will be subtracted than should be, resulting in an inverse relationship between red and green fluorescence. To make it easier to find the best value, I took the absolute value of the resulting slope of the points, which pointed me to 0.0046 as the minimum, for mScarlet-I red fluorescence bleeding over into my green channel on this particular flow cytometer with these particular settings.
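For anyone wanting to reproduce this, here’s a minimal sketch of that parameter sweep. I did the original in R, but the same idea in Python looks like the below; the filename and column names are hypothetical (use whatever your FlowJo csv export calls the red and green channels):

import numpy as np
import pandas as pd

# mScarlet-I-only control cells exported from FlowJo as a csv (hypothetical filename).
df = pd.read_csv("mscarlet_only_control.csv")
red = df["red"].to_numpy()      # hypothetical column names
green = df["green"].to_numpy()

# Sweep candidate bleedover constants; the best one is whichever leaves
# compensated green flattest (slope closest to 0) as a function of red.
best_k, best_abs_slope = None, float("inf")
for k in np.arange(0.0010, 0.1000, 0.0002):
    compensated = green - k * red
    slope = np.polyfit(red, compensated, 1)[0]  # slope of the linear fit
    if abs(slope) < best_abs_slope:
        best_k, best_abs_slope = k, abs(slope)

print(best_k)  # landed at ~0.0046 on my control data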

Great, so what does the data actually look like once I compensate for this bleedover? Well, with this control data, here’s the before and after (on a random subset of 1000 datapoints):

Hurrah. Crisis averted. Assuming we now have a sample with both actual green and red fluorescence (previously confounded by the red-to-green bleedover from mScarlet-I), we can presumably now analyze that data in peace.

Just for fun, here are a couple of additional samples, before and after this compensation is performed.

First, here are cells that express both EGFP and mScarlet-I at high levels. You can see that the compensation does almost nothing. This makes sense, since the bleedover contributes such a small percentage of the total green signal (EGFP itself is contributing most of it) that removing that small portion is almost imperceptible.

Here’s a sample that’s a far better example. Here, there’s a bunch of mScarlet-I positive cells (as well as some intermediates), and a smattering of lightly EGFP-positive cells throughout. Aside from the shape of the mScarlet-I-positive, GFP-negative population changing from a 45-degree line to a circular cloud, the overall effects aren’t huge. Still, even that is useful, b/c if one didn’t look at this scatterplot (and know about the concept of bleedover and compensation), one might interpret that slight uptick in green fluorescence in that population as a real, biologically meaningful difference.

Workday accounting

I rather facetiously got a suggestion to keep track of how my workdays are spent, but it did prompt me to start keeping track, since I’ve gotten to the phase of my job where I’m feeling somewhat burdened by non-research responsibilities, and I like having data in hand. As I’ve noted on my other website, my workdays are now largely constrained by daycare hours. Thus, I have pretty limited hours in a day to get everything done, requiring a fair amount of prioritization; doing one thing often means not doing something else.

I’ll sporadically hit “run” on my analysis script and the below plot will update. The n values are currently pretty small, but I plan to keep doing this indefinitely.

Keys for the above plot:
Red dashes are mean values across all days. Gray dots are values for individual days.
“Research_internal” denotes activities that directly impact my research group (eg. meetings with personnel, data analysis, benchwork).
“Research_external” denotes research activities that don’t have to do with my group (eg. science-centric meetings with other faculty, emails to people requesting reagents).
“Administrative_internal” denotes general paperwork (eg. filling out my annual performance reviews).
“Seminar_director” denotes work related to running the immunology portion of the Dept seminar series (eg. more emails…).
“Postdoc_affairs” denotes work related to trying to manage postdoc affairs for the dept (and in some ways, by extension, the SOM).
“Other_service” denotes other service activities for the school (eg. corresponding with CWRU undergrads not in my group).

But I can break down some of these activities further. For example, the “Research_internal” section, where I’m handling things directly related to my research lab, can be further broken down as follows:

Most of the categories here are self-explanatory. “DNA_construct_stuff” is planning out primers or checking plasmid-associated sequencing reads. “Labwork” is mostly tissue culture, since I think that’s where my direct efforts are most valuable (in contrast to using a DNA extraction kit, for example). “Literature” is either doing literature searches or reading papers.

And, well, since so much time seems to be spent writing emails nowadays, here’s how much time I spend writing emails each day (note: I do all internal communication with lab members via Slack, so this is mostly administrative matters):

Codon cheat sheet

Like many people, I have an amino acid / codon cheat sheet posted near my desk that I can look at whenever I need to quickly design a missense mutation into a construct, or get a sense of the relative size difference between two different amino acid side chains. Well, I recently scribbled all over the one I’d had hanging by my desk since I started, so I had to replace it. But I took a little time to customize it with the information I’d find most useful (eg. reminding me which amino acids are encoded by 6 codons instead of the usual 2 or 4, and which is the most frequent codon per amino acid that isn’t ridiculously GC-rich). It’s meant for double-sided printing.