COVID-19 PubSeq Uploading Data (part 3)
1 Introduction
In this document we explain how to upload data into COVID-19 PubSeq. This can be done through a web page or through a command line script. We'll also show how to parameterize uploads using templates. The procedure is much easier than with other repositories and can be fully automated. Once uploaded, you can use our export API to prepare submissions for other repositories.
2 Uploading data
The COVID-19 PubSeq allows you to upload your SARS-CoV-2 strains to a public resource for global comparisons. A recompute of the pangenome is triggered on upload. Read the ABOUT page for more information.
3 Step 1: Upload sequence
To upload a sequence on the web upload page, hit the browse button and select the FASTA file on your local hard disk.
We start with an assembled or mapped sequence in FASTA format. The PubSeq uploader contains a QC step which checks whether it is a likely SARS-CoV-2 sequence. While PubSeq deduplicates sequences and never overwrites metadata, you may still want to check whether your data is already in the system, either by querying some metadata as described in Query metadata with SPARQL or by simply downloading and checking one of the files on the download page. We find that GenBank MT536190.1 has not been included yet. A FASTA text file can be downloaded to your local disk and uploaded through our web upload page. Make sure the file does not include any HTML!
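If you prefer to script this step, the FASTA for a GenBank accession can be fetched with NCBI's E-utilities. A minimal sketch in Python; the accession and output file name are just this example's:

# Fetch the FASTA for a GenBank accession via NCBI E-utilities (efetch).
# Minimal sketch; accession and output file name are examples only.
import urllib.request
import urllib.parse

accession = "MT536190.1"
params = urllib.parse.urlencode({
    "db": "nuccore",
    "id": accession,
    "rettype": "fasta",
    "retmode": "text",
})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?" + params
with urllib.request.urlopen(url) as response:
    fasta = response.read().decode()

# Write a plain FASTA text file ready for the web upload page.
with open(accession + ".fasta", "w") as f:
    f.write(fasta)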
Note: we currently only allow FASTA uploads. In the near future we'll allow for uploading raw sequence files. This is important for creating an improved pangenome.
4 Step 2: Add metadata
The web upload page contains fields for adding metadata. Metadata is not only important for attribution, it is also important for analysis. The metadata is available for queries, see Query metadata with SPARQL, and can be used to annotate variations of the virus in different ways.
A number of fields are obligatory: sample id, date, location, technology and authors. The others are optional, but it is valuable to enter them when information is available. Metadata is defined in this schema. From this schema we generate the input form. Note that optional fields have a question mark in their type. You can add metadata yourself, by the way, because this is a public resource! See also Modify metadata for more information.
To get more information about a field click on the question mark on the web form. Here we add some extra information.
4.1 Obligatory fields
4.1.1 Sample ID (sample_id)
This is a string field that defines a unique sample identifier chosen by the submitter. In addition to sample_id we also have host_id, provider and submitter_sample_id, where host_id identifies the host the sample came from, provider is the institution's sample id and submitter_sample_id is the submitting individual's sample id. host_id is important when multiple sequences come from the same host. Make sure not to have spaces in the sample_id.
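For scripted submissions a quick check before upload can catch stray whitespace. This is a hypothetical helper for illustration, not part of the uploader:

# Hypothetical helper: reject sample ids containing whitespace before submission.
import re

def check_sample_id(sample_id: str) -> str:
    if re.search(r"\s", sample_id):
        raise ValueError(f"sample_id {sample_id!r} must not contain spaces")
    return sample_id

check_sample_id("MT536190.1")  # passes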
Here we add the GenBank ID MT536190.1.
4.1.2 Collection date
Estimated collection date. The GenBank page says April 6, 2020.
4.1.3 Collection location
A search on Wikidata says Los Angeles is https://www.wikidata.org/entity/Q65
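If you prepare metadata in a script, the Wikidata search API can resolve a place name to its entity id. A sketch; the search term is this example's, and note the uploader expects the http://www.wikidata.org/entity/Q… form (see Troubleshooting below):

# Look up a place on Wikidata and build the entity URI expected by PubSeq.
import json
import urllib.request
import urllib.parse

params = urllib.parse.urlencode({
    "action": "wbsearchentities",
    "search": "Los Angeles",
    "language": "en",
    "format": "json",
})
url = "https://www.wikidata.org/w/api.php?" + params
with urllib.request.urlopen(url) as response:
    hits = json.load(response)["search"]

# Take the first hit (Q65 for Los Angeles) and use the http entity form.
collection_location = "http://www.wikidata.org/entity/" + hits[0]["id"]
print(collection_location)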
4.1.4 Sequencing technology
The GenBank entry says Illumina, so we can fill that in.
4.1.5 Authors
GenBank entry says 'Lamers,S., Nolan,D.J., Rose,R., Cross,S., Moraga Amador,D., Yang,T., Caruso,L., Navia,W., Von Borstel,L., Hui Zhou,X., Freehan,A. and Garcia-Diaz,J.', so we can fill that in.
4.2 Optional fields
All other fields are optional. But let's see what we can add.
4.2.1 Host information
Sadly, not much is known about the host from GenBank. A little sleuthing turns up an interesting paper by some of the authors, titled SARS-CoV-2 is consistent across multiple samples and methodologies, which dates after the sample but has no reference other than that the raw data came from the SRA database, so it probably does not describe this particular sample. We don't know what this strain of SARS-CoV-2 did to the person, nor what the person was like (say, age group).
4.2.2 Collecting institution
We can fill that in.
4.2.3 Specimen source
We have that: nasopharyngeal swab
4.2.4 Source database accession
GenBank, which is http://identifiers.org/insdc/MT536190.1#sequence. Note we plug in our own identifier MT536190.1.
4.2.5 Strain name
SARS-CoV-2/human/USA/LA-BIE-070/2020
5 Step 3: Submit to COVID-19 PubSeq
Once you have the sequence and the metadata together, hit the 'Add to Pangenome' button. The data will be checked, submitted and the workflows should kick in!
5.1 Trouble shooting
We got an error saying: {"stem": "http://www.wikidata.org/entity/",… which means that our location field was not formed correctly! After fixing it to look like http://www.wikidata.org/entity/Q65 (note http instead of https and entity instead of wiki) the submission went through. Reload the page (it won't empty the fields) to re-enable the submit button.
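A small, purely illustrative sketch of the kind of normalization that avoids this error:

# Illustrative only: turn a browser-style Wikidata URL into the entity form
# the uploader expects (http scheme, /entity/ path).
def normalize_wikidata(url: str) -> str:
    qid = url.rstrip("/").split("/")[-1]  # e.g. "Q65"
    return "http://www.wikidata.org/entity/" + qid

print(normalize_wikidata("https://www.wikidata.org/wiki/Q65"))
# http://www.wikidata.org/entity/Q65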
6 Step 4: Check output
The current pipeline takes 5.5 hours to complete! Once it completes, the updated data can be checked on the DOWNLOAD page. After completion, this SPARQL query shows some of the metadata we put in.
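For example, a quick check from Python might look like the sketch below. The SPARQL endpoint URL here is an assumption; use the endpoint given in Query metadata with SPARQL:

# Sketch: ask the SPARQL endpoint whether any metadata mentions our accession.
# The endpoint URL is an assumption; see the Query metadata with SPARQL docs.
import json
import urllib.request
import urllib.parse

endpoint = "http://sparql.genenetwork.org/sparql/"  # assumed PubSeq endpoint
query = """
SELECT ?subject ?predicate ?object WHERE {
  ?subject ?predicate ?object .
  FILTER(CONTAINS(STR(?object), "MT536190"))
} LIMIT 10
"""
request = urllib.request.Request(
    endpoint + "?" + urllib.parse.urlencode({"query": query}),
    headers={"Accept": "application/sparql-results+json"},
)
with urllib.request.urlopen(request) as response:
    rows = json.load(response)["results"]["bindings"]

for row in rows:
    print(row["subject"]["value"], row["predicate"]["value"], row["object"]["value"])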
7 Bulk sequence uploader
The above steps require a manual upload of one sequence with metadata. What if you have a number of sequences you want to upload in bulk? For this we have a command line version of the uploader that can submit directly to COVID-19 PubSeq. It accepts a FASTA sequence file and associated metadata in YAML format. The YAML matches the web form and is validated against the same schema. You need to create or generate such a YAML file for each of your samples.
A minimal example of metadata looks like
id: placeholder

license:
    license_type: http://creativecommons.org/licenses/by/4.0/

host:
    host_species: http://purl.obolibrary.org/obo/NCBITaxon_9606

sample:
    sample_id: XX
    collection_date: "2020-01-01"
    collection_location: http://www.wikidata.org/entity/Q148

virus:
    virus_species: http://purl.obolibrary.org/obo/NCBITaxon_2697049

technology:
    sample_sequencing_technology: [http://www.ebi.ac.uk/efo/EFO_0008632]

submitter:
    authors: [John Doe]
A more elaborate example (note most fields are optional) may look like
id: placeholder

host:
    host_id: XX1
    host_species: http://purl.obolibrary.org/obo/NCBITaxon_9606
    host_sex: http://purl.obolibrary.org/obo/PATO_0000384
    host_age: 20
    host_age_unit: http://purl.obolibrary.org/obo/UO_0000036
    host_health_status: http://purl.obolibrary.org/obo/NCIT_C25269
    host_treatment: Process in which the act is intended to modify or alter host status (Compounds)
    host_vaccination: [vaccines1,vaccine2]
    ethnicity: http://purl.obolibrary.org/obo/HANCESTRO_0010
    additional_host_information: Optional free text field for additional information

sample:
    sample_id: Id of the sample as defined by the submitter
    collector_name: Name of the person that took the sample
    collecting_institution: Institute that was responsible of sampling
    specimen_source: [http://purl.obolibrary.org/obo/NCIT_C155831,http://purl.obolibrary.org/obo/NCIT_C155835]
    collection_date: "2020-01-01"
    collection_location: http://www.wikidata.org/entity/Q148
    sample_storage_conditions: frozen specimen
    source_database_accession: [http://identifiers.org/insdc/LC522350.1#sequence]
    additional_collection_information: Optional free text field for additional information

virus:
    virus_species: http://purl.obolibrary.org/obo/NCBITaxon_2697049
    virus_strain: SARS-CoV-2/human/CHN/HS_8/2020

technology:
    sample_sequencing_technology: [http://www.ebi.ac.uk/efo/EFO_0009173,http://www.ebi.ac.uk/efo/EFO_0009173]
    alignment_protocol: Protocol used for assembly
    sequencing_coverage: [70.0, 100.0]
    additional_technology_information: Optional free text field for additional information

submitter:
    authors: [John Doe, Joe Boe, Jonny Oe]
    submitter_name: [John Doe]
    submitter_address: John Doe's address
    originating_lab: John Doe kitchen
    lab_address: John Doe's address
    provider: XXX1
    submitter_sample_id: XXX2
    publication: PMID00001113
    submitter_orcid: [https://orcid.org/0000-0000-0000-0000,https://orcid.org/0000-0000-0000-0001]
    additional_submitter_information: Optional free text field for additional information
More metadata is yummy. Yummydata is useful to a wider community. Note that many of the terms in the above example are URIs, such as host_species: http://purl.obolibrary.org/obo/NCBITaxon_9606. We use web ontologies for these to make the data less ambiguous and more FAIR. Check out the optional fields as defined in the schema. If a term is not listed, a little bit of web searching may be required, or contact us.
7.1 Run the uploader (CLI)
After installing with pip you should be able to run
bh20sequploader metadata.yaml sequence.fasta
Alternatively, the script can be installed from GitHub. Run on the command line
python3 bh20sequploader/main.py example/maximum_metadata_example.yaml example/sequence.fasta
after installing dependencies (also described in INSTALL with the GNU Guix package manager). The --help shows
Entering sequence uploader

usage: main.py [-h] [--validate] [--skip-qc] [--trusted] metadata sequence_p1 [sequence_p2]

Upload SARS-CoV-19 sequences for analysis

positional arguments:
  metadata     sequence metadata json
  sequence_p1  sequence FASTA/FASTQ
  sequence_p2  sequence FASTQ pair

optional arguments:
  -h, --help   show this help message and exit
  --validate   Dry run, validate only
  --skip-qc    Skip local qc check
  --trusted    Trust local validation and add directly to validated project
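For example, a dry-run validation of a metadata/sequence pair might look like the sketch below; it assumes the uploader is installed on your PATH as bh20-seq-uploader:

# Sketch: validate a metadata/sequence pair without uploading (--validate is a
# dry run according to the help above). Assumes the CLI is installed on PATH.
import subprocess

result = subprocess.run(
    ["bh20-seq-uploader", "--validate", "metadata.yaml", "sequence.fasta"],
    capture_output=True,
    text=True,
)
print(result.stdout)
if result.returncode != 0:
    print("validation failed:", result.stderr)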
The web interface uses this exact same script, so it should just work (TM).
7.2 Example: uploading bulk GenBank sequences
We also use the above script to bulk upload GenBank sequences, with a FASTA and YAML extractor specific to GenBank. This means that the steps we took above for uploading a GenBank sequence are already automated.
The steps are: from the bh20-seq-resource/scripts/download_genbank_data/ directory, use the from_genbank_to_fasta_and_yaml.py script:
python3 from_genbank_to_fasta_and_yaml.py

dir_fasta_and_yaml=~/bh20-seq-resource/scripts/download_genbank_data/fasta_and_yaml

ls $dir_fasta_and_yaml/*.yaml | while read path_code_yaml; do
  path_code_fasta=${path_code_yaml%.*}.fasta
  bh20-seq-uploader --skip-qc $path_code_yaml $path_code_fasta
done
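If you prefer to drive the same loop from Python, here is a sketch; it assumes each .yaml has a matching .fasta next to it and that bh20-seq-uploader is on your PATH:

# Sketch of the same bulk loop in Python: pair every YAML with its FASTA and
# hand both to the CLI uploader. Assumes bh20-seq-uploader is on PATH.
import pathlib
import subprocess

dir_fasta_and_yaml = pathlib.Path.home() / "bh20-seq-resource/scripts/download_genbank_data/fasta_and_yaml"

for path_yaml in sorted(dir_fasta_and_yaml.glob("*.yaml")):
    path_fasta = path_yaml.with_suffix(".fasta")
    subprocess.run(
        ["bh20-seq-uploader", "--skip-qc", str(path_yaml), str(path_fasta)],
        check=True,
    )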
7.3 Example: preparing metadata
Usually, metadata are available in a tabular format such as spreadsheets. As an example, we provide a script, esr_samples.py, to show you how to convert your metadata into YAML files ready for upload. To execute the script, go to the ~/bh20-seq-resource/scripts/esr_samples directory and execute
python3 esr_samples.py
You will find the YAML files in the `yaml` folder which will be created in the same directory.
In the example we use Python pandas to read the spreadsheet into a tabular structure. Next we use a template.yaml file that gets filled in by esr_samples.py, so we get a metadata YAML file for each sample.
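A minimal sketch of that pattern; the spreadsheet column names and template placeholders here are made up for illustration, and esr_samples.py is the real example to follow:

# Sketch of the spreadsheet-to-YAML pattern used by esr_samples.py.
# Column names and template placeholders are illustrative only.
import pathlib
import pandas as pd

samples = pd.read_excel("samples.xlsx")          # one row per sample
template = pathlib.Path("template.yaml").read_text()

outdir = pathlib.Path("yaml")
outdir.mkdir(exist_ok=True)

for _, row in samples.iterrows():
    yaml_text = (template
                 .replace("SAMPLE_ID", str(row["sample_id"]))
                 .replace("COLLECTION_DATE", str(row["collection_date"]))
                 .replace("COLLECTION_LOCATION", str(row["location_uri"])))
    (outdir / f"{row['sample_id']}.yaml").write_text(yaml_text)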
Next run the earlier CLI uploader for each YAML and FASTA combination. It can't be much easier than this. For ESR we uploaded a batch of 600 sequences this way. See example.