From fbbec51e604964d18ab72cbf0ac24b102ecc0376 Mon Sep 17 00:00:00 2001 From: Pjotr Prins Date: Fri, 6 Nov 2020 07:45:10 +0000 Subject: Working on upload --- doc/INSTALL.md | 5 + doc/blog/using-covid-19-pubseq-part3.html | 261 +++++++++++++++++++----------- doc/blog/using-covid-19-pubseq-part3.org | 161 +++++++++++------- 3 files changed, 277 insertions(+), 150 deletions(-) diff --git a/doc/INSTALL.md b/doc/INSTALL.md index 0180a4b..96cf1d4 100644 --- a/doc/INSTALL.md +++ b/doc/INSTALL.md @@ -68,6 +68,11 @@ penguin2:~/iwrk/opensource/code/vg/bh20-seq-resource$ env GUIX_PACKAGE_PATH=~/i Note: see above on GUIX_PACKAGE_PATH. +## Run the tests + + guix package -i python-requests python-pandas python-jinja2 python -p ~/opt/python-dev + . ~/opt/python-dev/etc/profile + ## Run Virtuoso-ose diff --git a/doc/blog/using-covid-19-pubseq-part3.html b/doc/blog/using-covid-19-pubseq-part3.html index 788c1d2..b49830b 100644 --- a/doc/blog/using-covid-19-pubseq-part3.html +++ b/doc/blog/using-covid-19-pubseq-part3.html @@ -3,7 +3,7 @@ "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
- ++In this document we explain how to upload data into COVID-19 PubSeq. +This can happen through a web page, or through a command line +script. We'll also show how to parametrize uploads by using templates. +The procedure is much easier than with other repositories and can be +fully automated. Once uploaded you can use our export API to prepare +for other repositories. +
+The COVID-19 PubSeq allows you to upload your SARS-Cov-2 strains to a public resource for global comparisons. A recompute of the pangenome @@ -278,9 +292,9 @@ gets triggered on upload. Read the ABOUT page for more inf
To upload a sequence in the web upload page, hit the browse button and select the FASTA file on your local hard disk. @@ -307,9 +321,9 @@ an improved pangenome.
The web upload page contains fields for adding metadata. Metadata is not only important for attribution, it is also important for @@ -334,13 +348,13 @@ the web form. Here we add some extra information.
This is a string field that defines a unique sample identifier by the submitter. In addition to sample_id we also have host_id, @@ -357,18 +371,18 @@ Here we add the GenBank ID MT536190.1.
Estimated collection date. The GenBank page says April 6, 2020.
A search on wikidata says Los Angeles is https://www.wikidata.org/entity/Q65 @@ -376,18 +390,18 @@ A search on wikidata says Los Angeles is
GenBank entry says Illumina, so we can fill that in
GenBank entry says 'Lamers,S., Nolan,D.J., Rose,R., Cross,S., Moraga Amador,D., Yang,T., Caruso,L., Navia,W., Von Borstel,L., Hui Zhou,X., @@ -397,17 +411,17 @@ Freehan,A. and Garcia-Diaz,J.', so we can fill that in.
All other fields are optional. But let's see what we can add.
Sadly, not much is known about the host from GenBank. A little sleuthing turns up an interesting paper by some of the authors titled @@ -420,27 +434,27 @@ did to the person and what the person was like (say age group).
We can fill that in.
We have that: nasopharyngeal swab
GenBank which is http://identifiers.org/insdc/MT536190.1#sequence. Note we plug in our own identifier MT536190.1. @@ -448,9 +462,9 @@
SARS-CoV-2/human/USA/LA-BIE-070/2020
@@ -459,9 +473,9 @@ SARS-CoV-2/human/USA/LA-BIE-070/2020Once you have the sequence and the metadata together, hit the 'Add to Pangenome' button. The data will be checked, @@ -470,9 +484,9 @@ submitted and the workflows should kick in!
We got an error saying: {"stem": "http://www.wikidata.org/entity/",… which means that our location field was not formed correctly! After @@ -485,9 +499,9 @@ submit button.
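The pitfall above is easy to catch before hitting submit. A minimal client-side check (a sketch only; the server-side schema validation remains authoritative) just verifies that the location is a full Wikidata entity URI rather than a free-text place name:

```python
# Guard against the malformed-location error described above:
# collection_location must be a full Wikidata entity URI, not a
# plain place name. Sketch only; the server still validates properly.
WIKIDATA_PREFIX = "http://www.wikidata.org/entity/"

def valid_location(value):
    """True when value looks like a Wikidata entity URI such as .../Q65."""
    return isinstance(value, str) and value.startswith(WIKIDATA_PREFIX)
```

Running this over your metadata before submission avoids a round trip to the server for the most common mistake.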
The current pipeline takes 5.5 hours to complete! Once it completes the updated data can be checked on the DOWNLOAD page. After completion @@ -497,9 +511,9 @@ in.
The above steps require a manual upload of one sequence with its metadata.
What if you have a number of sequences you want to upload in bulk?
@@ -510,6 +524,39 @@ the web form and gets validated from the same
+id: placeholder
+
+license:
+ license_type: http://creativecommons.org/licenses/by/4.0/
+
+host:
+ host_species: http://purl.obolibrary.org/obo/NCBITaxon_9606
+
+sample:
+ sample_id: XX
+ collection_date: "2020-01-01"
+ collection_location: http://www.wikidata.org/entity/Q148
+
+virus:
+ virus_species: http://purl.obolibrary.org/obo/NCBITaxon_2697049
+
+technology:
+ sample_sequencing_technology: [http://www.ebi.ac.uk/efo/EFO_0008632]
+
+submitter:
+ authors: [John Doe]
+
+
+A more elaborate example (note most fields are optional) may look like
+id: placeholder @@ -559,11 +606,20 @@ submitter: additional_submitter_information: Optional free text field for additional information
+More metadata is yummy. Yummy data is useful to a wider community. Note +that many of the terms in the above example are URIs, such as +host_species: http://purl.obolibrary.org/obo/NCBITaxon_9606. We use +web ontologies for these to make the data less ambiguous and more +FAIR. Check out the optional fields as defined in the schema. If a term is not listed, +a little bit of web searching may be required, or contact us. +
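When preparing YAML files in bulk it helps to check each one against the required fields before uploading. The field list below is inferred from the minimal example above, not taken from the actual schema, so treat this as a sketch:

```python
# Pre-flight check for a PubSeq metadata document (already parsed into a
# dict). The REQUIRED table is an assumption based on the minimal
# template shown above; the real schema is the source of truth.
REQUIRED = {
    "license": ["license_type"],
    "host": ["host_species"],
    "sample": ["sample_id", "collection_date", "collection_location"],
    "virus": ["virus_species"],
    "technology": ["sample_sequencing_technology"],
    "submitter": ["authors"],
}

def missing_fields(metadata):
    """Return dotted paths of required fields absent from the metadata."""
    missing = []
    for section, keys in REQUIRED.items():
        block = metadata.get(section) or {}
        for key in keys:
            if key not in block:
                missing.append(f"{section}.{key}")
    return missing
```

An empty result means the document at least has the required shape; the server still performs the full schema validation on upload.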
After installing with pip you should be able to run @@ -574,7 +630,6 @@ bh20sequploader sequence.fasta metadata.yaml -
Alternatively, the script can be installed from GitHub. Run on the command line @@ -617,9 +672,9 @@ The web interface uses this exact same script, so it should just work
We also use the above script to bulk upload GenBank sequences with a FASTA
and YAML extractor specific to GenBank. This means that the steps we
@@ -645,14 +700,15 @@ ls $dir_fasta_and_yaml/*.yaml |
-
-Usually, metadata are available in tabular format, like spreadsheets. As an example, we provide a script
-esr_samples.py to show you how to parse
-your metadata in YAML files ready for the upload. To execute the script, go in the ~bh20-seq-resource/scripts/esr_samples
-and execute
+Usually, metadata are available in a tabular format, such as
+spreadsheets. As an example, we provide a script esr_samples.py to
+show you how to parse your metadata into YAML files ready for
+upload. To execute the script, go into the
+~bh20-seq-resource/scripts/esr_samples directory and execute
-You will find the YAML files in the `yaml` folder which will be created in the same directory.
+You will find the YAML files in the `yaml` folder which will be
+created in the same directory.
+
+In the example we use Python pandas to read the spreadsheet into a
+tabular structure. Next we use a template.yaml file that gets filled
+in by
+Next run the earlier CLI uploader for each YAML and FASTA combination.
+It can't be much easier than this. For ESR we uploaded a batch of 600
+sequences this way. See example.
6.3 Example: preparing metadata
-7.3 Example: preparing metadata
+esr_samples.py
so we get a metadata YAML file for each sample.
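The template-filling step has the following general shape. The real esr_samples.py uses pandas and a Jinja2 template.yaml; this standard-library sketch with made-up column names only shows the transformation from one table row to one YAML document:

```python
# Shape of the spreadsheet-to-YAML transformation done by a script like
# esr_samples.py. The real script uses pandas and a Jinja2 template;
# here we use only the standard library, with hypothetical column names.
import csv
import io
from string import Template

YAML_TEMPLATE = Template("""\
id: placeholder
sample:
  sample_id: $sample_id
  collection_date: "$collection_date"
  collection_location: $collection_location
""")

def rows_to_yaml(csv_text):
    """Return {sample_id: yaml_text}, one YAML document per table row."""
    out = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        out[row["sample_id"]] = YAML_TEMPLATE.substitute(row)
    return out
```

Each generated document can then be written to the yaml folder and paired with its FASTA file for the CLI uploader.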
+
Created by Pjotr Prins (pjotr.public768 at thebird 'dot' nl) using Emacs org-mode and a healthy dose of Lisp!
Modified 2020-10-27 Tue 06:43.
+
Created by Pjotr Prins (pjotr.public768 at thebird 'dot' nl) using Emacs org-mode and a healthy dose of Lisp!
Modified 2020-11-05 Thu 07:27.