From 264be797c55aaff6eb9639d5a15d9081e2256253 Mon Sep 17 00:00:00 2001 From: Pjotr Prins Date: Sat, 30 May 2020 18:13:48 -0500 Subject: BLOG --- doc/blog/using-covid-19-pubseq-part2.html | 394 ++++++++++++++++++++++++++++++ doc/blog/using-covid-19-pubseq-part3.html | 296 +++++++++++++++------- doc/blog/using-covid-19-pubseq-part3.org | 116 +++++++-- doc/blog/using-covid-19-pubseq-part4.html | 266 ++++++++++++++++++++ doc/blog/using-covid-19-pubseq-part4.org | 3 +- doc/blog/using-covid-19-pubseq-part5.html | 277 +++++++++++++++++++++ doc/blog/using-covid-19-pubseq-part5.org | 17 +- 7 files changed, 1260 insertions(+), 109 deletions(-) create mode 100644 doc/blog/using-covid-19-pubseq-part2.html create mode 100644 doc/blog/using-covid-19-pubseq-part4.html create mode 100644 doc/blog/using-covid-19-pubseq-part5.html (limited to 'doc') diff --git a/doc/blog/using-covid-19-pubseq-part2.html b/doc/blog/using-covid-19-pubseq-part2.html new file mode 100644 index 0000000..c047441 --- /dev/null +++ b/doc/blog/using-covid-19-pubseq-part2.html @@ -0,0 +1,394 @@ + + + + + + + +COVID-19 PubSeq (part 2) + + + + + + + +
+ UP + | + HOME +
+

COVID-19 PubSeq (part 2)

+
+

Table of Contents

+ +
+

+As part of the COVID-19 Biohackathon 2020 we formed a working group to
+create a COVID-19 Public Sequence Resource (COVID-19 PubSeq) for
+coronavirus sequences. The general idea is to create a repository
+that has a low barrier to entry for uploading sequence data using best
+practices. That is, data is published with a Creative Commons 4.0 (CC-4.0)
+license, with metadata using state-of-the-art standards and, perhaps
+most importantly, with standardised workflows that get triggered
+on upload, so that results are immediately available in standardised
+data formats.

+ +
+

1 Finding output of workflows

+
+

+As part of the COVID-19 Biohackathon 2020 we formed a working group to
+create a COVID-19 Public Sequence Resource (COVID-19 PubSeq) for
+coronavirus sequences. The general idea is to create a repository
+that has a low barrier to entry for uploading sequence data using best
+practices. That is, data is published with a Creative Commons 4.0 (CC-4.0)
+license, with metadata using state-of-the-art standards and, perhaps
+most importantly, with standardised workflows that get triggered
+on upload, so that results are immediately available in standardised
+data formats.

+
+
+ +
+

2 Introduction

+
+

+We are using Arvados to run Common Workflow Language (CWL) pipelines.
+The most recent output is on display on a web page (with a time stamp)
+and a full list is generated here. This is a nice start, but for
+most users we need a dedicated and themed results page. People don't
+want to wade through thousands of output files!

+
+
+ +
+

3 The Arvados file interface

+
+

+Arvados has a web server, but it also has a REST API and associated
+command line tools. We are already using the API to upload data. If
+you follow the pip or ../INSTALL.md GNU Guix instructions for
+installing the Arvados API you'll find the following command line tools
+(also documented here):

+ + + + +++ ++ + + + + + + + + + + + + + + + + + + + + + + +
CommandDescription
arv-lslist files in Arvados
arv-putupload a file to Arvados
arv-getget a textual representation of Arvados objects from the command line. The output can be limited to a subset of the object’s fields. This command can be used with only the knowledge of an object’s UUID
+ +

+Now, this is a public instance so we can use the tokens from +the uploader. +

+ +
+

+export ARVADOS_API_HOST='lugli.arvadosapi.com'
+export ARVADOS_API_TOKEN='2fbebpmbo3rw3x05ueu2i6nx70zhrsb1p22ycu3ry34m4x4462'
+arv-ls lugli-4zz18-z513nlpqm03hpca

+ +
+ +

+will list all files in the collection (we got the UUID from the Arvados
+results page). To get the UUIDs of the files

+ +
+

+curl https://lugli.arvadosapi.com/arvados/v1/config | jq .Users.AnonymousUserToken
+env ARVADOS_API_TOKEN=5o42qdxpxp5cj15jqjf7vnxx5xduhm4ret703suuoa3ivfglfh \
+  arv-get lugli-4zz18-z513nlpqm03hpca

+ +
+ +

+and fetch one of the listed JSON files, chunk001_bin4000.schematic.json,
+using its listed locator:

+ +
+arv-get 2be6af7b4741f2a5c5f8ff2bc6152d73+1955623+Ab9ad65d7fe958a053b3a57d545839de18290843a@5ed7f3c5
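The same file interface can also be scripted. Below is a minimal sketch
using the Arvados Python SDK (the same arvados package installed above);
it is illustrative only and assumes the anonymous token is exported as
ARVADOS_API_TOKEN, as in the earlier shell example, and reuses the
collection UUID we listed with arv-ls.

import os
import arvados

# Connect to the public Lugli instance; the token is read from the
# environment variable we exported earlier (ARVADOS_API_TOKEN).
api = arvados.api('v1',
                  host='lugli.arvadosapi.com',
                  token=os.environ['ARVADOS_API_TOKEN'])

# Fetch the record of the output collection we listed with arv-ls
col = api.collections().get(uuid='lugli-4zz18-z513nlpqm03hpca').execute()
print(col['name'], col['portable_data_hash'])

# The file listing itself lives in the collection manifest
reader = arvados.collection.CollectionReader('lugli-4zz18-z513nlpqm03hpca',
                                             api_client=api)
print(reader.manifest_text())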
+
+
+
+ +
+

4 Using the Arvados API

+
+
+
+
Created by Pjotr Prins (pjotr.public768 at thebird 'dot' nl) using Emacs org-mode and a healthy dose of Lisp!
Modified 2020-05-30 Sat 11:50
. +
+ + diff --git a/doc/blog/using-covid-19-pubseq-part3.html b/doc/blog/using-covid-19-pubseq-part3.html index 4132784..91879b0 100644 --- a/doc/blog/using-covid-19-pubseq-part3.html +++ b/doc/blog/using-covid-19-pubseq-part3.html @@ -3,7 +3,7 @@ "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> - + COVID-19 PubSeq Uploading Data (part 3) @@ -248,64 +248,62 @@ for the JavaScript code in this tag.

Table of Contents

-
-

1 Uploading Data

-
-

-Work in progress! -

-
-
-
-

2 Introduction

-
+ +
+

1 Uploading Data

+

The COVID-19 PubSeq allows you to upload your SARS-Cov-2 strains to a -public resource for global comparisons. Compute it triggered on -upload. Read the ABOUT page for more information. +public resource for global comparisons. A recompute of the pangenome +gets triggered on upload. Read the ABOUT page for more information.

-
-

3 Step 1: Upload sequence

-
+
+

2 Step 1: Upload sequence

+

To upload a sequence in the web upload page hit the browse button and select the FASTA file on your local hard disk. @@ -332,9 +330,9 @@ an improved pangenome.

-
-

4 Step 2: Add metadata

-
+
+

3 Step 2: Add metadata

+

The web upload page contains fields for adding metadata. Metadata is not only important for attribution, is also important for @@ -348,7 +346,7 @@ A number of fields are obligatory: sample id, date, location, technology and authors. The others are optional, but it is valuable to enter them when information is available. Metadata is defined in this schema. From this schema we generate the input form. Note that -opitional fields have a question mark in the type. You can add +optional fields have a question mark in the type. You can add metadata yourself, btw, because this is a public resource! See also Modify metadata for more information.

@@ -359,13 +357,13 @@ the web form. Here we add some extra information.

-
-

4.1 Obligatory fields

-
+
+

3.1 Obligatory fields

+
-
-

4.1.1 Sample ID (sampleid)

-
+
+

3.1.1 Sample ID (sampleid)

+

This is a string field that defines a unique sample identifier by the submitter. In addition to sampleid we also have hostid, @@ -382,37 +380,37 @@ Here we add the GenBank ID MT536190.1.

-
-

4.1.2 Collection date

-
+
+

3.1.2 Collection date

+

Estimated collection date. The GenBank page says April 6, 2020.

-
-

4.1.3 Collection location

-
+
+

3.1.3 Collection location

+

-A search on wikidata says Los Angelos is +A search on wikidata says Los Angeles is https://www.wikidata.org/entity/Q65

-
-

4.1.4 Sequencing technology

-
+
+

3.1.4 Sequencing technology

+

GenBank entry says Illumina, so we can fill that in

-
-

4.1.5 Authors

-
+
+

3.1.5 Authors

+

GenBank entry says 'Lamers,S., Nolan,D.J., Rose,R., Cross,S., Moraga Amador,D., Yang,T., Caruso,L., Navia,W., Von Borstel,L., Hui Zhou,X., @@ -422,17 +420,17 @@ Freehan,A. and Garcia-Diaz,J.', so we can fill that in.

-
-

4.2 Optional fields

-
+
+

3.2 Optional fields

+

All other fields are optional. But let's see what we can add.

-
-

4.2.1 Host information

-
+
+

3.2.1 Host information

+

Sadly, not much is known about the host from GenBank. A little sleuthing renders an interesting paper by some of the authors titled @@ -445,27 +443,27 @@ did to the person and what the person was like (say age group).

-
-

4.2.2 Collecting institution

-
+
+

3.2.2 Collecting institution

+

We can fill that in.

-
-

4.2.3 Specimen source

-
+
+

3.2.3 Specimen source

+

We have that: nasopharyngeal swab

-
-

4.2.4 Source database accession

-
+
+

3.2.4 Source database accession

+

Genbank which is http://identifiers.org/insdc/MT536190.1#sequence. Note we plug in our own identifier MT536190.1. @@ -473,9 +471,9 @@ Note we plug in our own identifier MT536190.1.

-
-

4.2.5 Strain name

-
+
+

3.2.5 Strain name

+

SARS-CoV-2/human/USA/LA-BIE-070/2020

@@ -484,20 +482,36 @@ SARS-CoV-2/human/USA/LA-BIE-070/2020
-
-

5 Step 3: Submit to COVID-19 PubSeq

-
+
+

4 Step 3: Submit to COVID-19 PubSeq

+

Once you have the sequence and the metadata together, hit the 'Add to Pangenome' button. The data will be checked, submitted and the workflows should kick in!

+ + +
+

4.1 Trouble shooting

+
+

+We got an error saying: {"stem": "http://www.wikidata.org/entity/",…
+which means that our location field was not formed correctly! After
+fixing it to look like http://www.wikidata.org/entity/Q65 (note http
+instead of https and entity instead of wiki) the submission went
+through. Reload the page (it won't empty the fields) to re-enable the
+submit button.

+
+
-
-

6 Step 4: Check output

-
+ +
+

5 Step 4: Check output

+

The current pipeline takes 5.5 hours to complete! Once it completes the updated data can be checked on the DOWNLOAD page. After completion @@ -505,24 +519,122 @@ of above output this +

6 Bulk sequence uploader

+
+

+The above steps require a manual upload of one sequence with metadata.
+What if you have a number of sequences you want to upload in bulk?
+For this we have a command line version of the uploader that can
+directly submit to COVID-19 PubSeq. It accepts a FASTA sequence
+file and associated metadata in YAML format. The YAML matches
+the web form and gets validated against the same schema. The YAML
+that you need to create/generate for your samples looks like

+ +
+
id: placeholder
+
+host:
+    host_id: XX1
+    host_species: http://purl.obolibrary.org/obo/NCBITaxon_9606
+    host_sex: http://purl.obolibrary.org/obo/PATO_0000384
+    host_age: 20
+    host_age_unit: http://purl.obolibrary.org/obo/UO_0000036
+    host_health_status: http://purl.obolibrary.org/obo/NCIT_C25269
+    host_treatment: Process in which the act is intended to modify or alter host status (Compounds)
+    host_vaccination: [vaccines1,vaccine2]
+    ethnicity: http://purl.obolibrary.org/obo/HANCESTRO_0010
+    additional_host_information: Optional free text field for additional information
+
+sample:
+    sample_id: Id of the sample as defined by the submitter
+    collector_name: Name of the person that took the sample
+    collecting_institution: Institute that was responsible of sampling
+    specimen_source: [http://purl.obolibrary.org/obo/NCIT_C155831,http://purl.obolibrary.org/obo/NCIT_C155835]
+    collection_date: "2020-01-01"
+    collection_location: http://www.wikidata.org/entity/Q148
+    sample_storage_conditions: frozen specimen
+    source_database_accession: [http://identifiers.org/insdc/LC522350.1#sequence]
+    additional_collection_information: Optional free text field for additional information
+
+virus:
+    virus_species: http://purl.obolibrary.org/obo/NCBITaxon_2697049
+    virus_strain: SARS-CoV-2/human/CHN/HS_8/2020
+
+technology:
+    sample_sequencing_technology: [http://www.ebi.ac.uk/efo/EFO_0009173,http://www.ebi.ac.uk/efo/EFO_0009173]
+    sequence_assembly_method: Protocol used for assembly
+    sequencing_coverage: [70.0, 100.0]
+    additional_technology_information: Optional free text field for additional information
+
+submitter:
+    authors: [John Doe, Joe Boe, Jonny Oe]
+    submitter_name: [John Doe]
+    submitter_address: John Doe's address
+    originating_lab: John Doe kitchen
+    lab_address: John Doe's address
+    provider_sample_id: XXX1
+    submitter_sample_id: XXX2
+    publication: PMID00001113
+    submitter_orcid: [https://orcid.org/0000-0000-0000-0000,https://orcid.org/0000-0000-0000-0001]
+    additional_submitter_information: Optional free text field for additional information
+
+
+
-
-

6.1 Trouble shooting

+
+

6.1 Run the uploader (CLI)

-We got an error saying: {"stem": "http://www.wikidata.org/entity/",… -which means that our location field was not formed correctly! After -fixing it to look like http://www.wikidata.org/entity/Q65 (note http -instead on https and entity instead of wiki) the submission went -through. Reload the page (it won't empty the fields) to re-enable the -submit button. +Installing with pip you should be +able to run +

+ +
+bh20sequploader sequence.fasta metadata.yaml
+
+ + + +

+Alternatively the script can be installed from github. Run on the +command line +

+ +
+python3 bh20sequploader/main.py example/sequence.fasta example/maximum_metadata_example.yaml
+
+ + +

+after installing dependencies (also described in INSTALL with the GNU +Guix package manager). +

+ +

+The web interface uses this exact same script, so it should just work
+(TM).

+
+
+ +
+

6.2 Example: uploading bulk GenBank sequences

+
+

+We also use the above script to bulk upload GenBank sequences with a FASTA
+and YAML extractor specific to GenBank. This means that the steps we
+took above for uploading a GenBank sequence are already automated.

-
Created by Pjotr Prins (pjotr.public768 at thebird 'dot' nl) using Emacs org-mode and a healthy dose of Lisp!
Modified 2020-05-30 Sat 10:44
. +
Created by Pjotr Prins (pjotr.public768 at thebird 'dot' nl) using Emacs org-mode and a healthy dose of Lisp!
Modified 2020-05-30 Sat 18:12
.
diff --git a/doc/blog/using-covid-19-pubseq-part3.org b/doc/blog/using-covid-19-pubseq-part3.org index 4dd3078..03f37ab 100644 --- a/doc/blog/using-covid-19-pubseq-part3.org +++ b/doc/blog/using-covid-19-pubseq-part3.org @@ -6,26 +6,26 @@ #+HTML_HEAD: -* Uploading Data -/Work in progress!/ * Table of Contents :TOC:noexport: - [[#uploading-data][Uploading Data]] - - [[#introduction][Introduction]] - [[#step-1-upload-sequence][Step 1: Upload sequence]] - [[#step-2-add-metadata][Step 2: Add metadata]] - [[#obligatory-fields][Obligatory fields]] - [[#optional-fields][Optional fields]] - [[#step-3-submit-to-covid-19-pubseq][Step 3: Submit to COVID-19 PubSeq]] - - [[#step-4-check-output][Step 4: Check output]] - [[#trouble-shooting][Trouble shooting]] + - [[#step-4-check-output][Step 4: Check output]] + - [[#bulk-sequence-uploader][Bulk sequence uploader]] + - [[#run-the-uploader-cli][Run the uploader (CLI)]] + - [[#example-uploading-bulk-genbank-sequences][Example: uploading bulk GenBank sequences]] -* Introduction +* Uploading Data The COVID-19 PubSeq allows you to upload your SARS-Cov-2 strains to a -public resource for global comparisons. Compute it triggered on -upload. Read the [[./about][ABOUT]] page for more information. +public resource for global comparisons. A recompute of the pangenome +gets triggered on upload. Read the [[./about][ABOUT]] page for more information. * Step 1: Upload sequence @@ -59,7 +59,7 @@ A number of fields are obligatory: sample id, date, location, technology and authors. The others are optional, but it is valuable to enter them when information is available. Metadata is defined in this [[https://github.com/arvados/bh20-seq-resource/blob/master/bh20sequploader/bh20seq-schema.yml][schema]]. From this schema we generate the input form. Note that -opitional fields have a question mark in the ~type~. You can add +optional fields have a question mark in the ~type~. You can add metadata yourself, btw, because this is a public resource! See also [[./blog?id=using-covid-19-pubseq-part5][Modify metadata]] for more information. @@ -86,7 +86,7 @@ Estimated collection date. The GenBank page says April 6, 2020. *** Collection location -A search on wikidata says Los Angelos is +A search on wikidata says Los Angeles is https://www.wikidata.org/entity/Q65 *** Sequencing technology @@ -136,12 +136,6 @@ Once you have the sequence and the metadata together, hit the 'Add to Pangenome' button. The data will be checked, submitted and the workflows should kick in! -* Step 4: Check output - -The current pipeline takes 5.5 hours to complete! Once it completes -the updated data can be checked on the [[./download][DOWNLOAD]] page. After completion -of above output this [[http://sparql.genenetwork.org/sparql/?default-graph-uri=&query=PREFIX+pubseq%3A+%3Chttp%3A%2F%2Fbiohackathon.org%2Fbh20-seq-schema%23MainSchema%2F%3E%0D%0APREFIX+sio%3A+%3Chttp%3A%2F%2Fsemanticscience.org%2Fresource%2F%3E%0D%0Aselect+distinct+%3Fsample+%3Fp+%3Fo%0D%0A%7B%0D%0A+++%3Fsample+sio%3ASIO_000115+%22MT536190.1%22+.%0D%0A+++%3Fsample+%3Fp+%3Fo+.%0D%0A%7D&format=text%2Fhtml&timeout=0&debug=on&run=+Run+Query+][SPARQL query]] shows some of the metadata we put -in. ** Trouble shooting @@ -151,3 +145,95 @@ fixing it to look like http://www.wikidata.org/entity/Q65 (note http instead on https and entity instead of wiki) the submission went through. Reload the page (it won't empty the fields) to re-enable the submit button. + + +* Step 4: Check output + +The current pipeline takes 5.5 hours to complete! 
Once it completes +the updated data can be checked on the [[./download][DOWNLOAD]] page. After completion +of above output this [[http://sparql.genenetwork.org/sparql/?default-graph-uri=&query=PREFIX+pubseq%3A+%3Chttp%3A%2F%2Fbiohackathon.org%2Fbh20-seq-schema%23MainSchema%2F%3E%0D%0APREFIX+sio%3A+%3Chttp%3A%2F%2Fsemanticscience.org%2Fresource%2F%3E%0D%0Aselect+distinct+%3Fsample+%3Fp+%3Fo%0D%0A%7B%0D%0A+++%3Fsample+sio%3ASIO_000115+%22MT536190.1%22+.%0D%0A+++%3Fsample+%3Fp+%3Fo+.%0D%0A%7D&format=text%2Fhtml&timeout=0&debug=on&run=+Run+Query+][SPARQL query]] shows some of the metadata we put +in. + +* Bulk sequence uploader + +Above steps require a manual upload of one sequence with metadata. +What if you have a number of sequences you want to upload in bulk? +For this we have a command line version of the uploader that can +directly submit to COVID-19 PubSeq. It accepts a FASTA sequence +file an associated metadata in [[https://github.com/arvados/bh20-seq-resource/blob/master/example/maximum_metadata_example.yaml][YAML]] format. The YAML matches +the web form and gets validated from the same [[https://github.com/arvados/bh20-seq-resource/blob/master/bh20sequploader/bh20seq-schema.yml][schema]] looks. The YAML +that you need to create/generate for your samples looks like + +#+begin_src json +id: placeholder + +host: + host_id: XX1 + host_species: http://purl.obolibrary.org/obo/NCBITaxon_9606 + host_sex: http://purl.obolibrary.org/obo/PATO_0000384 + host_age: 20 + host_age_unit: http://purl.obolibrary.org/obo/UO_0000036 + host_health_status: http://purl.obolibrary.org/obo/NCIT_C25269 + host_treatment: Process in which the act is intended to modify or alter host status (Compounds) + host_vaccination: [vaccines1,vaccine2] + ethnicity: http://purl.obolibrary.org/obo/HANCESTRO_0010 + additional_host_information: Optional free text field for additional information + +sample: + sample_id: Id of the sample as defined by the submitter + collector_name: Name of the person that took the sample + collecting_institution: Institute that was responsible of sampling + specimen_source: [http://purl.obolibrary.org/obo/NCIT_C155831,http://purl.obolibrary.org/obo/NCIT_C155835] + collection_date: "2020-01-01" + collection_location: http://www.wikidata.org/entity/Q148 + sample_storage_conditions: frozen specimen + source_database_accession: [http://identifiers.org/insdc/LC522350.1#sequence] + additional_collection_information: Optional free text field for additional information + +virus: + virus_species: http://purl.obolibrary.org/obo/NCBITaxon_2697049 + virus_strain: SARS-CoV-2/human/CHN/HS_8/2020 + +technology: + sample_sequencing_technology: [http://www.ebi.ac.uk/efo/EFO_0009173,http://www.ebi.ac.uk/efo/EFO_0009173] + sequence_assembly_method: Protocol used for assembly + sequencing_coverage: [70.0, 100.0] + additional_technology_information: Optional free text field for additional information + +submitter: + authors: [John Doe, Joe Boe, Jonny Oe] + submitter_name: [John Doe] + submitter_address: John Doe's address + originating_lab: John Doe kitchen + lab_address: John Doe's address + provider_sample_id: XXX1 + submitter_sample_id: XXX2 + publication: PMID00001113 + submitter_orcid: [https://orcid.org/0000-0000-0000-0000,https://orcid.org/0000-0000-0000-0001] + additional_submitter_information: Optional free text field for additional information +#+end_src + +** Run the uploader (CLI) + +Installing with pip you should be +able to run + +: bh20sequploader sequence.fasta metadata.yaml + + +Alternatively the 
script can be installed from [[https://github.com/arvados/bh20-seq-resource#installation][github]]. Run on the +command line + +: python3 bh20sequploader/main.py example/sequence.fasta example/maximum_metadata_example.yaml + +after installing dependencies (also described in [[https://github.com/arvados/bh20-seq-resource/blob/master/doc/INSTALL.md][INSTALL]] with the GNU +Guix package manager). + +The web interface using this exact same script so it should just work +(TM). + +** Example: uploading bulk GenBank sequences + +We also use above script to bulk upload GenBank sequences with a [[https://github.com/arvados/bh20-seq-resource/blob/master/scripts/from_genbank_to_fasta_and_yaml.py][FASTA +and YAML]] extractor specific for GenBank. This means that the steps we +took above for uploading a GenBank sequence are already automated. diff --git a/doc/blog/using-covid-19-pubseq-part4.html b/doc/blog/using-covid-19-pubseq-part4.html new file mode 100644 index 0000000..67d299e --- /dev/null +++ b/doc/blog/using-covid-19-pubseq-part4.html @@ -0,0 +1,266 @@ + + + + + + + + + + + + + + +
+
+

Table of Contents

+ +
+
+

1 Modify Workflow

+
+

+Work in progress! +

+
+
+
+
+
Created by Pjotr Prins (pjotr.public768 at thebird 'dot' nl) using Emacs org-mode and a healthy dose of Lisp!
Modified 2020-05-30 Sat 11:52
. +
+ + diff --git a/doc/blog/using-covid-19-pubseq-part4.org b/doc/blog/using-covid-19-pubseq-part4.org index c147ba3..58a1f56 100644 --- a/doc/blog/using-covid-19-pubseq-part4.org +++ b/doc/blog/using-covid-19-pubseq-part4.org @@ -1,2 +1,3 @@ -/Work in progress!/ +* Modify Workflow +/Work in progress!/ diff --git a/doc/blog/using-covid-19-pubseq-part5.html b/doc/blog/using-covid-19-pubseq-part5.html new file mode 100644 index 0000000..30a3f83 --- /dev/null +++ b/doc/blog/using-covid-19-pubseq-part5.html @@ -0,0 +1,277 @@ + + + + + + + + + + + + + + +
+
+

Table of Contents

+ +
+
+

1 Modify Metadata

+
+

+The public sequence resource uses multiple data formats listed on the
+DOWNLOAD page. One of the most exciting features is the full support
+for RDF and semantic web/linked data ontologies. This technology
+allows for querying data in unprescribed ways - that is, you can
+formulate your own queries without being tied to a preset model of the
+data (as is typical of CSV files and SQL tables). Examples of exploring
+data are listed here.

+ +

+In this BLOG we are going to look at the metadata entered on the +COVID-19 PubSeq website (or command line client). +
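As a concrete example, the SPARQL query linked from the 'Check output'
step in part 3 can be run programmatically against the public endpoint at
sparql.genenetwork.org. The sketch below uses the Python requests package;
it assumes the endpoint honours the standard format parameter for SPARQL
JSON results, and the accession MT536190.1 is the GenBank sample we
uploaded earlier.

import requests

# Query taken from the part 3 'Check output' step: find the sample
# annotated with GenBank accession MT536190.1 and list its triples.
query = """
PREFIX sio: <http://semanticscience.org/resource/>
SELECT DISTINCT ?sample ?p ?o {
  ?sample sio:SIO_000115 "MT536190.1" .
  ?sample ?p ?o .
}
"""

# The 'format' value assumes the endpoint accepts the standard
# SPARQL JSON results MIME type.
response = requests.get('http://sparql.genenetwork.org/sparql/',
                        params={'query': query,
                                'format': 'application/sparql-results+json'})
for row in response.json()['results']['bindings']:
    print(row['p']['value'], row['o']['value'])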

+
+
+
+
+
Created by Pjotr Prins (pjotr.public768 at thebird 'dot' nl) using Emacs org-mode and a healthy dose of Lisp!
Modified 2020-05-30 Sat 11:59
. +
+ + diff --git a/doc/blog/using-covid-19-pubseq-part5.org b/doc/blog/using-covid-19-pubseq-part5.org index c147ba3..8d7504e 100644 --- a/doc/blog/using-covid-19-pubseq-part5.org +++ b/doc/blog/using-covid-19-pubseq-part5.org @@ -1,2 +1,17 @@ -/Work in progress!/ +* Modify Metadata +The public sequence resource uses multiple data formats listed on the +[[./download][DOWNLOAD]] page. One of the most exciting features is the full support +for RDF and semantic web/linked data ontologies. This technology +allows for querying data in unprescribed ways - that is, you can +formulate your own queries without dealing with a preset model of that +data (so typical of CSV files and SQL tables). Examples of exploring +data are listed [[./blog?id=using-covid-19-pubseq-part1][here]]. + +In this BLOG we are going to look at the metadata entered on the +[[./][COVID-19 PubSeq]] website (or command line client). It is important to +understand that you and us can change that information. + +* What is the schema? + +* How is the website generated? -- cgit v1.2.3