-rw-r--r--  README.md                                  |   8
-rw-r--r--  bh20simplewebuploader/main.py              |   6
-rw-r--r--  bh20simplewebuploader/templates/form.html  | 264
-rw-r--r--  paper/paper.bib                            |  16
-rw-r--r--  paper/paper.md                             | 174
5 files changed, 362 insertions, 106 deletions
diff --git a/README.md b/README.md
index 8a5a6dd..db4fe52 100644
--- a/README.md
+++ b/README.md
@@ -59,7 +59,7 @@ sudo apt install -y virtualenv git libcurl4-openssl-dev build-essential python3-
pip3 install --user git+https://github.com/arvados/bh20-seq-resource.git@master
```
-3. **Make sure the tool is on your `PATH`.** THe `pip3` command will install the uploader in `.local/bin` inside your home directory. Your shell may not know to look for commands there by default. To fix this for the terminal you currently have open, run:
+3. **Make sure the tool is on your `PATH`.** The `pip3` command will install the uploader in `.local/bin` inside your home directory. Your shell may not know to look for commands there by default. To fix this for the terminal you currently have open, run:
```sh
export PATH=$PATH:$HOME/.local/bin
@@ -126,10 +126,10 @@ For running/developing the uploader with GNU Guix see [INSTALL.md](./doc/INSTALL
# Usage
-Run the uploader with a FASTA file and accompanying metadata file in [JSON-LD format](https://json-ld.org/):
+Run the uploader with a FASTA or FASTQ file and accompanying metadata file in JSON or YAML:
```sh
-bh20-seq-uploader example/sequence.fasta example/metadata.json
+bh20-seq-uploader example/sequence.fasta example/metadata.yaml
```
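The metadata file describes the sample and its provenance. As a purely illustrative sketch of preparing such a file and invoking the uploader from Python (the field names below are guesses — the schema in `bh20sequploader/bh20seq-schema.yml` and the files under `example/` define the real structure):

```python
import subprocess
import yaml  # pip3 install pyyaml

# Hypothetical metadata layout -- consult bh20sequploader/bh20seq-schema.yml
# and example/minimal_example.yaml for the fields that are actually required.
metadata = {
    "host": {"host_species": "http://purl.obolibrary.org/obo/NCBITaxon_9606"},
    "sample": {
        "sample_id": "example-0001",
        "collection_date": "2020-04-07",
        "collection_location": "http://www.wikidata.org/entity/Q55",
    },
    "virus": {"virus_species": "http://purl.obolibrary.org/obo/NCBITaxon_2697049"},
    "submitter": {"submitter_name": ["Jane Doe"]},
}

with open("metadata.yaml", "w") as fh:
    yaml.safe_dump(metadata, fh)

# Equivalent to running: bh20-seq-uploader sequence.fasta metadata.yaml
subprocess.run(["bh20-seq-uploader", "sequence.fasta", "metadata.yaml"], check=True)
```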
## Workflow for Generating a Pangenome
@@ -174,7 +174,7 @@ pip3 install gunicorn
gunicorn bh20simplewebuploader.main:app
```
-This runs on [http://127.0.0.1:8000/](http://127.0.0.1:8000/) by default, but can be adjusted with various [gunicorn options](http://docs.gunicorn.org/en/latest/run.html#commonly-used-arguments)
+This runs on [http://127.0.0.1:8000/](http://127.0.0.1:8000/) by default, but can be adjusted with various [gunicorn options](http://docs.gunicorn.org/en/latest/run.html#commonly-used-arguments).
diff --git a/bh20simplewebuploader/main.py b/bh20simplewebuploader/main.py
index bfc7762..383ef84 100644
--- a/bh20simplewebuploader/main.py
+++ b/bh20simplewebuploader/main.py
@@ -184,15 +184,17 @@ def receive_files():
# We're going to work in one directory per request
dest_dir = tempfile.mkdtemp()
+ # The uploader will happily accept a FASTQ with this name
fasta_dest = os.path.join(dest_dir, 'fasta.fa')
metadata_dest = os.path.join(dest_dir, 'metadata.json')
try:
if 'fasta' not in request.files:
return (render_template('error.html',
- error_message="You did not include a FASTA file."), 403)
+ error_message="You did not include a FASTA or FASTQ file."), 403)
try:
with open(fasta_dest, 'wb') as out_stream:
- copy_with_limit(request.files.get('fasta').stream, out_stream)
+ # Use a plausible file size limit for a little FASTQ
+ copy_with_limit(request.files.get('fasta').stream, out_stream, limit=50*1024*1024)
except FileTooBigError as e:
# Delegate to the 413 error handler
return handle_large_file(e)
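The `copy_with_limit` helper and `FileTooBigError` are defined elsewhere in `main.py` and are not shown in this hunk. A minimal sketch of what such a size-capped copy typically looks like (the signature matches the call above; the body is an assumption, not the repository's code):

```python
class FileTooBigError(RuntimeError):
    """Raised when an uploaded stream exceeds the configured size limit."""


def copy_with_limit(in_stream, out_stream, limit=1024 * 1024, chunk_size=64 * 1024):
    """Copy in_stream to out_stream, raising FileTooBigError past `limit` bytes."""
    copied = 0
    while True:
        chunk = in_stream.read(chunk_size)
        if not chunk:
            break
        copied += len(chunk)
        if copied > limit:
            raise FileTooBigError("Uploaded file exceeds size limit")
        out_stream.write(chunk)
```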
diff --git a/bh20simplewebuploader/templates/form.html b/bh20simplewebuploader/templates/form.html
index 2934a7c..afae4c7 100644
--- a/bh20simplewebuploader/templates/form.html
+++ b/bh20simplewebuploader/templates/form.html
@@ -1,95 +1,217 @@
<!DOCTYPE html>
<html>
+ <style>
+ hr {
+ margin: auto 0;
+ }
+
+ body {
+ color: #101010;
+ }
+
+ h1, h4 {
+ font-family: 'Roboto Slab', serif;
+ }
+
+ h1 {
+ text-align: center;
+ }
+
+ p {
+ color: #505050;
+ font-style: italic;
+ }
+
+ p, form {
+ font-family: 'Raleway', sans-serif;
+ line-height: 1.5;
+ }
+
+ form h4 {
+ text-transform: uppercase;
+ }
+
+ .intro, form {
+ padding: 20px;
+ }
+
+ .intro {
+ margin: 0 auto;
+ padding: 20px;
+ }
+
+ .grid-container {
+ display: grid;
+ grid-template-columns: repeat(4, 1fr);
+ grid-template-rows: auto;
+ row-gap:5px;
+ grid-template-areas:
+ "a a b b"
+ "a a c c"
+ "a a d d"
+ "e e e e"
+ "f f f f";
+ grid-auto-flow: column;
+ }
+
+ .intro {
+ grid-area: a;
+ }
+
+ .fasta-file-select {
+ grid-area: b;
+ }
+
+ .metadata {
+ grid-area: c;
+ }
+
+ #metadata_upload_form_spot {
+ grid-area: d;
+ }
+
+ #metadata_fill_form_spot {
+ grid-area: e;
+ }
+
+ #metadata_fill_form {
+ column-count: 4;
+ margin-top: 0.5em;
+ column-width: 250px;
+ }
+
+ .record {
+ display: flex;
+ flex-direction: column;
+ border: solid 1px #808080;
+ padding: 1em;
+ background: #F8F8F8;
+ margin-bottom: 1em;
+ }
+
+ .record label {
+ font-size: small;
+ margin-top: 10px;
+ }
+
+ .submit {
+ grid-area: f;
+ width: 17em;
+ justify-self: center;
+ }
+
+ @media only screen and (max-device-width: 480px) {
+ .grid-container {
+ display: flex;
+ flex-direction: column;
+ }
+ }
+ </style>
+
<head>
<meta charset="UTF-8">
+ <link href="https://fonts.googleapis.com/css2?family=Raleway:wght@500&family=Roboto+Slab&display=swap" rel="stylesheet">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>Simple Web Uploader for Public SARS-CoV-2 Sequence Resource</title>
</head>
<body>
<h1>Simple Web Uploader for Public SARS-CoV-2 Sequence Resource</h1>
<hr>
- <p>
- This tool can be used to upload sequenced genomes of SARS-CoV-2 samples to the <a href="https://workbench.lugli.arvadosapi.com/collections/lugli-4zz18-z513nlpqm03hpca">Public SARS-CoV-2 Sequence Resource</a>. Your uploaded sequence will automatically be processed and incorporated into the public pangenome.
- </p>
- <hr>
- <form action="/submit" method="POST" enctype="multipart/form-data" id="main_form">
- <label for="fasta">Select FASTA file for assembled genome (max 1MB):</label>
- <br>
- <input type="file" id="fasta" name="fasta" accept=".fa,.fasta,.fna" required>
- <br>
-
- <label>Select metadata submission method:</label>
- <br>
- <input type="radio" id="metadata_upload" name="metadata_type" value="upload" onchange="setMode()" checked required>
- <label for="metadata_upload">Upload metadata file</label>
- <br>
- <input type="radio" id="metadata_form" name="metadata_type" value="fill" onchange="setMode()" required>
- <label for="metadata_form">Fill in metadata manually</label>
- <br>
-
- <div id="metadata_upload_form_spot">
- <div id="metadata_upload_form">
- <label for="metadata">Select JSON or YAML metadata file following <a href="https://github.com/arvados/bh20-seq-resource/blob/master/bh20sequploader/bh20seq-schema.yml" target="_blank">this schema</a> (<a href="https://github.com/arvados/bh20-seq-resource/blob/master/example/metadata.yaml" target="_blank">Example 1</a>, <a href="https://github.com/arvados/bh20-seq-resource/blob/master/example/minimal_example.yaml" target="_blank">Example 2</a>, max 1MB):</label>
+ <section>
+ <form action="/submit" method="POST" enctype="multipart/form-data" id="main_form" class="grid-container">
+ <p class="intro">
+ This tool can be used to upload sequenced genomes of SARS-CoV-2 samples to the <a href="https://workbench.lugli.arvadosapi.com/collections/lugli-4zz18-z513nlpqm03hpca">Public SARS-CoV-2 Sequence Resource</a>. Your uploaded sequence will automatically be processed and incorporated into the public pangenome.
+ </p>
+ <div class="fasta-file-select">
+ <label for="fasta">Select FASTA file of assembled genome, or FASTQ of reads (max 50MB):</label>
+ <br>
+ <input type="file" id="fasta" name="fasta" accept=".fa,.fasta,.fna,.fq" required>
+ <br>
+ </div>
+
+ <div class="metadata">
+ <label>Select metadata submission method:</label>
<br>
- <input type="file" id="metadata" name="metadata" accept=".json,.yml,.yaml" required>
+ <input type="radio" id="metadata_upload" name="metadata_type" value="upload" onchange="setMode()" checked required>
+ <label for="metadata_upload">Upload metadata file</label>
+ <input type="radio" id="metadata_form" name="metadata_type" value="fill" onchange="setMode()" required>
+ <label for="metadata_form">Fill in metadata manually</label>
<br>
</div>
- </div>
-
- <div id="metadata_fill_form_spot">
- <div id="metadata_fill_form">
- {% for record in fields %}
+
+ <div id="metadata_upload_form_spot">
+ <div id="metadata_upload_form">
+ <label for="metadata">Select JSON or YAML metadata file following <a href="https://github.com/arvados/bh20-seq-resource/blob/master/bh20sequploader/bh20seq-schema.yml" target="_blank">this schema</a> (<a href="https://github.com/arvados/bh20-seq-resource/blob/master/example/metadata.yaml" target="_blank">Example 1</a>, <a href="https://github.com/arvados/bh20-seq-resource/blob/master/example/minimal_example.yaml" target="_blank">Example 2</a>, max 1MB):</label>
+ <br>
+ <input type="file" id="metadata" name="metadata" accept=".json,.yml,.yaml" required>
+ <br>
+ </div>
+ </div>
+
+ <div id="metadata_fill_form_spot">
+ <div id="metadata_fill_form">
+ {% for record in fields %}
+
{% if 'heading' in record %}
- <h4>{{ record['heading'] }}</h4>
+ {% if loop.index > 1 %}
+ </div>
+ {% endif %}
+ <div class="record">
+ <h4>{{ record['heading'] }}</h4>
{% else %}
- <label for="{{ record['id'] }}">
- {{ record['label'] }}
- {{ "*" if record['required'] else "" }}
- {% if 'ref_url' in record %}
- <a href="{{ record['ref_url'] }}" title="More Info" target="_blank">?</a>
- {% endif %}
- </label>
- <br>
- <input type="{{ record['type'] }}" id="{{ record['id'] }}" name="{{ record['id'] }}" {{ "required" if record['required'] else "" }}>
- <br>
+ <label for="{{ record['id'] }}">
+ {{ record['label'] }}
+ {{ "*" if record['required'] else "" }}
+ {% if 'ref_url' in record %}
+ <a href="{{ record['ref_url'] }}" title="More Info" target="_blank">?</a>
+ {% endif %}
+ </label>
+ <input type="{{ record['type'] }}" id="{{ record['id'] }}" name="{{ record['id'] }}" {{ "required" if record['required'] else "" }}>
{% endif %}
+ {% if loop.index == loop.length %}
+ </div>
+ {% endif %}
{% endfor %}
</div>
- </div>
-
- <input type="submit" value="Add to Pangenome">
- </form>
+ </div>
+
+
+ <input class="submit" type="submit" value="Add to Pangenome">
+ </form>
+ </section>
<hr>
<small><a href="https://github.com/arvados/bh20-seq-resource">Source</a> &middot; Made for <a href="https://github.com/virtual-biohackathons/covid-19-bh20">COVID-19-BH20</a></small>
+
<script type="text/javascript">
- let uploadForm = document.getElementById('metadata_upload_form')
- let uploadFormSpot = document.getElementById('metadata_upload_form_spot')
- let fillForm = document.getElementById('metadata_fill_form')
- let fillFormSpot = document.getElementById('metadata_fill_form_spot')
-
- function setUploadMode() {
- // Make the upload form the one in use
- uploadFormSpot.appendChild(uploadForm)
- fillFormSpot.removeChild(fillForm)
- }
-
- function setFillMode() {
- // Make the fillable form the one in use
- uploadFormSpot.removeChild(uploadForm)
- fillFormSpot.appendChild(fillForm)
- }
-
- function setMode() {
- // Pick mode based on radio
- if (document.getElementById('metadata_upload').checked) {
- setUploadMode()
- } else {
- setFillMode()
- }
- }
-
- // Start in mode appropriate to selected form item
- setMode()
+ let uploadForm = document.getElementById('metadata_upload_form')
+ let uploadFormSpot = document.getElementById('metadata_upload_form_spot')
+ let fillForm = document.getElementById('metadata_fill_form')
+ let fillFormSpot = document.getElementById('metadata_fill_form_spot')
+
+ function setUploadMode() {
+ // Make the upload form the one in use
+ uploadFormSpot.appendChild(uploadForm)
+ fillFormSpot.removeChild(fillForm)
+ }
+
+ function setFillMode() {
+ // Make the fillable form the one in use
+ uploadFormSpot.removeChild(uploadForm)
+ fillFormSpot.appendChild(fillForm)
+ }
+
+ function setMode() {
+ // Pick mode based on radio
+ if (document.getElementById('metadata_upload').checked) {
+ setUploadMode()
+ } else {
+ setFillMode()
+ }
+ }
+
+ // Start in mode appropriate to selected form item
+ setMode()
</script>
</body>
</html>
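The reworked fill-in form above opens a `.record` card whenever the loop meets an entry with a `heading` key and closes it before the next heading and at the end of the loop. The `fields` list it iterates over is built server-side in `main.py`; the sketch below shows the shape the template appears to expect — the concrete field names and URLs are illustrative, not taken from the schema:

```python
# Illustrative shape of the `fields` list consumed by the template above.
# A record with a 'heading' key starts a new card; every other record
# describes a single input element.
fields = [
    {"heading": "Sample"},
    {
        "id": "sample_id",
        "label": "Sample ID",
        "type": "text",
        "required": True,
        "ref_url": "http://semanticscience.org/resource/SIO_000115",  # hypothetical
    },
    {"id": "collection_date", "label": "Collection date", "type": "date", "required": False},
    {"heading": "Submitter"},
    {"id": "submitter_name", "label": "Submitter name", "type": "text", "required": False},
]
```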
diff --git a/paper/paper.bib b/paper/paper.bib
index e69de29..bcb9c0b 100644
--- a/paper/paper.bib
+++ b/paper/paper.bib
@@ -0,0 +1,16 @@
+@book{CWL,
+title = "Common Workflow Language, v1.0",
+abstract = "The Common Workflow Language (CWL) is an informal, multi-vendor working group consisting of various organizations and individuals that have an interest in portability of data analysis workflows. Our goal is to create specifications that enable data scientists to describe analysis tools and workflows that are powerful, easy to use, portable, and support reproducibility. CWL builds on technologies such as JSON-LD and Avro for data modeling and Docker for portable runtime environments. CWL is designed to express workflows for data-intensive science, such as Bioinformatics, Medical Imaging, Chemistry, Physics, and Astronomy. This is v1.0 of the CWL tool and workflow specification, released on 2016-07-08",
+keywords = "cwl, workflow, specification",
+author = "Brad Chapman and John Chilton and Michael Heuer and Andrey Kartashov and Dan Leehr and Herv{\'e} M{\'e}nager and Maya Nedeljkovich and Matt Scales and Stian Soiland-Reyes and Luka Stojanovic",
+editor = "Peter Amstutz and Crusoe, {Michael R.} and Nebojša Tijanić",
+note = "Specification, product of the Common Workflow Language working group. http://www.commonwl.org/v1.0/",
+year = "2016",
+month = "7",
+day = "8",
+doi = "10.6084/m9.figshare.3115156.v2",
+language = "English",
+publisher = "figshare",
+address = "United States",
+
+}
\ No newline at end of file
diff --git a/paper/paper.md b/paper/paper.md
index caa9903..7bd18c8 100644
--- a/paper/paper.md
+++ b/paper/paper.md
@@ -1,8 +1,9 @@
---
-title: 'Public Sequence Resource for COVID-19'
+title: 'CPSR: COVID-19 Public Sequence Resource'
+title_short: 'CPSR: COVID-19 Public Sequence Resource'
tags:
- Sequencing
- - COVID
+ - COVID-19
authors:
- name: Pjotr Prins
orcid: 0000-0002-8021-9162
@@ -19,22 +20,42 @@ authors:
- name: Erik Garrison
orcid: 0000
affiliation: 5
- - name: Michael Crusoe
- orcid: 0000
- affiliation: 6
+ - name: Michael R. Crusoe
+ orcid: 0000-0002-2961-9670
+ affiliation: 6, 2
- name: Rutger Vos
orcid: 0000
affiliation: 7
- - Michael Heuer
- orcid: 0000
+ - name: Michael Heuer
+ orcid: 0000-0002-9052-6000
affiliation: 8
-
+ - name: Adam M Novak
+ orcid: 0000-0001-5828-047X
+ affiliation: 5
+ - name: Alex Kanitz
+ orcid: 0000
+ affiliation: 10
+ - name: Jerven Bolleman
+ orcid: 0000
+ affiliation: 11
+ - name: Joep de Ligt
+ orcid: 0000
+ affiliation: 12
affiliations:
- name: Department of Genetics, Genomics and Informatics, The University of Tennessee Health Science Center, Memphis, TN, USA.
index: 1
- name: Curii, Boston, USA
index: 2
+ - name: UC Santa Cruz Genomics Institute, University of California, Santa Cruz, CA 95064, USA.
+ index: 5
+ - name: Department of Computer Science, Faculty of Sciences, Vrije Universiteit Amsterdam, The Netherlands
+ index: 6
+ - name: RISE Lab, University of California Berkeley, Berkeley, CA, USA.
+ index: 8
date: 11 April 2020
+event: COVID2020
+group: Public Sequence Uploader
+authors_short: Pjotr Prins & Peter Amstutz \emph{et al.}
bibliography: paper.bib
---
@@ -49,13 +70,48 @@ pasting above link (or yours) with
https://github.com/biohackrxiv/bhxiv-gen-pdf
+Note that author order will change!
+
-->
# Introduction
-As part of the one week COVID-19 Biohackathion 2020, we formed a
-working group on creating a public sequence resource for Corona virus.
-
+As part of the COVID-19 Biohackathon 2020 we formed a working
+group to create a COVID-19 Public Sequence Resource (CPSR) for
+coronavirus sequences. The general idea was to create a
+repository with a low barrier to entry for uploading sequence
+data using best practices: data is published under a Creative
+Commons 4.0 (CC-4.0) license, metadata follows state-of-the-art
+standards and, perhaps most importantly, standardized workflows
+are triggered on upload, so that results are immediately
+available in standardized data formats.
+
+Existing repositories for viral data include GISAID, EBI ENA and
+NCBI. These repositories allow free sharing of data, but do not
+add value in terms of running immediate computations. GISAID, at
+this point, has the most complete collection of genetic sequence
+data of influenza viruses and related clinical and
+epidemiological data in its database. But, due to its restricted
+license, data submitted to GISAID cannot be used for online web
+services and on-the-fly computation. In addition, GISAID
+registration can take weeks and, painfully, users are forced to
+download sequences one at a time to do any type of analysis. In
+our opinion this does not fit a pandemic scenario where fast
+turnaround times are key and data analysis has to be agile.
+
+We managed to create a useful sequence uploader utility within
+one week by leveraging existing technologies, such as the Arvados
+Cloud platform [@Arvados], the Common Workflow Language (CWL)
+[@CWL], Docker images built with Debian packages, and the many
+free and open source software packages that are available for
+bioinformatics.
+
+The source code for the CLI uploader and web uploader can be
+found [here](https://github.com/arvados/bh20-seq-resource)
+(FIXME: we'll have a full page). The CWL workflow definitions can
+be found [here](https://github.com/hpobio-lab/viral-analysis) and
+on CWL hub (FIXME).
<!--
@@ -73,38 +129,98 @@ working group on creating a public sequence resource for Corona virus.
## Cloud computing backend
-Peter, Pjotr, MichaelC
+The development of CPSR was accelerated by using the Arvados
+Cloud platform. Arvados is an open source platform for managing,
+processing, and sharing genomic and other large scientific and
+biomedical data. The Arvados instance was deployed on Amazon AWS
+for testing and development, and a project was created that
+allows for uploading data.
-## A command-line sequence uploader
+## Sequence uploader
-Peter, Pjotr
+We wrote a Python-based uploader that authenticates with Arvados
+using a token. Uploaded data is validated as a FASTA sequence or
+raw FASTQ reads, and the accompanying metadata, supplied as
+JSON-LD, is validated against a schema. The uploader can be used
+from the command line or through a simple web interface.
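As a rough sketch of the kind of Arvados interaction this implies — using the `arvados` Python SDK, with placeholder names and UUIDs, and omitting the validation step described above:

```python
import arvados
import arvados.collection

# Credentials come from ARVADOS_API_HOST / ARVADOS_API_TOKEN in the environment.
api = arvados.api("v1")

collection = arvados.collection.Collection(api_client=api)
with open("sequence.fasta", "rb") as src, collection.open("sequence.fasta", "wb") as dst:
    dst.write(src.read())
with open("metadata.yaml", "rb") as src, collection.open("metadata.yaml", "wb") as dst:
    dst.write(src.read())

collection.save_new(
    name="Example SARS-CoV-2 upload",          # placeholder collection name
    owner_uuid="lugli-j7d0g-xxxxxxxxxxxxxxx",  # placeholder project UUID
)
```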
-## Metadata uploader
+## Creating a Pangenome
-With Thomas
+### FASTA to GFA workflow
-## FASTA to GFA workflow
+The first workflow (1) we implemented was a FASTA to Graphical
+Fragment Assembly (GFA) format conversion. When someone uploads a
+sequence in FASTA format, it gets combined with all known viral
+sequences in our storage to generate a pangenome, or variation
+graph (VG). The full pangenome is made available as a
+downloadable GFA file together with a visualisation (Figure 1).
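The text does not name the tools behind workflow (1); one plausible pipeline for inducing such a variation graph — assuming minimap2 for all-vs-all alignment and seqwish for graph induction, which is our guess rather than the documented workflow — looks like this when driven from Python:

```python
import subprocess

# Assumed pipeline: all-vs-all alignment with minimap2, graph induction with
# seqwish. These tool choices are illustrative; the CWL workflow referenced in
# the text defines the actual steps.
subprocess.run(
    "minimap2 -c -X -x asm5 sequences.fasta sequences.fasta > alignments.paf",
    shell=True, check=True,
)
subprocess.run(
    ["seqwish", "-s", "sequences.fasta", "-p", "alignments.paf", "-g", "pangenome.gfa"],
    check=True,
)
```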
-Michael Heuer
+### FASTQ to GFA workflow
-## BAM to GFA workflow
+In the next step we introduced a workflow (2) that takes raw
+sequence data in FASTQ format and converts it into FASTA.
+This FASTA file, in turn, gets fed to workflow (1) to generate
+the pangenome.
-Tazro & Erik
+## Creating linked data workflow
-## Phylogeny app
+We created a workflow (3) that takes the GFA and turns it into
+RDF. Together with the metadata provided at upload time, a single
+RDF resource is compiled that can be linked against external
+resources such as UniProt and Wikidata. The generated RDF file
+can be hosted in any triple store and queried using SPARQL.
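For instance, once the generated RDF is parsed locally or loaded into a triple store, it can be queried with SPARQL. A small `rdflib` sketch — the file name and predicate are placeholders, since the vocabulary produced by workflow (3) is not specified here:

```python
from rdflib import Graph

g = Graph()
g.parse("pangenome.ttl", format="turtle")  # placeholder file name

# Placeholder query: list subjects and their labels, whatever vocabulary is used.
query = """
    SELECT ?s ?label
    WHERE {
        ?s <http://www.w3.org/2000/01/rdf-schema#label> ?label .
    }
    LIMIT 10
"""
for row in g.query(query):
    print(row.s, row.label)
```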
-With Rutger
+## Creating a Phylogeny workflow
-## RDF app
+WIP
-Jerven?
-
-## EBI app
-
-?
+## Other workflows?
# Discussion
-Future work...
+CPSR is a data repository with computational pipelines that will
+persist during pandemics. Unlike other data repositories for
+SARS-CoV-2, we created a repository that immediately computes the
+pangenome of all available data and presents it in useful
+formats for further analysis, including visualisations, GFA and
+RDF. Code and data are available and written using best practices
+and state-of-the-art standards. CPSR can be deployed by anyone,
+anywhere.
+
+CPSR is designed to abide by FAIR data principles (expand...)
+
+CPSR is primed with viral data coming from repositories that have
+no sharing restrictions. The metadata includes relevant
+attribution to uploaders. Some institutes have already committed
+to uploading their data to CPSR first so as to warrant sharing
+for computation.
+
+CPSR is currently running on an Arvados cluster in the cloud. To
+ensure the service remains running we will source funding from
+projects during pandemics. The workflows are written in CWL,
+which means they can be deployed on any infrastructure that runs
+CWL. One of the advantages of the CC-4.0 license is that we make
+all uploaded sequence data and metadata, as well as results,
+available online to anyone, so the data can be mirrored by any
+party. This guarantees the data will live on.
+
+<!-- Future work... -->
+
+We aim to add more workflows to CPSR, for example to prepare
+sequence data for submission to other public repositories, such
+as EBI ENA and GISAID. This will allow researchers to share data
+in multiple systems without pain, circumventing current sharing
+restrictions.
+
+# Acknowledgements
+
+We thank the COVID-19 BioHackathon 2020 and ELIXIR for creating a
+unique event that triggered many collaborations. We thank Curii
+Corporation for their financial support for creating and running
+Arvados instances. We thank Amazon AWS for their financial
+support to run COVID-19 workflows. We also want to thank the
+other working groups in the BioHackathon who generously
+contributed ontologies, workflows and software.
+
# References