-rw-r--r--  README.md                           21
-rw-r--r--  bh20seqanalyzer/main.py            148
-rw-r--r--  bh20sequploader/bh20seq-schema.yml  89
-rw-r--r--  bh20sequploader/main.py             16
-rw-r--r--  bh20sequploader/qc_metadata.py       6
-rw-r--r--  doc/DEVELOPMENT.md                   7
-rw-r--r--  doc/INSTALL.md                      31
-rw-r--r--  example/metadata.yaml               49
-rw-r--r--  example/minimal_example.yaml        14
-rw-r--r--  paper/paper.bib                      0
-rw-r--r--  paper/paper.md                     110
11 files changed, 401 insertions, 90 deletions
diff --git a/README.md b/README.md
index 1448f4c..1c6f239 100644
--- a/README.md
+++ b/README.md
@@ -122,19 +122,7 @@ It should print some instructions about how to use the uploader.
 
 ## Installation with GNU Guix
 
-Another way to install this tool is inside a [GNU Guix Environment](https://guix.gnu.org/manual/en/html_node/Invoking-guix-environment.html), which can handle installing dependencies for you even when you don't have root access on an Ubuntu system.
-
-1. **Set up and enter a container with the necessary dependencies.** After installing Guix as `~/opt/guix/bin/guix`, run:
-
-```sh
-~/opt/guix/bin/guix environment -C guix --ad-hoc git python openssl python-pycurl nss-certs
-```
-   
-2. **Install the tool.** From there you can follow the [user installation instructions](#installation-with-pip3---user). In brief:
-
-```sh
-pip3 install --user git+https://github.com/arvados/bh20-seq-resource.git@master
-```
+For running/developing the uploader with GNU Guix, see [INSTALL.md](./doc/INSTALL.md).
 
 # Usage
 
@@ -148,7 +136,7 @@ bh20-seq-uploader example/sequence.fasta example/metadata.json
 
 All these uploaded sequences are being fed into a workflow to generate a [pangenome](https://academic.oup.com/bib/article/19/1/118/2566735) for the virus. You can replicate this workflow yourself.
 
-Get your SARS-CoV-2 sequences from GenBank in `seqs.fa`, and then run:
+For example, fetch your SARS-CoV-2 sequences from GenBank into `seqs.fa`, and then run:
 
 ```sh
 minimap2 -cx asm20 -X seqs.fa seqs.fa >seqs.paf
@@ -157,6 +145,7 @@ odgi build -g seqs.gfa -s -o seqs.odgi
 odgi viz -i seqs.odgi -o seqs.png -x 4000 -y 500 -R -P 5
 ```
 
-For more information on building pangenome models, [see this wiki page](https://github.com/virtual-biohackathons/covid-19-bh20/wiki/Pangenome#pangenome-model-from-available-genomes).
-
+We have converted this pipeline into the Common Workflow Language (CWL); the
+sources can be found [here](https://github.com/hpobio-lab/viral-analysis/tree/master/cwl/pangenome-generate).
 
+For more information on building pangenome models, [see this wiki page](https://github.com/virtual-biohackathons/covid-19-bh20/wiki/Pangenome#pangenome-model-from-available-genomes).
diff --git a/bh20seqanalyzer/main.py b/bh20seqanalyzer/main.py
index 78e32c9..2030c1e 100644
--- a/bh20seqanalyzer/main.py
+++ b/bh20seqanalyzer/main.py
@@ -13,21 +13,30 @@ logging.basicConfig(format="[%(asctime)s] %(levelname)s %(message)s", datefmt="%
                     level=logging.INFO)
 logging.getLogger("googleapiclient.discovery").setLevel(logging.WARN)
 
-def validate_upload(api, collection, validated_project):
+def validate_upload(api, collection, validated_project,
+                    fastq_project, fastq_workflow_uuid):
     col = arvados.collection.Collection(collection["uuid"])
 
     # validate the collection here.  Check metadata, etc.
     valid = True
 
-    if "sequence.fasta" not in col:
-        valid = False
-        logging.warn("Upload '%s' missing sequence.fasta", collection["name"])
     if "metadata.yaml" not in col:
         logging.warn("Upload '%s' missing metadata.yaml", collection["name"])
         valid = False
     else:
         metadata_content = ruamel.yaml.round_trip_load(col.open("metadata.yaml"))
-        valid = qc_metadata(metadata_content) and valid
+        valid = qc_metadata(metadata_content) and valid
+        if not valid:
+            logging.warn("Failed metadata qc")
+
+    if valid:
+        if "sequence.fasta" not in col:
+            if "reads.fastq" in col:
+                start_fastq_to_fasta(api, collection, fastq_project, fastq_workflow_uuid)
+                return False
+            else:
+                valid = False
+                logging.warn("Upload '%s' missing sequence.fasta", collection["name"])
 
     dup = api.collections().list(filters=[["owner_uuid", "=", validated_project],
                                           ["portable_data_hash", "=", col.portable_data_hash()]]).execute()
@@ -39,7 +48,9 @@ def validate_upload(api, collection, validated_project):
     if valid:
         logging.info("Added '%s' to validated sequences" % collection["name"])
         # Move it to the "validated" project to be included in the next analysis
-        api.collections().update(uuid=collection["uuid"], body={"owner_uuid": validated_project}).execute()
+        api.collections().update(uuid=collection["uuid"], body={
+            "owner_uuid": validated_project,
+            "name": "%s (%s)" % (collection["name"], time.asctime(time.gmtime()))}).execute()
     else:
         # It is invalid, delete it.
         logging.warn("Deleting '%s'" % collection["name"])
@@ -47,28 +58,15 @@ def validate_upload(api, collection, validated_project):
 
     return valid
 
-def start_analysis(api,
-                   analysis_project,
-                   workflow_uuid,
-                   validated_project):
 
+def run_workflow(api, parent_project, workflow_uuid, name, inputobj):
     project = api.groups().create(body={
         "group_class": "project",
-        "name": "Pangenome analysis",
-        "owner_uuid": analysis_project,
+        "name": name,
+        "owner_uuid": parent_project,
     }, ensure_unique_name=True).execute()
 
-    validated = arvados.util.list_all(api.collections().list, filters=[["owner_uuid", "=", validated_project]])
-
     with tempfile.NamedTemporaryFile() as tmp:
-        inputobj = {
-            "inputReads": []
-        }
-        for v in validated:
-            inputobj["inputReads"].append({
-                "class": "File",
-                "location": "keep:%s/sequence.fasta" % v["portable_data_hash"]
-            })
         tmp.write(json.dumps(inputobj, indent=2).encode('utf-8'))
         tmp.flush()
         cmd = ["arvados-cwl-runner",
@@ -83,32 +81,102 @@ def start_analysis(api,
     if comp.returncode != 0:
         logging.error(comp.stderr.decode('utf-8'))
 
+    return project
+
+
+def start_fastq_to_fasta(api, collection,
+                         analysis_project,
+                         fastq_workflow_uuid):
+    newproject = run_workflow(api, analysis_project, fastq_workflow_uuid, "FASTQ to FASTA", {
+        "fastq_forward": {
+            "class": "File",
+            "location": "keep:%s/reads.fastq" % collection["portable_data_hash"]
+        },
+        "metadata": {
+            "class": "File",
+            "location": "keep:%s/metadata.yaml" % collection["portable_data_hash"]
+        },
+        "ref_fasta": {
+            "class": "File",
+            "location": "keep:ffef6a3b77e5e04f8f62a7b6f67264d1+556/SARS-CoV2-NC_045512.2.fasta"
+        }
+    })
+    api.collections().update(uuid=collection["uuid"],
+                             body={"owner_uuid": newproject["uuid"]}).execute()
+
+def start_pangenome_analysis(api,
+                             analysis_project,
+                             pangenome_workflow_uuid,
+                             validated_project):
+    validated = arvados.util.list_all(api.collections().list, filters=[["owner_uuid", "=", validated_project]])
+    inputobj = {
+        "inputReads": [],
+        "metadata": [],
+        "subjects": []
+    }
+    for v in validated:
+        inputobj["inputReads"].append({
+            "class": "File",
+            "location": "keep:%s/sequence.fasta" % v["portable_data_hash"]
+        })
+        inputobj["metadata"].append({
+            "class": "File",
+            "location": "keep:%s/metadata.yaml" % v["portable_data_hash"]
+        })
+        inputobj["subjects"].append("keep:%s/sequence.fasta" % v["portable_data_hash"])
+    run_workflow(api, analysis_project, pangenome_workflow_uuid, "Pangenome analysis", inputobj)
+
+
+def get_workflow_output_from_project(api, uuid):
+    cr = api.container_requests().list(filters=[['owner_uuid', '=', uuid],
+                                                ["requesting_container_uuid", "=", None]]).execute()
+    if cr["items"] and cr["items"][0]["output_uuid"]:
+        return cr["items"][0]
+    else:
+        return None
+
 
 def copy_most_recent_result(api, analysis_project, latest_result_uuid):
     most_recent_analysis = api.groups().list(filters=[['owner_uuid', '=', analysis_project]],
                                                   order="created_at desc", limit=1).execute()
     for m in most_recent_analysis["items"]:
-        cr = api.container_requests().list(filters=[['owner_uuid', '=', m["uuid"]],
-                                                    ["requesting_container_uuid", "=", None]]).execute()
-        if cr["items"] and cr["items"][0]["output_uuid"]:
-            wf = cr["items"][0]
+        wf = get_workflow_output_from_project(api, m["uuid"])
+        if wf:
             src = api.collections().get(uuid=wf["output_uuid"]).execute()
             dst = api.collections().get(uuid=latest_result_uuid).execute()
             if src["portable_data_hash"] != dst["portable_data_hash"]:
                 logging.info("Copying latest result from '%s' to %s", m["name"], latest_result_uuid)
                 api.collections().update(uuid=latest_result_uuid,
                                          body={"manifest_text": src["manifest_text"],
-                                               "description": "latest result from %s %s" % (m["name"], wf["uuid"])}).execute()
+                                               "description": "Result from %s %s" % (m["name"], wf["uuid"])}).execute()
             break
 
 
+def move_fastq_to_fasta_results(api, analysis_project, uploader_project):
+    projects = api.groups().list(filters=[['owner_uuid', '=', analysis_project],
+                                          ["properties.moved_output", "!=", True]],
+                                 order="created_at desc",).execute()
+    for p in projects["items"]:
+        wf = get_workflow_output_from_project(api, p["uuid"])
+        if wf:
+            logging.info("Moving completed fastq2fasta result %s back to uploader project", wf["output_uuid"])
+            api.collections().update(uuid=wf["output_uuid"],
+                                     body={"owner_uuid": uploader_project}).execute()
+            p["properties"]["moved_output"] = True
+            api.groups().update(uuid=p["uuid"], body={"properties": p["properties"]}).execute()
+
+
 def main():
     parser = argparse.ArgumentParser(description='Analyze collections uploaded to a project')
     parser.add_argument('--uploader-project', type=str, default='lugli-j7d0g-n5clictpuvwk8aa', help='')
-    parser.add_argument('--analysis-project', type=str, default='lugli-j7d0g-y4k4uswcqi3ku56', help='')
+    parser.add_argument('--pangenome-analysis-project', type=str, default='lugli-j7d0g-y4k4uswcqi3ku56', help='')
+    parser.add_argument('--fastq-project', type=str, default='lugli-j7d0g-xcjxp4oox2u1w8u', help='')
     parser.add_argument('--validated-project', type=str, default='lugli-j7d0g-5ct8p1i1wrgyjvp', help='')
-    parser.add_argument('--workflow-uuid', type=str, default='lugli-7fd4e-mqfu9y3ofnpnho1', help='')
-    parser.add_argument('--latest-result-uuid', type=str, default='lugli-4zz18-z513nlpqm03hpca', help='')
+
+    parser.add_argument('--pangenome-workflow-uuid', type=str, default='lugli-7fd4e-mqfu9y3ofnpnho1', help='')
+    parser.add_argument('--fastq-workflow-uuid', type=str, default='lugli-7fd4e-2zp9q4jo5xpif9y', help='')
+
+    parser.add_argument('--latest-result-collection', type=str, default='lugli-4zz18-z513nlpqm03hpca', help='')
     args = parser.parse_args()
 
     api = arvados.api()
@@ -116,16 +184,24 @@ def main():
     logging.info("Starting up, monitoring %s for uploads" % (args.uploader_project))
 
     while True:
+        move_fastq_to_fasta_results(api, args.fastq_project, args.uploader_project)
+
         new_collections = api.collections().list(filters=[['owner_uuid', '=', args.uploader_project]]).execute()
         at_least_one_new_valid_seq = False
         for c in new_collections["items"]:
-            at_least_one_new_valid_seq = validate_upload(api, c, args.validated_project) or at_least_one_new_valid_seq
+            at_least_one_new_valid_seq = validate_upload(api, c,
+                                                         args.validated_project,
+                                                         args.fastq_project,
+                                                         args.fastq_workflow_uuid) or at_least_one_new_valid_seq
 
         if at_least_one_new_valid_seq:
-            start_analysis(api, args.analysis_project,
-                           args.workflow_uuid,
-                           args.validated_project)
+            start_pangenome_analysis(api,
+                                     args.pangenome_analysis_project,
+                                     args.pangenome_workflow_uuid,
+                                     args.validated_project)
 
-        copy_most_recent_result(api, args.analysis_project, args.latest_result_uuid)
+        copy_most_recent_result(api,
+                                args.pangenome_analysis_project,
+                                args.latest_result_collection)
 
-        time.sleep(10)
+        time.sleep(15)
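A quick way to exercise `validate_upload` is to call it directly from a Python shell. A minimal sketch, assuming the `bh20seqanalyzer` package is importable, the collection UUID is a placeholder you replace, and the project/workflow UUIDs are the defaults from `main()`:

```python
# Manual test sketch for validate_upload; the collection UUID below is
# a placeholder, the other UUIDs are the argparse defaults from main().
import arvados
from bh20seqanalyzer.main import validate_upload

api = arvados.api()
collection = api.collections().get(uuid="lugli-4zz18-xxxxxxxxxxxxxxx").execute()
ok = validate_upload(api, collection,
                     validated_project="lugli-j7d0g-5ct8p1i1wrgyjvp",
                     fastq_project="lugli-j7d0g-xcjxp4oox2u1w8u",
                     fastq_workflow_uuid="lugli-7fd4e-2zp9q4jo5xpif9y")
print("valid:", ok)
```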
diff --git a/bh20sequploader/bh20seq-schema.yml b/bh20sequploader/bh20seq-schema.yml
index 6e0973a..5c962d1 100644
--- a/bh20sequploader/bh20seq-schema.yml
+++ b/bh20sequploader/bh20seq-schema.yml
@@ -1,36 +1,89 @@
+$base: http://biohackathon.org/bh20-seq-schema
+$namespaces:
+  sch: https://schema.org/
+  efo: http://www.ebi.ac.uk/efo/
+  obo: http://purl.obolibrary.org/obo/
 $graph:
 
-- name: sampleInformationSchema
+- name: hostSchema
   type: record
   fields:
-    location: string
-    host: string
-    sequenceTechnology: string
-    assemblyMethod: string
+    host_species:
+      type: string
+      jsonldPredicate:
+        _id: http://www.ebi.ac.uk/efo/EFO_0000532
+    host_id: string
+    host_common_name: string?
+    host_sex: string?
+    host_age: int?
+    host_age_unit: string?
+    host_health_status: string?
+    host_treatment:
+      type: string?
+      jsonldPredicate:
+        _id: http://www.ebi.ac.uk/efo/EFO_0000727
+    additional_host_information: string?
 
-- name: InstituteInformationSchema
+- name: sampleSchema
   type: record
   fields:
-    OriginatingLab: string
-    SubmittingLab: string
+    collector_name: string
+    collecting_institution: string
+    specimen_source: string?
+    collection_date: string?
+    collection_location:
+      type: string?
+      jsonldPredicate:
+        _id: https://schema.org/fromLocation
+    sample_storage_conditions: string?
+    additional_collection_information: string?
 
-- name: SubmitterInformationSchema
+- name: virusSchema
   type: record
   fields:
-    Submitter: string
-    submissionDate: string
+    virus_species: string?
+    virus_strain: string?
 
-- name: VirusDetailSchema
+- name: technologySchema
   type: record
   fields:
-    VirusName: string
-    AccessionId: string
+    sample_sequencing_technology:
+      type: string
+      jsonldPredicate:
+        _id: http://www.ebi.ac.uk/efo/EFO_0000532
+    sequence_assembly_method:
+      type: string?
+      jsonldPredicate:
+        _id: http://www.ebi.ac.uk/efo/EFO_0002699
+    sequencing_coverage:
+      type: string?
+      jsonldPredicate:
+        _id: http://purl.obolibrary.org/obo/FLU_0000848
+
+- name: submitterSchema
+  type: record
+  fields:
+    submitter_name: string
+    submitter_address: string?
+    originating_lab: string
+    lab_address: string?
+    provider_sample_id: string?
+    submitter_sample_id: string?
+    authors: string?
+    submitter_id: string?
 
 - name: MainSchema
   type: record
   documentRoot: true
   fields:
-    sampleInformation: sampleInformationSchema
-    InstituteInformation: InstituteInformationSchema
-    SubmitterInformation: SubmitterInformationSchema
-    VirusDetail: VirusDetailSchema
+    host: hostSchema
+    sample: sampleSchema
+    virus: virusSchema?
+    technology: technologySchema
+    submitter: submitterSchema
+    sequencefile:
+      doc: The subject (eg the fasta/fastq file) that this metadata describes
+      type: string?
+      jsonldPredicate:
+        _id: "@id"
+        _type: "@id"
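The schema can also be used for validation outside the uploader. A minimal sketch using `schema-salad` directly, mirroring what `bh20sequploader/qc_metadata.py` does; the relative paths assume you run it from the repository root:

```python
# Standalone validation sketch against bh20seq-schema.yml using
# schema-salad; paths are assumptions (repository root).
import schema_salad.schema

(document_loader, avsc_names,
 schema_metadata, metaschema_loader) = schema_salad.schema.load_schema(
    "bh20sequploader/bh20seq-schema.yml")

# Raises a validation exception if the document does not conform.
doc, metadata = schema_salad.schema.load_and_validate(
    document_loader, avsc_names, "example/minimal_example.yaml", True)
print("document is valid")
```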
diff --git a/bh20sequploader/main.py b/bh20sequploader/main.py
index 8b8fefe..bf74ea5 100644
--- a/bh20sequploader/main.py
+++ b/bh20sequploader/main.py
@@ -6,7 +6,7 @@ import json
 import urllib.request
 import socket
 import getpass
-from .qc_metadata import qc_metadata
+from qc_metadata import qc_metadata
 
 ARVADOS_API_HOST='lugli.arvadosapi.com'
 ARVADOS_API_TOKEN='2fbebpmbo3rw3x05ueu2i6nx70zhrsb1p22ycu3ry34m4x4462'
@@ -20,12 +20,21 @@ def main():
 
     api = arvados.api(host=ARVADOS_API_HOST, token=ARVADOS_API_TOKEN, insecure=True)
 
-    qc_metadata(args.metadata.name)
+    if not qc_metadata(args.metadata.name):
+        print("Failed metadata qc")
+        exit(1)
 
     col = arvados.collection.Collection(api_client=api)
 
-    print("Reading FASTA")
-    with col.open("sequence.fasta", "w") as f:
+    if args.sequence.name.endswith("fasta") or args.sequence.name.endswith("fa"):
+        target = "sequence.fasta"
+    elif args.sequence.name.endswith("fastq") or args.sequence.name.endswith("fq"):
+        target = "reads.fastq"
+    else:
+        print("Unrecognized sequence file extension, expected fasta/fa or fastq/fq")
+        exit(1)
+
+    with col.open(target, "w") as f:
         r = args.sequence.read(65536)
         print(r[0:20])
         while r:
@@ -52,5 +58,7 @@ def main():
                  (properties['upload_user'], properties['upload_ip']),
                  properties=properties, ensure_unique_name=True)
 
+    print("Done")
+
 if __name__ == "__main__":
     main()
diff --git a/bh20sequploader/qc_metadata.py b/bh20sequploader/qc_metadata.py
index 78b31b2..ebe4dfc 100644
--- a/bh20sequploader/qc_metadata.py
+++ b/bh20sequploader/qc_metadata.py
@@ -1,6 +1,7 @@
 import schema_salad.schema
 import logging
 import pkg_resources
+import logging
 
 def qc_metadata(metadatafile):
     schema_resource = pkg_resources.resource_stream(__name__, "bh20seq-schema.yml")
@@ -17,5 +18,6 @@ def qc_metadata(metadatafile):
     try:
         doc, metadata = schema_salad.schema.load_and_validate(document_loader, avsc_names, metadatafile, True)
         return True
-    except:
-        return False
+    except Exception as e:
+        logging.warn(e)
+    return False
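With this change, callers get a boolean back and the schema error is logged on failure. A usage sketch, assuming the package is installed:

```python
# Usage sketch: qc_metadata returns True on success; on failure it
# logs the schema-salad validation error and returns False.
from bh20sequploader.qc_metadata import qc_metadata

if not qc_metadata("example/metadata.yaml"):
    raise SystemExit("metadata failed QC")
```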
diff --git a/doc/DEVELOPMENT.md b/doc/DEVELOPMENT.md
new file mode 100644
index 0000000..98d8de4
--- /dev/null
+++ b/doc/DEVELOPMENT.md
@@ -0,0 +1,7 @@
+# Development
+
+## Upload resume
+
+When data files get large, we may want to implement upload resume,
+as `arv-put` does. See
+[/sdk/python/arvados/commands/put.py](https://dev.arvados.org/projects/arvados/repository/revisions/master/entry/sdk/python/arvados/commands/put.py).
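A minimal sketch of the idea, assuming we checkpoint the byte offset to a local JSON state file (the real reference implementation is `put.py`'s resume cache):

```python
# Resume sketch: checkpoint how many bytes have been written so a
# retried upload can seek past them. The STATE filename and 64 KiB
# chunk size are assumptions, not part of the current code.
import json
import os

STATE = "upload.state.json"

def load_offset():
    if os.path.exists(STATE):
        with open(STATE) as f:
            return json.load(f).get("offset", 0)
    return 0

def upload_with_resume(src, dst):
    offset = load_offset()
    src.seek(offset)
    while True:
        chunk = src.read(65536)
        if not chunk:
            break
        dst.write(chunk)
        offset += len(chunk)
        with open(STATE, "w") as f:
            json.dump({"offset": offset}, f)
    os.remove(STATE)  # clean up once the upload completes
```

Here `dst` would be the collection file handle from `col.open(target, "w")` in the uploader.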
diff --git a/doc/INSTALL.md b/doc/INSTALL.md
new file mode 100644
index 0000000..c5c486c
--- /dev/null
+++ b/doc/INSTALL.md
@@ -0,0 +1,31 @@
+# INSTALLATION
+
+This document describes alternative ways to install and run this tool.
+
+## GNU Guix
+
+Another way to install this tool is inside a [GNU Guix Environment](https://guix.gnu.org/manual/en/html_node/Invoking-guix-environment.html), which can handle installing dependencies for you even when you don't have root access on an Ubuntu system.
+
+1. **Set up and enter a container with the necessary dependencies.** After installing Guix as `~/opt/guix/bin/guix`, run:
+
+```sh
+~/opt/guix/bin/guix environment -C guix --ad-hoc git python openssl python-pycurl nss-certs
+```
+
+2. **Install the tool.** From there you can follow the [user installation instructions](#installation-with-pip3---user). In brief:
+
+```sh
+pip3 install --user schema-salad  arvados-python-client
+```
+
+For reference, pip installs the following modules:
+
+```
+arvados-python-client-2.0.1 ciso8601-2.1.3 future-0.18.2 google-api-python-client-1.6.7 httplib2-0.17.1 oauth2client-4.1.3 pyasn1-0.4.8 pyasn1-modules-0.2.8 rsa-4.0 ruamel.yaml-0.15.77 six-1.14.0 uritemplate-3.0.1 ws4py-0.5.1
+```
+
+3. **Run the tool directly.** From the root of the repository checkout, run:
+
+```sh
+~/opt/guix/bin/guix environment guix --ad-hoc git python openssl python-pycurl nss-certs -- python3 bh20sequploader/main.py
+```
diff --git a/example/metadata.yaml b/example/metadata.yaml
index 587d0be..41ff93e 100644
--- a/example/metadata.yaml
+++ b/example/metadata.yaml
@@ -1,17 +1,38 @@
-sampleInformation:
-  location: "USA"
-  host : "Homo Sapiens"
-  sequenceTechnology: "Sanger"
-  assemblyMethod: "CLC Genomics"
+host:
+    host_id: XX1
+    host_species: string
+    host_common_name: string
+    host_sex: string
+    host_age: 20
+    host_age_unit: string
+    host_health_status: string
+    host_treatment: string
+    additional_host_information: string
 
-InstituteInformation:
-  OriginatingLab: "Erik's kitchen"
-  SubmittingLab: "National Institute for Viral Disease Control and Prevention, China CDC"
+sample:
+    collector_name: XXX
+    collecting_institution: XXX
+    specimen_source: XXX
+    collection_date: XXX
+    collection_location: XXX
+    sample_storage_conditions: XXX
+    additional_collection_information: XXX
 
-SubmitterInformation:
-  Submitter: "National Institute for Viral Disease Control and Prevention, China CDC"
-  submissionDate: "04-04-2020"
+virus:
+    virus_species: XX
+    virus_strain: XX
 
-VirusDetail:
-  VirusName: "hCoV-19/USA/identifer/2020"
-  AccessionId: "EPI_ISL_Random"
+technology:
+    sample_sequencing_technology: XX
+    sequence_assembly_method: XX
+    sequencing_coverage: 70x
+
+submitter:
+    submitter_name: tester
+    submitter_address: testerAdd
+    originating_lab: testLab
+    lab_address: labAdd
+    provider_sample_id: string
+    submitter_sample_id: string
+    authors: testAuthor
+    submitter_id: X12
diff --git a/example/minimal_example.yaml b/example/minimal_example.yaml
new file mode 100644
index 0000000..201b080
--- /dev/null
+++ b/example/minimal_example.yaml
@@ -0,0 +1,14 @@
+host:
+    host_id: XX
+    host_species: string
+
+sample:
+    collector_name: XXX
+    collecting_institution: XXX
+
+technology:
+    sample_sequencing_technology: XX
+
+submitter:
+    submitter_name: tester
+    originating_lab: testLab
\ No newline at end of file
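The examples are plain YAML and can be loaded from Python with `ruamel.yaml`, the same loader the analyzer uses. A short sketch, assuming you run it from the repository root:

```python
# Round-trip load the minimal example metadata, as bh20seqanalyzer
# does for uploaded metadata.yaml files.
import ruamel.yaml

with open("example/minimal_example.yaml") as f:
    metadata = ruamel.yaml.round_trip_load(f)
print(metadata["host"]["host_id"])
```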
diff --git a/paper/paper.bib b/paper/paper.bib
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/paper/paper.bib
diff --git a/paper/paper.md b/paper/paper.md
new file mode 100644
index 0000000..caa9903
--- /dev/null
+++ b/paper/paper.md
@@ -0,0 +1,110 @@
+---
+title: 'Public Sequence Resource for COVID-19'
+tags:
+  - Sequencing
+  - COVID
+authors:
+  - name: Pjotr Prins
+    orcid: 0000-0002-8021-9162
+    affiliation: 1
+  - name: Peter Amstutz
+    orcid: 0000
+    affiliation: 2
+  - name: Tazro Ohta
+    orcid: 0000
+    affiliation: 3
+  - name: Thomas Liener
+    orcid: 0000
+    affiliation: 4
+  - name: Erik Garrison
+    orcid: 0000
+    affiliation: 5
+  - name: Michael Crusoe
+    orcid: 0000
+    affiliation: 6
+  - name: Rutger Vos
+    orcid: 0000
+    affiliation: 7
+  - name: Michael Heuer
+    orcid: 0000
+    affiliation: 8
+
+affiliations:
+  - name: Department of Genetics, Genomics and Informatics, The University of Tennessee Health Science Center, Memphis, TN, USA.
+    index: 1
+  - name: Curii, Boston, USA
+    index: 2
+date: 11 April 2020
+bibliography: paper.bib
+---
+
+<!--
+
+The paper.md, bibtex and figure file can be found in this repo:
+
+  https://github.com/arvados/bh20-seq-resource
+
+To modify, please clone the repo. You can generate PDF of the paper by
+pasting above link (or yours) with
+
+  https://github.com/biohackrxiv/bhxiv-gen-pdf
+
+-->
+
+# Introduction
+
+As part of the one-week COVID-19 Biohackathon 2020, we formed a
+working group on creating a public sequence resource for the coronavirus.
+
+
+<!--
+
+    RESULTS!
+
+    For each section below
+
+    State the problem you worked on
+    Give the state-of-the art/plan
+    Describe what you have done/results starting with The working group created...
+    Write a conclusion
+    Write up any future work
+
+-->
+
+## Cloud computing backend
+
+Peter, Pjotr, MichaelC
+
+## A command-line sequence uploader
+
+Peter, Pjotr
+
+## Metadata uploader
+
+With Thomas
+
+## FASTA to GFA workflow
+
+Michael Heuer
+
+## BAM to GFA workflow
+
+Tazro & Erik
+
+## Phylogeny app
+
+With Rutger
+
+## RDF app
+
+Jerven?
+
+## EBI app
+
+?
+
+# Discussion
+
+Future work...
+
+# References