author     Pjotr Prins  2020-04-09 15:34:06 -0500
committer  Pjotr Prins  2020-04-09 15:34:06 -0500
commit     0670ac0644c1e7366952e254bdee2db62e673275 (patch)
tree       aed056a1ca4208cf30993da3e96bbb4fb08dbe52
parent     146cf2f5d1be9a5dd9d6cd65ce9c760853d014f8 (diff)
parent     dbe094a150d6c969b3d69f112b3538e6a87a74a2 (diff)
Merge branch 'master' of github.com:arvados/bh20-seq-resource
-rw-r--r--  README.md                           168
-rw-r--r--  bh20seqanalyzer/main.py             187
-rw-r--r--  bh20sequploader/bh20seq-schema.yml   89
-rw-r--r--  bh20sequploader/main.py              22
-rw-r--r--  bh20sequploader/qc_metadata.py       23
-rw-r--r--  example/metadata.json                 0
-rw-r--r--  example/metadata.yaml                38
-rw-r--r--  example/minimal_example.yaml         14
-rw-r--r--  setup.py                              5
9 files changed, 491 insertions(+), 55 deletions(-)
diff --git a/README.md b/README.md
index ec9afb1..a6fe052 100644
--- a/README.md
+++ b/README.md
@@ -1,48 +1,162 @@
# Sequence uploader
-This repository provides a sequence uploader for the
+This repository provides a sequence uploader for the COVID-19 Virtual Biohackathon's Public Sequence Resource project. You can use it to upload the genomes of SARS-CoV-2 samples to make them publicly and freely available to other researchers.
-# Run
+To get started, first [install the uploader](#installation), then use the `bh20-seq-uploader` command to [upload your data](#usage).
-Run the uploader with a FASTA file and accompanying metadata:
+# Installation
- python3 bh20sequploader/main.py example/sequence.fasta example/metadata.json
+There are several ways to install the uploader. The most portable is with a [virtualenv](#installation-with-virtualenv).
-# Add a workflow
+## Installation with `virtualenv`
-get your SARS-CoV-2 sequences from GenBank in seqs.fa
+1. **Prepare your system.** Make sure you have Python 3 and the build tools needed to install modules such as `pycurl` and `pyopenssl`. On Ubuntu 18.04, you can run:
```sh
-minimap2 -cx asm20 -X seqs.fa seqs.fa >seqs.paf
-seqwish -s seqs.fa -p seqs.paf -g seqs.gfa
-odgi build -g seqs.gfa -s -o seqs.odgi
-odgi viz -i seqs.odgi -o seqs.png -x 4000 -y 500 -R -P 5
+sudo apt update
+sudo apt install -y virtualenv git libcurl4-openssl-dev build-essential python3-dev libssl-dev
```
-from https://github.com/virtual-biohackathons/covid-19-bh20/wiki/Pangenome#pangenome-model-from-available-genomes
+2. **Create and enter your virtualenv.** Go to some memorable directory, then create and activate a virtualenv there:
-# Installation
+```sh
+virtualenv --python python3 venv
+. venv/bin/activate
+```
+
+Note that you will need to repeat the `. venv/bin/activate` step from this directory to enter your virtualenv whenever you want to use the installed tool.
+
+3. **Install the tool.** Once in your virtualenv, install this project:
+
+```sh
+pip3 install git+https://github.com/arvados/bh20-seq-resource.git@master
+```
+
+4. **Test the tool.** Try running:
+
+```sh
+bh20-seq-uploader --help
+```
+
+It should print some instructions about how to use the uploader.
+
+**Make sure you are in your virtualenv whenever you run the tool!** If you ever can't run the tool, and your prompt doesn't say `(venv)`, try going to the directory where you put the virtualenv and running `. venv/bin/activate`. It only works for the current terminal window; you will need to run it again if you open a new terminal.
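+
+If you are unsure whether the virtualenv is active, check which interpreter your shell resolves; it should point inside the `venv` directory:
+
+```sh
+which python3   # expect something like .../venv/bin/python3
+```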
+
+## Installation with `pip3 --user`
+
+If you don't want to have to enter a virtualenv every time you use the uploader, you can use the `--user` feature of `pip3` to install the tool for your user.
+
+1. **Prepare your system.** Just as for the `virtualenv` method, you need to install some dependencies. On Ubuntu 18.04, you can run:
+
+```sh
+sudo apt update
+sudo apt install -y virtualenv git libcurl4-openssl-dev build-essential python3-dev libssl-dev
+```
+
+2. **Install the tool.** You can run:
+
+```sh
+pip3 install --user git+https://github.com/arvados/bh20-seq-resource.git@master
+```
+
+3. **Make sure the tool is on your `PATH`.** The `pip3` command installs the uploader in `.local/bin` inside your home directory. Your shell may not know to look for commands there by default. To fix this for the terminal you currently have open, run:
+
+```sh
+export PATH=$PATH:$HOME/.local/bin
+```
+
+To make this change permanent, assuming your shell is Bash, run:
+
+```sh
+echo 'export PATH=$PATH:$HOME/.local/bin' >>~/.bashrc
+```
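+
+Then reload your shell configuration (or open a new terminal) so the change takes effect:
+
+```sh
+source ~/.bashrc
+```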
+
+4. **Test the tool.** Try running:
+
+```sh
+bh20-seq-uploader --help
+```
+
+It should print some instructions about how to use the uploader.
-This tool requires the arvados Python module which can be installed
-using .deb or .rpm packages through
-https://doc.arvados.org/v2.0/sdk/python/sdk-python.html. The actual
-code lives [here](https://github.com/arvados/arvados/tree/master/sdk/python) and
-suggests a local install using
+## Installation from Source for Development
- apt-get install libcurl4-openssl-dev libssl1.0-dev
- pip3 install --user arvados-python-client
+If you plan to contribute to the project, you may want to install an editable copy from source. With this method, changes to the source code are automatically reflected in the installed copy of the tool.
-Next update
+1. **Prepare your system.** On Ubuntu 18.04, you can run:
- export PATH=$PATH:$HOME/.local/bin
+```sh
+sudo apt update
+sudo apt install -y virtualenv git libcurl4-openssl-dev build-essential python3-dev libssl-dev
+```
+
+2. **Clone and enter the repository.** You can run:
+
+```sh
+git clone https://github.com/arvados/bh20-seq-resource.git
+cd bh20-seq-resource
+```
+
+3. **Create and enter a virtualenv.** Go to some memorable directory, then create and activate a virtualenv there:
+
+```sh
+virtualenv --python python3 venv
+. venv/bin/activate
+```
+
+Note that you will need to repeat the `. venv/bin/activate` step from this directory to enter your virtualenv whenever you want to use the installed tool.
+
+4. **Install the checked-out repository in editable mode.** Once in your virtualenv, install with this special pip command:
+
+```sh
+pip3 install -e .
+```
+
+5. **Test the tool.** Try running:
+
+```sh
+bh20-seq-uploader --help
+```
+
+It should print some instructions about how to use the uploader.
+
+## Installation with GNU Guix
-## Install with GNU Guix
+Another way to install this tool is inside a [GNU Guix Environment](https://guix.gnu.org/manual/en/html_node/Invoking-guix-environment.html), which can handle installing dependencies for you even when you don't have root access on an Ubuntu system.
-Set up a container:
+1. **Set up and enter a container with the necessary dependencies.** After installing Guix as `~/opt/guix/bin/guix`, run:
+
+```sh
+~/opt/guix/bin/guix environment -C guix --ad-hoc git python openssl python-pycurl nss-certs
+```
+
+2. **Install the tool.** From there you can follow the [user installation instructions](#installation-with-pip3---user). In brief:
+
+```sh
+pip3 install --user git+https://github.com/arvados/bh20-seq-resource.git@master
+```
+
+# Usage
+
+Run the uploader with a FASTA file and accompanying metadata file in [JSON-LD format](https://json-ld.org/):
+
+```sh
+bh20-seq-uploader example/sequence.fasta example/metadata.json
+```
+
+## Workflow for Generating a Pangenome
+
+All these uploaded sequences are being fed into a workflow to generate a [pangenome](https://academic.oup.com/bib/article/19/1/118/2566735) for the virus. You can replicate this workflow yourself.
+
+Get your SARS-CoV-2 sequences from GenBank in `seqs.fa`, and then run:
+
+```sh
+minimap2 -cx asm20 -X seqs.fa seqs.fa >seqs.paf
+seqwish -s seqs.fa -p seqs.paf -g seqs.gfa
+odgi build -g seqs.gfa -s -o seqs.odgi
+odgi viz -i seqs.odgi -o seqs.png -x 4000 -y 500 -R -P 5
+```
- ~/opt/guix/bin/guix environment -C guix --ad-hoc python openssl python-pycurl nss-certs
- pip3 install --user arvados-python-client
+For more information on building pangenome models, [see this wiki page](https://github.com/virtual-biohackathons/covid-19-bh20/wiki/Pangenome#pangenome-model-from-available-genomes).
-Pip installed the following modules
- arvados-python-client-2.0.1 ciso8601-2.1.3 future-0.18.2 google-api-python-client-1.6.7 httplib2-0.17.1 oauth2client-4.1.3 pyasn1-0.4.8 pyasn1-modules-0.2.8 rsa-4.0 ruamel.yaml-0.15.77 six-1.14.0 uritemplate-3.0.1 ws4py-0.5.1
diff --git a/bh20seqanalyzer/main.py b/bh20seqanalyzer/main.py
index 23e58e9..1a8965b 100644
--- a/bh20seqanalyzer/main.py
+++ b/bh20seqanalyzer/main.py
@@ -1,29 +1,73 @@
import argparse
import arvados
+import arvados.collection
import time
import subprocess
import tempfile
import json
+import logging
+import ruamel.yaml
+from bh20sequploader.qc_metadata import qc_metadata
-def start_analysis(api, collection, analysis_project, workflow_uuid):
+logging.basicConfig(format="[%(asctime)s] %(levelname)s %(message)s", datefmt="%Y-%m-%d %H:%M:%S",
+                    level=logging.INFO)
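+# The Google API discovery client logs noisily at INFO; only let its warnings through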
+logging.getLogger("googleapiclient.discovery").setLevel(logging.WARN)
+
+def validate_upload(api, collection, validated_project,
+                    fastq_project, fastq_workflow_uuid):
+    col = arvados.collection.Collection(collection["uuid"])
+
+    # validate the collection here. Check metadata, etc.
+    valid = True
+
+    if "metadata.yaml" not in col:
+        logging.warn("Upload '%s' missing metadata.yaml", collection["name"])
+        valid = False
+    else:
+        metadata_content = ruamel.yaml.round_trip_load(col.open("metadata.yaml"))
+        #valid = qc_metadata(metadata_content) and valid
+        if not valid:
+            logging.warn("Failed metadata qc")
+
+    if valid:
+        if "sequence.fasta" not in col:
+            if "reads.fastq" in col:
+                start_fastq_to_fasta(api, collection, fastq_project, fastq_workflow_uuid)
+                return False
+            else:
+                valid = False
+                logging.warn("Upload '%s' missing sequence.fasta", collection["name"])
+
+    dup = api.collections().list(filters=[["owner_uuid", "=", validated_project],
+                                          ["portable_data_hash", "=", col.portable_data_hash()]]).execute()
+    if dup["items"]:
+        # This exact collection has been uploaded before.
+        valid = False
+        logging.warn("Upload '%s' is duplicate" % collection["name"])
+
+    if valid:
+        logging.info("Added '%s' to validated sequences" % collection["name"])
+        # Move it to the "validated" project to be included in the next analysis
+        api.collections().update(uuid=collection["uuid"], body={
+            "owner_uuid": validated_project,
+            "name": "%s (%s)" % (collection["name"], time.asctime(time.gmtime()))}).execute()
+    else:
+        # It is invalid, delete it.
+        logging.warn("Deleting '%s'" % collection["name"])
+        api.collections().delete(uuid=collection["uuid"]).execute()
+
+    return valid
+
+
+def run_workflow(api, parent_project, workflow_uuid, name, inputobj):
    project = api.groups().create(body={
        "group_class": "project",
-        "name": "Analysis of %s" % collection["name"],
-        "owner_uuid": analysis_project,
+        "name": name,
+        "owner_uuid": parent_project,
    }, ensure_unique_name=True).execute()
    with tempfile.NamedTemporaryFile() as tmp:
-        inputobj = json.dumps({
-            "sequence": {
-                "class": "File",
-                "location": "keep:%s/sequence.fasta" % collection["portable_data_hash"]
-            },
-            "metadata": {
-                "class": "File",
-                "location": "keep:%s/metadata.jsonld" % collection["portable_data_hash"]
-            }
-        }, indent=2)
-        tmp.write(inputobj.encode('utf-8'))
+        tmp.write(json.dumps(inputobj, indent=2).encode('utf-8'))
        tmp.flush()
        cmd = ["arvados-cwl-runner",
               "--submit",
@@ -32,24 +76,125 @@ def start_analysis(api, collection, analysis_project, workflow_uuid):
               "--project-uuid=%s" % project["uuid"],
               "arvwf:%s" % workflow_uuid,
               tmp.name]
-        print("Running %s" % ' '.join(cmd))
+        logging.info("Running %s" % ' '.join(cmd))
        comp = subprocess.run(cmd, capture_output=True)
    if comp.returncode != 0:
-        print(comp.stderr.decode('utf-8'))
+        logging.error(comp.stderr.decode('utf-8'))
+
+    return project
+
+
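+# Launch the FASTQ-to-FASTA conversion workflow for an upload that arrived as reads, then move the upload into the new workflow project.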
+def start_fastq_to_fasta(api, collection,
+                         analysis_project,
+                         fastq_workflow_uuid):
+    newproject = run_workflow(api, analysis_project, fastq_workflow_uuid, "FASTQ to FASTA", {
+        "fastq_forward": {
+            "class": "File",
+            "location": "keep:%s/reads.fastq" % collection["portable_data_hash"]
+        },
+        "metadata": {
+            "class": "File",
+            "location": "keep:%s/metadata.yaml" % collection["portable_data_hash"]
+        },
+        "ref_fasta": {
+            "class": "File",
+            "location": "keep:ffef6a3b77e5e04f8f62a7b6f67264d1+556/SARS-CoV2-NC_045512.2.fasta"
+        }
+    })
+    api.collections().update(uuid=collection["uuid"],
+                             body={"owner_uuid": newproject["uuid"]}).execute()
+
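+# Collect every validated sequence into a single CWL input object and launch one pangenome analysis over all of them.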
+def start_pangenome_analysis(api,
+                             analysis_project,
+                             pangenome_workflow_uuid,
+                             validated_project):
+    validated = arvados.util.list_all(api.collections().list, filters=[["owner_uuid", "=", validated_project]])
+    inputobj = {
+        "inputReads": []
+    }
+    for v in validated:
+        inputobj["inputReads"].append({
+            "class": "File",
+            "location": "keep:%s/sequence.fasta" % v["portable_data_hash"]
+        })
+    run_workflow(api, analysis_project, pangenome_workflow_uuid, "Pangenome analysis", inputobj)
+
+
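+# The output of a project's workflow hangs off its top-level container request, i.e. the one with no requesting container.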
+def get_workflow_output_from_project(api, uuid):
+    cr = api.container_requests().list(filters=[['owner_uuid', '=', uuid],
+                                                ["requesting_container_uuid", "=", None]]).execute()
+    if cr["items"] and cr["items"][0]["output_uuid"]:
+        return cr["items"][0]
    else:
-        api.collections().update(uuid=collection["uuid"], body={"owner_uuid": project['uuid']}).execute()
+        return None
+
+
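+# Copy the newest analysis output into the fixed "latest result" collection, skipping the copy when the content hash is unchanged.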
+def copy_most_recent_result(api, analysis_project, latest_result_uuid):
+    most_recent_analysis = api.groups().list(filters=[['owner_uuid', '=', analysis_project]],
+                                             order="created_at desc", limit=1).execute()
+    for m in most_recent_analysis["items"]:
+        wf = get_workflow_output_from_project(api, m["uuid"])
+        if wf:
+            src = api.collections().get(uuid=wf["output_uuid"]).execute()
+            dst = api.collections().get(uuid=latest_result_uuid).execute()
+            if src["portable_data_hash"] != dst["portable_data_hash"]:
+                logging.info("Copying latest result from '%s' to %s", m["name"], latest_result_uuid)
+                api.collections().update(uuid=latest_result_uuid,
+                                         body={"manifest_text": src["manifest_text"],
+                                               "description": "Result from %s %s" % (m["name"], wf["uuid"])}).execute()
+        break
+
+
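+# Hand completed fastq2fasta outputs back to the uploader project so they re-enter validation as FASTA uploads; mark each project so it is only moved once.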
+def move_fastq_to_fasta_results(api, analysis_project, uploader_project):
+    projects = api.groups().list(filters=[['owner_uuid', '=', analysis_project],
+                                          ["properties.moved_output", "!=", True]],
+                                 order="created_at desc",).execute()
+    for p in projects["items"]:
+        wf = get_workflow_output_from_project(api, p["uuid"])
+        if wf:
+            logging.info("Moving completed fastq2fasta result %s back to uploader project", wf["output_uuid"])
+            api.collections().update(uuid=wf["output_uuid"],
+                                     body={"owner_uuid": uploader_project}).execute()
+            p["properties"]["moved_output"] = True
+            api.groups().update(uuid=p["uuid"], body={"properties": p["properties"]}).execute()
+
def main():
    parser = argparse.ArgumentParser(description='Analyze collections uploaded to a project')
    parser.add_argument('--uploader-project', type=str, default='lugli-j7d0g-n5clictpuvwk8aa', help='')
-    parser.add_argument('--analysis-project', type=str, default='lugli-j7d0g-y4k4uswcqi3ku56', help='')
-    parser.add_argument('--workflow-uuid', type=str, default='lugli-7fd4e-mqfu9y3ofnpnho1', help='')
+    parser.add_argument('--pangenome-analysis-project', type=str, default='lugli-j7d0g-y4k4uswcqi3ku56', help='')
+    parser.add_argument('--fastq-project', type=str, default='lugli-j7d0g-xcjxp4oox2u1w8u', help='')
+    parser.add_argument('--validated-project', type=str, default='lugli-j7d0g-5ct8p1i1wrgyjvp', help='')
+
+    parser.add_argument('--pangenome-workflow-uuid', type=str, default='lugli-7fd4e-mqfu9y3ofnpnho1', help='')
+    parser.add_argument('--fastq-workflow-uuid', type=str, default='lugli-7fd4e-2zp9q4jo5xpif9y', help='')
+
+    parser.add_argument('--latest-result-collection', type=str, default='lugli-4zz18-z513nlpqm03hpca', help='')
    args = parser.parse_args()
    api = arvados.api()
+    logging.info("Starting up, monitoring %s for uploads" % (args.uploader_project))
+
while True:
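+        # One poll cycle: reclaim finished fastq2fasta results, validate new uploads, then rebuild the pangenome if anything new arrived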
+        move_fastq_to_fasta_results(api, args.fastq_project, args.uploader_project)
+
        new_collections = api.collections().list(filters=[['owner_uuid', '=', args.uploader_project]]).execute()
+        at_least_one_new_valid_seq = False
        for c in new_collections["items"]:
-            start_analysis(api, c, args.analysis_project, args.workflow_uuid)
-        time.sleep(10)
+            at_least_one_new_valid_seq = validate_upload(api, c,
+                                                         args.validated_project,
+                                                         args.fastq_project,
+                                                         args.fastq_workflow_uuid) or at_least_one_new_valid_seq
+
+        if at_least_one_new_valid_seq:
+            start_pangenome_analysis(api,
+                                     args.pangenome_analysis_project,
+                                     args.pangenome_workflow_uuid,
+                                     args.validated_project)
+
+        copy_most_recent_result(api,
+                                args.pangenome_analysis_project,
+                                args.latest_result_collection)
+
+        time.sleep(15)
diff --git a/bh20sequploader/bh20seq-schema.yml b/bh20sequploader/bh20seq-schema.yml
new file mode 100644
index 0000000..5c962d1
--- /dev/null
+++ b/bh20sequploader/bh20seq-schema.yml
@@ -0,0 +1,89 @@
+$base: http://biohackathon.org/bh20-seq-schema
+$namespaces:
+  sch: https://schema.org/
+  efo: http://www.ebi.ac.uk/efo/
+  obo: http://purl.obolibrary.org/obo/
+$graph:
+
+- name: hostSchema
+  type: record
+  fields:
+    host_species:
+      type: string
+      jsonldPredicate:
+        _id: http://www.ebi.ac.uk/efo/EFO_0000532
+    host_id: string
+    host_common_name: string?
+    host_sex: string?
+    host_age: int?
+    host_age_unit: string?
+    host_health_status: string?
+    host_treatment:
+      type: string?
+      jsonldPredicate:
+        _id: http://www.ebi.ac.uk/efo/EFO_0000727
+    additional_host_information: string?
+
+- name: sampleSchema
+  type: record
+  fields:
+    collector_name: string
+    collecting_institution: string
+    specimen_source: string?
+    collection_date: string?
+    collection_location:
+      type: string?
+      jsonldPredicate:
+        _id: https://schema.org/fromLocation
+    sample_storage_conditions: string?
+    additional_collection_information: string?
+
+- name: virusSchema
+  type: record
+  fields:
+    virus_species: string?
+    virus_strain: string?
+
+- name: technologySchema
+  type: record
+  fields:
+    sample_sequencing_technology:
+      type: string
+      jsonldPredicate:
+        _id: http://www.ebi.ac.uk/efo/EFO_0000532
+    sequence_assembly_method:
+      type: string?
+      jsonldPredicate:
+        _id: http://www.ebi.ac.uk/efo/EFO_0002699
+    sequencing_coverage:
+      type: string?
+      jsonldPredicate:
+        _id: http://purl.obolibrary.org/obo/FLU_0000848
+
+- name: submitterSchema
+  type: record
+  fields:
+    submitter_name: string
+    submitter_address: string?
+    originating_lab: string
+    lab_address: string?
+    provider_sample_id: string?
+    submitter_sample_id: string?
+    authors: string?
+    submitter_id: string?
+
+- name: MainSchema
+  type: record
+  documentRoot: true
+  fields:
+    host: hostSchema
+    sample: sampleSchema
+    virus: virusSchema?
+    technology: technologySchema
+    submitter: submitterSchema
+    sequencefile:
+      doc: The subject (e.g. the fasta/fastq file) that this metadata describes
+      type: string?
+      jsonldPredicate:
+        _id: "@id"
+        _type: "@id"
diff --git a/bh20sequploader/main.py b/bh20sequploader/main.py
index 17ad492..56cbe22 100644
--- a/bh20sequploader/main.py
+++ b/bh20sequploader/main.py
@@ -6,6 +6,7 @@ import json
import urllib.request
import socket
import getpass
+from .qc_metadata import qc_metadata
ARVADOS_API_HOST='lugli.arvadosapi.com'
ARVADOS_API_TOKEN='2fbebpmbo3rw3x05ueu2i6nx70zhrsb1p22ycu3ry34m4x4462'
@@ -19,18 +20,26 @@ def main():
    api = arvados.api(host=ARVADOS_API_HOST, token=ARVADOS_API_TOKEN, insecure=True)
+    if not qc_metadata(args.metadata.name):
+        print("Failed metadata qc")
+        exit(1)
+
    col = arvados.collection.Collection(api_client=api)
-    print("Reading FASTA")
-    with col.open("sequence.fasta", "w") as f:
+    if args.sequence.name.endswith("fasta") or args.sequence.name.endswith("fa"):
+        target = "sequence.fasta"
+    elif args.sequence.name.endswith("fastq") or args.sequence.name.endswith("fq"):
+        target = "reads.fastq"
+    else:
+        # Bail out early; otherwise "target" would be unbound below
+        print("Unrecognized sequence file extension; expected .fasta/.fa or .fastq/.fq")
+        exit(1)
+
+    with col.open(target, "w") as f:
        r = args.sequence.read(65536)
        print(r[0:20])
        while r:
            f.write(r)
            r = args.sequence.read(65536)
-    print("Reading JSONLD")
-    with col.open("metadata.jsonld", "w") as f:
+    print("Reading metadata")
+    with col.open("metadata.yaml", "w") as f:
        r = args.metadata.read(65536)
        print(r[0:20])
        while r:
@@ -49,4 +58,7 @@ def main():
(properties['upload_user'], properties['upload_ip']),
properties=properties, ensure_unique_name=True)
-main()
+    print("Done")
+
+if __name__ == "__main__":
+    main()
diff --git a/bh20sequploader/qc_metadata.py b/bh20sequploader/qc_metadata.py
new file mode 100644
index 0000000..ebe4dfc
--- /dev/null
+++ b/bh20sequploader/qc_metadata.py
@@ -0,0 +1,23 @@
+import schema_salad.schema
+import logging
+import pkg_resources
+
+def qc_metadata(metadatafile):
+    schema_resource = pkg_resources.resource_stream(__name__, "bh20seq-schema.yml")
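+    # Pre-seed the schema-salad document cache so the canonical schema URL resolves to the packaged copy without a network fetch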
+    cache = {"https://raw.githubusercontent.com/arvados/bh20-seq-resource/master/bh20sequploader/bh20seq-schema.yml": schema_resource.read().decode("utf-8")}
+    (document_loader,
+     avsc_names,
+     schema_metadata,
+     metaschema_loader) = schema_salad.schema.load_schema("https://raw.githubusercontent.com/arvados/bh20-seq-resource/master/bh20sequploader/bh20seq-schema.yml", cache=cache)
+
+    if not isinstance(avsc_names, schema_salad.avro.schema.Names):
+        print(avsc_names)
+        return False
+
+    try:
+        doc, metadata = schema_salad.schema.load_and_validate(document_loader, avsc_names, metadatafile, True)
+        return True
+    except Exception as e:
+        logging.warn(e)
+        return False
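+
+# A standalone check against the bundled schema might look like:
+#
+#     from bh20sequploader.qc_metadata import qc_metadata
+#     if qc_metadata("example/minimal_example.yaml"):
+#         print("metadata validates")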
diff --git a/example/metadata.json b/example/metadata.json
deleted file mode 100644
index e69de29..0000000
--- a/example/metadata.json
+++ /dev/null
diff --git a/example/metadata.yaml b/example/metadata.yaml
new file mode 100644
index 0000000..41ff93e
--- /dev/null
+++ b/example/metadata.yaml
@@ -0,0 +1,38 @@
+host:
+  host_id: XX1
+  host_species: string
+  host_common_name: string
+  host_sex: string
+  host_age: 20
+  host_age_unit: string
+  host_health_status: string
+  host_treatment: string
+  additional_host_information: string
+
+sample:
+  collector_name: XXX
+  collecting_institution: XXX
+  specimen_source: XXX
+  collection_date: XXX
+  collection_location: XXX
+  sample_storage_conditions: XXX
+  additional_collection_information: XXX
+
+virus:
+  virus_species: XX
+  virus_strain: XX
+
+technology:
+  sample_sequencing_technology: XX
+  sequence_assembly_method: XX
+  sequencing_coverage: 70x
+
+submitter:
+  submitter_name: tester
+  submitter_address: testerAdd
+  originating_lab: testLab
+  lab_address: labAdd
+  provider_sample_id: string
+  submitter_sample_id: string
+  authors: testAuthor
+  submitter_id: X12
diff --git a/example/minimal_example.yaml b/example/minimal_example.yaml
new file mode 100644
index 0000000..201b080
--- /dev/null
+++ b/example/minimal_example.yaml
@@ -0,0 +1,14 @@
+host:
+  host_id: XX
+  host_species: string
+
+sample:
+  collector_name: XXX
+  collecting_institution: XXX
+
+technology:
+  sample_sequencing_technology: XX
+
+submitter:
+  submitter_name: tester
+  originating_lab: testLab
\ No newline at end of file
diff --git a/setup.py b/setup.py
index 9e73ff0..48c25aa 100644
--- a/setup.py
+++ b/setup.py
@@ -6,7 +6,7 @@ import setuptools.command.egg_info as egg_info_cmd
from setuptools import setup
SETUP_DIR = os.path.dirname(__file__)
-README = os.path.join(SETUP_DIR, "README.rst")
+README = os.path.join(SETUP_DIR, "README.md")
try:
import gittaggers
@@ -15,7 +15,7 @@ try:
except ImportError:
tagger = egg_info_cmd.egg_info
-install_requires = ["arvados-python-client"]
+install_requires = ["arvados-python-client", "schema-salad"]
needs_pytest = {"pytest", "test", "ptr"}.intersection(sys.argv)
pytest_runner = ["pytest < 6", "pytest-runner < 5"] if needs_pytest else []
@@ -30,6 +30,7 @@ setup(
author_email="peter.amstutz@curii.com",
license="Apache 2.0",
packages=["bh20sequploader", "bh20seqanalyzer"],
+ package_data={"bh20sequploader": ["bh20seq-schema.yml"]},
install_requires=install_requires,
setup_requires=[] + pytest_runner,
tests_require=["pytest<5"],