The FHIR Scripts provide commands that support the development process for FHIR profiles and IGs.
The following commands are available:
Command | Description |
---|---|
update | Update the script itself |
pytools | Update the Python tools, e.g. igtools, epatools |
tools | Update Sushi and IG Publisher |
fhircache | Rebuild the FHIR cache |
bdcache | Delete the build cache |
build | Build an IG |
deploy | Deploy an IG |
The `update` command downloads the latest version of the script to the current directory.
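A minimal usage sketch; the invocation pattern is assumed from the `fhircache` example shown further below:

```sh
./fhir_scripts.sh update
```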
The `pytools` command installs the latest version of the Python tools (e.g. igtools, epatools).
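A usage sketch with the same assumed invocation pattern:

```sh
./fhir_scripts.sh pytools
```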
The `tools` command installs the latest versions of FSH Sushi and the IG Publisher.
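A usage sketch with the same assumed invocation pattern:

```sh
./fhir_scripts.sh tools
```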
The `fhircache` command clears the FHIR cache and rebuilds it from FHIR packages. Optionally, it can install packages from a local directory and cache dependency packages in that directory.
```sh
./fhir_scripts.sh fhircache [<pkgdir>]
```
When `<pkgdir>` is provided, the dependencies from `package.json` are read and the packages available in `<pkgdir>` are installed. Additionally, direct dependencies that are not present in `<pkgdir>` are downloaded into it for later use.
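For example, assuming local packages live in a hypothetical `./packages` directory:

```sh
# install available packages from ./packages and cache missing direct dependencies there
./fhir_scripts.sh fhircache ./packages
```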
The `bdcache` command deletes cached schemas and TX (terminology) data from `input-cache/`.
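A usage sketch with the assumed invocation pattern:

```sh
./fhir_scripts.sh bdcache
```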
The `build` command performs several steps to support the process of building a FHIR IG.
It is separated into two parts, building
- the FHIR definitions
- the FHIR IG

The optional argument `noig` only builds the definitions, and `nodefs` only builds the IG.
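A usage sketch; the subcommand forms follow from the arguments described above, while the invocation pattern is assumed from the `fhircache` example:

```sh
./fhir_scripts.sh build          # build both the definitions and the IG
./fhir_scripts.sh build noig     # build only the FHIR definitions
./fhir_scripts.sh build nodefs   # build only the IG
```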
Steps for building the FHIR definitions:
- Track and update requirements and update release notes, if igtools are available
- Build FHIR definitions using FSH Sushi
- Merge CapabilityStatements, if epatools are available
Steps for building the FHIR IG:
- Build IG using IG Publisher
- Generate OpenAPI specifications, if epatools are available
- Update archived IG, if epatools are available
For building the IG, a `config.sh` needs to be present in the current directory, defining the publish URL for the IG Publisher as `PUBLISH_URL`. For updating the archive, a list of files needs to be defined as `CONTENT_FILES`.
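A minimal `config.sh` sketch for the build step; the URL and the file list format are placeholder assumptions:

```sh
# config.sh -- read by the build command (example values)
PUBLISH_URL="https://example.org/fhir/my-ig"    # publish URL for the IG Publisher
CONTENT_FILES="package.tgz full-ig.zip"         # files for the archived IG (assumed format)
```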
The `deploy` command deploys the IG from the build directory (`output/`) to the webserver.
It uses Google Cloud as the target and requires the following tools:
- gcloud
- gsutil
The IG will be deployed to `<bucket>/<path>/<version>`. Depending on the argument `dev` or `prod`, the respective bucket is used.
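A usage sketch with the assumed invocation pattern:

```sh
./fhir_scripts.sh deploy dev     # deploy to the dev bucket
./fhir_scripts.sh deploy prod    # deploy to the prod bucket
```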
First, the script checks whether you are logged in to a gcloud account and starts the login process if not. Then it checks whether the path in the bucket is empty, clears it if not, and copies the files afterwards.
The configuration is read from a `config.sh` file in the current directory defining:

```sh
TARGET=<version>
BUCKET_PATH=<path>
BUCKET_NAME_DEV=<dev-bucket>
BUCKET_NAME_PROD=<prod-bucket>
```
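A filled-in sketch of the deploy configuration; the bucket names, path, and version are placeholder assumptions:

```sh
# config.sh -- deploy settings (example values)
TARGET="1.0.0"                  # version used as the last path segment
BUCKET_PATH="ig/my-ig"          # path inside the bucket
BUCKET_NAME_DEV="my-ig-dev"     # bucket used with the dev argument
BUCKET_NAME_PROD="my-ig-prod"   # bucket used with the prod argument
```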