8 changes: 7 additions & 1 deletion health/micro-ui/web/health-dss/Dockerfile
@@ -2,6 +2,12 @@ FROM node:14-alpine3.16 AS build
RUN apk update
RUN apk add --no-cache 'git>2.30.0'
Comment on lines 2 to 3

🛠️ Refactor suggestion

⚠️ Potential issue

Build failing: node-gyp can’t find Python (node-sass). Install python3, make, and g++, and add a python symlink.

The pipeline error indicates node-sass native compilation is failing due to missing Python. On Alpine, node-gyp requires python3 and build tools, and often a python symlink.

Apply this diff to fix the build:

-RUN apk update
-RUN apk add --no-cache 'git>2.30.0'
+RUN apk add --no-cache 'git>2.30.0' python3 make g++
+# node-gyp expects `python`; provide a symlink to python3
+RUN ln -sf python3 /usr/bin/python

Optional (if you hit further linking issues): add libc6-compat.

-RUN apk add --no-cache 'git>2.30.0' python3 make g++
+RUN apk add --no-cache 'git>2.30.0' python3 make g++ libc6-compat

Note: You can also reduce layers by dropping the separate apk update since --no-cache fetches fresh indexes.

Also applies to: 39-39

🧰 Tools
🪛 Hadolint (2.12.0)

[warning] 3-3: Pin versions in apk add. Instead of apk add <package> use apk add <package>=<version>

(DL3018)


[info] 3-3: Multiple consecutive RUN instructions. Consider consolidation.

(DL3059)

🤖 Prompt for AI Agents
In health/micro-ui/web/health-dss/Dockerfile around lines 2-3 (and also apply to
line 39), the build fails because node-gyp/node-sass need Python and build tools
on Alpine; remove the separate `apk update` and instead install python3, make,
g++ (and optionally libc6-compat) in one apk add --no-cache command, then create
a /usr/bin/python symlink pointing to python3 so node-gyp can find it; ensure
these packages are installed in the same layer as other build dependencies to
avoid extra layers.
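
A consolidated sketch of the fix in a single layer, assuming the package set named above matches your actual build dependencies:

# Install git plus the toolchain node-gyp needs, and expose python3 as `python`, in one layer
RUN apk add --no-cache 'git>2.30.0' python3 make g++ \
 && ln -sf python3 /usr/bin/python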

ARG WORK_DIR
ARG GIT_COMMIT=unknown
ARG GIT_BRANCH=unknown
Comment on lines +5 to +6

🧹 Nitpick (assertive)

Build args for VCS metadata added — ensure CI supplies them and consider surfacing in final image.

Good addition. Two follow-ups:

  • Make sure your CI/build system passes GIT_COMMIT and GIT_BRANCH (else the image will carry "unknown").
  • If you intend these to be available at runtime (final nginx image), you need to re-declare them in the final stage; ENV from the build stage does not propagate across stages.

Proposed change to expose in the final stage (after the second FROM):

 FROM nginx:mainline-alpine
 #FROM ghcr.io/egovernments/nginx:mainline-alpine
+ARG GIT_COMMIT=unknown
+ARG GIT_BRANCH=unknown
+ENV GIT_COMMIT=$GIT_COMMIT
+ENV GIT_BRANCH=$GIT_BRANCH
 ENV WORK_DIR=/var/web/health-dss
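
On the CI side, a minimal sketch of supplying the build args (the image tag and build context are illustrative, not taken from this repo's pipeline):

docker build \
  --build-arg GIT_COMMIT="$(git rev-parse --short HEAD)" \
  --build-arg GIT_BRANCH="$(git rev-parse --abbrev-ref HEAD)" \
  -t health-dss:local health/micro-ui/web/health-dss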
🤖 Prompt for AI Agents
In health/micro-ui/web/health-dss/Dockerfile around lines 5 to 6, the Dockerfile
defines build ARGs GIT_COMMIT and GIT_BRANCH but does not ensure they are
supplied by CI or propagated into the final image; update your CI to pass these
build-args and, in the final stage (after the second FROM), re-declare the ARGs
and set corresponding ENVs so the values are available at runtime (e.g., add ARG
GIT_COMMIT and ARG GIT_BRANCH in the final stage and then set ENV
GIT_COMMIT=$GIT_COMMIT and ENV GIT_BRANCH=$GIT_BRANCH).


ENV GIT_COMMIT=$GIT_COMMIT
ENV GIT_BRANCH=$GIT_BRANCH
Comment on lines +8 to +9

🧹 Nitpick (assertive)

ENV set only in build stage won’t be present in the final image.

If your React/Vue build pipeline consumes these during build, you’re fine. If you expect to read them at container runtime (served by nginx), re-declare them in the final stage as shown in my other comment.
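
A quick runtime check that the values actually landed in the final image (image tag is a placeholder):

docker run --rm health-dss:local sh -c 'echo "$GIT_COMMIT $GIT_BRANCH"'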

🤖 Prompt for AI Agents
In health/micro-ui/web/health-dss/Dockerfile around lines 8 to 9, the ENV
variables GIT_COMMIT and GIT_BRANCH are only set in the build stage so they
won’t be available in the final image at runtime; to fix, re-declare the same
ENV GIT_COMMIT and ENV GIT_BRANCH lines in the final stage (after the final
FROM) or convert them to build-time ARG and propagate them into the final stage
(ARG in build stage, ARG in final stage, then ENV from that ARG) so the
variables are present in the deployed container.


WORKDIR /app
ENV NODE_OPTIONS="--max-old-space-size=4792"
ENV YARN_DEBUG=true
@@ -37,7 +43,7 @@ RUN yarn build:webpack

FROM nginx:mainline-alpine
#FROM ghcr.io/egovernments/nginx:mainline-alpine
ENV WORK_DIR=/var/web/dashboard-ui
ENV WORK_DIR=/var/web/health-dss

RUN mkdir -p ${WORK_DIR}

@@ -0,0 +1,12 @@
server
{
listen 80;

🧹 Nitpick (assertive)

Harden vhost selection: set default_server and an explicit server_name.

To avoid accidental vhost collisions if more server blocks exist, declare default_server and a catch-all server_name.

-  listen 80;
+  listen 80 default_server;
+  server_name _;
🤖 Prompt for AI Agents
In health/micro-ui/web/micro-ui-internals/packages/modules/health-dss/nginx.conf
around line 3, the server block only has "listen 80;" which risks vhost
collisions; update the listen directive to include "default_server" and add an
explicit catch-all server_name (e.g. "_" or a specific hostname) to the server
block so this vhost is the default and will not accidentally match other virtual
hosts.
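
For context, a sketch of the full server block with the hardening applied (the docroot question raised further down is left untouched here):

server
{
  listen 80 default_server;
  server_name _;
  underscores_in_headers on;

  location /dashboard-ui
  {
    root /var/web;
    index index.html index.htm;
    try_files $uri $uri/ /dashboard-ui/index.html;
  }
}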

underscores_in_headers on;

location /dashboard-ui
{
root /var/web;
index index.html index.htm;
try_files $uri $uri/ /dashboard-ui/index.html;
}
Comment on lines +6 to +11

🧹 Nitpick (assertive)

Optional: add caching headers for static assets and prevent caching index.html.

For hashed bundle assets, long-lived caching improves performance. Avoid caching index.html to ensure users get the latest app shell.

Add these sibling locations in the same server block (outside the existing location):

# Long cache for static assets (adjust extensions to your build artifacts)
location ~* \.(?:js|css|png|jpg|jpeg|gif|svg|ico|woff2?|ttf|map)$ {
  add_header Cache-Control "public, max-age=31536000, immutable";
  expires 1y;
}

# Do not cache the SPA shell
location = /dashboard-ui/index.html {
  add_header Cache-Control "no-store";
}
🤖 Prompt for AI Agents
In health/micro-ui/web/micro-ui-internals/packages/modules/health-dss/nginx.conf
around lines 6 to 11, add two sibling location blocks in the same server block
(outside the existing /dashboard-ui location): one location matching static
asset extensions (js, css, png, jpg, jpeg, gif, svg, ico, woff2, woff, ttf, map
— adjust extensions to your build artifacts) that sets long-lived caching
headers (e.g., add_header Cache-Control "public, max-age=31536000, immutable";
and expires 1y;) and a separate exact-match location for
/dashboard-ui/index.html that sets no-store (e.g., add_header Cache-Control
"no-store";) so hashed bundles are cached long-term while the SPA shell is
always fetched fresh.

💡 Verification agent

🧩 Analysis chain

Verify docroot mapping: likely mismatch between Nginx root and Docker deployment path (may cause 404s).

Given the Dockerfile change (per the AI summary), built assets are deployed under /var/web/health-dss, but serving /dashboard-ui with root /var/web; makes Nginx look under /var/web/dashboard-ui, not /var/web/health-dss. If the assets are indeed under /var/web/health-dss, switch to alias and tighten the location. Otherwise, confirm the assets land in /var/web/dashboard-ui.

Apply if assets are under /var/web/health-dss:

-  location /dashboard-ui
+  location ^~ /dashboard-ui/
   {
-    root /var/web;
+    alias /var/web/health-dss/;
     index index.html index.htm;
     try_files $uri $uri/ /dashboard-ui/index.html;
   }

Run this script to confirm where assets are copied in the final image:


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "Searching Dockerfiles for health-dss workdir and copy destinations..."
fd -a Dockerfile | xargs -I{} sh -c 'echo "---- {} ----"; rg -n -C2 --no-heading -e "WORKDIR" -e "COPY" -e "/var/web" -e "health-dss" "{}"'

echo
echo "Confirming nginx.conf path(s) referenced and current content:"
fd -a nginx.conf | xargs -I{} sh -c 'echo "---- {} ----"; sed -n "1,120p" "{}"'

Length of output: 15134


Fix Nginx docroot for health-dss assets

The Dockerfile for health-dss copies your built files into /var/web/health-dss, but the current nginx.conf uses root /var/web; under /dashboard-ui, causing Nginx to look in /var/web/dashboard-ui (and 404). Update the location block to use alias and strip the URI prefix:

• File: health/micro-ui/web/health-dss/nginx.conf
• Lines: 6–11

-  location /dashboard-ui
-  {
-    root /var/web;
+  location ^~ /dashboard-ui/ {
+    alias /var/web/health-dss/;
     index index.html index.htm;
     try_files $uri $uri/ /dashboard-ui/index.html;
   }

This change maps /dashboard-ui/* requests to the actual /var/web/health-dss directory.
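
For example, with alias a request for /dashboard-ui/static/js/main.js (an illustrative asset path) is served from /var/web/health-dss/static/js/main.js, whereas the current root /var/web; would resolve it to /var/web/dashboard-ui/static/js/main.js.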

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
location /dashboard-ui
{
root /var/web;
index index.html index.htm;
try_files $uri $uri/ /dashboard-ui/index.html;
}
location ^~ /dashboard-ui/ {
alias /var/web/health-dss/;
index index.html index.htm;
try_files $uri $uri/ /dashboard-ui/index.html;
}
🤖 Prompt for AI Agents
In health/micro-ui/web/micro-ui-internals/packages/modules/health-dss/nginx.conf
around lines 6 to 11, the location /dashboard-ui block uses root /var/web which
resolves to /var/web/dashboard-ui and 404s; change the block to use alias
/var/web/health-dss/ and ensure the location has a trailing slash (location
/dashboard-ui/ { ... }) so Nginx strips the /dashboard-ui prefix, keep index
directives, and use try_files $uri $uri/ /dashboard-ui/index.html to fall back
to the app entrypoint.

}
2 changes: 2 additions & 0 deletions health/micro-ui/web/package.json
@@ -10,6 +10,7 @@
"workspaces": [
"micro-ui-internals/packages/modules/campaign-manager",
"micro-ui-internals/packages/modules/health-hrms",
"micro-ui-internals/packages/modules/health-dss",
"micro-ui-internals/packages/modules/pgr"
],
"homepage": "/workbench-ui",
@@ -21,6 +22,7 @@
"@egovernments/digit-ui-module-hcmworkbench": "0.1.5",
"@egovernments/digit-ui-module-utilities": "1.0.12",
"@egovernments/digit-ui-module-campaign-manager": "0.4.0",
"@egovernments/digit-ui-module-health-dss": "0.0.1",
"@egovernments/digit-ui-module-health-pgr": "0.0.1",
"@egovernments/digit-ui-react-components": "1.8.24",
"@egovernments/digit-ui-svg-components": "1.0.21",