
Conversation

@nuwang (Member) commented Sep 8, 2025


The tpv shared database adds the following params to make it easier to analyse metrics:
```
    params:
      tpv_cores: '{cores}'
      tpv_gpus: '{gpus}'
      tpv_mem: '{mem}'
```
Otherwise, you have to reverse engineer it from the destination params using some complicated SQL:
```
                COALESCE(
                    -- Attempt to calculate memory from mem-per-cpu multiplied by the number of cores
                    CAST(SUBSTRING(encode(j.destination_params, 'escape') FROM 'mem-per-cpu=(\d+)') AS NUMERIC)
                    * CAST(SUBSTRING(encode(j.destination_params, 'escape') FROM 'ntasks=(\d+)') AS NUMERIC),
                    -- Fallback to regular mem if mem-per-cpu is not found
                    CAST(SUBSTRING(encode(j.destination_params, 'escape') FROM 'mem=(\d+)') AS NUMERIC)
                )/1024.0 as tpv_mem_gb,
```
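For comparison, the same fallback logic can be sketched outside SQL. A minimal Python version, assuming Slurm-style `key=value` pairs in the native specification, mirroring the regexes above (the function name and argument are illustrative, not a Galaxy API):

```python
import re

def mem_mb_from_native_spec(spec: str):
    """Recover requested memory (MB) from a Slurm-style native spec string.

    Mirrors the SQL above: prefer mem-per-cpu * ntasks, fall back to mem.
    Returns None if neither form is present.
    """
    def grab(key):
        m = re.search(rf"{key}=(\d+)", spec)
        return int(m.group(1)) if m else None

    mem_per_cpu = grab("mem-per-cpu")
    ntasks = grab("ntasks")
    if mem_per_cpu is not None and ntasks is not None:
        return float(mem_per_cpu * ntasks)
    mem = grab("mem")
    return float(mem) if mem is not None else None
```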

While we're at it, it would be great if main could use the default from the tpv-shared-db. I think the original reasons for having a different `_default` are no longer an issue.
```
time: null
default_time: "36:00:00"
xdg_cache_home: null
params:
```
@mvdbeek (Member):

That ends up in the galaxy database, doesn't it ?
I would not recommend this, does this offer anything the cgroup metrics don't ?

@nuwang (Member, Author):

Yes, the goal is to know how much we allocated originally, and compare it to the cgroup metrics to determine wastage. I don't think there's any easy way to figure that out from the cgroup. And the cgroup metrics will eventually end up in the database also?
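As an illustration of that comparison, a hypothetical wastage calculation from the two numbers (both argument names are illustrative, not a fixed Galaxy schema):

```python
def mem_wastage_fraction(allocated_mb: float, peak_mb: float) -> float:
    """Fraction of the allocated memory that was never used.

    allocated_mb would come from a tpv_mem-style destination param,
    peak_mb from a cgroup peak-memory job metric; both are
    illustrative names, not a fixed Galaxy schema.
    """
    if allocated_mb <= 0:
        raise ValueError("allocation must be positive")
    # Clamp at 0 in case the job briefly exceeded its allocation.
    return max(0.0, 1.0 - peak_mb / allocated_mb)
```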

@mvdbeek (Member):

I suppose that depends on what you record in the cgroup metrics, we have both allocated and peak memory and allocated CPU and runtime. Yes, cgroup metrics are already in the database, but we should not add more if it's redundant information. I think we already record everything we need for regression analysis, I made a start in https://github.com/mvdbeek/tpv-regression.

@nuwang (Member, Author):

That's great! This should fit in nicely at some point with: galaxyproject/tpv-shared-database#64 and https://github.com/nuwang/tpv-db-optimizer
How are you getting allocated CPU and memory? Which cgroup metrics are you using?

@mvdbeek (Member):

I would ultimately use api/jobs/{job_id}/metrics, but quick and dirty I just use gxadmin. The metrics are not customized; it's just the standard set: for memory it's galaxy_memory_mb, and for CPUs galaxy_slots.
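For instance, picking those two values out of a metrics payload might be sketched like this (assuming each entry exposes `name` and `raw_value` fields as the api/jobs/{job_id}/metrics response does; adjust for your instance):

```python
def slots_and_memory(metrics):
    """Extract galaxy_slots and galaxy_memory_mb from a job-metrics list.

    Assumes each entry is a dict with at least 'name' and 'raw_value';
    returns (slots, mem_mb), either of which may be None if the metric
    was not recorded for the job.
    """
    by_name = {m["name"]: m.get("raw_value") for m in metrics}
    slots = by_name.get("galaxy_slots")
    mem_mb = by_name.get("galaxy_memory_mb")
    return (
        float(slots) if slots is not None else None,
        float(mem_mb) if mem_mb is not None else None,
    )
```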

@cat-bro commented Sep 11, 2025

There was another use case for this: Galaxy Australia’s tpv rank function is querying the job table to see the cores/mem commitment for ‘queued’ and ‘running’ jobs at destinations. This would not be possible with metric data that only exists after the job has run. If these params were populated across galaxies, we could include reasoning about them in TPV helper functions like so: https://github.com/galaxyproject/total-perspective-vortex/pull/133/files

Without this, Galaxy Australia can continue to extract the values from submit_native_specification.

@mvdbeek (Member) commented Sep 12, 2025

Have you run an EXPLAIN ANALYZE on this? I worry that it is going to be slow. Could you track this outside of the galaxy database?

@cat-bro commented Sep 17, 2025

```
galaxy=> EXPLAIN ANALYZE SELECT
    job.destination_id,
    job.state,
    COUNT(job.id) AS job_count,
    SUM(CAST(encode(destination_params, 'escape')::json ->> 'tpv_cores' AS FLOAT)) AS sum_cores,
    SUM(CAST(encode(destination_params, 'escape')::json ->> 'tpv_mem' AS FLOAT))   AS sum_mem,
    SUM(CAST(encode(destination_params, 'escape')::json ->> 'tpv_gpus' AS FLOAT))  AS sum_gpus
FROM job
WHERE job.state IN ('queued', 'running')
GROUP BY job.destination_id, job.state;
                                                           QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=8031.93..8035.85 rows=392 width=44) (actual time=2.486..2.491 rows=15 loops=1)
   Group Key: destination_id, state
   Batches: 1  Memory Usage: 37kB
   ->  Index Scan using ix_job_state on job  (cost=0.43..7747.71 rows=4737 width=307) (actual time=0.041..0.941 rows=51 loops=1)
         Index Cond: ((state)::text = ANY ('{queued,running}'::text[]))
 Planning Time: 0.174 ms
 Execution Time: 2.522 ms
(7 rows)
```

@cat-bro commented Sep 17, 2025

We were initially tracking destination availability outside the galaxy database but there was a tendency for destinations to become overallocated, since many jobs might be scheduled in a short space of time before feedback is available. Roughly 2/3 of jobs can only run at one destination and for the other 1/3 this query is run for each job at the ranking step. We switched to this ranking method early last year and did not observe any change in database load.

Apart from ranking, it would be useful to know about resource allocations for non-terminal jobs when limiting jobs. Job limits, which can be imposed per destination or per user, are currently expressed as job counts. It would be ideal to be able to reason about limits in terms of resources instead of the number of jobs.
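A toy version of that kind of rank helper, ordering destinations by spare memory after subtracting committed tpv_mem totals like those produced by the query above (the capacity figures and all names are illustrative, not the Galaxy Australia implementation):

```python
def rank_destinations(commitments, capacities):
    """Order candidate destinations by spare memory, most spare first.

    commitments: {destination_id: summed tpv_mem (GB) of queued+running jobs}
    capacities:  {destination_id: total memory (GB) the destination offers}
    Both maps are illustrative stand-ins for real query results.
    """
    def spare(dest):
        # Destinations with no non-terminal jobs have zero commitment.
        return capacities[dest] - commitments.get(dest, 0.0)
    return sorted(capacities, key=spare, reverse=True)
```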

@nuwang (Member, Author) commented Sep 17, 2025

I think there's something to be said for viewing resource allocation as a dispatch-time parameter of the job, rather than a metric/statistic about the job after the fact, although I do agree that there's redundancy, and tacking this on through TPV is more a workaround than a proper mechanism. Is there a field comparable to galaxy_slots/galaxy_memory_mb for GPUs?

3 participants