Store tpv_cores, mem and gpus for analysis purposes #428
base: main
Conversation
The tpv shared database adds the following params to make it easier to analyse metrics:

```yaml
params:
  tpv_cores: '{cores}'
  tpv_gpus: '{gpus}'
  tpv_mem: '{mem}'
```

Otherwise, you have to reverse engineer it from the destination params using some complicated SQL:

```sql
COALESCE(
    -- Attempt to calculate memory from mem-per-cpu multiplied by the number of cores
    CAST(SUBSTRING(encode(j.destination_params, 'escape') FROM 'mem-per-cpu=(\d+)') AS NUMERIC)
        * CAST(SUBSTRING(encode(j.destination_params, 'escape') FROM 'ntasks=(\d+)') AS NUMERIC),
    -- Fall back to regular mem if mem-per-cpu is not found
    CAST(SUBSTRING(encode(j.destination_params, 'escape') FROM 'mem=(\d+)') AS NUMERIC)
) / 1024.0 AS tpv_mem_gb,
```

While we're at it, it would be great if main could use the default from the tpv-shared-db. I think the original reasons for having a different _default are no longer an issue.
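Below is a minimal sketch of what the extraction could collapse to once these params are stored. It assumes the new params end up serialized into `destination_params` in the same `key=value` form the regex above relies on; the exact serialization is an assumption, not something this PR specifies.

```sql
-- Sketch only: read the originally requested resources straight from the new params.
-- Assumes tpv_cores/tpv_mem appear as key=value pairs in destination_params.
SELECT
    CAST(SUBSTRING(encode(j.destination_params, 'escape') FROM 'tpv_cores=(\d+)') AS NUMERIC)  AS tpv_cores,
    CAST(SUBSTRING(encode(j.destination_params, 'escape') FROM 'tpv_mem=([\d.]+)') AS NUMERIC) AS tpv_mem_gb
FROM job j;
```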
  time: null
  default_time: "36:00:00"
  xdg_cache_home: null
  params:
That ends up in the galaxy database, doesn't it?
I would not recommend this. Does this offer anything the cgroup metrics don't?
Yes, the goal is to know how much we allocated originally and compare it to the cgroup metrics to determine wastage. I don't think there's any easy way to figure that out from the cgroup alone. And the cgroup metrics will also eventually end up in the database, won't they?
I suppose that depends on what you record in the cgroup metrics; we have both allocated and peak memory, as well as allocated CPU and runtime. Yes, cgroup metrics are already in the database, but we should not add more if it's redundant information. I think we already record everything we need for regression analysis; I made a start in https://github.com/mvdbeek/tpv-regression.
That's great! This should fit in nicely at some point with: galaxyproject/tpv-shared-database#64 and https://github.com/nuwang/tpv-db-optimizer
How are you getting allocated CPU and memory? Which cgroup metrics are you using?
The subqueries I'm seeking to simplify are https://github.com/nuwang/tpv-db-optimizer/blob/5e2c26fa6fef4aaf8cbe8662db596103e0281f20/views.py#L105, which have been simplified in au and eu to this: https://github.com/nuwang/tpv-db-optimizer/blob/5e2c26fa6fef4aaf8cbe8662db596103e0281f20/views.py#L58
I would ultimately use api/jobs/{job_id}/metrics, but quick and dirty, I just use gxadmin. The metrics are not customized; it's just the standard set: for memory it's galaxy_memory_mb and for cpus it's galaxy_slots.
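For reference, here is a rough sketch of how those standard metrics can be pulled from the database directly, assuming the stock `job` and `job_metric_numeric` tables; it mirrors what gxadmin reports rather than its actual implementation.

```sql
-- Sketch only: pivot the standard per-job metrics into columns.
-- Table and metric names assume a stock Galaxy schema; verify against your install.
SELECT j.id AS job_id,
       MAX(CASE WHEN m.metric_name = 'galaxy_slots'     THEN m.metric_value END) AS allocated_cores,
       MAX(CASE WHEN m.metric_name = 'galaxy_memory_mb' THEN m.metric_value END) AS allocated_mem_mb
FROM job j
JOIN job_metric_numeric m ON m.job_id = j.id
WHERE m.metric_name IN ('galaxy_slots', 'galaxy_memory_mb')
GROUP BY j.id;
```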
There was another use case for this: Galaxy Australia's tpv rank function is querying the job table to see the cores/mem commitment for 'queued' and 'running' jobs at destinations. This would not be possible with metric data that only exists after the job has run.

If these params were populated across galaxies, we could include reasoning about them in TPV helper functions, like so: https://github.com/galaxyproject/total-perspective-vortex/pull/133/files

Without this, Galaxy Australia can continue to extract the values from submit_native_specification.
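To make the ranking use case concrete, here is a hypothetical per-destination commitment query of the kind a rank function could run if these params were populated. It is not the Galaxy Australia implementation; the `key=value` serialization and the job states queried are assumptions.

```sql
-- Hypothetical sketch: sum cores/mem already committed to queued and running
-- jobs per destination, using the proposed tpv_* params.
SELECT j.destination_id,
       SUM(CAST(SUBSTRING(encode(j.destination_params, 'escape') FROM 'tpv_cores=(\d+)') AS NUMERIC))  AS committed_cores,
       SUM(CAST(SUBSTRING(encode(j.destination_params, 'escape') FROM 'tpv_mem=([\d.]+)') AS NUMERIC)) AS committed_mem_gb
FROM job j
WHERE j.state IN ('queued', 'running')
GROUP BY j.destination_id;
```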
Have you run an EXPLAIN ANALYZE on this? I worry that it is going to be slow. Could you track this outside of the galaxy database?
We were initially tracking destination availability outside the galaxy database, but there was a tendency for destinations to become overallocated, since many jobs might be scheduled in a short space of time before feedback is available. Roughly 2/3 of jobs can only run at one destination; for the other 1/3, this query is run for each job at the ranking step. We switched to this ranking method early last year and did not observe any change in database load.

Apart from ranking, it would be useful to know about resource allocations for non-terminal jobs when limiting jobs. Job limits, which can be imposed per destination or per user, are currently expressed as job counts. It would be ideal to be able to reason about limits in terms of resources instead of the number of jobs.
I think there's something to be said for viewing resource allocation as a dispatch-time parameter of the job, rather than a metric/statistic about the job after the fact, although I do agree that there's redundancy, and tacking that on through TPV is more a workaround than a proper mechanism. Is there a field comparable to galaxy_slots/galaxy_memory_mb for gpus?