GPU Allocation Dashboard

As of 2026-05-15 18:36:36 UTC

FPT (FPT cloud environment)

Free: 0 GPUs
NVIDIA-H100-80GB-HBM3: 0 / 20 free
fke-recruit-gpu-cluster-eovg7iys-gpu-2x-01-z1-699d9-92pnr: NVIDIA-H100-80GB-HBM3 × 2 (0 / 2 free)
fke-recruit-gpu-cluster-eovg7iys-gpu-2x-01-z1-699d9-lxgtm: NVIDIA-H100-80GB-HBM3 × 2 (0 / 2 free)
fke-recruit-gpu-cluster-eovg7iys-gpu-8x-z1-c6b67-flh6k: NVIDIA-H100-80GB-HBM3 × 8 (0 / 8 free)
fke-recruit-gpu-cluster-eovg7iys-gpu-8x-z1-c6b67-mj9jm: NVIDIA-H100-80GB-HBM3 × 8 (0 / 8 free)

GCP (GCP Whale PoC environment)

Free: 5 GPUs
nvidia-h100-80gb: 5 / 8 free
gke-prod-whale-np-0-42a4db54-0wfw: nvidia-h100-80gb × 8 (5 / 8 free)

On-prem (liquid-cooled GPU environment in the Kohoku DC)

Free: 1 GPU
NVIDIA-H200: 1 / 8 free
worker1: NVIDIA-H200 × 4 (1 / 4 free)
worker2: NVIDIA-H200 × 4 (0 / 4 free)

AWS (AWS Whale PoC environment)

Free: 23 GPUs
NVIDIA-H100-80GB-HBM3: 15 / 16 free
NVIDIA-L4: 8 / 8 free
ip-10-51-170-219.ec2.internal: NVIDIA-H100-80GB-HBM3 × 8 (8 / 8 free)
ip-10-51-170-22.ec2.internal: NVIDIA-H100-80GB-HBM3 × 8 (7 / 8 free)
ip-10-51-167-254.ec2.internal: NVIDIA-L4 × 4 (4 / 4 free)
ip-10-51-167-9.ec2.internal: NVIDIA-L4 × 4 (4 / 4 free)
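
The per-node free counts above are each node's allocatable nvidia.com/gpu capacity minus the GPU requests of the Running Pods scheduled on it. Below is a minimal sketch of that calculation with the official Kubernetes Python client, assuming kubeconfig access to the cluster and the standard nvidia.com/gpu device-plugin resource name; free_gpus_per_node is an illustrative name, not part of the dashboard.

    from collections import defaultdict

    from kubernetes import client, config  # pip install kubernetes

    GPU_RESOURCE = "nvidia.com/gpu"  # assumption: standard device-plugin resource name

    def free_gpus_per_node():
        """Free GPUs per node = allocatable minus Running Pod requests."""
        config.load_kube_config()  # assumes a kubeconfig context for the cluster
        v1 = client.CoreV1Api()

        # Allocatable nvidia.com/gpu per node; quantities arrive as strings.
        allocatable = {
            node.metadata.name: int((node.status.allocatable or {}).get(GPU_RESOURCE, "0"))
            for node in v1.list_node().items
        }

        # Sum the GPU requests of Running Pods, keyed by the node they run on.
        requested = defaultdict(int)
        for pod in v1.list_pod_for_all_namespaces(field_selector="status.phase=Running").items:
            for c in pod.spec.containers:
                req = (c.resources.requests or {}).get(GPU_RESOURCE)
                if req and pod.spec.node_name:
                    requested[pod.spec.node_name] += int(req)

        return {name: total - requested[name] for name, total in allocatable.items()}

Running it once per cluster context should reproduce the four per-node lists above.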

GPU Allocation: FPT

100% in use (20.0 / 20.0 GPUs)

Allocated GPUs: 20.0 (total of Pod requests)
Free GPUs: 0.0 (headroom for additional requests)
Active Namespaces: 8 (namespaces with GPU requests)

GPU Type List

GPU Type               Physical  Allocatable  Usage  Free  Nodes
NVIDIA-H100-80GB-HBM3  20.0      20.0         20.0   0.0   4

GPU Usage by Namespace

Namespace             GPU         Pods  Types
fpt-recoff            6.0 / 20.0  5     NVIDIA-H100-80GB-HBM3
llm-leaderboard-prod  3.0 / 20.0  3     NVIDIA-H100-80GB-HBM3
whalelm               3.0 / 20.0  2     NVIDIA-H100-80GB-HBM3
nim-service           2.0 / 20.0  1     NVIDIA-H100-80GB-HBM3
recruit-piu           2.0 / 20.0  1     NVIDIA-H100-80GB-HBM3
wl-yamaguchi          2.0 / 20.0  1     NVIDIA-H100-80GB-HBM3
car-explainer         1.0 / 20.0  1     NVIDIA-H100-80GB-HBM3
hyakutake             1.0 / 20.0  1     NVIDIA-H100-80GB-HBM3

Pod Details

Namespace  Pod  Node  GPU Type  Phase  GPU  Limit
fpt-recoff vscode-server-fpt-chaunt2 fke-recruit-gpu-cluster-eovg7iys-gpu-2x-01-z1-699d9-92pnr NVIDIA-H100-80GB-HBM3 Running 2.0 2.0
nim-service gpt-oss-120b-nim-7d857c87c-r52zd fke-recruit-gpu-cluster-eovg7iys-gpu-8x-z1-c6b67-mj9jm NVIDIA-H100-80GB-HBM3 Running 2.0 2.0
recruit-piu qwen3-coder-next-fp8-86689f4c4-7xlvv fke-recruit-gpu-cluster-eovg7iys-gpu-8x-z1-c6b67-mj9jm NVIDIA-H100-80GB-HBM3 Running 2.0 2.0
whalelm ml-pytorch-deployment-h100x1-yakushiji-67d898f74c-4gdrp fke-recruit-gpu-cluster-eovg7iys-gpu-2x-01-z1-699d9-lxgtm NVIDIA-H100-80GB-HBM3 Running 2.0 2.0
wl-yamaguchi vllm-qwen36-27b-68bc5bbdd7-5bsck fke-recruit-gpu-cluster-eovg7iys-gpu-8x-z1-c6b67-mj9jm NVIDIA-H100-80GB-HBM3 Running 2.0 2.0
car-explainer vllm-car-explainer-9d675966-nrcpt fke-recruit-gpu-cluster-eovg7iys-gpu-8x-z1-c6b67-mj9jm NVIDIA-H100-80GB-HBM3 Running 1.0 1.0
fpt-recoff vllm-qwen-3-5-4b-grounding fke-recruit-gpu-cluster-eovg7iys-gpu-8x-z1-c6b67-flh6k NVIDIA-H100-80GB-HBM3 Running 1.0 1.0
fpt-recoff vscode-onecompress-benchmark fke-recruit-gpu-cluster-eovg7iys-gpu-8x-z1-c6b67-flh6k NVIDIA-H100-80GB-HBM3 Running 1.0 1.0
fpt-recoff vscode-onecompress-study fke-recruit-gpu-cluster-eovg7iys-gpu-8x-z1-c6b67-flh6k NVIDIA-H100-80GB-HBM3 Running 1.0 1.0
fpt-recoff vscode-server-fpt fke-recruit-gpu-cluster-eovg7iys-gpu-8x-z1-c6b67-flh6k NVIDIA-H100-80GB-HBM3 Running 1.0 1.0
hyakutake vllm-gemma4-e4b-it-84989f5fc7-rrg2h fke-recruit-gpu-cluster-eovg7iys-gpu-8x-z1-c6b67-mj9jm NVIDIA-H100-80GB-HBM3 Running 1.0 1.0
llm-leaderboard-prod runner-worker-prod-0 fke-recruit-gpu-cluster-eovg7iys-gpu-8x-z1-c6b67-flh6k NVIDIA-H100-80GB-HBM3 Running 1.0 1.0
llm-leaderboard-prod runner-worker-prod-1 fke-recruit-gpu-cluster-eovg7iys-gpu-8x-z1-c6b67-flh6k NVIDIA-H100-80GB-HBM3 Running 1.0 1.0
llm-leaderboard-prod runner-worker-prod-2 fke-recruit-gpu-cluster-eovg7iys-gpu-8x-z1-c6b67-flh6k NVIDIA-H100-80GB-HBM3 Running 1.0 1.0
whalelm ml-pytorch-deployment-h100x1-kiryu-8f7b9464c-zmc4p fke-recruit-gpu-cluster-eovg7iys-gpu-8x-z1-c6b67-flh6k NVIDIA-H100-80GB-HBM3 Running 1.0 1.0
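
The namespace table and the headline metrics follow directly from these Pod rows: Allocated GPUs is the sum of Pod GPU requests, Free GPUs is the allocatable total minus that sum, and Active Namespaces counts namespaces with at least one GPU request. A hedged sketch of the per-namespace aggregation, under the same assumptions as the earlier snippet (gpu_requests_by_namespace is an illustrative name):

    from collections import Counter

    from kubernetes import client, config

    GPU_RESOURCE = "nvidia.com/gpu"

    def gpu_requests_by_namespace():
        """Per-namespace sum of GPU requests across Running Pods."""
        config.load_kube_config()
        v1 = client.CoreV1Api()
        usage = Counter()
        for pod in v1.list_pod_for_all_namespaces(field_selector="status.phase=Running").items:
            gpus = sum(
                int((c.resources.requests or {}).get(GPU_RESOURCE, "0"))
                for c in pod.spec.containers
            )
            if gpus:  # only namespaces with at least one GPU request
                usage[pod.metadata.namespace] += gpus
        return usage  # e.g. Counter({"fpt-recoff": 6, "llm-leaderboard-prod": 3, ...})

For a given cluster, sum(usage.values()) gives Allocated GPUs and len(usage) gives Active Namespaces.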

GPU Allocation: GCP

38% in use (3.0 / 8.0 GPUs)

Allocated GPUs: 3.0 (total of Pod requests)
Free GPUs: 5.0 (headroom for additional requests)
Active Namespaces: 1 (namespaces with GPU requests)

GPU Type List

GPU Type          Physical  Allocatable  Usage  Free  Nodes
nvidia-h100-80gb  8.0       8.0          3.0    5.0   1

GPU Usage by Namespace

Namespace  GPU        Pods  Types
default    3.0 / 8.0  3     nvidia-h100-80gb

Pod Details

Namespace  Pod  Node  GPU Type  Phase  GPU  Limit
default ml-pytorch-deployment-h100x1-inoue-7f88678fbd-522r8 gke-prod-whale-np-0-42a4db54-0wfw nvidia-h100-80gb Running 1.0 1.0
default ml-pytorch-deployment-h100x1-masuda-664f677fd7-s5vzr gke-prod-whale-np-0-42a4db54-0wfw nvidia-h100-80gb Running 1.0 1.0
default ml-pytorch-deployment-h100x1-yakushiji-869f75fd65-nwpjw gke-prod-whale-np-0-42a4db54-0wfw nvidia-h100-80gb Running 1.0 1.0

GPU Allocation: On-prem

88% in use (7.0 / 8.0 GPUs)

Allocated GPUs: 7.0 (total of Pod requests)
Free GPUs: 1.0 (headroom for additional requests)
Active Namespaces: 4 (namespaces with GPU requests)

GPU Type List

GPU Type     Physical  Allocatable  Usage  Free  Nodes
NVIDIA-H200  8.0       8.0          7.0    1.0   2

GPU Usage by Namespace

Namespace          GPU        Pods  Types
kataoka-ns         3.0 / 8.0  2     NVIDIA-H200
taguchi-ns         2.0 / 8.0  1     NVIDIA-H200
miyachi            1.0 / 8.0  1     NVIDIA-H200
runai-reservation  1.0 / 8.0  1     NVIDIA-H200

Pod Details

Namespace  Pod  Node  GPU Type  Phase  GPU  Limit
kataoka-ns vllm-qwen35-27b-fp8-agent-tp2-r1-dep-6c48f64b66-8z572 worker2 NVIDIA-H200 Running 2.0 2.0
taguchi-ns qwen3-8b-lora-unsloth worker1 NVIDIA-H200 Running 2.0 2.0
kataoka-ns work-container-gpu-worker2-deploy-9b6cf857b-mmk2k worker2 NVIDIA-H200 Running 1.0 1.0
miyachi hunyuanworld-7558bd544-q58ll worker2 NVIDIA-H200 Running 1.0 1.0
runai-reservation gpu-reservation-worker1-4fzlg worker1 NVIDIA-H200 Running 1.0 1.0

GPU Allocation: AWS

4% in use (1.0 / 24.0 GPUs)

Allocated GPUs: 1.0 (total of Pod requests)
Free GPUs: 23.0 (headroom for additional requests)
Active Namespaces: 1 (namespaces with GPU requests)

GPU Type List

GPU Type               Physical  Allocatable  Usage  Free  Nodes
NVIDIA-H100-80GB-HBM3  16.0      16.0         1.0    15.0  2
NVIDIA-L4              8.0       8.0          0.0    8.0   2

GPU Usage by Namespace

Namespace  GPU         Pods  Types
default    1.0 / 24.0  1     NVIDIA-H100-80GB-HBM3

Pod Details

Namespace  Pod  Node  GPU Type  Phase  GPU  Limit
default pytorch-aws-h100-deployment-59585cd95c-nrhkg ip-10-51-170-22.ec2.internal NVIDIA-H100-80GB-HBM3 Running 1.0 1.0