@@ -44,7 +44,7 @@ If you need any support or have any questions on contributing to the project, yo
## FAQ
-#### How do you measure the cost of CPU/RAM/GPU/storage for a container, pod, deployment, etc.
+### _How do you measure the cost of CPU/RAM/GPU/storage for a container, pod, deployment, etc._
The OpenCost cost model collects pricing data from major cloud providers (e.g. GCP, Azure, and AWS) to provide the real-time cost of running workloads. Based on data from these APIs, each container/pod inherits a cost per CPU-hour, GPU-hour, storage GB-hour, and RAM GB-hour based on the node where it was running or the class of storage provisioned. This means containers of the same size, as measured by the max of requests or usage, could be charged different resource rates if they are scheduled in separate regions, on nodes with different usage types (on-demand vs. preemptible), etc.
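As a minimal sketch of this pricing model (hypothetical names and rates for illustration, not OpenCost's actual code), a container's hourly cost is the node-derived unit rate for each resource times the max of the container's request and usage:

```python
def container_hourly_cost(cpu_request, cpu_usage, ram_gb_request, ram_gb_usage,
                          node_cpu_rate, node_ram_rate):
    # Each container inherits its node's per-resource rates; the billable
    # quantity for each resource is the max of request and usage.
    cpu_cost = max(cpu_request, cpu_usage) * node_cpu_rate
    ram_cost = max(ram_gb_request, ram_gb_usage) * node_ram_rate
    return cpu_cost + ram_cost

# Identically sized containers cost differently on differently priced nodes
# (rates below are made-up $/CPU-hour and $/GB-hour figures):
on_demand = container_hourly_cost(1.0, 0.2, 2.0, 1.0, 0.030, 0.004)      # -> 0.038
preemptible = container_hourly_cost(1.0, 0.2, 2.0, 1.0, 0.009, 0.0012)   # -> 0.0114
```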
@@ -52,7 +52,7 @@ For on-prem clusters, these resource prices can be configured directly with cust
The CPU/RAM/GPU cost of a deployment, service, namespace, etc. is the aggregation of its individual container costs.
-#### How do you determine RAM/CPU/GPU costs for a node when this data isn’t provided by a cloud provider?
+### _How do you determine RAM/CPU/GPU costs for a node when this data isn’t provided by a cloud provider?_
When explicit RAM, CPU, or GPU prices are not provided by your cloud provider, the OpenCost model falls back to the ratio of the base CPU, GPU, and RAM prices supplied. The default values for these parameters are based on the marginal resource rates of the cloud provider, but they can be customized within OpenCost.
@@ -62,14 +62,14 @@ As an example, let's imagine a node with 1 GPU, 1 CPU and 1 Gb of RAM that costs
NodeHourlyCost = NORMALIZED_GPU_PRICE * # of GPUS + NORMALIZED_CPU_PRICE * # of CPUS + NORMALIZED_RAM_PRICE * # of RAM Gb
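The normalization step can be sketched as follows: scale the configured base prices by a common factor so that the per-resource costs sum to the node's actual hourly cost, preserving the GPU:CPU:RAM price ratios. This is a simplified illustration with assumed base rates, not OpenCost's actual implementation:

```python
def normalize_prices(node_hourly_cost, gpus, cpus, ram_gb,
                     base_gpu, base_cpu, base_ram):
    # Cost the node *would* have at the configured base rates.
    expected = base_gpu * gpus + base_cpu * cpus + base_ram * ram_gb
    # Scale all base rates by the same factor so the parts sum to the
    # node's real hourly cost; the ratios between rates are preserved.
    scale = node_hourly_cost / expected
    return base_gpu * scale, base_cpu * scale, base_ram * scale

# Node with 1 GPU, 1 CPU, 1 GB RAM, assumed to cost $1.20/hr;
# base rates below are illustrative, not provider defaults.
gpu, cpu, ram = normalize_prices(
    node_hourly_cost=1.20, gpus=1, cpus=1, ram_gb=1,
    base_gpu=0.95, base_cpu=0.031611, base_ram=0.004237,
)
# gpu * 1 + cpu * 1 + ram * 1 == 1.20
```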
-#### How do you allocate a specific amount of RAM/CPU to an individual pod or container?
+### _How do you allocate a specific amount of RAM/CPU to an individual pod or container?_
Resources are allocated based on the time-weighted maximum of resource requests and usage over the measured period. For example, a pod with no usage and 1 CPU requested for 12 hours out of a 24-hour window would be allocated 12 CPU-hours. For pods with BestEffort quality of service (i.e. no requests), allocation is based solely on resource usage.
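The time-weighted max can be sketched in a few lines (an illustrative model, not OpenCost's actual code), treating the window as a list of intervals with a requested and a used quantity:

```python
def allocated_cpu_hours(samples):
    """samples: list of (hours, requested_cpus, used_cpus) intervals."""
    # Allocation per interval is the max of request and usage,
    # weighted by how long that interval lasted.
    return sum(hours * max(requested, used)
               for hours, requested, used in samples)

# 1 CPU requested for 12h with no usage, then nothing for 12h:
allocated_cpu_hours([(12, 1.0, 0.0), (12, 0.0, 0.0)])  # -> 12.0

# BestEffort pod (no requests) using 0.5 CPU for the full 24h window:
allocated_cpu_hours([(24, 0.0, 0.5)])  # -> 12.0
```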
-#### How do I set my AWS Spot estimates for cost allocation?
+### _How do I set my AWS Spot estimates for cost allocation?_
Modify [spotCPU](https://github.com/kubecost/opencost/blob/master/configs/default.json#L5) and [spotRAM](https://github.com/kubecost/opencost/blob/master/configs/default.json#L7) in default.json to reflect recent market prices. Allocation will use these prices, but they do not account for what AWS actually charges you. Alternatively, you can provide an AWS key to allow access to the Spot data feed, which enables accurate Spot price reconciliation.
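For reference, the relevant fragment of default.json might look like the following; the values shown are illustrative placeholders, not current Spot market rates, so check recent prices for your region and instance types:

```json
{
  "spotCPU": "0.006655",
  "spotRAM": "0.000892"
}
```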
-#### Do I need a GCP billing API key?
+### _Do I need a GCP billing API key?_
We supply a global key with a low rate limit for evaluation, but you will want to supply your own before moving to production.