Cross-cloud cost allocation models for Kubernetes workloads https://kubecost.com


Kubecost

Kubecost models give teams visibility into current and historical Kubernetes spend and resource allocation. These models provide cost transparency in Kubernetes environments that support multiple applications, teams, departments, etc.

Kubecost allocation UI

To see more on the functionality of the full Kubecost product, please visit the features page on our website. Here is a summary of features enabled by this cost model:

  • Real-time cost allocation by Kubernetes service, deployment, namespace, label, statefulset, daemonset, pod, and container
  • Dynamic asset pricing enabled by integrations with AWS, Azure, and GCP billing APIs
  • Supports on-prem k8s clusters with custom pricing sheets
  • Allocation for in-cluster resources like CPU, GPU, memory, and persistent volumes
  • Allocation for AWS & GCP out-of-cluster resources like RDS instances and S3 buckets (with an optional cloud API key)
  • Easily export pricing data to Prometheus with /metrics endpoint (learn more)
  • Free and open source distribution (Apache2 license)

Requirements

  • Kubernetes version 1.8 or higher
  • Prometheus
  • kube-state-metrics (optional)

Getting Started

You can deploy Kubecost on any Kubernetes 1.8+ cluster in a matter of minutes. Visit the Kubecost docs for recommended install options. Installing from Helm is faster than building from source and includes all necessary dependencies.

Usage

Contributing

We :heart: pull requests! See CONTRIBUTING.md for information on building the project from source and contributing changes.

Licensing

Licensed under the Apache License, Version 2.0 (the "License")

Software stack

The cost model is a Golang application that integrates with Prometheus and runs on Kubernetes.

Frequently Asked Questions

How do you measure the cost of CPU/RAM/GPU/storage for a container, pod, deployment, etc.?

The Kubecost model collects pricing data from major cloud providers (e.g. GCP, Azure, and AWS) to provide the real-time cost of running workloads. Based on data from these APIs, each container/pod inherits a cost per CPU-hour, GPU-hour, storage Gb-hour, and RAM Gb-hour based on the node where it was running or the class of storage provisioned. This means containers of the same size, as measured by the maximum of requests or usage, could be charged different resource rates if they are scheduled in separate regions, on nodes with different usage types (on-demand vs. preemptible), etc.

For on-prem clusters, these resource prices can be configured directly with custom pricing sheets (more below).

Measuring the CPU/RAM/GPU cost of a deployment, service, namespace, etc. is the aggregation of its individual container costs.
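The pricing scheme above can be sketched as follows. This is an illustrative example, not the actual Kubecost implementation; all type, field, and function names here are hypothetical.

```go
package main

import "fmt"

// rates holds the per-resource-hour prices a container inherits from the
// node it ran on (hypothetical struct for illustration).
type rates struct {
	cpuHr   float64 // $ per CPU-hour
	ramGbHr float64 // $ per RAM Gb-hour
	gpuHr   float64 // $ per GPU-hour
}

// containerCost prices a container's resource-hours at its node's rates.
func containerCost(cpuHours, ramGbHours, gpuHours float64, r rates) float64 {
	return cpuHours*r.cpuHr + ramGbHours*r.ramGbHr + gpuHours*r.gpuHr
}

func main() {
	// Placeholder on-demand rates, not real cloud prices.
	onDemand := rates{cpuHr: 0.03, ramGbHr: 0.004, gpuHr: 0.35}
	// A deployment's or namespace's cost is the sum over its containers.
	total := containerCost(24, 48, 0, onDemand) + containerCost(12, 8, 0, onDemand)
	fmt.Printf("$%.3f\n", total)
}
```

Two identically sized containers can still cost different amounts because each carries the rates of its own node (region, on-demand vs. preemptible, and so on).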

How do you determine RAM/CPU costs for a node when this data isn’t provided by a cloud provider?

When explicit RAM or CPU prices are not provided by your cloud provider, the Kubecost model falls back to the ratio of base CPU and RAM price inputs supplied. The default values for these parameters are based on the marginal resource rates of the cloud provider, but they can be customized within Kubecost.

These base RAM/CPU prices are normalized to ensure the sum of each component is equal to the total price of the node provisioned, based on billing rates from your provider. When the sum of RAM/CPU costs is greater (or less) than the price of the node, the ratio between the two input prices is held constant.

As an example, let's imagine a node with 1 CPU and 1 Gb of RAM that costs $20/mo. If your base CPU price is $30 and your RAM Gb price is $10, then these inputs will be normalized to $15 for CPU and $5 for RAM so that the sum equals the cost of the node. Note that the price of a CPU remains 3x the price of a Gb of RAM.

NodeHourlyCost = NORMALIZED_CPU_PRICE * # of CPUs + NORMALIZED_RAM_PRICE * # of RAM Gb
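The normalization described above can be written as a small function. This is a sketch under the assumptions stated in the example, with hypothetical names, not Kubecost's actual code.

```go
package main

import "fmt"

// normalize scales the base CPU and RAM prices so that their weighted sum
// equals the node's actual billed cost, while preserving the CPU:RAM price
// ratio (illustrative sketch; names are hypothetical).
func normalize(nodeCost, baseCPU, baseRAM, numCPUs, ramGb float64) (cpuPrice, ramPrice float64) {
	implied := baseCPU*numCPUs + baseRAM*ramGb // cost implied by the base prices
	scale := nodeCost / implied                // factor that reconciles it with billing
	return baseCPU * scale, baseRAM * scale
}

func main() {
	// The example from the text: 1 CPU + 1 Gb node at $20/mo,
	// base prices of $30 (CPU) and $10 (RAM Gb).
	cpu, ram := normalize(20, 30, 10, 1, 1)
	fmt.Printf("CPU: $%.2f, RAM: $%.2f\n", cpu, ram) // CPU: $15.00, RAM: $5.00
}
```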

How do you allocate a specific amount of RAM/CPU to an individual pod or container?

Resources are allocated based on the time-weighted maximum of resource Requests and Usage over the measured period. For example, a pod with no usage and 1 CPU requested for 12 hours out of a 24 hour window would be allocated 12 CPU hours. For pods with BestEffort quality of service (i.e. no requests) allocation is done solely on resource usage.
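The time-weighted maximum rule can be sketched as below. This is an illustrative model of the behavior described above, not Kubecost source code; the types and names are hypothetical.

```go
package main

import "fmt"

// interval is one slice of the measured window (hypothetical type).
type interval struct {
	hours   float64 // duration of the interval in hours
	request float64 // resource requested (e.g. CPUs)
	usage   float64 // resource actually used
}

// allocatedHours sums max(request, usage) weighted by each interval's
// duration. BestEffort pods have request == 0, so usage alone drives
// their allocation.
func allocatedHours(intervals []interval) float64 {
	total := 0.0
	for _, iv := range intervals {
		alloc := iv.request
		if iv.usage > alloc {
			alloc = iv.usage
		}
		total += alloc * iv.hours
	}
	return total
}

func main() {
	// The example from the text: 1 CPU requested for 12 of 24 hours,
	// with no usage, yields 12 CPU-hours.
	ivs := []interval{{hours: 12, request: 1, usage: 0}, {hours: 12, request: 0, usage: 0}}
	fmt.Println(allocatedHours(ivs)) // 12
}
```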

How do I set my AWS Spot estimates for cost allocation?

Modify spotCPU and spotRAM in default.json to reflect recent market prices. Allocation will use these estimates, but they do not account for what you are actually charged by AWS. Alternatively, you can provide an AWS key to allow access to the Spot data feed, which enables accurate Spot price reconciliation.
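The relevant fields in default.json look roughly like this (values are illustrative placeholders, not current market prices; other configuration keys are omitted):

```json
{
  "spotCPU": "0.006655",
  "spotRAM": "0.000892"
}
```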

Do I need a GCP billing API key?

We supply a global key with a low limit for evaluation, but you will want to supply your own before moving to production.

Please reach out with any additional questions on Slack or via email at team@kubecost.com.