elenalape, 4 years ago
parent commit: c9a4dff062
3 changed files with 46 additions and 53 deletions
  1. CONTRIBUTING.md (+20 −19)
  2. README.md (+26 −34)
  3. opencost-header.png (BIN)

+ 20 - 19
CONTRIBUTING.md

@@ -1,22 +1,21 @@
-# Contributing to our project #
+# Contributing to our project
 
 Thanks for your help improving the project!
 
-## Getting Help ##
+## Getting Help
 
-If you have a question about Kubecost or have encountered problems using it,
+If you have a question about OpenCost or have encountered problems using it,
 you can start by asking a question on [Slack](https://join.slack.com/t/kubecost/shared_invite/enQtNTA2MjQ1NDUyODE5LWFjYzIzNWE4MDkzMmUyZGU4NjkwMzMyMjIyM2E0NGNmYjExZjBiNjk1YzY5ZDI0ZTNhZDg4NjlkMGRkYzFlZTU) or via email at [support@kubecost.com](mailto:support@kubecost.com)
 
-
-## Workflow ##
+## Workflow
 
 This repository's contribution workflow follows a typical open-source model:
+
 - [Fork](https://docs.github.com/en/get-started/quickstart/fork-a-repo) this repository
 - Work on the forked repository
 - Open a pull request to [merge the fork back into this repository](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request-from-a-fork)
 
-
-## Building ## 
+## Building
 
 Follow these steps to build from source and deploy:
 
@@ -31,33 +30,35 @@ To test, build the cost-model docker container and then push it to a Kubernetes
 
 To confirm that the server is running, you can hit [http://localhost:9003/costDataModel?timeWindow=1d](http://localhost:9003/costDataModel?timeWindow=1d)
 
-## Running locally ##
+## Running locally
 
 To run cost-model locally, outside of a Kubernetes cluster, set the environment variable `KUBECONFIG_PATH`.
 
 Example:
+
 ```bash
 export KUBECONFIG_PATH=~/.kube/config
 ```
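Once the service is running locally, the same check described in the Building section applies:

```shell
# Confirm the locally running cost model serves data
# (same endpoint used to verify the in-cluster deployment above)
curl "http://localhost:9003/costDataModel?timeWindow=1d"
```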
 
-## Running the integration tests ##
+## Running the integration tests
+
 To run these tests:
-* Make sure you have a kubeconfig that can point to your cluster, and have permissions to create/modify a namespace called "test"
-* Connect to your the prometheus kubecost emits to on localhost:9003: 
-```kubectl port-forward --namespace kubecost service/kubecost-prometheus-server 9003:80```
-* Temporary workaround: Copy the default.json file in this project at cloud/default.json to /models/default.json on the machine your test is running on. TODO: fix this and inject the cloud/default.json path into provider.go.
-* Navigate to cost-model/test
-* Run ```go test -timeout 700s``` from the testing directory. The tests right now take about 10 minutes (600s) to run because they bring up and down pods and wait for Prometheus to scrape data about them.
 
+- Make sure you have a kubeconfig that points to your cluster, with permissions to create/modify a namespace called "test"
+- Connect to the Prometheus that Kubecost emits metrics to, on localhost:9003:
+  `kubectl port-forward --namespace kubecost service/kubecost-prometheus-server 9003:80`
+- Temporary workaround: Copy the default.json file in this project at cloud/default.json to /models/default.json on the machine your test is running on. TODO: fix this and inject the cloud/default.json path into provider.go.
+- Navigate to cost-model/test
+- Run `go test -timeout 700s` from the testing directory. The tests right now take about 10 minutes (600s) to run because they bring up and down pods and wait for Prometheus to scrape data about them.
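Putting the steps above together, a typical session looks like this (namespace and service names as given in the steps; adjust them for your cluster):

```shell
# Expose the Prometheus that Kubecost writes to on localhost:9003
kubectl port-forward --namespace kubecost service/kubecost-prometheus-server 9003:80 &

# Temporary workaround: place the default config where the tests expect it
cp cloud/default.json /models/default.json

# Run the suite from the test directory (takes roughly 10 minutes)
cd cost-model/test
go test -timeout 700s
```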
 
-## Certification of Origin ##
+## Certification of Origin
 
-By contributing to this project you certify that your contribution was created in whole or in part by you and that you have the right to submit it under the open source license indicated in the project. In other words, please confirm that you, as a contributor, have the legal right to make the contribution. 
+By contributing to this project you certify that your contribution was created in whole or in part by you and that you have the right to submit it under the open source license indicated in the project. In other words, please confirm that you, as a contributor, have the legal right to make the contribution.
 
-## Committing ###
+## Committing
 
 Please write a commit message with "Fixes Issue #" if there is an outstanding issue that is fixed. It’s okay to submit a PR without a corresponding issue; just please try to be detailed in the description about the problem you’re addressing.
 
-Please run go fmt on the project directory. Lint can be okay (for example, comments on exported functions are nice but not required on the server). 
+Please run `go fmt` on the project directory. Lint warnings can be okay (for example, comments on exported functions are nice but not required on the server).
 
 Please email us (support@kubecost.com) or reach out to us on [Slack](https://join.slack.com/t/kubecost/shared_invite/enQtNTA2MjQ1NDUyODE5LWFjYzIzNWE4MDkzMmUyZGU4NjkwMzMyMjIyM2E0NGNmYjExZjBiNjk1YzY5ZDI0ZTNhZDg4NjlkMGRkYzFlZTU) if you need help or have any questions!

+ 26 - 34
README.md

@@ -1,58 +1,52 @@
-## Kubecost
+<img src="./opencost-header.png"/>
 
-Kubecost models give teams visibility into current and historical Kubernetes spend and resource allocation. These models  provide cost transparency in Kubernetes environments that support multiple applications, teams, departments, etc.
+# OpenCost — your favorite open source cost monitoring tool for Kubernetes
 
-![Kubecost allocation UI](/allocation-drilldown.gif)
+OpenCost models give teams visibility into current and historical Kubernetes spend and resource allocation. These models provide cost transparency in Kubernetes environments that support multiple applications, teams, departments, etc.
+
+OpenCost is developed by [Kubecost](https://kubecost.com).
+
+![OpenCost allocation UI](/allocation-drilldown.gif)
+
+To see more on the functionality of OpenCost, as well as the full Kubecost product, please visit the [features page](https://kubecost.com/#features) on our website.
 
-To see more on the functionality of the full Kubecost product, please visit the [features page](https://kubecost.com/#features) on our website. 
 Here is a summary of features enabled by this cost model:
 
 - Real-time cost allocation by Kubernetes service, deployment, namespace, label, statefulset, daemonset, pod, and container
-- Dynamic asset pricing enabled by integrations with AWS, Azure, and GCP billing APIs 
+- Dynamic asset pricing enabled by integrations with AWS, Azure, and GCP billing APIs
 - Supports on-prem k8s clusters with custom pricing sheets
 - Allocation for in-cluster resources like CPU, GPU, memory, and persistent volumes.
 - Allocation for AWS & GCP out-of-cluster resources like RDS instances and S3 buckets with key (optional)
 - Easily export pricing data to Prometheus with /metrics endpoint ([learn more](PROMETHEUS.md))
 - Free and open source distribution (Apache2 license)
 
-## Requirements
-
-- Kubernetes version 1.8 or higher
-- Prometheus
-- kube-state-metrics (optional) 
-
 ## Getting Started
 
-You can deploy Kubecost on any Kubernetes 1.8+ cluster in a matter of minutes, if not seconds. 
-Visit the Kubecost docs for [recommended install options](https://docs.kubecost.com/install). Compared to building from source, installing from Helm is faster and includes all necessary dependencies. 
+You can deploy OpenCost and/or Kubecost on any Kubernetes 1.8+ cluster in a matter of minutes, if not seconds!
+
+Visit the full documentation for [recommended install options](https://docs.kubecost.com/install). Compared to building from source, installing from Helm is faster and includes all necessary dependencies.
 
 ## Usage
 
-* User interface
-* [Cost APIs](https://github.com/kubecost/docs/blob/master/apis.md)
-* [CLI / kubectl cost](https://github.com/kubecost/kubectl-cost)
-* [Prometheus metric exporter](kubecost-exporter.md)
+- User interface
+- [Cost APIs](https://github.com/kubecost/docs/blob/master/apis.md)
+- [CLI / kubectl cost](https://github.com/kubecost/kubectl-cost)
+- [Prometheus metric exporter](kubecost-exporter.md)
 
 ## Contributing
 
 We :heart: pull requests! See [`CONTRIBUTING.md`](CONTRIBUTING.md) for information on building the project from source
-and contributing changes. 
-
-## Licensing
-
-Licensed under the Apache License, Version 2.0 (the "License")
+and contributing changes.
 
- ## Software stack
+## Community
 
-Golang application. 
-Prometheus. 
-Kubernetes. 
+If you need any support or have any questions on contributing to the project, you can reach us on [Slack](https://join.slack.com/t/kubecost/shared_invite/enQtNTA2MjQ1NDUyODE5LWFjYzIzNWE4MDkzMmUyZGU4NjkwMzMyMjIyM2E0NGNmYjExZjBiNjk1YzY5ZDI0ZTNhZDg4NjlkMGRkYzFlZTU) or via email at [team@kubecost.com](mailto:team@kubecost.com).
 
-## Frequently Asked Questions
+## FAQ
 
 #### How do you measure the cost of CPU/RAM/GPU/storage for a container, pod, deployment, etc.?
 
-The Kubecost model collects pricing data from major cloud providers, e.g. GCP, Azure and AWS, to provide the real-time cost of running workloads. Based on data from these APIs, each container/pod inherits a cost per CPU-hour, GPU-hour, Storage Gb-hour and cost per RAM Gb-hour based on the node where it was running or the class of storage provisioned. This means containers of the same size, as measured by the max of requests or usage, could be charged different resource rates if they are scheduled in separate regions, on nodes with different usage types (on-demand vs preemptible), etc. 
+The OpenCost cost model collects pricing data from major cloud providers, e.g. GCP, Azure, and AWS, to provide the real-time cost of running workloads. Based on data from these APIs, each container/pod inherits a cost per CPU-hour, GPU-hour, storage GB-hour, and RAM GB-hour based on the node where it was running or the class of storage provisioned. This means containers of the same size, as measured by the max of requests or usage, could be charged different resource rates if they are scheduled in separate regions, on nodes with different usage types (on-demand vs. preemptible), etc.
 
 For on-prem clusters, these resource prices can be configured directly with custom pricing sheets (more below).
 
@@ -60,7 +54,7 @@ Measuring the CPU/RAM/GPU cost of a deployment, service, namespace, etc is the a
 
 #### How do you determine RAM/CPU/GPU costs for a node when this data isn’t provided by a cloud provider?
 
-When explicit RAM, CPU or GPU prices are not provided by your cloud provider, the Kubecost model falls back to the ratio of base CPU, GPU and RAM price inputs supplied. The default values for these parameters are based on the marginal resource rates of the cloud provider, but they can be customized within Kubecost.
+When explicit RAM, CPU or GPU prices are not provided by your cloud provider, the OpenCost model falls back to the ratio of base CPU, GPU and RAM price inputs supplied. The default values for these parameters are based on the marginal resource rates of the cloud provider, but they can be customized within OpenCost.
 
 These base RAM/CPU/GPU prices are normalized to ensure the sum of each component is equal to the total price of the node provisioned, based on billing rates from your provider. When the sum of RAM/CPU/GPU costs is greater (or less) than the price of the node, then the ratio between the input prices is held constant.
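The normalization described above can be sketched with hypothetical numbers (the rates below are illustrative, not real provider prices):

```shell
# Scale base component prices so they sum to the node's billed rate
# while keeping their ratios constant (all numbers are hypothetical).
node_price=1.00                               # $/hr billed by the provider
base_cpu=0.40; base_ram=0.20; base_gpu=0.60   # base inputs, sum = 1.20
awk -v n="$node_price" -v c="$base_cpu" -v r="$base_ram" -v g="$base_gpu" 'BEGIN {
  s = c + r + g                               # 1.20 exceeds the node price, so scale down
  printf "cpu=%.4f ram=%.4f gpu=%.4f\n", n*c/s, n*r/s, n*g/s
}'
# cpu=0.3333 ram=0.1667 gpu=0.5000  (components now sum to 1.00)
```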
 
@@ -70,14 +64,12 @@ As an example, let's imagine a node with 1 GPU, 1 CPU and 1 Gb of RAM that costs
 
 #### How do you allocate a specific amount of RAM/CPU to an individual pod or container?
 
-Resources are allocated based on the time-weighted maximum of resource Requests and Usage over the measured period. For example, a pod with no usage and 1 CPU requested for 12 hours out of a 24 hour window would be allocated 12 CPU hours. For pods with BestEffort quality of service (i.e. no requests) allocation is done solely on resource usage. 
+Resources are allocated based on the time-weighted maximum of resource Requests and Usage over the measured period. For example, a pod with no usage and 1 CPU requested for 12 hours out of a 24 hour window would be allocated 12 CPU hours. For pods with BestEffort quality of service (i.e. no requests) allocation is done solely on resource usage.
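The worked example above can be reproduced numerically; this is an illustrative sketch of the allocation rule, not the actual implementation:

```shell
# Per hour, charge the max of request and usage; a pod requesting 1 CPU
# for 12 of 24 hours with no usage is allocated 12 CPU-hours.
awk 'BEGIN {
  total = 0
  for (h = 1; h <= 24; h++) {
    request = (h <= 12) ? 1 : 0   # 1 CPU requested for the first 12 hours
    usage = 0                     # no measured usage
    total += (request > usage) ? request : usage
  }
  print total, "CPU-hours"
}'
# 12 CPU-hours
```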
 
 #### How do I set my AWS Spot estimates for cost allocation?
 
-Modify [spotCPU](https://github.com/kubecost/cost-model/blob/master/configs/default.json#L5) and  [spotRAM](https://github.com/kubecost/cost-model/blob/master/configs/default.json#L7) in default.json to the level of recent market prices. Allocation will use these prices, but it does not take into account what you are actually charged by AWS. Alternatively, you can provide an AWS key to allow access to the Spot data feed. This will provide accurate Spot price reconciliation. 
+Modify [spotCPU](https://github.com/kubecost/opencost/blob/master/configs/default.json#L5) and [spotRAM](https://github.com/kubecost/opencost/blob/master/configs/default.json#L7) in default.json to the level of recent market prices. Allocation will use these prices, but it does not take into account what you are actually charged by AWS. Alternatively, you can provide an AWS key to allow access to the Spot data feed. This will provide accurate Spot price reconciliation.
 
 #### Do I need a GCP billing API key?
 
-We supply a global key with a low limit for evaluation, but you will want to supply your own before moving to production.  
-  
-Please reach out with any additional questions on  [Slack](https://join.slack.com/t/kubecost/shared_invite/enQtNTA2MjQ1NDUyODE5LWFjYzIzNWE4MDkzMmUyZGU4NjkwMzMyMjIyM2E0NGNmYjExZjBiNjk1YzY5ZDI0ZTNhZDg4NjlkMGRkYzFlZTU) or via email at [team@kubecost.com](team@kubecost.com). 
+We supply a global key with a low limit for evaluation, but you will want to supply your own before moving to production.

BIN
opencost-header.png