
Merge branch 'develop' into develop

Sean Holcomb 3 years ago
parent
commit
fbbd22be5e

+ 10 - 3
.github/workflows/pr.yaml

@@ -7,6 +7,13 @@ on:
 
 jobs:
   build:
+    strategy:
+      matrix:
+        include:
+          - component: Frontend
+            location: ui
+          - component: Backend
+            location: .
     runs-on: ubuntu-latest
 
     steps:
@@ -17,9 +24,9 @@ jobs:
       - name: Set up Docker Buildx
         uses: docker/setup-buildx-action@v1
 
-      - name: Build
+      - name: Build ${{ matrix.component }}
         uses: docker/build-push-action@v2
         with:
-          context: ./
-          file: ./Dockerfile
+          context: ${{ matrix.location }}/
+          file: ${{ matrix.location }}/Dockerfile
           push: false

+ 5 - 4
CONTRIBUTING.md

@@ -3,14 +3,15 @@
 Thanks for your help improving the OpenCost project! There are many ways to contribute to the project, including the following:
 
 * contributing or providing feedback on the [OpenCost Spec](https://github.com/opencost/opencost/tree/develop/spec)
-* contributing documentation 
-* joining the discussion on [CNCF Slack](https://slack.cncf.io/) in the [#opencost](https://cloud-native.slack.com/archives/C03D56FPD4G) channel or in the [OpenCost community discussions](https://drive.google.com/drive/folders/1hXlcyFPePB7t3z6lyVzdxmdfrbzeT1Jz) folder
+* contributing documentation here or to the [OpenCost website](https://github.com/kubecost/opencost-website)
+* joining the discussion in the [CNCF Slack](https://slack.cncf.io/) in the [#opencost](https://cloud-native.slack.com/archives/C03D56FPD4G) channel
+* participating in the fortnightly [OpenCost Working Group](https://calendar.google.com/calendar/u/0/embed?src=c_c0f7q56e5eeod3j89bb320fvjg@group.calendar.google.com&ctz=America/Los_Angeles) meetings ([notes here](https://drive.google.com/drive/folders/1hXlcyFPePB7t3z6lyVzdxmdfrbzeT1Jz))
 * committing software via the workflow below
 
 ## Getting Help
 
 If you have a question about OpenCost or have encountered problems using it,
-you can start by asking a question on [CNCF Slack](https://slack.cncf.io/) in the [#opencost](https://cloud-native.slack.com/archives/C03D56FPD4G) channel or via email at [support@kubecost.com](support@kubecost.com)
+you can start by asking a question on [CNCF Slack](https://slack.cncf.io/) in the [#opencost](https://cloud-native.slack.com/archives/C03D56FPD4G) channel or via email at [opencost@kubecost.com](mailto:opencost@kubecost.com)
 
 ## Workflow
 
@@ -96,4 +97,4 @@ Please write a commit message with Fixes Issue # if there is an outstanding issu
 
 Please run go fmt on the project directory. Lint can be okay (for example, comments on exported functions are nice but not required on the server).
 
-Please email us [support@kubecost.com](support@kubecost.com) or reach out to us on [CNCF Slack](https://slack.cncf.io/) in the [#opencost](https://cloud-native.slack.com/archives/C03D56FPD4G) channel if you need help or have any questions!
+Please email us at [opencost@kubecost.com](mailto:opencost@kubecost.com) or reach out to us on [CNCF Slack](https://slack.cncf.io/) in the [#opencost](https://cloud-native.slack.com/archives/C03D56FPD4G) channel if you need help or have any questions!

+ 9 - 0
NOTICE

@@ -0,0 +1,9 @@
+OpenCost
+Copyright 2022 Cloud Native Computing Foundation
+
+This product includes software developed at
+The Cloud Native Computing Foundation (http://www.cncf.io).
+
+The Initial Developer of some parts of the specification and project is
+Kubecost (http://www.kubecost.com).
+Copyright 2019 - 2022 Stackwatch Incorporated. All Rights Reserved.

+ 1 - 1
PROMETHEUS.md

@@ -1 +1 @@
-<https://www.opencost.io/docs/>
+Available at <https://www.opencost.io/docs/prometheus>

+ 1 - 2
README.md

@@ -4,7 +4,6 @@
 
 OpenCost models give teams visibility into current and historical Kubernetes spend and resource allocation. These models provide cost transparency in Kubernetes environments that support multiple applications, teams, departments, etc.
 
-
 OpenCost was originally developed and open sourced by [Kubecost](https://kubecost.com). This project combines a [specification](/spec/) as well as a Golang implementation of these detailed requirements.
 
 ![OpenCost allocation UI](/allocation-drilldown.gif)
@@ -38,7 +37,7 @@ and contributing changes.
 
 ## Community
 
-If you need any support or have any questions on contributing to the project, you can reach us on [CNCF Slack](https://slack.cncf.io/) in the [#opencost](https://cloud-native.slack.com/archives/C03D56FPD4G) channel or via email at [team@kubecost.com](team@kubecost.com).
+If you need any support or have any questions on contributing to the project, you can reach us on [CNCF Slack](https://slack.cncf.io/) in the [#opencost](https://cloud-native.slack.com/archives/C03D56FPD4G) channel or via email at [opencost@kubecost.com](mailto:opencost@kubecost.com).
 
 ## FAQ
 

+ 1 - 1
ROADMAP.md

@@ -10,4 +10,4 @@ __2022 roadmap__
 * More robust API documentation
 * Expose carbon emission ratings
 
-Please contact us at team@kubecost.com if you're interest in more detail. 
+Please contact us at opencost@kubecost.com if you're interested in more detail.

+ 22 - 11
pkg/costmodel/cluster.go

@@ -116,13 +116,24 @@ type Disk struct {
 	ClaimNamespace string
 	Cost           float64
 	Bytes          float64
-	BytesUsedAvg   float64
-	BytesUsedMax   float64
-	Local          bool
-	Start          time.Time
-	End            time.Time
-	Minutes        float64
-	Breakdown      *ClusterCostsBreakdown
+
+	// These two fields may not be available at all times because they rely on
+	// a new set of metrics that may or may not be available. Thus, they must
+	// be nilable to represent the complete absence of the data.
+	//
+	// In other words, nilability here lets us distinguish between
+	// "metric is not available" and "metric is available but is 0".
+	//
+	// They end in "Ptr" to distinguish them from an earlier version of the
+	// fields, ensuring that all usages are checked for nil.
+	BytesUsedAvgPtr *float64
+	BytesUsedMaxPtr *float64
+
+	Local     bool
+	Start     time.Time
+	End       time.Time
+	Minutes   float64
+	Breakdown *ClusterCostsBreakdown
 }
 
 type DiskIdentifier struct {
@@ -321,7 +332,7 @@ func ClusterDisks(client prometheus.Client, provider cloud.Provider, start, end
 				Local:     true,
 			}
 		}
-		diskMap[key].BytesUsedAvg = bytesAvg
+		diskMap[key].BytesUsedAvgPtr = &bytesAvg
 	}
 
 	for _, result := range resLocalStorageUsedMax {
@@ -346,7 +357,7 @@ func ClusterDisks(client prometheus.Client, provider cloud.Provider, start, end
 				Local:     true,
 			}
 		}
-		diskMap[key].BytesUsedMax = bytesMax
+		diskMap[key].BytesUsedMaxPtr = &bytesMax
 	}
 
 	for _, result := range resLocalStorageBytes {
@@ -1456,7 +1467,7 @@ func pvCosts(diskMap map[DiskIdentifier]*Disk, resolution time.Duration, resActi
 				Breakdown: &ClusterCostsBreakdown{},
 			}
 		}
-		diskMap[key].BytesUsedAvg = usage
+		diskMap[key].BytesUsedAvgPtr = &usage
 	}
 
 	for _, result := range resPVUsedMax {
@@ -1517,6 +1528,6 @@ func pvCosts(diskMap map[DiskIdentifier]*Disk, resolution time.Duration, resActi
 				Breakdown: &ClusterCostsBreakdown{},
 			}
 		}
-		diskMap[key].BytesUsedMax = usage
+		diskMap[key].BytesUsedMaxPtr = &usage
 	}
 }
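The comment block in this hunk motivates the switch from plain floats to pointer fields: nil means "metric unavailable", while a non-nil pointer to 0 means "metric observed as zero". A minimal standalone Go sketch of that pattern (the cut-down `Disk`, `setAvg`, and `describe` names here are illustrative, not the project's API):

```go
package main

import "fmt"

// Disk mirrors the pattern in the diff: pointer fields let the code
// distinguish "metric absent" (nil) from "metric present but zero" (&0.0).
type Disk struct {
	BytesUsedAvgPtr *float64
	BytesUsedMaxPtr *float64
}

// setAvg records an observed average. Taking the address of the parameter
// (a copy) means the stored pointer does not alias the caller's variable.
func (d *Disk) setAvg(v float64) {
	d.BytesUsedAvgPtr = &v
}

// describe renders the three possible states: nil, zero, or a value.
func describe(p *float64) string {
	if p == nil {
		return "unavailable"
	}
	return fmt.Sprintf("%.1f", *p)
}

func main() {
	var d Disk
	fmt.Println(describe(d.BytesUsedAvgPtr)) // unavailable
	d.setAvg(0)
	fmt.Println(describe(d.BytesUsedAvgPtr)) // 0.0
}
```

The "Ptr" suffix on the renamed fields serves the same purpose as the rename here: it breaks every old call site at compile time, forcing each usage to be revisited with a nil check.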

+ 29 - 4
pkg/kubecost/asset.go

@@ -1072,7 +1072,7 @@ type Disk struct {
 	Local          float64
 	Breakdown      *Breakdown
 	StorageClass   string   // @bingen:field[version=17]
-	ByteHoursUsed  float64  // @bingen:field[version=18]
+	ByteHoursUsed  *float64 // @bingen:field[version=18]
 	ByteUsageMax   *float64 // @bingen:field[version=18]
 	VolumeName     string   // @bingen:field[version=18]
 	ClaimName      string   // @bingen:field[version=18]
@@ -1268,7 +1268,21 @@ func (d *Disk) add(that *Disk) {
 	d.Cost += that.Cost
 
 	d.ByteHours += that.ByteHours
-	d.ByteHoursUsed += that.ByteHoursUsed
+
+	if d.ByteHoursUsed == nil && that.ByteHoursUsed != nil {
+		copy := *that.ByteHoursUsed
+		d.ByteHoursUsed = &copy
+	} else if d.ByteHoursUsed != nil && that.ByteHoursUsed == nil {
+		// do nothing
+	} else if d.ByteHoursUsed != nil && that.ByteHoursUsed != nil {
+		sum := *d.ByteHoursUsed
+		sum += *that.ByteHoursUsed
+		d.ByteHoursUsed = &sum
+	}
+
+	// We have to nil out the max because we don't know if we're
+	// aggregating across time our properties. See RawAllocationOnly on
+	// Allocation for further reference.
 	d.ByteUsageMax = nil
 
 	// If storage class don't match default it to empty storage class
@@ -1294,6 +1308,11 @@ func (d *Disk) Clone() Asset {
 		copied := *d.ByteUsageMax
 		max = &copied
 	}
+	var byteHoursUsed *float64
+	if d.ByteHoursUsed != nil {
+		copied := *d.ByteHoursUsed
+		byteHoursUsed = &copied
+	}
 
 	return &Disk{
 		Properties:     d.Properties.Clone(),
@@ -1304,7 +1323,7 @@ func (d *Disk) Clone() Asset {
 		Adjustment:     d.Adjustment,
 		Cost:           d.Cost,
 		ByteHours:      d.ByteHours,
-		ByteHoursUsed:  d.ByteHoursUsed,
+		ByteHoursUsed:  byteHoursUsed,
 		ByteUsageMax:   max,
 		Local:          d.Local,
 		Breakdown:      d.Breakdown.Clone(),
@@ -1346,7 +1365,13 @@ func (d *Disk) Equal(a Asset) bool {
 	if d.ByteHours != that.ByteHours {
 		return false
 	}
-	if d.ByteHoursUsed != that.ByteHoursUsed {
+	if d.ByteHoursUsed != nil && that.ByteHoursUsed == nil {
+		return false
+	}
+	if d.ByteHoursUsed == nil && that.ByteHoursUsed != nil {
+		return false
+	}
+	if (d.ByteHoursUsed != nil && that.ByteHoursUsed != nil) && *d.ByteHoursUsed != *that.ByteHoursUsed {
 		return false
 	}
 	if d.ByteUsageMax != nil && that.ByteUsageMax == nil {
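The `add` and `Equal` hunks above spell out the nil-aware arithmetic case by case. Extracted into standalone helpers, the same logic reads as follows (`addPtr` and `equalPtr` are hypothetical names, not functions in the codebase):

```go
package main

import "fmt"

// addPtr merges two optional float64s the way Disk.add treats
// ByteHoursUsed: nil+nil stays nil (the metric is absent on both sides);
// otherwise an absent side contributes 0 and the result is a fresh pointer.
func addPtr(a, b *float64) *float64 {
	if a == nil && b == nil {
		return nil
	}
	var sum float64
	if a != nil {
		sum += *a
	}
	if b != nil {
		sum += *b
	}
	return &sum
}

// equalPtr mirrors the three-way check in Equal: equal when both are nil,
// or when both are set and point at equal values.
func equalPtr(a, b *float64) bool {
	if (a == nil) != (b == nil) {
		return false
	}
	return a == nil || *a == *b
}

func main() {
	x, y := 1.5, 2.5
	fmt.Println(*addPtr(&x, &y))         // 4
	fmt.Println(addPtr(nil, nil) == nil) // true
	fmt.Println(equalPtr(&x, nil))       // false
}
```

Note that the real `add` nils out `ByteUsageMax` unconditionally: a max, unlike a sum, cannot be meaningfully combined without knowing whether the aggregation spans time or properties.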

+ 11 - 2
pkg/kubecost/asset_json.go

@@ -259,7 +259,11 @@ func (d *Disk) MarshalJSON() ([]byte, error) {
 	jsonEncodeFloat64(buffer, "minutes", d.Minutes(), ",")
 	jsonEncodeFloat64(buffer, "byteHours", d.ByteHours, ",")
 	jsonEncodeFloat64(buffer, "bytes", d.Bytes(), ",")
-	jsonEncodeFloat64(buffer, "byteHoursUsed", d.ByteHoursUsed, ",")
+	if d.ByteHoursUsed == nil {
+		jsonEncode(buffer, "byteHoursUsed", nil, ",")
+	} else {
+		jsonEncodeFloat64(buffer, "byteHoursUsed", *d.ByteHoursUsed, ",")
+	}
 	if d.ByteUsageMax == nil {
 		jsonEncode(buffer, "byteUsageMax", nil, ",")
 	} else {
@@ -342,7 +346,12 @@ func (d *Disk) InterfaceToDisk(itf interface{}) error {
 		d.ByteHours = ByteHours.(float64)
 	}
 	if ByteHoursUsed, err := getTypedVal(fmap["byteHoursUsed"]); err == nil {
-		d.ByteHoursUsed = ByteHoursUsed.(float64)
+		if ByteHoursUsed == nil {
+			d.ByteHoursUsed = nil
+		} else {
+			byteHours := ByteHoursUsed.(float64)
+			d.ByteHoursUsed = &byteHours
+		}
 	}
 	if ByteUsageMax, err := getTypedVal(fmap["byteUsageMax"]); err == nil {
 		if ByteUsageMax == nil {
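The marshal side above emits JSON `null` when the pointer is nil, and the unmarshal side maps `null` back to nil. The project uses a hand-rolled encoder, but for reference the standard `encoding/json` package gives a `*float64` field exactly these semantics (the cut-down `disk` struct here is a stand-in, not the real type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// disk is a stand-in for the real struct: with a pointer field,
// encoding/json writes null for nil and the number when set --
// the same wire shape the custom marshaler produces.
type disk struct {
	ByteHoursUsed *float64 `json:"byteHoursUsed"`
}

// encode marshals and returns the JSON string, ignoring the error
// for brevity (a struct of one float pointer cannot fail to marshal).
func encode(d disk) string {
	b, _ := json.Marshal(d)
	return string(b)
}

func main() {
	fmt.Println(encode(disk{})) // {"byteHoursUsed":null}
	v := 42.0
	fmt.Println(encode(disk{ByteHoursUsed: &v})) // {"byteHoursUsed":42}
}
```

This is why nilability matters at the API boundary: consumers can tell an unreported metric (`null`) apart from a measured zero (`0`).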

+ 8 - 3
pkg/kubecost/asset_json_test.go

@@ -164,7 +164,8 @@ func TestDisk_Unmarshal(t *testing.T) {
 
 	disk1 := NewDisk("disk1", "cluster1", "disk1", *unmarshalWindow.start, *unmarshalWindow.end, unmarshalWindow)
 	disk1.ByteHours = 60.0 * gb * hours
-	disk1.ByteHoursUsed = 40.0 * gb * hours
+	used := 40.0 * gb * hours
+	disk1.ByteHoursUsed = &used
 	max := 50.0 * gb * hours
 	disk1.ByteUsageMax = &max
 	disk1.Cost = 4.0
@@ -214,7 +215,7 @@ func TestDisk_Unmarshal(t *testing.T) {
 	if disk1.ByteHours != disk2.ByteHours {
 		t.Fatalf("Disk Unmarshal: ByteHours mutated in unmarshal")
 	}
-	if disk1.ByteHoursUsed != disk2.ByteHoursUsed {
+	if *disk1.ByteHoursUsed != *disk2.ByteHoursUsed {
 		t.Fatalf("Disk Unmarshal: ByteHoursUsed mutated in unmarshal")
 	}
 	if *disk1.ByteUsageMax != *disk2.ByteUsageMax {
@@ -232,7 +233,7 @@ func TestDisk_Unmarshal(t *testing.T) {
 	disk3 := NewDisk("disk3", "cluster1", "disk3", *unmarshalWindow.start, *unmarshalWindow.end, unmarshalWindow)
 
 	disk3.ByteHours = 60.0 * gb * hours
-	disk3.ByteHoursUsed = 40.0 * gb * hours
+	disk3.ByteHoursUsed = nil
 	disk3.ByteUsageMax = nil
 	disk3.Cost = 4.0
 	disk3.Local = 1.0
@@ -256,6 +257,10 @@ func TestDisk_Unmarshal(t *testing.T) {
 		t.Fatalf("Disk Unmarshal: unexpected error: %s", err)
 	}
 
+	// Check that both disks have nil usage
+	if disk3.ByteHoursUsed != disk4.ByteHoursUsed {
+		t.Fatalf("Disk Unmarshal: ByteHoursUsed mutated in unmarshal")
+	}
 	// Check that both disks have nil max usage
 	if disk3.ByteUsageMax != disk4.ByteUsageMax {
 		t.Fatalf("Disk Unmarshal: ByteUsageMax mutated in unmarshal")

+ 15 - 4
pkg/kubecost/kubecost_codecs.go

@@ -4978,7 +4978,13 @@ func (target *Disk) MarshalBinaryWithContext(ctx *EncodingContext) (err error) {
 	} else {
 		buff.WriteString(target.StorageClass) // write string
 	}
-	buff.WriteFloat64(target.ByteHoursUsed) // write float64
+	if target.ByteHoursUsed == nil {
+		buff.WriteUInt8(uint8(0)) // write nil byte
+	} else {
+		buff.WriteUInt8(uint8(1)) // write non-nil byte
+
+		buff.WriteFloat64(*target.ByteHoursUsed) // write float64
+	}
 	if target.ByteUsageMax == nil {
 		buff.WriteUInt8(uint8(0)) // write nil byte
 	} else {
@@ -5191,11 +5197,16 @@ func (target *Disk) UnmarshalBinaryWithContext(ctx *DecodingContext) (err error)
 
 	// field version check
 	if uint8(18) <= version {
-		dd := buff.ReadFloat64() // read float64
-		target.ByteHoursUsed = dd
+		if buff.ReadUInt8() == uint8(0) {
+			target.ByteHoursUsed = nil
+		} else {
+			dd := buff.ReadFloat64() // read float64
+			target.ByteHoursUsed = &dd
 
+		}
 	} else {
-		target.ByteHoursUsed = float64(0) // default
+		target.ByteHoursUsed = nil
+
 	}
 
 	// field version check

+ 11 - 0
ui/Dockerfile

@@ -0,0 +1,11 @@
+FROM node:16-alpine as builder
+ADD package*.json /opt/ui/
+WORKDIR /opt/ui
+RUN npm install
+ADD src /opt/ui/src
+ENV BASE_URL=/allocation
+RUN npx parcel build src/index.html
+
+FROM nginx:alpine
+COPY --from=builder /opt/ui/dist /var/www
+COPY default.nginx.conf /etc/nginx/conf.d/

+ 70 - 0
ui/default.nginx.conf

@@ -0,0 +1,70 @@
+gzip_static  on;
+gzip on;
+gzip_min_length 50000;
+gzip_proxied expired no-cache no-store private auth;
+gzip_types
+    application/atom+xml
+    application/geo+json
+    application/javascript
+    application/x-javascript
+    application/json
+    application/ld+json
+    application/manifest+json
+    application/rdf+xml
+    application/rss+xml
+    application/vnd.ms-fontobject
+    application/wasm
+    application/x-web-app-manifest+json
+    application/xhtml+xml
+    application/xml
+    font/eot
+    font/otf
+    font/ttf
+    image/bmp
+    image/svg+xml
+    text/cache-manifest
+    text/calendar
+    text/css
+    text/javascript
+    text/markdown
+    text/plain
+    text/xml
+    text/x-component
+    text/x-cross-domain-policy;
+server {
+    server_name _;
+    root /var/www;
+    index index.html;
+    large_client_header_buffers 4 32k;
+    add_header Cache-Control "must-revalidate";
+
+    error_page 504 /custom_504.html;
+    location = /custom_504.html {
+        internal;
+    }
+
+    add_header Cache-Control "max-age=300";
+    location / {
+        try_files $uri $uri/ /index.html;
+    }
+
+    add_header ETag "1.96.0";
+    listen 9090;
+    listen [::]:9090;
+    resolver kube-dns.kube-system.svc.cluster.local valid=5s;
+    location /healthz {
+        return 200 'OK';
+    }
+    location /allocation {
+        proxy_connect_timeout       180;
+        proxy_send_timeout          180;
+        proxy_read_timeout          180;
+        set $server http://cost-analyzer.kubecost.svc.cluster.local:9003;
+        proxy_pass $server;
+        proxy_redirect off;
+        proxy_http_version 1.1;
+        proxy_set_header Connection "";
+        proxy_set_header  X-Real-IP  $remote_addr;
+        proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
+    }
+}

+ 1 - 1
ui/src/Reports.js

@@ -155,7 +155,7 @@ const ReportsPage = () => {
         const allocationRange = resp.data
         for (const i in allocationRange) {
           // update cluster aggregations to use clusterName/clusterId names
-          if (aggregateBy == 'cluster') {
+	  if (aggregateBy == 'cluster') {
             for (const k in allocationRange[i]) {
               allocationRange[i][k].name = 'cluster-one';
             }