Merge branch 'develop' into kaelan-fix-disabled-metrics

Ajay Tripathy, 3 years ago
parent commit 6bfcfa1efd
56 changed files with 6196 additions and 322 deletions
  1. .gitignore (+1 -0)
  2. CODE_OF_CONDUCT.md (+6 -6)
  3. CONTRIBUTING.md (+1 -0)
  4. README.md (+5 -5)
  5. allocation-dashboard.png (BIN)
  6. allocation-drilldown.gif (BIN)
  7. docs/README.md (+1 -1)
  8. go.mod (+7 -7)
  9. go.sum (+21 -14)
  10. kubernetes/opencost.yaml (+13 -0)
  11. pkg/cloud/aliyunprovider.go (+317 -69)
  12. pkg/cloud/aliyunprovider_test.go (+245 -0)
  13. pkg/cloud/awsprovider.go (+14 -9)
  14. pkg/cloud/azureprovider.go (+149 -3)
  15. pkg/cloud/customprovider.go (+7 -1)
  16. pkg/cloud/gcpprovider.go (+5 -0)
  17. pkg/cloud/provider.go (+11 -0)
  18. pkg/cloud/scalewayprovider.go (+7 -1)
  19. pkg/costmodel/cluster.go (+13 -13)
  20. pkg/env/env.go (+6 -0)
  21. pkg/filter/allcut.go (+10 -0)
  22. pkg/filter/allpass.go (+9 -0)
  23. pkg/filter/and.go (+36 -0)
  24. pkg/filter/filter.go (+20 -0)
  25. pkg/filter/filter_test.go (+1073 -0)
  26. pkg/filter/not.go (+17 -0)
  27. pkg/filter/or.go (+36 -0)
  28. pkg/filter/stringmapproperty.go (+83 -0)
  29. pkg/filter/stringproperty.go (+83 -0)
  30. pkg/filter/stringsliceproperty.go (+80 -0)
  31. pkg/filter/util/cloudcostaggregate.go (+70 -0)
  32. pkg/filter/window.go (+40 -0)
  33. pkg/filter/window_test.go (+112 -0)
  34. pkg/kubecost/allocation.go (+82 -1)
  35. pkg/kubecost/asset.go (+54 -19)
  36. pkg/kubecost/asset_test.go (+5 -5)
  37. pkg/kubecost/assetprops.go (+28 -0)
  38. pkg/kubecost/bingen.go (+19 -1)
  39. pkg/kubecost/cloudcostaggregate.go (+422 -0)
  40. pkg/kubecost/cloudcostitem.go (+321 -0)
  41. pkg/kubecost/coverage.go (+118 -0)
  42. pkg/kubecost/diff_test.go (+6 -6)
  43. pkg/kubecost/kubecost_codecs.go (+2108 -125)
  44. pkg/kubecost/query.go (+2 -0)
  45. pkg/kubecost/summaryallocation.go (+24 -19)
  46. pkg/kubecost/summaryallocation_test.go (+65 -5)
  47. pkg/kubecost/window.go (+116 -4)
  48. pkg/kubecost/window_test.go (+300 -0)
  49. pkg/storage/gcsstorage.go (+4 -3)
  50. pkg/util/mathutil/mathutil.go (+15 -0)
  51. pkg/util/timeutil/timeutil.go (+3 -0)
  52. ui/Dockerfile (+1 -1)
  53. ui/README.md (+3 -3)
  54. ui/src/components/Header.js (+2 -1)
  55. ui/src/images/logo.png (BIN)
  56. ui/src/opencost-ui.png (BIN)

+ 1 - 0
.gitignore

@@ -6,3 +6,4 @@ ui/.cache
 ui/dist
 ui/node_modules/
 cmd/costmodel/costmodel
+pkg/cloud/azureorphan_test.go

+ 6 - 6
CODE_OF_CONDUCT.md

@@ -1,7 +1,7 @@
 # Contributor Code of Conduct
 
-As contributors and maintainers in the Kubecost community, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities.
-We are committed to making participation in the Kubecost community a harassment-free experience for everyone.
+As contributors and maintainers in the OpenCost community, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities.
+We are committed to making participation in the OpenCost community a harassment-free experience for everyone.
 
 # Scope
 
@@ -32,13 +32,13 @@ Project maintainers who do not follow or enforce the Code of Conduct may be perm
 
 # Reporting
 
-For incidents occurring in the Kubecost community, contact the Kubecost Code of Conduct Committee via conduct@kubecost.com. You can expect a response within two business days.
-For other projects, please contact the Kubecost staff via conduct@kubecost.com. You can expect a response within three business days.
+For incidents occurring in the OpenCost community, contact the OpenCost Code of Conduct Committee via conduct@kubecost.com. You can expect a response within two business days.
+For other projects, please contact the OpenCost staff via conduct@kubecost.com. You can expect a response within three business days.
 
 # Enforcement
 
-The Kubecost project's Code of Conduct Committee enforces code of conduct issues. For all other projects, the Kubecost enforces code of conduct issues.
-Both bodies try to resolve incidents without punishment, but may remove people from the project or Kubecost communities at their discretion.
+The OpenCost project's Code of Conduct Committee enforces code of conduct issues. For all other projects, the OpenCost enforces code of conduct issues.
+Both bodies try to resolve incidents without punishment, but may remove people from the project or OpenCost communities at their discretion.
 
 # Acknowledgements
 This Code of Conduct is adapted from the Contributor Covenant (http://contributor-covenant.org), version 2.0 available at http://contributor-covenant.org/version/2/0/code_of_conduct/

+ 1 - 0
CONTRIBUTING.md

@@ -7,6 +7,7 @@ Thanks for your help improving the OpenCost project! There are many ways to cont
 * joining the discussion in the [CNCF Slack](https://slack.cncf.io/) in the [#opencost](https://cloud-native.slack.com/archives/C03D56FPD4G) channel
 * participating in the fortnightly [OpenCost Working Group](https://calendar.google.com/calendar/u/0/embed?src=c_c0f7q56e5eeod3j89bb320fvjg@group.calendar.google.com&ctz=America/Los_Angeles) meetings ([notes here](https://drive.google.com/drive/folders/1hXlcyFPePB7t3z6lyVzdxmdfrbzeT1Jz))
 * committing software via the workflow below
+* keep up with community events using our [Calendar](https://calendar.google.com/calendar/u/0/embed?src=c_c0f7q56e5eeod3j89bb320fvjg@group.calendar.google.com&ctz=America/Los_Angeles)
 
 ## Getting Help
 

+ 5 - 5
README.md

@@ -6,13 +6,13 @@ OpenCost models give teams visibility into current and historical Kubernetes spe
 
 OpenCost was originally developed and open sourced by [Kubecost](https://kubecost.com). This project combines a [specification](/spec/) as well as a Golang implementation of these detailed requirements.
 
-![OpenCost allocation UI](/allocation-drilldown.gif)
+![OpenCost allocation UI](./ui/src/opencost-ui.png)
 
 To see the full functionality of OpenCost you can view [OpenCost features](https://opencost.io). Here is a summary of features enabled:
 
-- Real-time cost allocation by Kubernetes service, deployment, namespace, label, statefulset, daemonset, pod, and container
-- Dynamic asset pricing enabled by integrations with AWS, Azure, and GCP billing APIs
-- Supports on-prem k8s clusters with custom pricing sheets
+- Real-time cost allocation by Kubernetes cluster, node, namespace, controller kind, controller, service, or pod
+- Dynamic onDemand asset pricing enabled by integrations with AWS, Azure, and GCP billing APIs
+- Supports on-prem k8s clusters with custom CSV pricing
 - Allocation for in-cluster resources like CPU, GPU, memory, and persistent volumes.
 - Easily export pricing data to Prometheus with /metrics endpoint ([learn more](PROMETHEUS.md))
 - Free and open source distribution (Apache2 license)
@@ -21,7 +21,7 @@ To see the full functionality of OpenCost you can view [OpenCost features](https
 
 You can deploy OpenCost on any Kubernetes 1.8+ cluster in a matter of minutes, if not seconds!
 
-Visit the full documentation for [recommended install options](https://www.opencost.io/docs/install). Compared to building from source, installing from Helm is faster and includes all necessary dependencies.
+Visit the full documentation for [recommended install options](https://www.opencost.io/docs/install).
 
 ## Usage
 

BIN
allocation-dashboard.png


BIN
allocation-drilldown.gif


+ 1 - 1
docs/README.md

@@ -1 +1 @@
-<https://www.opencost.io/docs/>
+The docs are available at <https://www.opencost.io/docs/> and the source is at <https://github.com/opencost/opencost-website/>

+ 7 - 7
go.mod

@@ -12,7 +12,8 @@ require (
 	github.com/Azure/go-autorest/autorest v0.11.27
 	github.com/Azure/go-autorest/autorest/adal v0.9.18
 	github.com/Azure/go-autorest/autorest/azure/auth v0.5.11
-	github.com/aws/aws-sdk-go v1.28.9
+	github.com/aliyun/alibaba-cloud-sdk-go v1.62.3
+	github.com/aws/aws-sdk-go v1.44.153
 	github.com/aws/aws-sdk-go-v2 v1.13.0
 	github.com/aws/aws-sdk-go-v2/config v1.13.1
 	github.com/aws/aws-sdk-go-v2/credentials v1.8.0
@@ -64,7 +65,6 @@ require (
 	github.com/Azure/go-autorest/autorest/validation v0.3.1 // indirect
 	github.com/Azure/go-autorest/logger v0.2.1 // indirect
 	github.com/Azure/go-autorest/tracing v0.6.0 // indirect
-	github.com/aliyun/alibaba-cloud-sdk-go v1.62.3 // indirect
 	github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.2.0 // indirect
 	github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.10.0 // indirect
 	github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.4 // indirect
@@ -84,7 +84,7 @@ require (
 	github.com/go-logr/logr v0.2.0 // indirect
 	github.com/gofrs/uuid v4.2.0+incompatible // indirect
 	github.com/gogo/protobuf v1.3.2 // indirect
-	github.com/golang-jwt/jwt/v4 v4.4.1 // indirect
+	github.com/golang-jwt/jwt/v4 v4.4.2 // indirect
 	github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e // indirect
 	github.com/golang/protobuf v1.5.2 // indirect
 	github.com/google/go-cmp v0.5.6 // indirect
@@ -122,12 +122,12 @@ require (
 	github.com/spf13/pflag v1.0.5 // indirect
 	github.com/subosito/gotenv v1.2.0 // indirect
 	go.opencensus.io v0.23.0 // indirect
-	golang.org/x/crypto v0.0.0-20220411220226-7b82a4e95df4 // indirect
+	golang.org/x/crypto v0.3.0 // indirect
 	golang.org/x/lint v0.0.0-20210508222113-6edffad5e616 // indirect
 	golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4 // indirect
-	golang.org/x/net v0.1.0 // indirect
-	golang.org/x/sys v0.1.0 // indirect
-	golang.org/x/term v0.1.0 // indirect
+	golang.org/x/net v0.2.0 // indirect
+	golang.org/x/sys v0.2.0 // indirect
+	golang.org/x/term v0.2.0 // indirect
 	golang.org/x/time v0.0.0-20200630173020-3af7569d3a1e // indirect
 	golang.org/x/tools v0.1.12 // indirect
 	google.golang.org/appengine v1.6.7 // indirect

+ 21 - 14
go.sum

@@ -101,8 +101,8 @@ github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5
 github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY=
 github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY=
 github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
 github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
 github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
 github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
-github.com/aws/aws-sdk-go v1.28.9 h1:grIuBQc+p3dTRXerh5+2OxSuWFi0iXuxbFdTSg0jaW0=
-github.com/aws/aws-sdk-go v1.28.9/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
+github.com/aws/aws-sdk-go v1.44.153 h1:KfN5URb9O/Fk48xHrAinrPV2DzPcLa0cd9yo1ax5KGg=
+github.com/aws/aws-sdk-go v1.44.153/go.mod h1:aVsgQcEevwlmQ7qHE9I3h+dtQgpqhFB+i8Phjh7fkwI=
 github.com/aws/aws-sdk-go-v2 v1.13.0 h1:1XIXAfxsEmbhbj5ry3D3vX+6ZcUYvIqSm4CWWEuGZCA=
 github.com/aws/aws-sdk-go-v2 v1.13.0 h1:1XIXAfxsEmbhbj5ry3D3vX+6ZcUYvIqSm4CWWEuGZCA=
 github.com/aws/aws-sdk-go-v2 v1.13.0/go.mod h1:L6+ZpqHaLbAaxsqV0L4cvxZY7QupWJB4fhkf8LXvC7w=
 github.com/aws/aws-sdk-go-v2 v1.13.0/go.mod h1:L6+ZpqHaLbAaxsqV0L4cvxZY7QupWJB4fhkf8LXvC7w=
 github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.2.0 h1:scBthy70MB3m4LCMFaBcmYCyR2XWOz6MxSfdSu/+fQo=
 github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.2.0 h1:scBthy70MB3m4LCMFaBcmYCyR2XWOz6MxSfdSu/+fQo=
@@ -246,8 +246,8 @@ github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69
 github.com/goji/httpauth v0.0.0-20160601135302-2da839ab0f4d/go.mod h1:nnjvkQ9ptGaCkuDUx6wNykzzlUixGxvkme+H/lnzb+A=
 github.com/goji/httpauth v0.0.0-20160601135302-2da839ab0f4d/go.mod h1:nnjvkQ9ptGaCkuDUx6wNykzzlUixGxvkme+H/lnzb+A=
 github.com/golang-jwt/jwt/v4 v4.0.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg=
 github.com/golang-jwt/jwt/v4 v4.0.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg=
 github.com/golang-jwt/jwt/v4 v4.2.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg=
 github.com/golang-jwt/jwt/v4 v4.2.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg=
-github.com/golang-jwt/jwt/v4 v4.4.1 h1:pC5DB52sCeK48Wlb9oPcdhnjkz1TKt1D/P7WKJ0kUcQ=
-github.com/golang-jwt/jwt/v4 v4.4.1/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
+github.com/golang-jwt/jwt/v4 v4.4.2 h1:rcc4lwaZgFMCZ5jxF9ABolDcIHdBytAFgqFPbSJQAYs=
+github.com/golang-jwt/jwt/v4 v4.4.2/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
 github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
 github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
 github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
 github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
 github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
 github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
@@ -326,7 +326,6 @@ github.com/googleapis/gax-go/v2 v2.0.5 h1:sjZBwGj9Jlw33ImPtvFviGYvseOtDM7hkSKB7+
 github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
 github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
 github.com/googleapis/gnostic v0.4.1 h1:DLJCy1n/vrD4HPjOvYcT8aYQXpPIzoRZONaYwyycI+I=
 github.com/googleapis/gnostic v0.4.1 h1:DLJCy1n/vrD4HPjOvYcT8aYQXpPIzoRZONaYwyycI+I=
 github.com/googleapis/gnostic v0.4.1/go.mod h1:LRhVm6pbyptWbWbuZ38d1eyptfvIytN3ir6b65WBswg=
 github.com/googleapis/gnostic v0.4.1/go.mod h1:LRhVm6pbyptWbWbuZ38d1eyptfvIytN3ir6b65WBswg=
-github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 h1:EGx4pi6eqNxGaHF6qqu48+N2wcFQ5qg5FXgOdqsJ5d8=
 github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
 github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
 github.com/gorilla/css v1.0.0 h1:BQqNyPTi50JCFMTw/b67hByjMVXZRwGha6wxVGkeihY=
 github.com/gorilla/css v1.0.0 h1:BQqNyPTi50JCFMTw/b67hByjMVXZRwGha6wxVGkeihY=
 github.com/gorilla/css v1.0.0/go.mod h1:Dn721qIggHpt4+EFCcTLTU/vk5ySda2ReITrtgBl60c=
 github.com/gorilla/css v1.0.0/go.mod h1:Dn721qIggHpt4+EFCcTLTU/vk5ySda2ReITrtgBl60c=
@@ -388,7 +387,6 @@ github.com/jstemmer/go-junit-report v0.9.1 h1:6QPYqodiu3GuPL+7mfx+NwDdp2eTkp9IfE
 github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
 github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
 github.com/jszwec/csvutil v1.2.1 h1:9+vmGqMdYxIbeDmVbTrVryibx2izwHAfKdPwl4GPNHM=
 github.com/jszwec/csvutil v1.2.1 h1:9+vmGqMdYxIbeDmVbTrVryibx2izwHAfKdPwl4GPNHM=
 github.com/jszwec/csvutil v1.2.1/go.mod h1:8YHz6C3KVdIeCxLMvwbbIVDCTA/Wi2df93AZlQNaE2U=
 github.com/jszwec/csvutil v1.2.1/go.mod h1:8YHz6C3KVdIeCxLMvwbbIVDCTA/Wi2df93AZlQNaE2U=
-github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
 github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
 github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
 github.com/juju/errors v0.0.0-20181118221551-089d3ea4e4d5/go.mod h1:W54LbzXuIE0boCoNJfwqpmkKJ1O4TCTZMetAt6jGk7Q=
 github.com/juju/errors v0.0.0-20181118221551-089d3ea4e4d5/go.mod h1:W54LbzXuIE0boCoNJfwqpmkKJ1O4TCTZMetAt6jGk7Q=
 github.com/juju/loggo v0.0.0-20180524022052-584905176618/go.mod h1:vgyd7OREkbtVEN/8IXZe5Ooef3LQePvuBm9UWj6ZL8U=
 github.com/juju/loggo v0.0.0-20180524022052-584905176618/go.mod h1:vgyd7OREkbtVEN/8IXZe5Ooef3LQePvuBm9UWj6ZL8U=
@@ -558,9 +556,7 @@ github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6Mwd
 github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
 github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
 github.com/sirupsen/logrus v1.8.1 h1:dJKuHgqk1NNQlqoA6BTlM1Wf9DOH3NBjQyu0h9+AZZE=
 github.com/sirupsen/logrus v1.8.1 h1:dJKuHgqk1NNQlqoA6BTlM1Wf9DOH3NBjQyu0h9+AZZE=
 github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
 github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
-github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d h1:zE9ykElWQ6/NYmHa3jpm/yHnI4xSofP+UP6SpjHcSeM=
 github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
 github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
-github.com/smartystreets/goconvey v1.6.4 h1:fv0U8FUIMPNf1L9lnHLvLhgicrIVChEkdzIKYqbNC9s=
 github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
 github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
 github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
 github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
 github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
 github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
@@ -594,7 +590,9 @@ github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5Cc
 github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
 github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
 github.com/subosito/gotenv v1.2.0 h1:Slr1R9HxAlEKefgq5jn9U+DnETlIUa6HfgEzj0g5d7s=
 github.com/subosito/gotenv v1.2.0 h1:Slr1R9HxAlEKefgq5jn9U+DnETlIUa6HfgEzj0g5d7s=
 github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
 github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
+github.com/uber/jaeger-client-go v2.30.0+incompatible h1:D6wyKGCecFaSRUpo8lCVbaOOb6ThwMmTEbhRwtKR97o=
 github.com/uber/jaeger-client-go v2.30.0+incompatible/go.mod h1:WVhlPFC8FDjOFMMWRy2pZqQJSXxYSwNYOkTr/Z6d3Kk=
 github.com/uber/jaeger-client-go v2.30.0+incompatible/go.mod h1:WVhlPFC8FDjOFMMWRy2pZqQJSXxYSwNYOkTr/Z6d3Kk=
+github.com/uber/jaeger-lib v2.4.1+incompatible h1:td4jdvLcExb4cBISKIpHuGoVXh+dVKhn2Um6rjCsSsg=
 github.com/uber/jaeger-lib v2.4.1+incompatible/go.mod h1:ComeNDZlWwrWnDv8aPp0Ba6+uUTzImX/AauajbLI56U=
 github.com/uber/jaeger-lib v2.4.1+incompatible/go.mod h1:ComeNDZlWwrWnDv8aPp0Ba6+uUTzImX/AauajbLI56U=
 github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc=
 github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc=
 github.com/ugorji/go v1.1.7/go.mod h1:kZn38zHttfInRq0xu/PH0az30d+z6vm202qpg1oXVMw=
 github.com/ugorji/go v1.1.7/go.mod h1:kZn38zHttfInRq0xu/PH0az30d+z6vm202qpg1oXVMw=
@@ -619,6 +617,7 @@ github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9de
 github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
 github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
 github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
 github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
 github.com/yuin/goldmark v1.4.0/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
 github.com/yuin/goldmark v1.4.0/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
+github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
 go.etcd.io/bbolt v1.3.5 h1:XAzx9gjCb0Rxj7EoqcClPD1d5ZBxZJk0jbuoPHenBt0=
 go.etcd.io/bbolt v1.3.5 h1:XAzx9gjCb0Rxj7EoqcClPD1d5ZBxZJk0jbuoPHenBt0=
 go.etcd.io/bbolt v1.3.5/go.mod h1:G5EMThwa9y8QZGBClrRx5EY+Yw9kAhnjy3bSjsnlVTQ=
 go.etcd.io/bbolt v1.3.5/go.mod h1:G5EMThwa9y8QZGBClrRx5EY+Yw9kAhnjy3bSjsnlVTQ=
 go.etcd.io/etcd/api/v3 v3.5.0/go.mod h1:cbVKeC6lCfl7j/8jBhAK6aIYO9XOjdptoxU/nLQcPvs=
 go.etcd.io/etcd/api/v3 v3.5.0/go.mod h1:cbVKeC6lCfl7j/8jBhAK6aIYO9XOjdptoxU/nLQcPvs=
@@ -633,6 +632,7 @@ go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk=
 go.opencensus.io v0.23.0 h1:gqCw0LfLxScz8irSi8exQc7fyQ0fKQU/qnC/X8+V/1M=
 go.opencensus.io v0.23.0 h1:gqCw0LfLxScz8irSi8exQc7fyQ0fKQU/qnC/X8+V/1M=
 go.opencensus.io v0.23.0/go.mod h1:XItmlyltB5F7CS4xOC1DcqMoFqwtC6OG2xF7mCv7P7E=
 go.opencensus.io v0.23.0/go.mod h1:XItmlyltB5F7CS4xOC1DcqMoFqwtC6OG2xF7mCv7P7E=
 go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
 go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
+go.uber.org/atomic v1.9.0 h1:ECmE8Bn/WFTYwEW/bpKD3M8VtR/zQVbavAoalC1PYyE=
 go.uber.org/atomic v1.9.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
 go.uber.org/atomic v1.9.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
 go.uber.org/multierr v1.6.0/go.mod h1:cdWPpRnG4AhwMwsgIHip0KRBQjJy5kYEpYjJxpXp9iU=
 go.uber.org/multierr v1.6.0/go.mod h1:cdWPpRnG4AhwMwsgIHip0KRBQjJy5kYEpYjJxpXp9iU=
 go.uber.org/zap v1.17.0/go.mod h1:MXVU+bhUf/A7Xi2HNOnopQOrmycQ5Ih87HtOu4q5SSo=
 go.uber.org/zap v1.17.0/go.mod h1:MXVU+bhUf/A7Xi2HNOnopQOrmycQ5Ih87HtOu4q5SSo=
@@ -653,8 +653,8 @@ golang.org/x/crypto v0.0.0-20201216223049-8b5274cf687f/go.mod h1:jdWPYTVW3xRLrWP
 golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
 golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
 golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
 golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
 golang.org/x/crypto v0.0.0-20211215165025-cf75a172585e/go.mod h1:P+XmwS30IXTQdn5tA2iutPOUgjI07+tq3H3K9MVA1s8=
 golang.org/x/crypto v0.0.0-20211215165025-cf75a172585e/go.mod h1:P+XmwS30IXTQdn5tA2iutPOUgjI07+tq3H3K9MVA1s8=
-golang.org/x/crypto v0.0.0-20220411220226-7b82a4e95df4 h1:kUhD7nTDoI3fVd9G4ORWrbV5NY0liEs/Jg2pv5f+bBA=
-golang.org/x/crypto v0.0.0-20220411220226-7b82a4e95df4/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
+golang.org/x/crypto v0.3.0 h1:a06MkbcxBrEFc0w0QIZWXrH/9cCX6KJyWbBOIwAn+7A=
+golang.org/x/crypto v0.3.0/go.mod h1:hebNnKkNXi2UzZN1eVRvBB7co0a+JxK6XbPiWVs/3J4=
 golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
 golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
 golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
 golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
 golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
 golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -743,8 +743,10 @@ golang.org/x/net v0.0.0-20210610132358-84b48f89b13b/go.mod h1:9nx3DQGgdP8bBQD5qx
 golang.org/x/net v0.0.0-20210614182718-04defd469f4e/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
 golang.org/x/net v0.0.0-20210614182718-04defd469f4e/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
 golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
 golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
 golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
 golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
-golang.org/x/net v0.1.0 h1:hZ/3BUoy5aId7sCpA/Tc5lt8DkFgdVS2onTpJsZ/fl0=
+golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
 golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco=
 golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco=
+golang.org/x/net v0.2.0 h1:sZfSu1wtKLGlWI4ZZayP0ck9Y73K1ynO6gqzTdBVdPU=
+golang.org/x/net v0.2.0/go.mod h1:KqCZLdyyvdV855qA2rE3GC2aiw5xGR5TEjj8smXukLY=
 golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
 golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
 golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
 golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
 golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
 golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -834,12 +836,17 @@ golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBc
 golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20220114195835-da31bd327af9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20220114195835-da31bd327af9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.1.0 h1:kunALQeHf1/185U1i0GOB/fy1IPRDDpuoOOqRReG57U=
+golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.2.0 h1:ljd4t30dBnAvMZaQCevtY0xLLD0A+bRZXbgLMLU1F/A=
+golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
 golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
 golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
 golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
-golang.org/x/term v0.1.0 h1:g6Z6vPFA9dYBAF7DWcH6sCcOntplXsDKcliusYijMlw=
+golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
 golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
 golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
+golang.org/x/term v0.2.0 h1:z85xZCsEl7bi/KwbNADeBYoOP0++7W1ipu+aGnpwzRM=
+golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc=
 golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
 golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
 golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
 golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
 golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
 golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -848,6 +855,7 @@ golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
 golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
 golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
 golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
+golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
 golang.org/x/text v0.4.0 h1:BrVqGRd7+k1DiOgtnFvAkoQEWQvBc25ouMJM6429SFg=
 golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
 golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
@@ -1038,7 +1046,6 @@ gopkg.in/go-playground/validator.v8 v8.18.2/go.mod h1:RX2a/7Ha8BgOhfk7j780h4/u/R
 gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
 gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
 gopkg.in/ini.v1 v1.57.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
-gopkg.in/ini.v1 v1.62.0 h1:duBzk771uxoUuOlyRLkHsygud9+5lrlGjdFBb4mSKDU=
 gopkg.in/ini.v1 v1.62.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
 gopkg.in/ini.v1 v1.66.2/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
 gopkg.in/ini.v1 v1.67.0 h1:Dgnx+6+nfE+IfzjUEISNeydPJh9AXNNsWbGP9KzCsOA=

+ 13 - 0
kubernetes/opencost.yaml

@@ -157,6 +157,16 @@ spec:
             - name: CLUSTER_ID
               value: "cluster-one" # Default cluster ID to use if cluster_id is not set in Prometheus metrics.
           imagePullPolicy: Always
+        - image: quay.io/kubecost1/opencost-ui:latest
+          name: opencost-ui
+          resources:
+            requests:
+              cpu: "10m"
+              memory: "55M"
+            limits:
+              cpu: "999m"
+              memory: "1G"
+          imagePullPolicy: Always
 ---

 # Expose the cost model with a service
@@ -176,4 +186,7 @@ spec:
     - name: opencost
       port: 9003
       targetPort: 9003
+    - name: opencost-ui
+      port: 9090
+      targetPort: 9090
 ---

+ 317 - 69
pkg/cloud/aliyunprovider.go

@@ -1,6 +1,7 @@
 package cloud

 import (
+	"errors"
 	"fmt"
 	"fmt"
 	"io"
 	"io"
 	"io/ioutil"
 	"io/ioutil"
@@ -42,7 +43,6 @@ const (
 	ALIBABA_YEAR_PRICE_UNIT                    = "Year"
 	ALIBABA_UNKNOWN_INSTANCE_FAMILY_TYPE       = "unknown"
 	ALIBABA_NOT_SUPPORTED_INSTANCE_FAMILY_TYPE = "unsupported"
-	ALIBABA_ENHANCED_GENERAL_PURPOSE_TYPE      = "g6e"
 	ALIBABA_DISK_CLOUD_ESSD_CATEGORY           = "cloud_essd"
 	ALIBABA_DISK_CLOUD_CATEGORY                = "cloud"
 	ALIBABA_DATA_DISK_CATEGORY                 = "data"
@@ -61,6 +61,9 @@ var (
 	sizeRegEx = regexp.MustCompile("(.*?)Gi")
 )

+// Variable to keep track of instance families that fail in the DescribePrice API due to improper defaulting of systemDisk when the information is not available
+var alibabaDefaultToCloudEssd = []string{"g6e", "r6e", "r7", "g7", "g7a", "r7a"}
+
 // Why predefined and dependency on code? Can be converted to API call - https://www.alibabacloud.com/help/en/elastic-compute-service/latest/regions-describeregions
 var alibabaRegions = []string{
 	"cn-qingdao",
@@ -92,13 +95,27 @@ var alibabaRegions = []string{
 }

 // To-Do: Convert to API call - https://www.alibabacloud.com/help/en/elastic-compute-service/latest/describeinstancetypefamilies
-// Also first pass only completely tested pricing API for General pupose instances families.
+// Also, the first pass only completely tested the pricing API for general purpose and memory optimized instance families
 var alibabaInstanceFamilies = []string{
+	"g7",
+	"g7a",
 	"g6e",
 	"g6e",
 	"g6",
 	"g6",
 	"g5",
 	"g5",
 	"sn2",
 	"sn2",
 	"sn2ne",
 	"sn2ne",
+	"r7",
+	"r7a",
+	"r6e",
+	"r6a",
+	"r6",
+	"r5",
+	"se1",
+	"se1ne",
+	"re6",
+	"re6p",
+	"re4",
+	"se1",
 }

 // AlibabaAccessKey holds Alibaba credentials parsed from the service-key.json file.
@@ -107,7 +124,7 @@ type AlibabaAccessKey struct {
 	SecretAccessKey string `json:"alibaba_secret_access_key"`
 }

-// Slim Version of k8s disk assigned to a node or PV, To be used if price adjustment need to happen with local disk information passed to describePrice.
+// Slim Version of k8s disk assigned to a node or PV.
 type SlimK8sDisk struct {
 	DiskType         string
 	RegionID         string
@@ -141,10 +158,11 @@ type SlimK8sNode struct {
 	IsIoOptimized      bool
 	OSType             string
 	ProviderID         string
-	InstanceTypeFamily string // Bug in DescribePrice, doesn't default to enhanced type correct and you get an error in DescribePrice to get around need the family of the InstanceType.
+	SystemDisk         *SlimK8sDisk
+	InstanceTypeFamily string // Bug in DescribePrice: it doesn't default to the enhanced type correctly and you get an error; to get around this you need the family of the InstanceType.
 }

-func NewSlimK8sNode(instanceType, regionID, priceUnit, memorySizeInKiB, osType, providerID, instanceTypeFamily string, isIOOptimized bool) *SlimK8sNode {
+func NewSlimK8sNode(instanceType, regionID, priceUnit, memorySizeInKiB, osType, providerID, instanceTypeFamily string, isIOOptimized bool, systemDiskInfo *SlimK8sDisk) *SlimK8sNode {
 	return &SlimK8sNode{
 		InstanceType:       instanceType,
 		RegionID:           regionID,
@@ -152,12 +170,13 @@ func NewSlimK8sNode(instanceType, regionID, priceUnit, memorySizeInKiB, osType,
 		MemorySizeInKiB:    memorySizeInKiB,
 		IsIoOptimized:      isIOOptimized,
 		OSType:             osType,
+		SystemDisk:         systemDiskInfo,
 		ProviderID:         providerID,
 		InstanceTypeFamily: instanceTypeFamily,
 	}
 }

-// AlibabaNodeAttributes represents metadata about the product pricing information used to map to a node.
+// AlibabaNodeAttributes represents metadata about the Node in its pricing information.
 // Basic Attributes needed at least to get the key; some attributes from the k8s Node response
 // are populated directly into *Node object.
 type AlibabaNodeAttributes struct {
@@ -169,19 +188,37 @@ type AlibabaNodeAttributes struct {
 	IsIoOptimized bool `json:"isIoOptimized"`
 	// OSType represents the OS installed in the Instance.
 	OSType string `json:"osType"`
+	// SystemDiskCategory represents the exact category of the system disk attached to the node.
+	SystemDiskCategory string `json:"systemDiskCategory"`
+	// SystemDiskSizeInGiB represents the size of the system disk attached to the node.
+	SystemDiskSizeInGiB string `json:"systemDiskSizeInGiB"`
+	// SystemDiskPerformanceLevel represents the performance level of the system disk attached to the node.
+	SystemDiskPerformanceLevel string `json:"systemPerformanceLevel"`
 }

 func NewAlibabaNodeAttributes(node *SlimK8sNode) *AlibabaNodeAttributes {
+	if node == nil {
+		return nil
+	}
+	var diskCategory, sizeInGiB, performanceLevel string
+	if node.SystemDisk != nil {
+		diskCategory = node.SystemDisk.DiskCategory
+		sizeInGiB = node.SystemDisk.SizeInGiB
+		performanceLevel = node.SystemDisk.PerformanceLevel
+	}
 	return &AlibabaNodeAttributes{
-		InstanceType:    node.InstanceType,
-		MemorySizeInKiB: node.MemorySizeInKiB,
-		IsIoOptimized:   node.IsIoOptimized,
-		OSType:          node.OSType,
+		InstanceType:               node.InstanceType,
+		MemorySizeInKiB:            node.MemorySizeInKiB,
+		IsIoOptimized:              node.IsIoOptimized,
+		OSType:                     node.OSType,
+		SystemDiskCategory:         diskCategory,
+		SystemDiskSizeInGiB:        sizeInGiB,
+		SystemDiskPerformanceLevel: performanceLevel,
 	}
 	}
 }
 
-// AlibabaPVAttributes represents metadata about the product pricing information used to map to a PV.
-// Basic Attributes needed atleast to get the keys.Some attributes from k8s PV response
+// AlibabaPVAttributes represents metadata about the PV in its pricing information.
+// Basic Attributes needed at least to get the keys. Some attributes from the k8s PV response
 // are populated directly into *PV object.
 type AlibabaPVAttributes struct {
 	// PVType can be Cloud Disk, NetWork Attached Storage(NAS) or Object Storage Service (OSS).
@@ -201,6 +238,9 @@ type AlibabaPVAttributes struct {
 // TO-Do: next iteration of the Alibaba provider will support NetWork Attached Storage(NAS) and Object Storage Service (OSS) type PVs.
 // Currently defaulting to cloudDisk with provision to add work in future.
 func NewAlibabaPVAttributes(disk *SlimK8sDisk) *AlibabaPVAttributes {
+	if disk == nil {
+		return nil
+	}
 	return &AlibabaPVAttributes{
 		PVType:             ALIBABA_PV_CLOUD_DISK_TYPE,
 		PVSubType:          disk.DiskType,
@@ -211,9 +251,9 @@ func NewAlibabaPVAttributes(disk *SlimK8sDisk) *AlibabaPVAttributes {
 }

 // Stage 1 support will be Pay-As-You-Go with HourlyPrice equal to TradePrice with PriceUnit as Hour
-// TO-DO: Subscription and Premptible support, need to find how to distinguish node into these categories]
-// TO-DO: Open question Subscription would be either Monthly or Yearly, Firstly Data retrieval/population
-// TO-DO:  need to be tested from describe price API, but how would you calculate hourly price, is it PRICE_YEARLY/HOURS_IN_THE_YEAR?
+// TO-DO: Subscription and Preemptible support; information can be gathered by describing the instance for the subscription type,
+// and the spot price can be gathered from the DescribeSpotPriceHistory API.
+// TO-DO: how would you calculate the hourly price for the subscription type, is it PRICE_YEARLY/HOURS_IN_THE_YEAR|MONTH?
 type AlibabaPricingDetails struct {
 type AlibabaPricingDetails struct {
 	// Represents hourly price for the given Alibaba cloud Product.
 	HourlyPrice float32 `json:"hourlyPrice"`
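The TO-DO above leaves open how a yearly subscription price would map to an hourly rate. Purely as an illustration of the simplest flat-amortization answer — the divisor and function name here are my own, not from the OpenCost code, and the real conversion Alibaba billing uses may differ:

```go
package main

import "fmt"

// hoursInYear is the flat-rate divisor used in this sketch (365 days).
const hoursInYear = 365 * 24

// hourlyFromYearly amortizes a yearly subscription price into an hourly
// rate — one plausible answer to the open TO-DO, not a confirmed formula.
func hourlyFromYearly(yearlyPrice float64) float64 {
	return yearlyPrice / hoursInYear
}

func main() {
	fmt.Printf("%.6f\n", hourlyFromYearly(876.0))
}
```

A monthly variant would divide by that month's hour count instead, which is why the comment hedges with `HOURS_IN_THE_YEAR|MONTH`.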
@@ -267,21 +307,18 @@ type Alibaba struct {
 	Config                  *ProviderConfig
 	*CustomProvider

-	// TO-DO: These needs to be decided if either exported or unexported.
+	// The following fields are unexported to avoid leaking these secret keys.
+	// The Alibaba access key is used by the signer interface to sign API calls.
 	serviceAccountChecks *ServiceAccountChecks
 	clusterAccountId     string
 	clusterRegion        string
-
-	// The following fields are unexported because of avoiding any leak of secrets of these keys.
-	// Alibaba Access key used specifically in signer interface used to sign API calls
-	accessKey *credentials.AccessKeyCredential
+	accessKey            *credentials.AccessKeyCredential
 	// Map of regionID to sdk.client to call API for that region
 	clients map[string]*sdk.Client
 }

 // GetAlibabaAccessKey returns the Access Key used to interact with Alibaba Cloud; if not set, it
 // sets it first by looking at env variables, else loads it from secret files.
-// <IMPORTANT>Ask in PR what is the exact purpose of so many functions to set the key in AWS providers, am i missing something here!!!!!
 func (alibaba *Alibaba) GetAlibabaAccessKey() (*credentials.AccessKeyCredential, error) {
 	if alibaba.accessKeyisLoaded() {
 		return alibaba.accessKey, nil
@@ -292,7 +329,6 @@ func (alibaba *Alibaba) GetAlibabaAccessKey() (*credentials.AccessKeyCredential,
 		return nil, fmt.Errorf("error getting the default config for Alibaba Cloud provider: %w", err)
 	}

-	//Look for service key values in env if not present in config via helm chart once changes are done
 	if config.AlibabaServiceKeyName == "" {
 		config.AlibabaServiceKeyName = env.GetAlibabaAccessKeyID()
 	}
@@ -306,7 +342,6 @@ func (alibaba *Alibaba) GetAlibabaAccessKey() (*credentials.AccessKeyCredential,
 		if err != nil {
 			return nil, fmt.Errorf("unable to set the Alibaba Cloud key/secret from config file %w", err)
 		}
-		// set custom pricing keys too
 		config.AlibabaServiceKeyName = env.GetAlibabaAccessKeyID()
 		config.AlibabaServiceKeySecret = env.GetAlibabaAccessKeySecret()
 	}
@@ -320,7 +355,7 @@ func (alibaba *Alibaba) GetAlibabaAccessKey() (*credentials.AccessKeyCredential,
 	return alibaba.accessKey, nil
 }

-// DownloadPricingData satisfies the provider interface and downloads the price for node and PVs.
+// DownloadPricingData satisfies the provider interface and downloads the prices for Node instances and PVs.
 func (alibaba *Alibaba) DownloadPricingData() error {
 	alibaba.DownloadPricingDataLock.Lock()
 	defer alibaba.DownloadPricingDataLock.Unlock()
@@ -352,15 +387,9 @@ func (alibaba *Alibaba) DownloadPricingData() error {
 	alibaba.clients = make(map[string]*sdk.Client)
 	alibaba.Pricing = make(map[string]*AlibabaPricing)

-	// TO-DO: Add disk price adjustment by parsing the local disk information and putting it as a param in describe Price function.
 	for _, node := range nodeList {
 		pricingObj := &AlibabaPricing{}
 		slimK8sNode := generateSlimK8sNodeFromV1Node(node)
-		lookupKey, err = determineKeyForPricing(slimK8sNode)
-		if _, ok := alibaba.Pricing[lookupKey]; ok {
-			log.Debugf("Pricing information for node with same features %s already exists hence skipping", lookupKey)
-			continue
-		}

 		if client, ok = alibaba.clients[slimK8sNode.RegionID]; !ok {
 			client, err = sdk.NewClientWithAccessKey(slimK8sNode.RegionID, aak.AccessKeyId, aak.AccessKeySecret)
@@ -370,6 +399,18 @@ func (alibaba *Alibaba) DownloadPricingData() error {
 			alibaba.clients[slimK8sNode.RegionID] = client
 		}
 		signer = signers.NewAccessKeySigner(aak)
+
+		// Adjust the system disk information of a node by retrieving the details of the associated disk. If it cannot be retrieved,
+		// leave the system disk empty to pass through and use the defaults of the Alibaba pricing API.
+		instanceID := getInstanceIDFromProviderID(slimK8sNode.ProviderID)
+		slimK8sNode.SystemDisk = getSystemDiskInfoOfANode(instanceID, slimK8sNode.RegionID, client, signer)
+
+		lookupKey, err = determineKeyForPricing(slimK8sNode)
+		if _, ok := alibaba.Pricing[lookupKey]; ok {
+			log.Debugf("Pricing information for node with same features %s already exists hence skipping", lookupKey)
+			continue
+		}
+
 		pricingObj, err = processDescribePriceAndCreateAlibabaPricing(client, slimK8sNode, signer, c)

 		if err != nil {
@@ -440,8 +481,8 @@ func (alibaba *Alibaba) NodePricing(key Key) (*Node, error) {

 	pricing, ok := alibaba.Pricing[keyFeature]
 	if !ok {
-		log.Warnf("Node pricing information not found for node with feature: %s", keyFeature)
-		return &Node{}, nil
+		log.Errorf("Node pricing information not found for node with feature: %s", keyFeature)
+		return nil, fmt.Errorf("Node pricing information not found for node with feature: %s; letting it use default values", keyFeature)
 	}

 	log.Debugf("returning the node price for the node with feature: %s", keyFeature)
@@ -460,8 +501,8 @@ func (alibaba *Alibaba) PVPricing(pvk PVKey) (*PV, error) {
 	pricing, ok := alibaba.Pricing[keyFeature]

 	if !ok {
-		log.Warnf("Persistent Volume pricing not found for PV with feature: %s", keyFeature)
-		return &PV{}, nil
+		log.Errorf("Persistent Volume pricing not found for PV with feature: %s", keyFeature)
+		return nil, fmt.Errorf("Persistent Volume pricing not found for PV with feature: %s; letting it use default values", keyFeature)
 	}

 	log.Debugf("returning the PV price for the node with feature: %s", keyFeature)
@@ -547,7 +588,7 @@ func (alibaba *Alibaba) Regions() []string {
 	return alibabaRegions
 }

-// ClusterInfo returns information about Alibaba Cloud cluster, as provided by metadata. TO-DO: Look at this function closely at next PR iteration
+// ClusterInfo returns information about Alibaba Cloud cluster, as provided by metadata.
 func (alibaba *Alibaba) ClusterInfo() (map[string]string, error) {

 	c, err := alibaba.GetConfig()
@@ -584,14 +625,47 @@ func (alibaba *Alibaba) GetDisks() ([]byte, error) {
 	return nil, nil
 }

-// Will look at this in Next PR if needed
+func (alibaba *Alibaba) GetOrphanedResources() ([]OrphanedResource, error) {
+	return nil, errors.New("not implemented")
+}
+
 func (alibaba *Alibaba) UpdateConfig(r io.Reader, updateType string) (*CustomPricing, error) {
-	return nil, nil
+	return alibaba.Config.Update(func(c *CustomPricing) error {
+		if updateType != "" {
+			return fmt.Errorf("UpdateConfig for Alibaba Provider doesn't support updateType %s at this time", updateType)
+		} else {
+			a := make(map[string]interface{})
+			err := json.NewDecoder(r).Decode(&a)
+			if err != nil {
+				return err
+			}
+			for k, v := range a {
+				kUpper := strings.Title(k) // Just so we consistently supply / receive the same values, uppercase the first letter.
+				vstr, ok := v.(string)
+				if ok {
+					err := SetCustomPricingField(c, kUpper, vstr)
+					if err != nil {
+						return err
+					}
+				} else {
+					return fmt.Errorf("type error while updating config for %s", kUpper)
+				}
+			}
+		}
+
+		if env.IsRemoteEnabled() {
+			err := UpdateClusterMeta(env.GetClusterID(), c.ClusterName)
+			if err != nil {
+				return err
+			}
+		}
+		return nil
+	})
 }
 }

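The UpdateConfig body added above title-cases each incoming JSON key with strings.Title before calling SetCustomPricingField, so lookups against exported CustomPricing field names stay consistent. A minimal sketch of just that normalization step (strings.Title is deprecated in newer Go releases for Unicode-correct word splitting, but it still works for this first-letter case):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeKey mirrors the kUpper := strings.Title(k) step from the diff:
// the first letter of each word in the key is uppercased.
func normalizeKey(k string) string {
	return strings.Title(k)
}

func main() {
	fmt.Println(normalizeKey("spotLabel"))   // SpotLabel
	fmt.Println(normalizeKey("description")) // Description
}
```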
 func (alibaba *Alibaba) UpdateConfigFromConfigMap(cm map[string]string) (*CustomPricing, error) {
 func (alibaba *Alibaba) UpdateConfigFromConfigMap(cm map[string]string) (*CustomPricing, error) {
-	return nil, nil
+	return alibaba.Config.UpdateFromMap(cm)
 }

 // Will look at this in Next PR if needed
@@ -634,20 +708,33 @@ func (alibaba *Alibaba) accessKeyisLoaded() bool {
 }

 type AlibabaNodeKey struct {
-	ProviderID       string
-	RegionID         string
-	InstanceType     string
-	OSType           string
-	OptimizedKeyword string //If IsIoOptimized key will have optimize if not unoptimized the key for the node
+	ProviderID                 string
+	RegionID                   string
+	InstanceType               string
+	OSType                     string
+	OptimizedKeyword           string // If IsIoOptimized is true, the key uses the word "optimize"; otherwise it uses "nonoptimize"
+	SystemDiskCategory         string
+	SystemDiskSizeInGiB        string
+	SystemDiskPerformanceLevel string
 }

-func NewAlibabaNodeKey(node *SlimK8sNode, optimizedKeyword string) *AlibabaNodeKey {
+func NewAlibabaNodeKey(node *SlimK8sNode, optimizedKeyword, systemDiskCategory, systemDiskSizeInGiB, systemDiskPerfromanceLevel string) *AlibabaNodeKey {
+	var providerID, regionID, instanceType, osType string
+	if node != nil {
+		providerID = node.ProviderID
+		regionID = node.RegionID
+		instanceType = node.InstanceType
+		osType = node.OSType
+	}
 	return &AlibabaNodeKey{
-		ProviderID:       node.ProviderID,
-		RegionID:         node.RegionID,
-		InstanceType:     node.InstanceType,
-		OSType:           node.OSType,
-		OptimizedKeyword: optimizedKeyword,
+		ProviderID:                 providerID,
+		RegionID:                   regionID,
+		InstanceType:               instanceType,
+		OSType:                     osType,
+		OptimizedKeyword:           optimizedKeyword,
+		SystemDiskCategory:         systemDiskCategory,
+		SystemDiskSizeInGiB:        systemDiskSizeInGiB,
+		SystemDiskPerformanceLevel: systemDiskPerfromanceLevel,
 	}
 }

@@ -656,7 +743,8 @@ func (alibabaNodeKey *AlibabaNodeKey) ID() string {
 }

 func (alibabaNodeKey *AlibabaNodeKey) Features() string {
-	keyLookup := stringutil.DeleteEmptyStringsFromArray([]string{alibabaNodeKey.RegionID, alibabaNodeKey.InstanceType, alibabaNodeKey.OSType, alibabaNodeKey.OptimizedKeyword})
+	keyLookup := stringutil.DeleteEmptyStringsFromArray([]string{alibabaNodeKey.RegionID, alibabaNodeKey.InstanceType, alibabaNodeKey.OSType,
+		alibabaNodeKey.OptimizedKeyword, alibabaNodeKey.SystemDiskCategory, alibabaNodeKey.SystemDiskSizeInGiB, alibabaNodeKey.SystemDiskPerformanceLevel})
 	return strings.Join(keyLookup, "::")
 }

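Features() builds the pricing lookup key by dropping empty components and joining the rest with "::". A self-contained sketch of that construction, with an inline filter standing in for stringutil.DeleteEmptyStringsFromArray (which is not shown in this diff), and illustrative values for region, instance type, and OS:

```go
package main

import (
	"fmt"
	"strings"
)

// featureKey joins the non-empty parts with "::", mirroring how
// AlibabaNodeKey.Features() builds its pricing lookup key.
func featureKey(parts ...string) string {
	kept := make([]string, 0, len(parts))
	for _, p := range parts {
		if p != "" {
			kept = append(kept, p)
		}
	}
	return strings.Join(kept, "::")
}

func main() {
	// Empty system-disk fields simply drop out of the key, so nodes whose
	// disk lookup failed still map to a stable, shorter key.
	fmt.Println(featureKey("cn-hangzhou", "ecs.g6e.large", "linux", "optimize", "cloud_essd", "", ""))
}
```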
@@ -670,17 +758,58 @@ func (alibabaNodeKey *AlibabaNodeKey) GPUCount() int {

 // Gets the key for the k8s node input
 func (alibaba *Alibaba) GetKey(mapValue map[string]string, node *v1.Node) Key {
-	//Mostly parse the Node object and get the ProviderID, region, InstanceType, OSType and OptimizedKeyword(In if block)
-	// Currently just hardcoding a Node but eventually need to Node object
 	slimK8sNode := generateSlimK8sNodeFromV1Node(node)

+	var aak *credentials.AccessKeyCredential
+	var err error
+	var ok bool
+	var client *sdk.Client
+	var signer *signers.AccessKeySigner
+
 	optimizedKeyword := ""
 	if slimK8sNode.IsIoOptimized {
 		optimizedKeyword = ALIBABA_OPTIMIZE_KEYWORD
 	} else {
 		optimizedKeyword = ALIBABA_NON_OPTIMIZE_KEYWORD
 	}
-	return NewAlibabaNodeKey(slimK8sNode, optimizedKeyword)
+
+	var diskCategory, diskSizeInGiB, diskPerformanceLevel string
+
+	if !alibaba.accessKeyisLoaded() {
+		aak, err = alibaba.GetAlibabaAccessKey()
+		if err != nil {
+			log.Warnf("unable to set the signer for node with providerID %s to retrieve the key, skipping SystemDisk retrieval with err: %v", slimK8sNode.ProviderID, err)
+			return NewAlibabaNodeKey(slimK8sNode, optimizedKeyword, diskCategory, diskSizeInGiB, diskPerformanceLevel)
+		}
+	} else {
+		aak = alibaba.accessKey
+	}
+
+	signer = signers.NewAccessKeySigner(aak)
+
+	if aak == nil {
+		log.Warnf("unable to retrieve the Alibaba API keys for node with providerID %s, hence skipping SystemDisk retrieval", slimK8sNode.ProviderID)
+		return NewAlibabaNodeKey(slimK8sNode, optimizedKeyword, diskCategory, diskSizeInGiB, diskPerformanceLevel)
+	}
+
+	if client, ok = alibaba.clients[slimK8sNode.RegionID]; !ok {
+		client, err = sdk.NewClientWithAccessKey(slimK8sNode.RegionID, aak.AccessKeyId, aak.AccessKeySecret)
+		if err != nil {
+			log.Warnf("unable to set the client for node with providerID %s to retrieve the key, skipping SystemDisk retrieval with err: %v", slimK8sNode.ProviderID, err)
+			return NewAlibabaNodeKey(slimK8sNode, optimizedKeyword, diskCategory, diskSizeInGiB, diskPerformanceLevel)
+		}
+		alibaba.clients[slimK8sNode.RegionID] = client
+	}
+
+	instanceID := getInstanceIDFromProviderID(slimK8sNode.ProviderID)
+	slimK8sNode.SystemDisk = getSystemDiskInfoOfANode(instanceID, slimK8sNode.RegionID, client, signer)
+
+	if slimK8sNode.SystemDisk != nil {
+		diskCategory = slimK8sNode.SystemDisk.DiskCategory
+		diskSizeInGiB = slimK8sNode.SystemDisk.SizeInGiB
+		diskPerformanceLevel = slimK8sNode.SystemDisk.PerformanceLevel
+	}
+	return NewAlibabaNodeKey(slimK8sNode, optimizedKeyword, diskCategory, diskSizeInGiB, diskPerformanceLevel)
 }

 type AlibabaPVKey struct {
@@ -734,7 +863,6 @@ func (alibabaPVKey *AlibabaPVKey) GetStorageClass() string {
 // When supporting different new types of instances like Compute Optimized, Memory Optimized, etc., make sure you add the instance type
 // to the unit test and check whether it works to create the ack request and the processDescribePriceAndCreateAlibabaPricing function;
 // else more parameters need to be pulled from the kubernetes node response, or information gathered from elsewhere, and the function modified.
-// TO-DO: Add disk adjustments to the node , Test it out!
 func createDescribePriceACSRequest(i interface{}) (*requests.CommonRequest, error) {
 	request := requests.NewCommonRequest()
 	request.Method = requests.GET
@@ -750,10 +878,23 @@ func createDescribePriceACSRequest(i interface{}) (*requests.CommonRequest, erro
 		request.QueryParams["ResourceType"] = ALIBABA_INSTANCE_RESOURCE_TYPE
 		request.QueryParams["InstanceType"] = node.InstanceType
 		request.QueryParams["PriceUnit"] = node.PriceUnit
-		// For Enhanced General Purpose Type g6e SystemDisk.Category param doesn't default right,
-		// need it to be specifically assigned to "cloud_ssd" otherwise there's errors
-		if node.InstanceTypeFamily == ALIBABA_ENHANCED_GENERAL_PURPOSE_TYPE {
-			request.QueryParams["SystemDisk.Category"] = ALIBABA_DISK_CLOUD_ESSD_CATEGORY
+		if node.SystemDisk != nil {
+			// Only if the required information is present it should be overridden else default it via the API
+			if node.SystemDisk.DiskCategory != "" {
+				request.QueryParams["SystemDisk.Category"] = node.SystemDisk.DiskCategory
+			}
+			if node.SystemDisk.SizeInGiB != "" {
+				request.QueryParams["SystemDisk.Size"] = node.SystemDisk.SizeInGiB
+			}
+			if node.SystemDisk.PerformanceLevel != "" {
+				request.QueryParams["SystemDisk.PerformanceLevel"] = node.SystemDisk.PerformanceLevel
+			}
+		} else {
+			// When system disk information is not available for instance families such as g6e, r7, and r6e, the defaults in
+			// DescribePrice don't correctly fall back to cloud_essd for these instances.
+			if slices.Contains(alibabaDefaultToCloudEssd, node.InstanceTypeFamily) {
+				request.QueryParams["SystemDisk.Category"] = ALIBABA_DISK_CLOUD_ESSD_CATEGORY
+			}
 		}
 		request.TransToAcsRequest()
 		return request, nil
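The else branch above only forces SystemDisk.Category when the node's instance family is in the known-problematic list, and otherwise lets the API default. A runnable sketch of that decision, using local stand-ins for the package-level list and constant (the names here are illustrative, not the OpenCost identifiers):

```go
package main

import (
	"fmt"
	"slices"
)

// Families whose DescribePrice defaults are known to be wrong (from the diff).
var defaultToCloudEssd = []string{"g6e", "r6e", "r7", "g7", "g7a", "r7a"}

// Illustrative stand-in for ALIBABA_DISK_CLOUD_ESSD_CATEGORY.
const diskCloudEssd = "cloud_essd"

// systemDiskCategoryOverride returns the category to force in the
// DescribePrice query params, or "" to let the API default.
func systemDiskCategoryOverride(instanceFamily string) string {
	if slices.Contains(defaultToCloudEssd, instanceFamily) {
		return diskCloudEssd
	}
	return ""
}

func main() {
	fmt.Println(systemDiskCategoryOverride("g6e")) // cloud_essd
	fmt.Println(systemDiskCategoryOverride("g5"))  // prints an empty line
}
```

Keeping the override behind slices.Contains (Go 1.21+ standard library) means new problem families only need a one-line addition to the list.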
@@ -775,6 +916,22 @@ func createDescribePriceACSRequest(i interface{}) (*requests.CommonRequest, erro
 	}
 }

+// createDescribeDisksACSRequest creates the HTTP GET request to map the system disk to the InstanceID
+func createDescribeDisksACSRequest(instanceID, regionID, diskType string) (*requests.CommonRequest, error) {
+	request := requests.NewCommonRequest()
+	request.Method = requests.GET
+	request.Product = ALIBABA_ECS_PRODUCT_CODE
+	request.Domain = ALIBABA_ECS_DOMAIN
+	request.Version = ALIBABA_ECS_VERSION
+	request.Scheme = requests.HTTPS
+	request.ApiName = ALIBABA_DESCRIBE_DISK_API_ACTION
+	request.QueryParams["RegionId"] = regionID
+	request.QueryParams["InstanceId"] = instanceID
+	request.QueryParams["DiskType"] = diskType
+	request.TransToAcsRequest()
+	return request, nil
+}
+
 // determineKeyForPricing generates a unique key from the SlimK8sNode object that is constructed from the v1.Node object and
 // the SlimK8sDisk that is constructed from the v1.PersistentVolume.
 func determineKeyForPricing(i interface{}) (string, error) {
@@ -784,11 +941,17 @@ func determineKeyForPricing(i interface{}) (string, error) {
 	switch i.(type) {
 	case *SlimK8sNode:
 		node := i.(*SlimK8sNode)
+		var diskCategory, diskSizeInGiB, diskPerformanceLevel string
+		if node.SystemDisk != nil {
+			diskCategory = node.SystemDisk.DiskCategory
+			diskSizeInGiB = node.SystemDisk.SizeInGiB
+			diskPerformanceLevel = node.SystemDisk.PerformanceLevel
+		}
 		if node.IsIoOptimized {
-			keyLookup := stringutil.DeleteEmptyStringsFromArray([]string{node.RegionID, node.InstanceType, node.OSType, ALIBABA_OPTIMIZE_KEYWORD})
+			keyLookup := stringutil.DeleteEmptyStringsFromArray([]string{node.RegionID, node.InstanceType, node.OSType, ALIBABA_OPTIMIZE_KEYWORD, diskCategory, diskSizeInGiB, diskPerformanceLevel})
 			return strings.Join(keyLookup, "::"), nil
 		} else {
-			keyLookup := stringutil.DeleteEmptyStringsFromArray([]string{node.RegionID, node.InstanceType, node.OSType, ALIBABA_NON_OPTIMIZE_KEYWORD})
+			keyLookup := stringutil.DeleteEmptyStringsFromArray([]string{node.RegionID, node.InstanceType, node.OSType, ALIBABA_NON_OPTIMIZE_KEYWORD, diskCategory, diskSizeInGiB, diskPerformanceLevel})
 			return strings.Join(keyLookup, "::"), nil
 		}
 	case *SlimK8sDisk:
@@ -812,6 +975,7 @@ type Price struct {
 type PriceInfo struct {
 	Price Price `json:"Price"`
 }
+
 type DescribePriceResponse struct {
 	RequestId string    `json:"RequestId"`
 	PriceInfo PriceInfo `json:"PriceInfo"`
@@ -885,11 +1049,10 @@ func processDescribePriceAndCreateAlibabaPricing(client *sdk.Client, i interface
 // getInstanceFamilyFromType gets the InstanceFamily from the InstanceType. The convention followed in
 // instance types is ecs.[FamilyName].[DifferentSize]; it extracts the family name, and if it is unable to,
 // it reports the instance family name as Unknown.
-// TO-DO: might need predefined list of instance types.
 func getInstanceFamilyFromType(instanceType string) string {
 	splitinstanceType := strings.Split(instanceType, ".")
 	if len(splitinstanceType) != 3 {
-		log.Warnf("unable to find the family of the instance type %s, returning it's family type unknown", instanceType)
+		log.Warnf("unable to find the family of the instance type %s, returning its family type unknown", instanceType)
 		return ALIBABA_UNKNOWN_INSTANCE_FAMILY_TYPE
 	}
 	if !slices.Contains(alibabaInstanceFamilies, splitinstanceType[1]) {
@@ -899,7 +1062,91 @@ func getInstanceFamilyFromType(instanceType string) string {
 	return splitinstanceType[1]
 }
 
-// generateSlimK8sNodeFromV1Node generates SlimK8sNode struct from v1.Node to fetch pricing information.
+// getInstanceIDFromProviderID returns the instance ID associated with the Node. A *v1.Node providerID in Alibaba cloud
+// is of the form <REGION-ID>.<INSTANCE-ID>. This function returns the instance ID for the given providerID; if it is unable
+// to interpret it, it defaults to an empty string.
+func getInstanceIDFromProviderID(providerID string) string {
+	if providerID == "" {
+		return ""
+	}
+	splitStrings := strings.Split(providerID, ".")
+	if len(splitStrings) < 2 {
+		return ""
+	}
+	return splitStrings[1]
+}
+
+type Disk struct {
+	Category         string `json:"Category"`
+	Size             int    `json:"Size"`
+	PerformanceLevel string `json:"PerformanceLevel"`
+	Type             string `json:"Type"`
+	RegionId         string `json:"RegionId"`
+	DiskId           string `json:"DiskId"`
+	DiskChargeType   string `json:"DiskChargeType"`
+}
+
+type Disks struct {
+	Disk []*Disk `json:"Disk"`
+}
+
+type DescribeDiskResponse struct {
+	TotalCount int    `json:"TotalCount"`
+	Disks      *Disks `json:"Disks"`
+}
+
+// getSystemDiskInfoOfANode gets the relevant system disk information associated with the Node given by the instanceID,
+// in the form of a SlimK8sDisk carrying only the information that can adjust the node pricing. If any error occurs it returns
+// an empty disk so as not to impact any default set at the price retrieval of the node.
+func getSystemDiskInfoOfANode(instanceID, regionID string, client *sdk.Client, signer *signers.AccessKeySigner) (systemDisk *SlimK8sDisk) {
+	systemDisk = &SlimK8sDisk{}
+	var response DescribeDiskResponse
+	// if instanceID is an empty string return an empty disk
+	if instanceID == "" {
+		return
+	}
+	req, err := createDescribeDisksACSRequest(instanceID, regionID, ALIBABA_SYSTEM_DISK_CATEGORY)
+	// if any error occurs return an empty disk to not impact default pricing.
+	if err != nil {
+		log.Warnf("Unable to create Describe Disk Request with err: %v for node with InstanceID: %s, hence defaulting it to an empty system disk to pass through to defaults", err, instanceID)
+		return
+	}
+
+	resp, err := client.ProcessCommonRequestWithSigner(req, signer)
+	if err != nil {
+		log.Warnf("Unable to process Describe Disk request with err: %v for the node with InstanceID: %s, hence defaulting it to an empty system disk to pass through to defaults", err, instanceID)
+		return
+	} else if resp.GetHttpStatus() != 200 {
+		log.Warnf("Describe Disk request returned HTTP status %d for the node with InstanceID: %s, hence defaulting it to an empty system disk to pass through to defaults", resp.GetHttpStatus(), instanceID)
+		return
+	} else {
+		// This is where population of Pricing happens
+		err = json.Unmarshal(resp.GetHttpContentBytes(), &response)
+		if err != nil {
+			log.Warnf("Unable to unmarshal Describe Disk response with err: %v for the node with InstanceID: %s, hence defaulting it to an empty system disk to pass through to defaults", err, instanceID)
+			return
+		}
+		// Every instance should only have one system disk per Alibaba Cloud documentation https://www.alibabacloud.com/help/en/elastic-compute-service/latest/block-storage-overview-disks,
+		// if TotalCount is not 1 just return empty and let it not impact default pricing.
+		if response.TotalCount != 1 {
+			log.Warnf("Total count of system disk for node with InstanceID: %s is not 1, hence defaulting it to an empty system disk to pass through to defaults", instanceID)
+			return
+		}
+
+		if response.Disks == nil {
+			log.Warnf("Disks information missing for node with InstanceID: %s, hence defaulting it to an empty system disk to pass through to defaults", instanceID)
+			return
+		}
+
+		if len(response.Disks.Disk) < 1 {
+			log.Warnf("Total number of system disk for node with InstanceID: %s is less than 1, hence defaulting it to an empty system disk to pass through to defaults", instanceID)
+			return
+		}
+
+		// TO-DO: When supporting Subscription type disks, you can leverage disk.DiskChargeType here to map it to the subscription type.
+		disk := response.Disks.Disk[0]
+		return NewSlimK8sDisk(disk.Type, disk.RegionId, ALIBABA_HOUR_PRICE_UNIT, disk.Category, disk.PerformanceLevel, disk.DiskId, "", fmt.Sprintf("%d", disk.Size))
+	}
+}
+
+// generateSlimK8sNodeFromV1Node generates SlimK8sNode struct from v1.Node to fetch pricing information and call alibaba API.
 func generateSlimK8sNodeFromV1Node(node *v1.Node) *SlimK8sNode {
 	var regionID, osType, instanceType, providerID, priceUnit, instanceFamily string
 	var memorySizeInKiB string // TO-DO: try to convert it into float
@@ -926,7 +1173,8 @@ func generateSlimK8sNodeFromV1Node(node *v1.Node) *SlimK8sNode {
 	IsIoOptimized = true
 	priceUnit = ALIBABA_HOUR_PRICE_UNIT
 
-	return NewSlimK8sNode(instanceType, regionID, priceUnit, memorySizeInKiB, osType, providerID, instanceFamily, IsIoOptimized)
+	systemDisk := &SlimK8sDisk{}
+	return NewSlimK8sNode(instanceType, regionID, priceUnit, memorySizeInKiB, osType, providerID, instanceFamily, IsIoOptimized, systemDisk)
 }
 
 // getNumericalValueFromResourceQuantity returns the numericalValue of the resourceQuantity
@@ -947,8 +1195,8 @@ func getNumericalValueFromResourceQuantity(quantity string) (value string) {
 	return
 }
 
-// generateSlimK8sDiskFromV1PV function generates SlimK8sDisk from v1.PersistentVolume and DescribeDisk API(If required) of alibaba
-// to generate slim disk type that can be used to fetch pricing information.
+// generateSlimK8sDiskFromV1PV function generates SlimK8sDisk from v1.PersistentVolume
+// to generate slim disk type that can be used to fetch pricing information for Data disk type.
 func generateSlimK8sDiskFromV1PV(pv *v1.PersistentVolume, regionID string) *SlimK8sDisk {
 
 	// All PVs are data disks while local disk are categorized as system disk
@@ -1022,8 +1270,8 @@ func determinePVRegion(pv *v1.PersistentVolume) string {
 	}
 
 	if pvZone == "" {
-		// zone and regionID labels are optional in Alibaba PV creation, while UI creation put's a zone associated with PV assign the region of
-		// pv based on this information if available. If pv is provision via yaml and the block is missing default it to clusterRegion.
+		// zone and regionID labels are optional in Alibaba PV creation; PVs created via the UI carry the zone they are
+		// associated with, and the region can be determined from that information. Only if a PV is provisioned via yaml and the block is missing does it default to clusterRegion.
 		if pv.Spec.NodeAffinity != nil {
 			nodeAffinity := pv.Spec.NodeAffinity
 			if nodeAffinity.Required != nil && nodeAffinity.Required.NodeSelectorTerms != nil {

+ 245 - 0
pkg/cloud/aliyunprovider_test.go

@@ -83,6 +83,34 @@ func TestProcessDescribePriceAndCreateAlibabaPricing(t *testing.T) {
 		teststruct    interface{}
 		expectedError error
 	}{
+		{
+			name: "test General Purpose Type g7 instance family",
+			teststruct: &SlimK8sNode{
+				InstanceType:       "ecs.g7.4xlarge",
+				RegionID:           "cn-hangzhou",
+				PriceUnit:          "Hour",
+				MemorySizeInKiB:    "16777216KiB",
+				IsIoOptimized:      true,
+				OSType:             "Linux",
+				ProviderID:         "cn-hangzhou.i-test-01a",
+				InstanceTypeFamily: "g7",
+			},
+			expectedError: nil,
+		},
+		{
+			name: "test General Purpose Type g7a instance family",
+			teststruct: &SlimK8sNode{
+				InstanceType:       "ecs.g7a.8xlarge",
+				RegionID:           "cn-hangzhou",
+				PriceUnit:          "Hour",
+				MemorySizeInKiB:    "33554432KiB",
+				IsIoOptimized:      true,
+				OSType:             "Linux",
+				ProviderID:         "cn-hangzhou.i-test-01b",
+				InstanceTypeFamily: "g7a",
+			},
+			expectedError: nil,
+		},
 		{
 			name: "test Enhanced General Purpose Type g6e instance family",
 			teststruct: &SlimK8sNode{
@@ -153,6 +181,174 @@ func TestProcessDescribePriceAndCreateAlibabaPricing(t *testing.T) {
 			},
 			expectedError: nil,
 		},
+		{
+			name: "test Memory Optimized instance type r7 instance family",
+			teststruct: &SlimK8sNode{
+				InstanceType:       "ecs.r7.6xlarge",
+				RegionID:           "cn-hangzhou",
+				PriceUnit:          "Hour",
+				MemorySizeInKiB:    "2013265592KiB",
+				IsIoOptimized:      true,
+				OSType:             "Linux",
+				ProviderID:         "cn-hangzhou.i-test-06",
+				InstanceTypeFamily: "r7",
+			},
+			expectedError: nil,
+		},
+		{
+			name: "test Memory Optimized instance type r7a instance family",
+			teststruct: &SlimK8sNode{
+				InstanceType:       "ecs.r7a.8xlarge",
+				RegionID:           "cn-hangzhou",
+				PriceUnit:          "Hour",
+				MemorySizeInKiB:    "33554432KiB",
+				IsIoOptimized:      true,
+				OSType:             "Linux",
+				ProviderID:         "cn-hangzhou.i-test-06a",
+				InstanceTypeFamily: "r7a",
+			},
+			expectedError: nil,
+		},
+		{
+			name: "test Enhanced Memory Optimized instance type r6e instance family",
+			teststruct: &SlimK8sNode{
+				InstanceType:       "ecs.r6e.4xlarge",
+				RegionID:           "cn-hangzhou",
+				PriceUnit:          "Hour",
+				MemorySizeInKiB:    "2013265592KiB",
+				IsIoOptimized:      true,
+				OSType:             "Linux",
+				ProviderID:         "cn-hangzhou.i-test-07",
+				InstanceTypeFamily: "r6e",
+			},
+			expectedError: nil,
+		},
+		{
+			name: "test Memory Optimized instance type r6a instance family",
+			teststruct: &SlimK8sNode{
+				InstanceType:       "ecs.r6a.8xlarge",
+				RegionID:           "cn-hangzhou",
+				PriceUnit:          "Hour",
+				MemorySizeInKiB:    "33554432KiB",
+				IsIoOptimized:      true,
+				OSType:             "Linux",
+				ProviderID:         "cn-hangzhou.i-test-07a",
+				InstanceTypeFamily: "r6a",
+			},
+			expectedError: nil,
+		},
+		{
+			name: "test Memory Optimized instance type r6 instance family",
+			teststruct: &SlimK8sNode{
+				InstanceType:       "ecs.r6.8xlarge",
+				RegionID:           "cn-hangzhou",
+				PriceUnit:          "Hour",
+				MemorySizeInKiB:    "33554432KiB",
+				IsIoOptimized:      true,
+				OSType:             "Linux",
+				ProviderID:         "cn-hangzhou.i-test-08",
+				InstanceTypeFamily: "r6",
+			},
+			expectedError: nil,
+		},
+		{
+			name: "test Memory type instance and r5 instance family",
+			teststruct: &SlimK8sNode{
+				InstanceType:       "ecs.r5.xlarge",
+				RegionID:           "cn-hangzhou",
+				PriceUnit:          "Hour",
+				MemorySizeInKiB:    "33554432KiB",
+				IsIoOptimized:      true,
+				OSType:             "Linux",
+				ProviderID:         "cn-hangzhou.i-test-09",
+				InstanceTypeFamily: "r5",
+			},
+			expectedError: nil,
+		},
+		{
+			name: "test Memory Optimized instance type with se1 instance family",
+			teststruct: &SlimK8sNode{
+				InstanceType:       "ecs.se1.4xlarge",
+				RegionID:           "cn-hangzhou",
+				PriceUnit:          "Hour",
+				MemorySizeInKiB:    "16777216KiB",
+				IsIoOptimized:      true,
+				OSType:             "Linux",
+				ProviderID:         "cn-hangzhou.i-test-10",
+				InstanceTypeFamily: "se1",
+			},
+			expectedError: nil,
+		},
+		{
+			name: "test Memory Optimized instance type with Enhanced Network Performance se1ne instance family",
+			teststruct: &SlimK8sNode{
+				InstanceType:       "ecs.se1ne.3xlarge",
+				RegionID:           "cn-hangzhou",
+				PriceUnit:          "Hour",
+				MemorySizeInKiB:    "100663296KiB",
+				IsIoOptimized:      true,
+				OSType:             "Linux",
+				ProviderID:         "cn-hangzhou.i-test-11",
+				InstanceTypeFamily: "se1ne",
+			},
+			expectedError: nil,
+		},
+		{
+			name: "test High Memory type with re6 instance family",
+			teststruct: &SlimK8sNode{
+				InstanceType:       "ecs.re6.8xlarge",
+				RegionID:           "cn-hangzhou",
+				PriceUnit:          "Hour",
+				MemorySizeInKiB:    "33554432KiB",
+				IsIoOptimized:      true,
+				OSType:             "Linux",
+				ProviderID:         "cn-hangzhou.i-test-12",
+				InstanceTypeFamily: "re6",
+			},
+			expectedError: nil,
+		},
+		{
+			name: "test Persistent Memory Optimized type with re6p instance family",
+			teststruct: &SlimK8sNode{
+				InstanceType:       "ecs.re6p.4xlarge",
+				RegionID:           "cn-hangzhou",
+				PriceUnit:          "Hour",
+				MemorySizeInKiB:    "33554432KiB",
+				IsIoOptimized:      true,
+				OSType:             "Linux",
+				ProviderID:         "cn-hangzhou.i-test-13",
+				InstanceTypeFamily: "re6p",
+			},
+			expectedError: nil,
+		},
+		{
+			name: "test Memory type with re4 instance family",
+			teststruct: &SlimK8sNode{
+				InstanceType:       "ecs.re4.10xlarge",
+				RegionID:           "cn-hangzhou",
+				PriceUnit:          "Hour",
+				MemorySizeInKiB:    "41943040KiB",
+				IsIoOptimized:      true,
+				OSType:             "Linux",
+				ProviderID:         "cn-hangzhou.i-test-14",
+				InstanceTypeFamily: "re4",
+			},
+			expectedError: nil,
+		},
+		{
+			name: "test Memory optimized type with se1 instance family",
+			teststruct: &SlimK8sNode{
+				InstanceType:       "ecs.se1.8xlarge",
+				RegionID:           "cn-hangzhou",
+				PriceUnit:          "Hour",
+				MemorySizeInKiB:    "33554432KiB",
+				IsIoOptimized:      true,
+				OSType:             "Linux",
+				ProviderID:         "cn-hangzhou.i-test-15",
+				InstanceTypeFamily: "se1",
+			},
+			expectedError: nil,
+		},
 		{
 			name:          "test for a nil information",
 			teststruct:    nil,
@@ -300,6 +496,55 @@ func TestDetermineKeyForPricing(t *testing.T) {
 			expectedKey:   "cn-hangzhou::linux::optimize",
 			expectedError: nil,
 		},
+		{
+			name: "test when node has a systemDisk Information with missing Performance level",
+			testVar: &SlimK8sNode{
+				InstanceType:       "ecs.sn2.large",
+				RegionID:           "cn-hangzhou",
+				PriceUnit:          "Hour",
+				MemorySizeInKiB:    "16777216KiB",
+				IsIoOptimized:      true,
+				OSType:             "linux",
+				ProviderID:         "cn-hangzhou.i-test-04",
+				InstanceTypeFamily: "sn2",
+				SystemDisk: &SlimK8sDisk{
+					DiskType:     "system",
+					RegionID:     "cn-hangzhou",
+					PriceUnit:    "Hour",
+					SizeInGiB:    "40",
+					DiskCategory: "cloud_efficiency",
+					ProviderID:   "d-Ali-cloud-XXX-i1",
+					StorageClass: "",
+				},
+			},
+			expectedKey:   "cn-hangzhou::ecs.sn2.large::linux::optimize::cloud_efficiency::40",
+			expectedError: nil,
+		},
+		{
+			name: "test when node has a systemDisk Information with all information",
+			testVar: &SlimK8sNode{
+				InstanceType:       "ecs.sn2.large",
+				RegionID:           "cn-hangzhou",
+				PriceUnit:          "Hour",
+				MemorySizeInKiB:    "16777216KiB",
+				IsIoOptimized:      true,
+				OSType:             "linux",
+				ProviderID:         "cn-hangzhou.i-test-04",
+				InstanceTypeFamily: "sn2",
+				SystemDisk: &SlimK8sDisk{
+					DiskType:         "data",
+					RegionID:         "cn-hangzhou",
+					PriceUnit:        "Hour",
+					SizeInGiB:        "80",
+					DiskCategory:     "cloud_ssd",
+					PerformanceLevel: "PL2",
+					ProviderID:       "d-Ali-cloud-XXX-04",
+					StorageClass:     "",
+				},
+			},
+			expectedKey:   "cn-hangzhou::ecs.sn2.large::linux::optimize::cloud_ssd::80::PL2",
+			expectedError: nil,
+		},
 		{
 			name: "test random k8s struct should return unsupported error",
 			testVar: &randomK8sStruct{

+ 14 - 9
pkg/cloud/awsprovider.go

@@ -5,6 +5,7 @@ import (
 	"compress/gzip"
 	"context"
 	"encoding/csv"
+	"errors"
 	"fmt"
 	"io"
 	"net/http"
@@ -19,7 +20,7 @@ import (
 
 	"github.com/opencost/opencost/pkg/clustercache"
 	"github.com/opencost/opencost/pkg/env"
-	"github.com/opencost/opencost/pkg/errors"
+	errs "github.com/opencost/opencost/pkg/errors"
 	"github.com/opencost/opencost/pkg/log"
 	"github.com/opencost/opencost/pkg/util"
 	"github.com/opencost/opencost/pkg/util/fileutil"
@@ -851,7 +852,7 @@ func (aws *AWS) DownloadPricingData() error {
 		pvkeys[key.Features()] = key
 	}
 
-	// RIDataRunning establishes the existance of the goroutine. Since it's possible we
+	// RIDataRunning establishes the existence of the goroutine. Since it's possible we
 	// run multiple downloads, we don't want to create multiple go routines if one already exists
 	if !aws.RIDataRunning {
 		err = aws.GetReservationDataFromAthena() // Block until one run has completed.
@@ -859,7 +860,7 @@ func (aws *AWS) DownloadPricingData() error {
 			log.Errorf("Failed to lookup reserved instance data: %s", err.Error())
 		} else { // If we make one successful run, check on new reservation data every hour
 			go func() {
-				defer errors.HandlePanic()
+				defer errs.HandlePanic()
 				aws.RIDataRunning = true
 
 				for {
@@ -879,7 +880,7 @@ func (aws *AWS) DownloadPricingData() error {
 			log.Errorf("Failed to lookup savings plan data: %s", err.Error())
 		} else {
 			go func() {
-				defer errors.HandlePanic()
+				defer errs.HandlePanic()
 				aws.SavingsPlanDataRunning = true
 				for {
 					log.Infof("Savings Plan watcher running... next update in 1h")
@@ -1056,7 +1057,7 @@ func (aws *AWS) DownloadPricingData() error {
 		aws.SpotRefreshRunning = true
 
 		go func() {
-			defer errors.HandlePanic()
+			defer errs.HandlePanic()
 
 			for {
 				log.Infof("Spot Pricing Refresh scheduled in %.2f minutes.", SpotRefreshDuration.Minutes())
@@ -1467,7 +1468,7 @@ func (aws *AWS) GetAddresses() ([]byte, error) {
 		// respective channels
 		go func(region string) {
 			defer wg.Done()
-			defer errors.HandlePanic()
+			defer errs.HandlePanic()
 
 			// Query for first page of volume results
 			resp, err := aws.getAddressesForRegion(context.TODO(), region)
@@ -1481,7 +1482,7 @@ func (aws *AWS) GetAddresses() ([]byte, error) {
 
 	// Close the result channels after everything has been sent
 	go func() {
-		defer errors.HandlePanic()
+		defer errs.HandlePanic()
 
 		wg.Wait()
 		close(errorCh)
@@ -1550,7 +1551,7 @@ func (aws *AWS) GetDisks() ([]byte, error) {
 		// respective channels
 		go func(region string) {
 			defer wg.Done()
-			defer errors.HandlePanic()
+			defer errs.HandlePanic()
 
 			// Query for first page of volume results
 			resp, err := aws.getDisksForRegion(context.TODO(), region, 1000, nil)
@@ -1575,7 +1576,7 @@ func (aws *AWS) GetDisks() ([]byte, error) {
 
 	// Close the result channels after everything has been sent
 	go func() {
-		defer errors.HandlePanic()
+		defer errs.HandlePanic()
 
 		wg.Wait()
 		close(errorCh)
@@ -1609,6 +1610,10 @@ func (aws *AWS) GetDisks() ([]byte, error) {
 	})
 }
 
+func (*AWS) GetOrphanedResources() ([]OrphanedResource, error) {
+	return nil, errors.New("not implemented")
+}
+
 // QueryAthenaPaginated executes athena query and processes results.
 func (aws *AWS) QueryAthenaPaginated(ctx context.Context, query string, fn func(*athena.GetQueryResultsOutput) bool) error {
 	awsAthenaInfo, err := aws.GetAWSAthenaInfo()

+ 149 - 3
pkg/cloud/azureprovider.go

@@ -13,15 +13,19 @@ import (
 	"sync"
 	"time"
 
+	"github.com/opencost/opencost/pkg/kubecost"
+
 	"github.com/opencost/opencost/pkg/clustercache"
 	"github.com/opencost/opencost/pkg/clustercache"
 	"github.com/opencost/opencost/pkg/env"
 	"github.com/opencost/opencost/pkg/log"
 	"github.com/opencost/opencost/pkg/util"
 	"github.com/opencost/opencost/pkg/util/fileutil"
 	"github.com/opencost/opencost/pkg/util/json"
 	"golang.org/x/text/cases"
 	"golang.org/x/text/cases"
 	"golang.org/x/text/language"
 
 	"github.com/Azure/azure-sdk-for-go/services/preview/commerce/mgmt/2015-06-01-preview/commerce"
 	"github.com/Azure/azure-sdk-for-go/services/preview/commerce/mgmt/2015-06-01-preview/commerce"
 	"github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-06-01/subscriptions"
 	"github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-05-01/resources"
 	return nil, nil
 }
 
-func (*Azure) GetDisks() ([]byte, error) {
-	return nil, nil
+func (az *Azure) GetDisks() ([]byte, error) {
+	disks, err := az.getDisks()
+	if err != nil {
+		return nil, err
+	}
+
+	return json.Marshal(disks)
+}
+
+func (az *Azure) getDisks() ([]*compute.Disk, error) {
+	config, err := az.GetConfig()
+	if err != nil {
+		return nil, err
+	}
+
+	// Load the service provider keys
+	subscriptionID, clientID, clientSecret, tenantID := az.getAzureRateCardAuth(false, config)
+	config.AzureSubscriptionID = subscriptionID
+	config.AzureClientID = clientID
+	config.AzureClientSecret = clientSecret
+	config.AzureTenantID = tenantID
+
+	var authorizer autorest.Authorizer
+
+	azureEnv := determineCloudByRegion(az.clusterRegion)
+
+	if config.AzureClientID != "" && config.AzureClientSecret != "" && config.AzureTenantID != "" {
+		credentialsConfig := NewClientCredentialsConfig(config.AzureClientID, config.AzureClientSecret, config.AzureTenantID, azureEnv)
+		a, err := credentialsConfig.Authorizer()
+		if err != nil {
+			az.RateCardPricingError = err
+			return nil, err
+		}
+		authorizer = a
+	}
+
+	if authorizer == nil {
+		a, err := auth.NewAuthorizerFromEnvironment()
+		authorizer = a
+		if err != nil {
+			a, err := auth.NewAuthorizerFromFile(azureEnv.ResourceManagerEndpoint)
+			if err != nil {
+				az.RateCardPricingError = err
+				return nil, err
+			}
+			authorizer = a
+		}
+	}
+	client := compute.NewDisksClient(config.AzureSubscriptionID)
+	client.Authorizer = authorizer
+
+	ctx := context.TODO()
+
+	var disks []*compute.Disk
+
+	diskPage, err := client.List(ctx)
+	if err != nil {
+		return nil, fmt.Errorf("error getting disks: %v", err)
+	}
+
+	for diskPage.NotDone() {
+		for _, d := range diskPage.Values() {
+			d := d
+			disks = append(disks, &d)
+		}
+		err := diskPage.Next()
+		if err != nil {
+			return nil, fmt.Errorf("error getting next page: %v", err)
+		}
+	}
+
+	return disks, nil
+}
+
+func isDiskOrphaned(disk *compute.Disk) bool {
+	//TODO: needs better algorithm
+	return disk.DiskState == "Unattached" || disk.DiskState == "Reserved"
+}
+
+func (az *Azure) GetOrphanedResources() ([]OrphanedResource, error) {
+	disks, err := az.getDisks()
+	if err != nil {
+		return nil, err
+	}
+
+	var orphanedResources []OrphanedResource
+
+	for _, d := range disks {
+		if isDiskOrphaned(d) {
+			cost, err := az.findCostForDisk(d)
+			if err != nil {
+				return nil, err
+			}
+
+			diskName := ""
+			if d.Name != nil {
+				diskName = *d.Name
+			}
+
+			diskRegion := ""
+			if d.Location != nil {
+				diskRegion = *d.Location
+			}
+
+			or := OrphanedResource{
+				Kind:   "disk",
+				Region: diskRegion,
+				Description: map[string]string{
+					"diskState":   string(d.DiskState),
+					"timeCreated": d.TimeCreated.String(),
+				},
+				Size:        d.DiskSizeGB,
+				DiskName:    diskName,
+				MonthlyCost: &cost,
+			}
+			orphanedResources = append(orphanedResources, or)
+		}
+	}
+
+	return orphanedResources, nil
+}
+
+func (az *Azure) findCostForDisk(d *compute.Disk) (float64, error) {
+	if d == nil {
+		return 0.0, fmt.Errorf("disk is empty")
+	}
+	if d.Sku == nil {
+		return 0.0, fmt.Errorf("disk SKU is missing")
+	}
+	storageClass := string(d.Sku.Name)
+	if strings.EqualFold(storageClass, "Premium_LRS") {
+		storageClass = AzureDiskPremiumSSDStorageClass
+	} else if strings.EqualFold(storageClass, "StandardSSD_LRS") {
+		storageClass = AzureDiskStandardSSDStorageClass
+	} else if strings.EqualFold(storageClass, "Standard_LRS") {
+		storageClass = AzureDiskStandardStorageClass
+	}
+
+	key := *d.Location + "," + storageClass
+
+	diskPricePerGBHour, err := strconv.ParseFloat(az.Pricing[key].PV.Cost, 64)
+	if err != nil {
+		return 0.0, fmt.Errorf("error converting to float: %s", err)
+	}
+	if d.DiskSizeGB == nil {
+		return 0.0, fmt.Errorf("disk size is missing")
+	}
+	cost := diskPricePerGBHour * timeutil.HoursPerMonth * float64(*d.DiskSizeGB)
+
+	return cost, nil
 }
 
 func (az *Azure) ClusterInfo() (map[string]string, error) {
@@ -1185,7 +1331,7 @@ func (az *Azure) ClusterInfo() (map[string]string, error) {
	if c.ClusterName != "" {
		m["name"] = c.ClusterName
	}
-	m["provider"] = "Azure"
+	m["provider"] = kubecost.AzureProvider
	m["account"] = az.clusterAccountId
	m["region"] = az.clusterRegion
	m["remoteReadEnabled"] = strconv.FormatBool(remoteEnabled)

+ 7 - 1
pkg/cloud/customprovider.go

@@ -1,7 +1,9 @@
package cloud

import (
+	"errors"
	"fmt"
+	"github.com/opencost/opencost/pkg/kubecost"
	"io"
	"strconv"
	"strings"
@@ -107,7 +109,7 @@ func (cp *CustomProvider) ClusterInfo() (map[string]string, error) {
	if conf.ClusterName != "" {
		m["name"] = conf.ClusterName
	}
-	m["provider"] = "custom"
+	m["provider"] = kubecost.CustomProvider
	m["id"] = env.GetClusterID()
	return m, nil
}
@@ -120,6 +122,10 @@ func (*CustomProvider) GetDisks() ([]byte, error) {
	return nil, nil
}

+func (*CustomProvider) GetOrphanedResources() ([]OrphanedResource, error) {
+	return nil, errors.New("not implemented")
+}
+
func (cp *CustomProvider) AllNodePricing() (interface{}, error) {
	cp.DownloadPricingDataLock.RLock()
	defer cp.DownloadPricingDataLock.RUnlock()

+ 5 - 0
pkg/cloud/gcpprovider.go

@@ -2,6 +2,7 @@ package cloud

import (
	"context"
+	"errors"
	"fmt"
	"io"
	"math"
@@ -382,6 +383,10 @@ func (gcp *GCP) GetDisks() ([]byte, error) {

}

+func (*GCP) GetOrphanedResources() ([]OrphanedResource, error) {
+	return nil, errors.New("not implemented")
+}
+
// GCPPricing represents GCP pricing data for a SKU
type GCPPricing struct {
	Name                string           `json:"name"`

+ 11 - 0
pkg/cloud/provider.go

@@ -104,6 +104,16 @@ type Network struct {
	InternetNetworkEgressCost float64
}

+type OrphanedResource struct {
+	Kind        string            `json:"resourceKind"`
+	Region      string            `json:"region"`
+	Description map[string]string `json:"description"`
+	Size        *int32            `json:"diskSizeInGB,omitempty"`
+	DiskName    string            `json:"diskName,omitempty"`
+	Address     string            `json:"ipAddress,omitempty"`
+	MonthlyCost *float64          `json:"monthlyCost"`
+}
+
// PV is the interface by which the provider and cost model communicate PV prices.
// The provider will best-effort try to fill out this struct.
type PV struct {
@@ -299,6 +309,7 @@ type Provider interface {
	ClusterInfo() (map[string]string, error)
	GetAddresses() ([]byte, error)
	GetDisks() ([]byte, error)
+	GetOrphanedResources() ([]OrphanedResource, error)
	NodePricing(Key) (*Node, error)
	PVPricing(PVKey) (*PV, error)
	NetworkPricing() (*Network, error)           // TODO: add key interface arg for dynamic price fetching

+ 7 - 1
pkg/cloud/scalewayprovider.go

@@ -1,7 +1,9 @@
package cloud

import (
+	"errors"
	"fmt"
+	"github.com/opencost/opencost/pkg/kubecost"
	"io"
	"strconv"
	"strings"
@@ -250,6 +252,10 @@ func (*Scaleway) GetDisks() ([]byte, error) {
	return nil, nil
}

+func (*Scaleway) GetOrphanedResources() ([]OrphanedResource, error) {
+	return nil, errors.New("not implemented")
+}
+
func (scw *Scaleway) ClusterInfo() (map[string]string, error) {
	remoteEnabled := env.IsRemoteEnabled()

@@ -262,7 +268,7 @@ func (scw *Scaleway) ClusterInfo() (map[string]string, error) {
	if c.ClusterName != "" {
		m["name"] = c.ClusterName
	}
-	m["provider"] = "Scaleway"
+	m["provider"] = kubecost.ScalewayProvider
	m["remoteReadEnabled"] = strconv.FormatBool(remoteEnabled)
	m["id"] = env.GetClusterID()
	return m, nil

+ 13 - 13
pkg/costmodel/cluster.go

@@ -227,17 +227,17 @@ func ClusterDisks(client prometheus.Client, provider cloud.Provider, start, end

		volumeName, err := result.GetString("volumename")
		if err != nil {
-			log.Warnf("ClusterDisks: pv claim data missing volumename")
+			log.Debugf("ClusterDisks: pv claim data missing volumename")
			continue
		}
		claimName, err := result.GetString("persistentvolumeclaim")
		if err != nil {
-			log.Warnf("ClusterDisks: pv claim data missing persistentvolumeclaim")
+			log.Debugf("ClusterDisks: pv claim data missing persistentvolumeclaim")
			continue
		}
		claimNamespace, err := result.GetString("namespace")
		if err != nil {
-			log.Warnf("ClusterDisks: pv claim data missing namespace")
+			log.Debugf("ClusterDisks: pv claim data missing namespace")
			continue
		}

@@ -1417,12 +1417,12 @@ func pvCosts(diskMap map[DiskIdentifier]*Disk, resolution time.Duration, resActi

		claimName, err := result.GetString("persistentvolumeclaim")
		if err != nil {
-			log.Warnf("ClusterDisks: pv usage data missing persistentvolumeclaim")
+			log.Debugf("ClusterDisks: pv usage data missing persistentvolumeclaim")
			continue
		}
		claimNamespace, err := result.GetString("namespace")
		if err != nil {
-			log.Warnf("ClusterDisks: pv usage data missing namespace")
+			log.Debugf("ClusterDisks: pv usage data missing namespace")
			continue
		}

@@ -1437,17 +1437,17 @@ func pvCosts(diskMap map[DiskIdentifier]*Disk, resolution time.Duration, resActi

			thatVolumeName, err := thatRes.GetString("volumename")
			if err != nil {
-				log.Warnf("ClusterDisks: pv claim data missing volumename")
+				log.Debugf("ClusterDisks: pv claim data missing volumename")
				continue
			}
			thatClaimName, err := thatRes.GetString("persistentvolumeclaim")
			if err != nil {
-				log.Warnf("ClusterDisks: pv claim data missing persistentvolumeclaim")
+				log.Debugf("ClusterDisks: pv claim data missing persistentvolumeclaim")
				continue
			}
			thatClaimNamespace, err := thatRes.GetString("namespace")
			if err != nil {
-				log.Warnf("ClusterDisks: pv claim data missing namespace")
+				log.Debugf("ClusterDisks: pv claim data missing namespace")
				continue
			}

@@ -1478,12 +1478,12 @@ func pvCosts(diskMap map[DiskIdentifier]*Disk, resolution time.Duration, resActi

		claimName, err := result.GetString("persistentvolumeclaim")
		if err != nil {
-			log.Warnf("ClusterDisks: pv usage data missing persistentvolumeclaim")
+			log.Debugf("ClusterDisks: pv usage data missing persistentvolumeclaim")
			continue
		}
		claimNamespace, err := result.GetString("namespace")
		if err != nil {
-			log.Warnf("ClusterDisks: pv usage data missing namespace")
+			log.Debugf("ClusterDisks: pv usage data missing namespace")
			continue
		}

@@ -1498,17 +1498,17 @@ func pvCosts(diskMap map[DiskIdentifier]*Disk, resolution time.Duration, resActi

			thatVolumeName, err := thatRes.GetString("volumename")
			if err != nil {
-				log.Warnf("ClusterDisks: pv claim data missing volumename")
+				log.Debugf("ClusterDisks: pv claim data missing volumename")
				continue
			}
			thatClaimName, err := thatRes.GetString("persistentvolumeclaim")
			if err != nil {
-				log.Warnf("ClusterDisks: pv claim data missing persistentvolumeclaim")
+				log.Debugf("ClusterDisks: pv claim data missing persistentvolumeclaim")
				continue
			}
			thatClaimNamespace, err := thatRes.GetString("namespace")
			if err != nil {
-				log.Warnf("ClusterDisks: pv claim data missing namespace")
+				log.Debugf("ClusterDisks: pv claim data missing namespace")
				continue
			}


+ 6 - 0
pkg/env/env.go

@@ -123,6 +123,12 @@ func GetDuration(key string, defaultValue time.Duration) time.Duration {
	return envMapper.GetDuration(key, defaultValue)
}

+// GetList parses a []string from the environment variable key parameter. If the
+// environment variable is empty or fails to parse, nil is returned.
+func GetList(key, delimiter string) []string {
+	return envMapper.GetList(key, delimiter)
+}
+
// Set sets the environment variable for the key provided using the value provided.
func Set(key string, value string) error {
	return envMapper.Set(key, value)

+ 10 - 0
pkg/filter/allcut.go

@@ -0,0 +1,10 @@
+package filter
+
+// AllCut is a filter that matches nothing. This is useful
+// for applications like authorization, where a user/group/role may be disallowed
+// from viewing data entirely.
+type AllCut[T any] struct{}
+
+func (ac AllCut[T]) String() string { return "(AllCut)" }
+
+func (ac AllCut[T]) Matches(T) bool { return false }
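The authorization use case in the comment above can be sketched with minimal local stand-ins for the package's types (illustrative, not the package's actual code): an unauthorized caller is handed AllCut, which silently filters out every record, while an authorized one gets AllPass.

```go
package main

import "fmt"

// Minimal stand-ins for the package's Filter, AllPass, and AllCut types.
type Filter[T any] interface{ Matches(T) bool }

type AllPass[T any] struct{}

func (AllPass[T]) Matches(T) bool { return true }

type AllCut[T any] struct{}

func (AllCut[T]) Matches(T) bool { return false }

// filterForUser sketches the authorization scenario: disallowed callers
// receive a filter that matches nothing.
func filterForUser(authorized bool) Filter[string] {
	if authorized {
		return AllPass[string]{}
	}
	return AllCut[string]{}
}

// visible keeps only the records the filter matches.
func visible(f Filter[string], records []string) []string {
	var out []string
	for _, r := range records {
		if f.Matches(r) {
			out = append(out, r)
		}
	}
	return out
}

func main() {
	records := []string{"ns/kube-system", "ns/default"}
	fmt.Println(len(visible(filterForUser(true), records)))  // 2
	fmt.Println(len(visible(filterForUser(false), records))) // 0
}
```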

+ 9 - 0
pkg/filter/allpass.go

@@ -0,0 +1,9 @@
+package filter
+
+// AllPass is a filter that matches everything and is the same as no filter. It is implemented here as a guard
+// against universal operations occurring in the absence of filters.
+type AllPass[T any] struct{}
+
+func (n AllPass[T]) String() string { return "(AllPass)" }
+
+func (n AllPass[T]) Matches(T) bool { return true }

+ 36 - 0
pkg/filter/and.go

@@ -0,0 +1,36 @@
+package filter
+
+import (
+	"fmt"
+)
+
+// And is a set of filters that should be evaluated as a logical
+// AND.
+type And[T any] struct {
+	Filters []Filter[T]
+}
+
+func (a And[T]) String() string {
+	s := "(and"
+	for _, f := range a.Filters {
+		s += fmt.Sprintf(" %s", f)
+	}
+
+	s += ")"
+	return s
+}
+
+func (a And[T]) Matches(that T) bool {
+	filters := a.Filters
+	if len(filters) == 0 {
+		return true
+	}
+
+	for _, filter := range filters {
+		if !filter.Matches(that) {
+			return false
+		}
+	}
+
+	return true
+}

+ 20 - 0
pkg/filter/filter.go

@@ -0,0 +1,20 @@
+package filter
+
+// Filter represents anything that can be used to filter given generic type T.
+//
+// Implement this interface with caution. While it is generic, it
+// is intended to be introspectable so query handlers can perform various
+// optimizations. These optimizations include:
+// - Routing a query to the most optimal cache
+// - Querying backing data stores efficiently (e.g. translation to SQL)
+//
+// Custom implementations of this interface outside of this package should not
+// expect to receive these benefits. Passing a custom implementation to a
+// handler may result in errors.
+type Filter[T any] interface {
+	String() string
+
+	// Matches is the canonical in-Go function for determining if T
+	// matches a filter.
+	Matches(T) bool
+}
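Filters implementing this interface compose through And, Or, and Not. A self-contained sketch of the pattern using minimal local stand-ins for the package's generic types (illustrative, not the package's code); note that an empty And matches everything, mirroring the And implementation above:

```go
package main

import "fmt"

// Filter mirrors the package's generic interface, reduced to Matches.
type Filter[T any] interface{ Matches(T) bool }

// equals is a toy leaf filter matching a fixed namespace string.
type equals struct{ ns string }

func (e equals) Matches(ns string) bool { return ns == e.ns }

// and evaluates its children as a logical AND; empty means match-all.
type and[T any] struct{ filters []Filter[T] }

func (a and[T]) Matches(v T) bool {
	for _, f := range a.filters {
		if !f.Matches(v) {
			return false
		}
	}
	return true
}

// not negates its inner filter.
type not[T any] struct{ inner Filter[T] }

func (n not[T]) Matches(v T) bool { return !n.inner.Matches(v) }

func main() {
	// "namespace equals kube-system AND NOT namespace equals default"
	f := and[string]{filters: []Filter[string]{
		equals{ns: "kube-system"},
		not[string]{inner: equals{ns: "default"}},
	}}
	fmt.Println(f.Matches("kube-system")) // true
	fmt.Println(f.Matches("default"))     // false
}
```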

+ 1073 - 0
pkg/filter/filter_test.go

@@ -0,0 +1,1073 @@
+package filter_test
+
+import (
+	"testing"
+
+	"github.com/opencost/opencost/pkg/filter"
+	"github.com/opencost/opencost/pkg/kubecost"
+)
+
+func Test_String_Matches(t *testing.T) {
+	cases := []struct {
+		name   string
+		a      *kubecost.Allocation
+		filter filter.Filter[*kubecost.Allocation]
+
+		expected bool
+	}{
+		{
+			name: "ClusterID Equals -> true",
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Cluster: "cluster-one",
+				},
+			},
+			filter: filter.StringProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationClusterProp,
+				Op:    filter.StringEquals,
+				Value: "cluster-one",
+			},
+
+			expected: true,
+		},
+		{
+			name: "ClusterID StartsWith -> true",
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Cluster: "cluster-one",
+				},
+			},
+			filter: filter.StringProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationClusterProp,
+				Op:    filter.StringStartsWith,
+				Value: "cluster",
+			},
+
+			expected: true,
+		},
+		{
+			name: "ClusterID StartsWith -> false",
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Cluster: "k8s-one",
+				},
+			},
+			filter: filter.StringProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationClusterProp,
+				Op:    filter.StringStartsWith,
+				Value: "cluster",
+			},
+
+			expected: false,
+		},
+		{
+			name: "ClusterID empty StartsWith '' -> true",
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Cluster: "",
+				},
+			},
+			filter: filter.StringProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationClusterProp,
+				Op:    filter.StringStartsWith,
+				Value: "",
+			},
+
+			expected: true,
+		},
+		{
+			name: "ClusterID nonempty StartsWith '' -> true",
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Cluster: "abc",
+				},
+			},
+			filter: filter.StringProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationClusterProp,
+				Op:    filter.StringStartsWith,
+				Value: "",
+			},
+
+			expected: true,
+		},
+		{
+			name: "Node Equals -> true",
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Node: "node123",
+				},
+			},
+			filter: filter.StringProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationNodeProp,
+				Op:    filter.StringEquals,
+				Value: "node123",
+			},
+
+			expected: true,
+		},
+		{
+			name: "Namespace Equals Unallocated -> true",
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Namespace: "",
+				},
+			},
+			filter: filter.StringProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationNamespaceProp,
+				Op:    filter.StringEquals,
+				Value: kubecost.UnallocatedSuffix,
+			},
+
+			expected: true,
+		},
+		{
+			name: "ControllerKind Equals -> true",
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					ControllerKind: "deployment", // We generally store controller kinds as all lowercase
+				},
+			},
+			filter: filter.StringProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationControllerKindProp,
+				Op:    filter.StringEquals,
+				Value: "deployment",
+			},
+
+			expected: true,
+		},
+		{
+			name: "ControllerName Equals -> true",
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Controller: "kc-cost-analyzer",
+				},
+			},
+			filter: filter.StringProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationControllerProp,
+				Op:    filter.StringEquals,
+				Value: "kc-cost-analyzer",
+			},
+
+			expected: true,
+		},
+		{
+			name: "Pod (with UID) Equals -> true",
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Pod: "pod-123 UID-ABC",
+				},
+			},
+			filter: filter.StringProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationPodProp,
+				Op:    filter.StringEquals,
+				Value: "pod-123 UID-ABC",
+			},
+
+			expected: true,
+		},
+		{
+			name: "Container Equals -> true",
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Container: "cost-model",
+				},
+			},
+			filter: filter.StringProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationContainerProp,
+				Op:    filter.StringEquals,
+				Value: "cost-model",
+			},
+
+			expected: true,
+		},
+		{
+			name: `namespace unallocated -> true`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Namespace: "",
+				},
+			},
+			filter: filter.StringProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationNamespaceProp,
+				Op:    filter.StringEquals,
+				Value: kubecost.UnallocatedSuffix,
+			},
+
+			expected: true,
+		},
+	}
+
+	for _, c := range cases {
+		result := c.filter.Matches(c.a)
+
+		if result != c.expected {
+			t.Errorf("%s: expected %t, got %t", c.name, c.expected, result)
+		}
+	}
+}
+
+func Test_StringSlice_Matches(t *testing.T) {
+	cases := []struct {
+		name   string
+		a      *kubecost.Allocation
+		filter filter.Filter[*kubecost.Allocation]
+
+		expected bool
+	}{
+		{
+			name: `services contains -> true`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Services: []string{"serv1", "serv2"},
+				},
+			},
+			filter: filter.StringSliceProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationServiceProp,
+				Op:    filter.StringSliceContains,
+				Value: "serv2",
+			},
+
+			expected: true,
+		},
+		{
+			name: `services contains -> false`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Services: []string{"serv1", "serv2"},
+				},
+			},
+			filter: filter.StringSliceProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationServiceProp,
+				Op:    filter.StringSliceContains,
+				Value: "serv3",
+			},
+
+			expected: false,
+		},
+		{
+			name: `services contains unallocated -> false`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Services: []string{"serv1", "serv2"},
+				},
+			},
+			filter: filter.StringSliceProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationServiceProp,
+				Op:    filter.StringSliceContains,
+				Value: kubecost.UnallocatedSuffix,
+			},
+
+			expected: false,
+		},
+		{
+			name: `services contains unallocated -> true`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Services: []string{},
+				},
+			},
+			filter: filter.StringSliceProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationServiceProp,
+				Op:    filter.StringSliceContains,
+				Value: kubecost.UnallocatedSuffix,
+			},
+
+			expected: true,
+		},
+	}
+
+	for _, c := range cases {
+		result := c.filter.Matches(c.a)
+
+		if result != c.expected {
+			t.Errorf("%s: expected %t, got %t", c.name, c.expected, result)
+		}
+	}
+}
+
+func Test_StringMap_Matches(t *testing.T) {
+	cases := []struct {
+		name   string
+		a      *kubecost.Allocation
+		filter filter.Filter[*kubecost.Allocation]
+
+		expected bool
+	}{
+		{
+			name: `label[app]="foo" -> true`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Labels: map[string]string{
+						"app": "foo",
+					},
+				},
+			},
+			filter: filter.StringMapProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationLabelProp,
+				Op:    filter.StringMapEquals,
+				Key:   "app",
+				Value: "foo",
+			},
+
+			expected: true,
+		},
+		{
+			name: `label[app]="foo" -> different value -> false`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Labels: map[string]string{
+						"app": "bar",
+					},
+				},
+			},
+			filter: filter.StringMapProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationLabelProp,
+				Op:    filter.StringMapEquals,
+				Key:   "app",
+				Value: "foo",
+			},
+
+			expected: false,
+		},
+		{
+			name: `label[app]="foo" -> label missing -> false`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Labels: map[string]string{
+						"someotherlabel": "someothervalue",
+					},
+				},
+			},
+			filter: filter.StringMapProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationLabelProp,
+				Op:    filter.StringMapEquals,
+				Key:   "app",
+				Value: "foo",
+			},
+
+			expected: false,
+		},
+		{
+			name: `label[app]=Unallocated -> label missing -> true`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Labels: map[string]string{
+						"someotherlabel": "someothervalue",
+					},
+				},
+			},
+			filter: filter.StringMapProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationLabelProp,
+				Op:    filter.StringMapEquals,
+				Key:   "app",
+				Value: kubecost.UnallocatedSuffix,
+			},
+
+			expected: true,
+		},
+		{
+			name: `label[app]=Unallocated -> label present -> false`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Labels: map[string]string{
+						"app": "test",
+					},
+				},
+			},
+			filter: filter.StringMapProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationLabelProp,
+				Op:    filter.StringMapEquals,
+				Key:   "app",
+				Value: kubecost.UnallocatedSuffix,
+			},
+
+			expected: false,
+		},
+		{
+			name: `annotation[prom_modified_name]="testing123" -> true`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Annotations: map[string]string{
+						"prom_modified_name": "testing123",
+					},
+				},
+			},
+			filter: filter.StringMapProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationAnnotationProp,
+				Op:    filter.StringMapEquals,
+				Key:   "prom_modified_name",
+				Value: "testing123",
+			},
+
+			expected: true,
+		},
+		{
+			name: `annotation[app]="foo" -> different value -> false`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Annotations: map[string]string{
+						"app": "bar",
+					},
+				},
+			},
+			filter: filter.StringMapProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationAnnotationProp,
+				Op:    filter.StringMapEquals,
+				Key:   "app",
+				Value: "foo",
+			},
+
+			expected: false,
+		},
+		{
+			name: `annotation[app]="foo" -> annotation missing -> false`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Annotations: map[string]string{
+						"someotherannotation": "someothervalue",
+					},
+				},
+			},
+			filter: filter.StringMapProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationAnnotationProp,
+				Op:    filter.StringMapEquals,
+				Key:   "app",
+				Value: "foo",
+			},
+
+			expected: false,
+		},
+	}
+
+	for _, c := range cases {
+		result := c.filter.Matches(c.a)
+
+		if result != c.expected {
+			t.Errorf("%s: expected %t, got %t", c.name, c.expected, result)
+		}
+	}
+}
+
+func Test_Not_Matches(t *testing.T) {
+	cases := []struct {
+		name   string
+		a      *kubecost.Allocation
+		filter filter.Filter[*kubecost.Allocation]
+
+		expected bool
+	}{
+		{
+			name: "Namespace NotEquals -> false",
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Namespace: "kube-system",
+				},
+			},
+			filter: filter.Not[*kubecost.Allocation]{
+				Filter: filter.StringProperty[*kubecost.Allocation]{
+					Field: kubecost.AllocationNamespaceProp,
+					Op:    filter.StringEquals,
+					Value: "kube-system",
+				},
+			},
+
+			expected: false,
+		},
+		{
+			name: "Namespace NotEquals Unallocated -> true",
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Namespace: "kube-system",
+				},
+			},
+			filter: filter.Not[*kubecost.Allocation]{
+				Filter: filter.StringProperty[*kubecost.Allocation]{
+					Field: kubecost.AllocationNamespaceProp,
+					Op:    filter.StringEquals,
+					Value: kubecost.UnallocatedSuffix,
+				},
+			},
+			expected: true,
+		},
+		{
+			name: "Namespace NotEquals Unallocated -> false",
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Namespace: "",
+				},
+			},
+			filter: filter.Not[*kubecost.Allocation]{
+				Filter: filter.StringProperty[*kubecost.Allocation]{
+					Field: kubecost.AllocationNamespaceProp,
+					Op:    filter.StringEquals,
+					Value: kubecost.UnallocatedSuffix,
+				},
+			},
+
+			expected: false,
+		},
+
+		{
+			name: `label[app]!=Unallocated -> label missing -> false`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Labels: map[string]string{
+						"someotherlabel": "someothervalue",
+					},
+				},
+			},
+			filter: filter.Not[*kubecost.Allocation]{
+				Filter: filter.StringMapProperty[*kubecost.Allocation]{
+					Field: kubecost.AllocationLabelProp,
+					Op:    filter.StringMapEquals,
+					Key:   "app",
+					Value: kubecost.UnallocatedSuffix,
+				},
+			},
+			expected: false,
+		},
+		{
+			name: `label[app]!=Unallocated -> label present -> true`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Labels: map[string]string{
+						"app": "test",
+					},
+				},
+			},
+			filter: filter.Not[*kubecost.Allocation]{
+				Filter: filter.StringMapProperty[*kubecost.Allocation]{
+					Field: kubecost.AllocationLabelProp,
+					Op:    filter.StringMapEquals,
+					Key:   "app",
+					Value: kubecost.UnallocatedSuffix,
+				},
+			},
+			expected: true,
+		},
+		{
+			name: `label[app]!="foo" -> label missing -> true`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Labels: map[string]string{
+						"someotherlabel": "someothervalue",
+					},
+				},
+			},
+			filter: filter.Not[*kubecost.Allocation]{
+				Filter: filter.StringMapProperty[*kubecost.Allocation]{
+					Field: kubecost.AllocationLabelProp,
+					Op:    filter.StringMapEquals,
+					Key:   "app",
+					Value: "foo",
+				},
+			},
+
+			expected: true,
+		},
+		{
+			name: `annotation[prom_modified_name]="testing123" -> true`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Annotations: map[string]string{
+						"prom_modified_name": "testing123",
+					},
+				},
+			},
+			filter: filter.StringMapProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationAnnotationProp,
+				Op:    filter.StringMapEquals,
+				Key:   "prom_modified_name",
+				Value: "testing123",
+			},
+
+			expected: true,
+		},
+		{
+			name: `annotation[app]="foo" -> different value -> false`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Annotations: map[string]string{
+						"app": "bar",
+					},
+				},
+			},
+			filter: filter.StringMapProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationAnnotationProp,
+				Op:    filter.StringMapEquals,
+				Key:   "app",
+				Value: "foo",
+			},
+
+			expected: false,
+		},
+		{
+			name: `annotation[app]="foo" -> annotation missing -> false`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Annotations: map[string]string{
+						"someotherannotation": "someothervalue",
+					},
+				},
+			},
+			filter: filter.StringMapProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationAnnotationProp,
+				Op:    filter.StringMapEquals,
+				Key:   "app",
+				Value: "foo",
+			},
+
+			expected: false,
+		},
+		{
+			name: `annotation[app]!="foo" -> annotation missing -> true`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Annotations: map[string]string{
+						"someotherannotation": "someothervalue",
+					},
+				},
+			},
+			filter: filter.Not[*kubecost.Allocation]{
+				Filter: filter.StringMapProperty[*kubecost.Allocation]{
+					Field: kubecost.AllocationAnnotationProp,
+					Op:    filter.StringMapEquals,
+					Key:   "app",
+					Value: "foo",
+				},
+			},
+
+			expected: true,
+		},
+		{
+			name: `namespace unallocated -> true`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Namespace: "",
+				},
+			},
+			filter: filter.StringProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationNamespaceProp,
+				Op:    filter.StringEquals,
+				Value: kubecost.UnallocatedSuffix,
+			},
+
+			expected: true,
+		},
+		{
+			name: `services contains -> true`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Services: []string{"serv1", "serv2"},
+				},
+			},
+			filter: filter.StringSliceProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationServiceProp,
+				Op:    filter.StringSliceContains,
+				Value: "serv2",
+			},
+
+			expected: true,
+		},
+		{
+			name: `services contains -> false`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Services: []string{"serv1", "serv2"},
+				},
+			},
+			filter: filter.StringSliceProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationServiceProp,
+				Op:    filter.StringSliceContains,
+				Value: "serv3",
+			},
+
+			expected: false,
+		},
+		{
+			name: `services notcontains -> true`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Services: []string{"serv1", "serv2"},
+				},
+			},
+			filter: filter.Not[*kubecost.Allocation]{
+				Filter: filter.StringSliceProperty[*kubecost.Allocation]{
+					Field: kubecost.AllocationServiceProp,
+					Op:    filter.StringSliceContains,
+					Value: "serv3",
+				},
+			},
+			expected: true,
+		},
+		{
+			name: `services notcontains -> false`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Services: []string{"serv1", "serv2"},
+				},
+			},
+			filter: filter.Not[*kubecost.Allocation]{
+				Filter: filter.StringSliceProperty[*kubecost.Allocation]{
+					Field: kubecost.AllocationServiceProp,
+					Op:    filter.StringSliceContains,
+					Value: "serv2",
+				},
+			},
+
+			expected: false,
+		},
+		{
+			name: `services notcontains unallocated -> true`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Services: []string{"serv1", "serv2"},
+				},
+			},
+			filter: filter.Not[*kubecost.Allocation]{
+				Filter: filter.StringSliceProperty[*kubecost.Allocation]{
+					Field: kubecost.AllocationServiceProp,
+					Op:    filter.StringSliceContains,
+					Value: kubecost.UnallocatedSuffix,
+				},
+			},
+
+			expected: true,
+		},
+		{
+			name: `services notcontains unallocated -> false`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Services: []string{},
+				},
+			},
+			filter: filter.Not[*kubecost.Allocation]{
+				Filter: filter.StringSliceProperty[*kubecost.Allocation]{
+					Field: kubecost.AllocationServiceProp,
+					Op:    filter.StringSliceContains,
+					Value: kubecost.UnallocatedSuffix,
+				},
+			},
+
+			expected: false,
+		},
+		{
+			name: `services containsprefix -> true`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Services: []string{"serv1", "serv2"},
+				},
+			},
+			filter: filter.StringSliceProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationServiceProp,
+				Op:    filter.StringSliceContainsPrefix,
+				Value: "serv",
+			},
+
+			expected: true,
+		},
+		{
+			name: `services containsprefix -> false`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Services: []string{"foo", "bar"},
+				},
+			},
+			filter: filter.StringSliceProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationServiceProp,
+				Op:    filter.StringSliceContainsPrefix,
+				Value: "serv",
+			},
+
+			expected: false,
+		},
+		{
+			name: `services contains unallocated -> false`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Services: []string{"serv1", "serv2"},
+				},
+			},
+			filter: filter.StringSliceProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationServiceProp,
+				Op:    filter.StringSliceContains,
+				Value: kubecost.UnallocatedSuffix,
+			},
+
+			expected: false,
+		},
+		{
+			name: `services contains unallocated -> true`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Services: []string{},
+				},
+			},
+			filter: filter.StringSliceProperty[*kubecost.Allocation]{
+				Field: kubecost.AllocationServiceProp,
+				Op:    filter.StringSliceContains,
+				Value: kubecost.UnallocatedSuffix,
+			},
+
+			expected: true,
+		},
+	}
+
+	for _, c := range cases {
+		result := c.filter.Matches(c.a)
+
+		if result != c.expected {
+			t.Errorf("%s: expected %t, got %t", c.name, c.expected, result)
+		}
+	}
+}
+
+func Test_None_Matches(t *testing.T) {
+	cases := []struct {
+		name string
+		a    *kubecost.Allocation
+	}{
+		{
+			name: "nil",
+			a:    nil,
+		},
+		{
+			name: "nil properties",
+			a: &kubecost.Allocation{
+				Properties: nil,
+			},
+		},
+		{
+			name: "empty properties",
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{},
+			},
+		},
+		{
+			name: "ClusterID",
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Cluster: "cluster-one",
+				},
+			},
+		},
+		{
+			name: "Node",
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Node: "node123",
+				},
+			},
+		},
+		{
+			name: "Namespace",
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Namespace: "kube-system",
+				},
+			},
+		},
+		{
+			name: "ControllerKind",
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					ControllerKind: "deployment", // We generally store controller kinds as all lowercase
+				},
+			},
+		},
+		{
+			name: "ControllerName",
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Controller: "kc-cost-analyzer",
+				},
+			},
+		},
+		{
+			name: "Pod",
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Pod: "pod-123 UID-ABC",
+				},
+			},
+		},
+		{
+			name: "Container",
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Container: "cost-model",
+				},
+			},
+		},
+		{
+			name: `label`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Labels: map[string]string{
+						"app": "foo",
+					},
+				},
+			},
+		},
+		{
+			name: `annotation`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Annotations: map[string]string{
+						"prom_modified_name": "testing123",
+					},
+				},
+			},
+		},
+		{
+			name: `services`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Services: []string{"serv1", "serv2"},
+				},
+			},
+		},
+	}
+
+	for _, c := range cases {
+		result := filter.AllCut[*kubecost.Allocation]{}.Matches(c.a)
+
+		if result {
+			t.Errorf("%s: should have been rejected", c.name)
+		}
+	}
+}
+
+func Test_And_Matches(t *testing.T) {
+	cases := []struct {
+		name   string
+		a      *kubecost.Allocation
+		filter filter.Filter[*kubecost.Allocation]
+
+		expected bool
+	}{
+		{
+			name: `label[app]="foo" and namespace="kubecost" -> both true`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Namespace: "kubecost",
+					Labels: map[string]string{
+						"app": "foo",
+					},
+				},
+			},
+			filter: filter.And[*kubecost.Allocation]{[]filter.Filter[*kubecost.Allocation]{
+				filter.StringMapProperty[*kubecost.Allocation]{
+					Field: kubecost.AllocationLabelProp,
+					Op:    filter.StringMapEquals,
+					Key:   "app",
+					Value: "foo",
+				},
+				filter.StringProperty[*kubecost.Allocation]{
+					Field: kubecost.AllocationNamespaceProp,
+					Op:    filter.StringEquals,
+					Value: "kubecost",
+				},
+			}},
+			expected: true,
+		},
+		{
+			name: `label[app]="foo" and namespace="kubecost" -> first true`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Namespace: "kube-system",
+					Labels: map[string]string{
+						"app": "foo",
+					},
+				},
+			},
+			filter: filter.And[*kubecost.Allocation]{[]filter.Filter[*kubecost.Allocation]{
+				filter.StringMapProperty[*kubecost.Allocation]{
+					Field: kubecost.AllocationLabelProp,
+					Op:    filter.StringMapEquals,
+					Key:   "app",
+					Value: "foo",
+				},
+				filter.StringProperty[*kubecost.Allocation]{
+					Field: kubecost.AllocationNamespaceProp,
+					Op:    filter.StringEquals,
+					Value: "kubecost",
+				},
+			}},
+			expected: false,
+		},
+		{
+			name: `label[app]="foo" and namespace="kubecost" -> second true`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Namespace: "kubecost",
+					Labels: map[string]string{
+						"app": "bar",
+					},
+				},
+			},
+			filter: filter.And[*kubecost.Allocation]{[]filter.Filter[*kubecost.Allocation]{
+				filter.StringMapProperty[*kubecost.Allocation]{
+					Field: kubecost.AllocationLabelProp,
+					Op:    filter.StringMapEquals,
+					Key:   "app",
+					Value: "foo",
+				},
+				filter.StringProperty[*kubecost.Allocation]{
+					Field: kubecost.AllocationNamespaceProp,
+					Op:    filter.StringEquals,
+					Value: "kubecost",
+				},
+			}},
+			expected: false,
+		},
+		{
+			name: `label[app]="foo" and namespace="kubecost" -> both false`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Namespace: "kube-system",
+					Labels: map[string]string{
+						"app": "bar",
+					},
+				},
+			},
+			filter: filter.And[*kubecost.Allocation]{[]filter.Filter[*kubecost.Allocation]{
+				filter.StringMapProperty[*kubecost.Allocation]{
+					Field: kubecost.AllocationLabelProp,
+					Op:    filter.StringMapEquals,
+					Key:   "app",
+					Value: "foo",
+				},
+				filter.StringProperty[*kubecost.Allocation]{
+					Field: kubecost.AllocationNamespaceProp,
+					Op:    filter.StringEquals,
+					Value: "kubecost",
+				},
+			}},
+			expected: false,
+		},
+		{
+			name: `(and none) matches nothing`,
+			a: &kubecost.Allocation{
+				Properties: &kubecost.AllocationProperties{
+					Namespace: "kube-system",
+					Labels: map[string]string{
+						"app": "bar",
+					},
+				},
+			},
+			filter: filter.And[*kubecost.Allocation]{[]filter.Filter[*kubecost.Allocation]{
+				filter.AllCut[*kubecost.Allocation]{},
+			}},
+			expected: false,
+		},
+	}
+
+	for _, c := range cases {
+		result := c.filter.Matches(c.a)
+
+		if result != c.expected {
+			t.Errorf("%s: expected %t, got %t", c.name, c.expected, result)
+		}
+	}
+}

+ 17 - 0
pkg/filter/not.go

@@ -0,0 +1,17 @@
+package filter
+
+import "fmt"
+
+// Not negates any filter contained within it
+type Not[T any] struct {
+	Filter Filter[T]
+}
+
+func (n Not[T]) String() string {
+	return fmt.Sprintf("(not %s)", n.Filter.String())
+}
+
+// Matches inverts the result of the child filter
+func (n Not[T]) Matches(that T) bool {
+	return !n.Filter.Matches(that)
+}

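The negation wrapper above composes with any `Filter` implementation. As a minimal, self-contained sketch of that composition — the `miniFilter` interface and `hasPrefix` type below are illustrative stand-ins, not part of this commit:

```go
package main

import (
	"fmt"
	"strings"
)

// miniFilter is an illustrative stand-in for filter.Filter[T].
type miniFilter[T any] interface {
	Matches(T) bool
}

// hasPrefix matches strings with a given prefix.
type hasPrefix struct{ prefix string }

func (h hasPrefix) Matches(s string) bool { return strings.HasPrefix(s, h.prefix) }

// not mirrors filter.Not: it inverts the wrapped filter's result.
type not[T any] struct{ inner miniFilter[T] }

func (n not[T]) Matches(v T) bool { return !n.inner.Matches(v) }

func main() {
	f := not[string]{inner: hasPrefix{prefix: "kube-"}}
	fmt.Println(f.Matches("kube-system")) // false: prefix matches, so negation fails
	fmt.Println(f.Matches("default"))     // true: prefix does not match
}
```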
+ 36 - 0
pkg/filter/or.go

@@ -0,0 +1,36 @@
+package filter
+
+import (
+	"fmt"
+)
+
+// Or is a set of filters that should be evaluated as a logical
+// OR.
+type Or[T any] struct {
+	Filters []Filter[T]
+}
+
+func (o Or[T]) String() string {
+	s := "(or"
+	for _, f := range o.Filters {
+		s += fmt.Sprintf(" %s", f)
+	}
+
+	s += ")"
+	return s
+}
+
+func (o Or[T]) Matches(that T) bool {
+	filters := o.Filters
+	if len(filters) == 0 {
+		return true
+	}
+
+	for _, filter := range filters {
+		if filter.Matches(that) {
+			return true
+		}
+	}
+
+	return false
+}

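Note that an `Or` with zero child filters matches everything, rather than the usual mathematical convention where an empty disjunction is false. A self-contained sketch of that behavior, using an illustrative `equals` filter that is not part of this commit:

```go
package main

import "fmt"

// miniFilter is an illustrative stand-in for filter.Filter[T].
type miniFilter[T any] interface{ Matches(T) bool }

// equals matches a string exactly.
type equals struct{ want string }

func (e equals) Matches(s string) bool { return s == e.want }

// or mirrors filter.Or: true if any child matches, and — matching the code
// above — an empty Or matches everything.
type or[T any] struct{ filters []miniFilter[T] }

func (o or[T]) Matches(v T) bool {
	if len(o.filters) == 0 {
		return true
	}
	for _, f := range o.filters {
		if f.Matches(v) {
			return true
		}
	}
	return false
}

func main() {
	f := or[string]{filters: []miniFilter[string]{
		equals{want: "kubecost"},
		equals{want: "kube-system"},
	}}
	fmt.Println(f.Matches("kubecost"))    // true: first child matches
	fmt.Println(f.Matches("default"))     // false: no child matches
	fmt.Println(or[string]{}.Matches("")) // true: empty Or matches everything
}
```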
+ 83 - 0
pkg/filter/stringmapproperty.go

@@ -0,0 +1,83 @@
+package filter
+
+import (
+	"fmt"
+	"strings"
+
+	"github.com/opencost/opencost/pkg/log"
+)
+
+const unallocatedSuffix = "__unallocated__"
+
+type StringMapPropertied interface {
+	StringMapProperty(string) (map[string]string, error)
+}
+
+// StringMapOperation is an enum that represents operations that can be performed
+// when filtering (equality, inequality, etc.)
+type StringMapOperation string
+
+const (
+	// StringMapHasKey passes if the map has the provided key
+	StringMapHasKey StringMapOperation = "stringmapcontains"
+
+	// StringMapStartsWith passes if the value at the given key starts with the provided prefix
+	StringMapStartsWith = "stringmapstartswith"
+
+	// StringMapEquals passes when the value at the given key equals the provided value
+	StringMapEquals = "stringmapequals"
+)
+
+// StringMapProperty is the lowest-level type of filter. It represents
+// a filter operation (equality, inequality, etc.) on a property that contains a string map
+type StringMapProperty[T StringMapPropertied] struct {
+	Field string
+	Op    StringMapOperation
+	Key   string
+	Value string
+}
+
+func (smp StringMapProperty[T]) String() string {
+	return fmt.Sprintf(`(%s %s[%s] "%s")`, smp.Op, smp.Field, smp.Key, smp.Value)
+}
+
+func (smp StringMapProperty[T]) Matches(that T) bool {
+
+	thatMap, err := that.StringMapProperty(smp.Field)
+	if err != nil {
+		log.Errorf("Filter: StringMapProperty: could not retrieve field %s: %s", smp.Field, err.Error())
+		return false
+	}
+
+	valueToCompare, keyIsPresent := thatMap[smp.Key]
+
+	switch smp.Op {
+	case StringMapHasKey:
+		return keyIsPresent
+	case StringMapEquals:
+		// namespace:"__unallocated__" should match a.Properties.Namespace = ""
+		// label[app]:"__unallocated__" should match _, ok := Labels[app]; !ok
+		if !keyIsPresent || valueToCompare == "" {
+			return smp.Value == unallocatedSuffix
+		}
+
+		if valueToCompare == smp.Value {
+			return true
+		}
+
+	case StringMapStartsWith:
+		if !keyIsPresent {
+			return false
+		}
+
+		// We don't need special __unallocated__ logic here because a query
+		// asking for "__unallocated__" won't have a wildcard and unallocated
+		// properties are the empty string.
+
+		return strings.HasPrefix(valueToCompare, smp.Value)
+	default:
+		log.Errorf("Filter: StringMapProperty: Unhandled filter op. This is a filter implementation error and requires immediate patching. Op: %s", smp.Op)
+		return false
+	}
+
+	return false
+}

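The map filter's `__unallocated__` handling is the subtle part: for equality, a missing key or empty value matches only the sentinel. A self-contained sketch of the three operations on a plain map (the `matchMap` helper is illustrative, not part of the commit):

```go
package main

import (
	"fmt"
	"strings"
)

const unallocated = "__unallocated__"

// matchMap sketches StringMapProperty.Matches on a plain map: "haskey" tests
// key presence, "equals" treats a missing or empty value as __unallocated__,
// and "startswith" does a prefix match on the value.
func matchMap(m map[string]string, op, key, value string) bool {
	got, ok := m[key]
	switch op {
	case "haskey":
		return ok
	case "equals":
		if !ok || got == "" {
			return value == unallocated
		}
		return got == value
	case "startswith":
		return ok && strings.HasPrefix(got, value)
	}
	return false
}

func main() {
	labels := map[string]string{"app": "cost-analyzer"}
	fmt.Println(matchMap(labels, "equals", "app", "cost-analyzer")) // true
	fmt.Println(matchMap(labels, "startswith", "app", "cost"))      // true
	fmt.Println(matchMap(labels, "equals", "team", unallocated))    // true: key absent
}
```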
+ 83 - 0
pkg/filter/stringproperty.go

@@ -0,0 +1,83 @@
+package filter
+
+import (
+	"fmt"
+	"strings"
+
+	"github.com/opencost/opencost/pkg/log"
+)
+
+// StringPropertied is used to validate the name of a property field and return its value
+type StringPropertied interface {
+	// StringProperty acts as a validator and getter for a struct's string properties
+	StringProperty(string) (string, error)
+}
+
+// StringOperation is an enum that represents operations that can be performed
+// when filtering (equality, inequality, etc.)
+type StringOperation string
+
+// If you add a FilterOp, MAKE SURE TO UPDATE ALL FILTER IMPLEMENTATIONS! Go
+// does not enforce exhaustive pattern matching on "enum" types.
+const (
+	// StringEquals is the equality operator
+	// "kube-system" StringEquals "kube-system" = true
+	// "kube-syste" StringEquals "kube-system" = false
+	StringEquals StringOperation = "stringequals"
+
+	// StringStartsWith matches strings with the given prefix.
+	// "kube-system" StringStartsWith "kube" = true
+	//
+	// When comparing with a field represented by an array/slice, this is like
+	// applying the prefix match to every element of the slice.
+	StringStartsWith = "stringstartswith"
+)
+
+// StringProperty is the lowest-level type of filter. It represents
+// a filter operation (equality, inequality, etc.) on a field with a string value (namespace,
+// node, pod, etc.).
+type StringProperty[T StringPropertied] struct {
+	Field string
+	Op    StringOperation
+
+	// Value is the operand used by every operation. For example, a filter of
+	// 'namespace:"kubecost"' has Value="kubecost".
+	Value string
+}
+
+func (sp StringProperty[T]) String() string {
+	return fmt.Sprintf(`(%s %s "%s")`, sp.Op, sp.Field, sp.Value)
+}
+
+func (sp StringProperty[T]) Matches(that T) bool {
+
+	thatString, err := that.StringProperty(sp.Field)
+	if err != nil {
+		log.Errorf("Filter: StringProperty: could not retrieve field %s: %s", sp.Field, err.Error())
+		return false
+	}
+
+	switch sp.Op {
+	case StringEquals:
+		// namespace:"__unallocated__" should match a.Properties.Namespace = ""
+		if thatString == "" {
+			return sp.Value == unallocatedSuffix
+		}
+
+		if thatString == sp.Value {
+			return true
+		}
+	case StringStartsWith:
+
+		// We don't need special __unallocated__ logic here because a query
+		// asking for "__unallocated__" won't have a wildcard and unallocated
+		// properties are the empty string.
+
+		return strings.HasPrefix(thatString, sp.Value)
+	default:
+		log.Errorf("Filter: StringProperty: Unhandled filter op. This is a filter implementation error and requires immediate patching. Op: %s", sp.Op)
+		return false
+	}
+
+	return false
+}

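One consequence of the equality rule above is worth spelling out: an empty property value matches only the `__unallocated__` sentinel, never the literal empty string. A self-contained sketch (the `matchString` helper is illustrative, not part of the commit):

```go
package main

import "fmt"

const unallocated = "__unallocated__"

// matchString sketches StringProperty's equality rule: an empty property
// value only ever matches the __unallocated__ sentinel, never "" itself.
func matchString(got, want string) bool {
	if got == "" {
		return want == unallocated
	}
	return got == want
}

func main() {
	fmt.Println(matchString("kubecost", "kubecost")) // true
	fmt.Println(matchString("", unallocated))        // true
	fmt.Println(matchString("", ""))                 // false: "" never matches literally
}
```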
+ 80 - 0
pkg/filter/stringsliceproperty.go

@@ -0,0 +1,80 @@
+package filter
+
+import (
+	"fmt"
+	"strings"
+
+	"github.com/opencost/opencost/pkg/log"
+)
+
+type StringSlicePropertied interface {
+	StringSliceProperty(string) ([]string, error)
+}
+
+// StringSliceOperation is an enum that represents operations that can be performed
+// when filtering (equality, inequality, etc.)
+type StringSliceOperation string
+
+const (
+	// StringSliceContains is an array/slice membership operator
+	// ["a", "b", "c"] StringSliceContains "a" = true
+	StringSliceContains StringSliceOperation = "stringslicecontains"
+
+	// StringSliceContainsPrefix is like StringSliceContains, but using a prefix
+	// match instead of equality.
+	// ["kube-system", "abc123"] StringSliceContainsPrefix "kube" = true
+	StringSliceContainsPrefix = "stringslicecontainsprefix"
+)
+
+// StringSliceProperty is the lowest-level type of filter. It represents
+// a filter operation (equality, inequality, etc.) on a property that contains a string slice
+type StringSliceProperty[T StringSlicePropertied] struct {
+	Field string
+	Op    StringSliceOperation
+
+	Value string
+}
+
+func (ssp StringSliceProperty[T]) String() string {
+	return fmt.Sprintf(`(%s %s "%s")`, ssp.Op, ssp.Field, ssp.Value)
+}
+
+func (ssp StringSliceProperty[T]) Matches(that T) bool {
+
+	thatSlice, err := that.StringSliceProperty(ssp.Field)
+	if err != nil {
+		log.Errorf("Filter: StringSliceProperty: could not retrieve field %s: %s", ssp.Field, err.Error())
+		return false
+	}
+
+	switch ssp.Op {
+
+	case StringSliceContains:
+		if len(thatSlice) == 0 {
+			return ssp.Value == unallocatedSuffix
+		}
+
+		for _, s := range thatSlice {
+			if s == ssp.Value {
+				return true
+			}
+		}
+	case StringSliceContainsPrefix:
+		// We don't need special __unallocated__ logic here because a query
+		// asking for "__unallocated__" won't have a wildcard and unallocated
+		// properties are the empty string.
+
+		for _, s := range thatSlice {
+			if strings.HasPrefix(s, ssp.Value) {
+				return true
+			}
+		}
+
+		return false
+	default:
+		log.Errorf("Filter: StringSliceProperty: Unhandled filter op. This is a filter implementation error and requires immediate patching. Op: %s", ssp.Op)
+		return false
+	}
+
+	return false
+}

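For slice properties, the `__unallocated__` sentinel applies only to the membership operation: an empty slice matches it, while the prefix operation has no such special case. A self-contained sketch (the helpers are illustrative, not part of the commit):

```go
package main

import (
	"fmt"
	"strings"
)

const unallocated = "__unallocated__"

// containsValue sketches StringSliceContains: an empty slice matches only
// the __unallocated__ sentinel; otherwise membership is exact equality.
func containsValue(slice []string, value string) bool {
	if len(slice) == 0 {
		return value == unallocated
	}
	for _, s := range slice {
		if s == value {
			return true
		}
	}
	return false
}

// containsPrefix sketches StringSliceContainsPrefix: true if any element
// starts with the given prefix (no unallocated special case).
func containsPrefix(slice []string, prefix string) bool {
	for _, s := range slice {
		if strings.HasPrefix(s, prefix) {
			return true
		}
	}
	return false
}

func main() {
	services := []string{"serv1", "serv2"}
	fmt.Println(containsValue(services, "serv2")) // true
	fmt.Println(containsValue(nil, unallocated))  // true: empty slice is unallocated
	fmt.Println(containsPrefix(services, "serv")) // true
}
```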
+ 70 - 0
pkg/filter/util/cloudcostaggregate.go

@@ -0,0 +1,70 @@
+package util
+
+import (
+	"strings"
+
+	"github.com/opencost/opencost/pkg/filter"
+	"github.com/opencost/opencost/pkg/kubecost"
+	"github.com/opencost/opencost/pkg/util/mapper"
+)
+
+func parseWildcardEnd(rawFilterValue string) (string, bool) {
+	return strings.TrimSuffix(rawFilterValue, "*"), strings.HasSuffix(rawFilterValue, "*")
+}
+
+func CloudCostAggregateFilterFromParams(pmr mapper.PrimitiveMapReader) filter.Filter[*kubecost.CloudCostAggregate] {
+	filter := filter.And[*kubecost.CloudCostAggregate]{
+		Filters: []filter.Filter[*kubecost.CloudCostAggregate]{},
+	}
+
+	if raw := pmr.GetList("filterAccounts", ","); len(raw) > 0 {
+		filter.Filters = append(filter.Filters, filterV1SingleValueFromList(raw, kubecost.CloudCostAccountProp))
+	}
+
+	if raw := pmr.GetList("filterProjects", ","); len(raw) > 0 {
+		filter.Filters = append(filter.Filters, filterV1SingleValueFromList(raw, kubecost.CloudCostProjectProp))
+	}
+
+	if raw := pmr.GetList("filterProviders", ","); len(raw) > 0 {
+		filter.Filters = append(filter.Filters, filterV1SingleValueFromList(raw, kubecost.CloudCostProviderProp))
+	}
+
+	if raw := pmr.GetList("filterServices", ","); len(raw) > 0 {
+		filter.Filters = append(filter.Filters, filterV1SingleValueFromList(raw, kubecost.CloudCostServiceProp))
+	}
+
+	if raw := pmr.GetList("filterLabelValues", ","); len(raw) > 0 {
+		filter.Filters = append(filter.Filters, filterV1SingleValueFromList(raw, kubecost.CloudCostLabelProp))
+	}
+
+	if len(filter.Filters) == 0 {
+		return nil
+	}
+
+	return filter
+}
+
+func filterV1SingleValueFromList(rawFilterValues []string, field string) filter.Filter[*kubecost.CloudCostAggregate] {
+	result := filter.Or[*kubecost.CloudCostAggregate]{
+		Filters: []filter.Filter[*kubecost.CloudCostAggregate]{},
+	}
+
+	for _, filterValue := range rawFilterValues {
+		filterValue = strings.TrimSpace(filterValue)
+		filterValue, wildcard := parseWildcardEnd(filterValue)
+
+		subFilter := filter.StringProperty[*kubecost.CloudCostAggregate]{
+			Field: field,
+			Op:    filter.StringEquals,
+			Value: filterValue,
+		}
+
+		if wildcard {
+			subFilter.Op = filter.StringStartsWith
+		}
+
+		result.Filters = append(result.Filters, subFilter)
+	}
+
+	return result
+}

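The wildcard convention here is end-of-string only: a trailing `*` switches a value from an equality filter to a prefix filter. A self-contained sketch of the helper's behavior:

```go
package main

import (
	"fmt"
	"strings"
)

// parseWildcardEnd mirrors the helper above: it strips a trailing "*" and
// reports whether one was present, which callers use to decide between an
// equality filter and a prefix filter.
func parseWildcardEnd(raw string) (string, bool) {
	return strings.TrimSuffix(raw, "*"), strings.HasSuffix(raw, "*")
}

func main() {
	v, wild := parseWildcardEnd("kube*")
	fmt.Println(v, wild) // kube true
	v, wild = parseWildcardEnd("kubecost")
	fmt.Println(v, wild) // kubecost false
}
```

A leading or embedded `*` is not treated as a wildcard by this helper; only the suffix is inspected.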
+ 40 - 0
pkg/filter/window.go

@@ -0,0 +1,40 @@
+package filter
+
+//
+//import (
+//	"fmt"
+//	"github.com/opencost/opencost/pkg/kubecost"
+//	"github.com/opencost/opencost/pkg/log"
+//)
+//
+//type Windowed interface {
+//	GetWindow() kubecost.Window
+//}
+//
+//// WindowOperation are operations that can be performed on types that have windows
+//type WindowOperation string
+//
+//const (
+//	WindowContains WindowOperation = "windowcontains"
+//)
+//
+//// WindowCondition is a filter can be used on any type that has a window and implements GetWindow()
+//type WindowCondition[T Windowed] struct {
+//	Window kubecost.Window
+//	Op     WindowOperation
+//}
+//
+//func (wc WindowCondition[T]) String() string {
+//	return fmt.Sprintf(`(%s "%s")`, wc.Op, wc.Window.String())
+//}
+//
+//func (wc WindowCondition[T]) Matches(that T) bool {
+//	thatWindow := that.GetWindow()
+//	switch wc.Op {
+//	case WindowContains:
+//		return wc.Window.ContainsWindow(thatWindow)
+//	default:
+//		log.Errorf("Filter: Window: Unhandled filter operation. This is a filter implementation error and requires immediate patching. Op: %s", wc.Op)
+//		return false
+//	}
+//}

+ 112 - 0
pkg/filter/window_test.go

@@ -0,0 +1,112 @@
+package filter_test
+
+// import (
+// 	"github.com/opencost/opencost/pkg/kubecost"
+// 	"testing"
+// 	"time"
+// )
+
+// type windowedImpl struct {
+// 	kubecost.Window
+// }
+
+// func (w *windowedImpl) GetWindow() kubecost.Window {
+// 	return w.Window
+// }
+
+// func newWindowedImpl(start, end *time.Time) *windowedImpl {
+// 	return &windowedImpl{kubecost.NewWindow(start, end)}
+// }
+
+// func Test_WindowContains_Matches(t *testing.T) {
+// 	noon := time.Date(2022, 9, 29, 12, 0, 0, 0, time.UTC)
+// 	one := noon.Add(time.Hour)
+// 	two := one.Add(time.Hour)
+// 	three := two.Add(time.Hour)
+// 	cases := map[string]struct {
+// 		windowed *windowedImpl
+// 		filter   Filter[*windowedImpl]
+// 		expected bool
+// 	}{
+// 		"fully contains": {
+// 			windowed: newWindowedImpl(&one, &two),
+// 			filter: WindowCondition[*windowedImpl]{
+// 				Window: kubecost.NewWindow(&noon, &three),
+// 				Op:     WindowContains,
+// 			},
+
+// 			expected: true,
+// 		},
+// 		"window matches": {
+// 			windowed: newWindowedImpl(&one, &two),
+// 			filter: WindowCondition[*windowedImpl]{
+// 				Window: kubecost.NewWindow(&one, &two),
+// 				Op:     WindowContains,
+// 			},
+
+// 			expected: true,
+// 		},
+// 		"contains start": {
+// 			windowed: newWindowedImpl(&one, &three),
+// 			filter: WindowCondition[*windowedImpl]{
+// 				Window: kubecost.NewWindow(&noon, &two),
+// 				Op:     WindowContains,
+// 			},
+
+// 			expected: false,
+// 		},
+// 		"contains end": {
+// 			windowed: newWindowedImpl(&noon, &two),
+// 			filter: WindowCondition[*windowedImpl]{
+// 				Window: kubecost.NewWindow(&one, &three),
+// 				Op:     WindowContains,
+// 			},
+
+// 			expected: false,
+// 		},
+// 		"window start = filter end": {
+// 			windowed: newWindowedImpl(&one, &two),
+// 			filter: WindowCondition[*windowedImpl]{
+// 				Window: kubecost.NewWindow(&noon, &one),
+// 				Op:     WindowContains,
+// 			},
+
+// 			expected: false,
+// 		},
+// 		"window end = filter start": {
+// 			windowed: newWindowedImpl(&noon, &one),
+// 			filter: WindowCondition[*windowedImpl]{
+// 				Window: kubecost.NewWindow(&one, &two),
+// 				Op:     WindowContains,
+// 			},
+
+// 			expected: false,
+// 		},
+// 		"window before": {
+// 			windowed: newWindowedImpl(&noon, &one),
+// 			filter: WindowCondition[*windowedImpl]{
+// 				Window: kubecost.NewWindow(&two, &three),
+// 				Op:     WindowContains,
+// 			},
+
+// 			expected: false,
+// 		},
+// 		"window after": {
+// 			windowed: newWindowedImpl(&two, &three),
+// 			filter: WindowCondition[*windowedImpl]{
+// 				Window: kubecost.NewWindow(&noon, &one),
+// 				Op:     WindowContains,
+// 			},
+
+// 			expected: false,
+// 		},
+// 	}
+
+// 	for name, c := range cases {
+// 		result := c.filter.Matches(c.windowed)
+
+// 		if result != c.expected {
+// 			t.Errorf("%s: expected %t, got %t", name, c.expected, result)
+// 		}
+// 	}
+// }

+ 82 - 1
pkg/kubecost/allocation.go

@@ -237,6 +237,11 @@ func (pva *PVAllocation) Equal(that *PVAllocation) bool {
 		util.IsApproximately(pva.Cost, that.Cost)
 }
 
+// GetWindow returns the window of the struct
+func (a *Allocation) GetWindow() Window {
+	return a.Window
+}
+
 // AllocationMatchFunc is a function that can be used to match Allocations by
 // returning true for any given Allocation if a condition is met.
 type AllocationMatchFunc func(*Allocation) bool
@@ -1628,6 +1633,82 @@ func (a *Allocation) generateKey(aggregateBy []string, labelConfig *LabelConfig)
 	return a.Properties.GenerateKey(aggregateBy, labelConfig)
 }
 
+func (a *Allocation) StringProperty(property string) (string, error) {
+	switch property {
+	case AllocationClusterProp:
+		if a.Properties == nil {
+			return "", nil
+		}
+		return a.Properties.Cluster, nil
+	case AllocationNodeProp:
+		if a.Properties == nil {
+			return "", nil
+		}
+		return a.Properties.Node, nil
+	case AllocationContainerProp:
+		if a.Properties == nil {
+			return "", nil
+		}
+		return a.Properties.Container, nil
+	case AllocationControllerProp:
+		if a.Properties == nil {
+			return "", nil
+		}
+		return a.Properties.Controller, nil
+	case AllocationControllerKindProp:
+		if a.Properties == nil {
+			return "", nil
+		}
+		return a.Properties.ControllerKind, nil
+	case AllocationNamespaceProp:
+		if a.Properties == nil {
+			return "", nil
+		}
+		return a.Properties.Namespace, nil
+	case AllocationPodProp:
+		if a.Properties == nil {
+			return "", nil
+		}
+		return a.Properties.Pod, nil
+	case AllocationProviderIDProp:
+		if a.Properties == nil {
+			return "", nil
+		}
+		return a.Properties.ProviderID, nil
+	default:
+		return "", fmt.Errorf("Allocation: StringProperty: invalid property name: %s", property)
+	}
+}
+
+func (a *Allocation) StringSliceProperty(property string) ([]string, error) {
+	switch property {
+	case AllocationServiceProp:
+		if a.Properties == nil {
+			return nil, nil
+		}
+		return a.Properties.Services, nil
+	default:
+		return nil, fmt.Errorf("Allocation: StringSliceProperty: invalid property name: %s", property)
+	}
+}
+
+func (a *Allocation) StringMapProperty(property string) (map[string]string, error) {
+	switch property {
+	case AllocationLabelProp:
+		if a.Properties == nil {
+			return nil, nil
+		}
+		return a.Properties.Labels, nil
+	case AllocationAnnotationProp:
+		if a.Properties == nil {
+			return nil, nil
+		}
+		return a.Properties.Annotations, nil
+	default:
+		return nil, fmt.Errorf("Allocation: StringMapProperty: invalid property name: %s", property)
+	}
+}
+
 // Clone returns a new AllocationSet with a deep copy of the given
 // AllocationSet's allocations.
 func (as *AllocationSet) Clone() *AllocationSet {
@@ -1812,7 +1893,7 @@ func (as *AllocationSet) IsEmpty() bool {
 		return true
 	}
 
-	return as.Allocations == nil || len(as.Allocations) == 0
+	return false
 }
 
 // Length returns the number of Allocations in the set

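The accessor methods added above follow a simple dispatch pattern: a string switch maps a field name to its value, unknown names return an error, and any type implementing these methods becomes usable with the generic filters. A minimal, self-contained sketch of the pattern — the `widget` type is illustrative, not part of the commit:

```go
package main

import (
	"errors"
	"fmt"
)

// widget illustrates the property-accessor pattern used by Allocation:
// a string switch dispatches a field name to its value.
type widget struct {
	namespace string
	labels    map[string]string
}

func (w *widget) StringProperty(name string) (string, error) {
	switch name {
	case "namespace":
		return w.namespace, nil
	default:
		return "", errors.New("invalid property name: " + name)
	}
}

func (w *widget) StringMapProperty(name string) (map[string]string, error) {
	switch name {
	case "label":
		return w.labels, nil
	default:
		return nil, errors.New("invalid property name: " + name)
	}
}

func main() {
	w := &widget{namespace: "kubecost", labels: map[string]string{"app": "foo"}}
	ns, _ := w.StringProperty("namespace")
	fmt.Println(ns) // kubecost
	_, err := w.StringProperty("bogus")
	fmt.Println(err != nil) // true: unknown names error out
}
```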
+ 54 - 19
pkg/kubecost/asset.go

@@ -254,9 +254,13 @@ func AssetToExternalAllocation(asset Asset, aggregateBy []string, labelConfig *L
 // values will key by only those values.
 // Valid values of `aggregateBy` elements are strings which are an `AssetProperty`, and strings prefixed
 // with `"label:"`.
-func key(a Asset, aggregateBy []string) (string, error) {
+func key(a Asset, aggregateBy []string, labelConfig *LabelConfig) (string, error) {
 	var buffer strings.Builder
 
+	if labelConfig == nil {
+		labelConfig = NewLabelConfig()
+	}
+
 	if aggregateBy == nil {
 		aggregateBy = []string{
 			string(AssetProviderProp),
@@ -292,6 +296,16 @@ func key(a Asset, aggregateBy []string) (string, error) {
 			key = a.GetProperties().ProviderID
 		case s == string(AssetNameProp):
 			key = a.GetProperties().Name
+		case s == string(AssetDepartmentProp):
+			key = getKeyFromLabelConfig(a, labelConfig, labelConfig.DepartmentExternalLabel)
+		case s == string(AssetEnvironmentProp):
+			key = getKeyFromLabelConfig(a, labelConfig, labelConfig.EnvironmentExternalLabel)
+		case s == string(AssetOwnerProp):
+			key = getKeyFromLabelConfig(a, labelConfig, labelConfig.OwnerExternalLabel)
+		case s == string(AssetProductProp):
+			key = getKeyFromLabelConfig(a, labelConfig, labelConfig.ProductExternalLabel)
+		case s == string(AssetTeamProp):
+			key = getKeyFromLabelConfig(a, labelConfig, labelConfig.TeamExternalLabel)
 		case strings.HasPrefix(s, "label:"):
 			if labelKey := strings.TrimPrefix(s, "label:"); labelKey != "" {
 				labelVal := a.GetLabels()[labelKey]
@@ -320,8 +334,26 @@ func key(a Asset, aggregateBy []string) (string, error) {
 	return buffer.String(), nil
 }
 
+func getKeyFromLabelConfig(a Asset, labelConfig *LabelConfig, label string) string {
+	labels := a.GetLabels()
+	if labels == nil {
+		return UnallocatedSuffix
+	}
+	for _, labelName := range strings.Split(label, ",") {
+		name := labelConfig.Sanitize(labelName)
+		if labelValue, ok := labels[name]; ok {
+			return labelValue
+		}
+	}
+	return UnallocatedSuffix
+}
+
 func GetAssetKey(a Asset, aggregateBy []string) (string, error) {
-	return key(a, aggregateBy)
+	return key(a, aggregateBy, nil)
 }
 
 func toString(a Asset) string {
@@ -2675,7 +2707,7 @@ func NewAssetSet(start, end time.Time, assets ...Asset) *AssetSet {
 	}
 
 	for _, a := range assets {
-		as.Insert(a)
+		as.Insert(a, nil)
 	}
 
 	return as
@@ -2718,7 +2750,7 @@ func (as *AssetSet) AggregateBy(aggregateBy []string, opts *AssetAggregationOpti
 			}
 		}
 		if insert {
-			err := aggSet.Insert(sa)
+			err := aggSet.Insert(sa, opts.LabelConfig)
 			if err != nil {
 				return err
 			}
@@ -2737,7 +2769,7 @@ func (as *AssetSet) AggregateBy(aggregateBy []string, opts *AssetAggregationOpti
 	// Insert each asset into the new set, which will be keyed by the `aggregateBy`
 	// on aggSet, resulting in aggregation.
 	for _, asset := range as.Assets {
-		err := aggSet.Insert(asset)
+		err := aggSet.Insert(asset, opts.LabelConfig)
 		if err != nil {
 			return err
 		}
@@ -2849,13 +2881,13 @@ func (as *AssetSet) End() time.Time {
 // FindMatch attempts to find a match in the AssetSet for the given Asset on
 // the provided Properties and labels. If a match is not found, FindMatch
 // returns nil and a Not Found error.
-func (as *AssetSet) FindMatch(query Asset, aggregateBy []string) (Asset, error) {
-	matchKey, err := key(query, aggregateBy)
+func (as *AssetSet) FindMatch(query Asset, aggregateBy []string, labelConfig *LabelConfig) (Asset, error) {
+	matchKey, err := key(query, aggregateBy, labelConfig)
 	if err != nil {
 		return nil, err
 	}
 	for _, asset := range as.Assets {
-		if k, err := key(asset, aggregateBy); err != nil {
+		if k, err := key(asset, aggregateBy, labelConfig); err != nil {
 			return nil, err
 		} else if k == matchKey {
 			return asset, nil
@@ -2873,7 +2905,7 @@ func (as *AssetSet) FindMatch(query Asset, aggregateBy []string) (Asset, error)
 func (as *AssetSet) ReconciliationMatch(query Asset) (Asset, bool, error) {
 	// Full match means matching on (Category, ProviderID)
 	fullMatchProps := []string{string(AssetCategoryProp), string(AssetProviderIDProp)}
-	fullMatchKey, err := key(query, fullMatchProps)
+	fullMatchKey, err := key(query, fullMatchProps, nil)
 
 	// This should never happen because we are using enumerated Properties,
 	// but the check is here in case that changes
@@ -2883,7 +2915,7 @@ func (as *AssetSet) ReconciliationMatch(query Asset) (Asset, bool, error) {
 
 
 	// Partial match means matching only on (ProviderID)
 	// Partial match means matching only on (ProviderID)
 	providerIDMatchProps := []string{string(AssetProviderIDProp)}
 	providerIDMatchProps := []string{string(AssetProviderIDProp)}
-	providerIDMatchKey, err := key(query, providerIDMatchProps)
+	providerIDMatchKey, err := key(query, providerIDMatchProps, nil)
 
 
 	// This should never happen because we are using enumerated Properties,
 	// This should never happen because we are using enumerated Properties,
 	// but the check is here in case that changes
 	// but the check is here in case that changes
@@ -2897,13 +2929,13 @@ func (as *AssetSet) ReconciliationMatch(query Asset) (Asset, bool, error) {
 		if asset.Type() == CloudAssetType {
 		if asset.Type() == CloudAssetType {
 			continue
 			continue
 		}
 		}
-		if k, err := key(asset, fullMatchProps); err != nil {
+		if k, err := key(asset, fullMatchProps, nil); err != nil {
 			return nil, false, err
 			return nil, false, err
 		} else if k == fullMatchKey {
 		} else if k == fullMatchKey {
 			log.DedupedInfof(10, "Asset ETL: Reconciliation[rcnw]: ReconcileRange Match: %s", fullMatchKey)
 			log.DedupedInfof(10, "Asset ETL: Reconciliation[rcnw]: ReconcileRange Match: %s", fullMatchKey)
 			return asset, true, nil
 			return asset, true, nil
 		}
 		}
-		if k, err := key(asset, providerIDMatchProps); err != nil {
+		if k, err := key(asset, providerIDMatchProps, nil); err != nil {
 			return nil, false, err
 			return nil, false, err
 		} else if k == providerIDMatchKey {
 		} else if k == providerIDMatchKey {
 			// Found a partial match. Save it until after all other options
 			// Found a partial match. Save it until after all other options
@@ -2975,7 +3007,7 @@ func (as *AssetSet) Get(key string) (Asset, bool) {
 // Insert inserts the given Asset into the AssetSet, using the AssetSet's
 // Insert inserts the given Asset into the AssetSet, using the AssetSet's
 // configured Properties to determine the key under which the Asset will
 // configured Properties to determine the key under which the Asset will
 // be inserted.
 // be inserted.
-func (as *AssetSet) Insert(asset Asset) error {
+func (as *AssetSet) Insert(asset Asset, labelConfig *LabelConfig) error {
 	if as == nil {
 	if as == nil {
 		return fmt.Errorf("cannot Insert into nil AssetSet")
 		return fmt.Errorf("cannot Insert into nil AssetSet")
 	}
 	}
@@ -2984,8 +3016,10 @@ func (as *AssetSet) Insert(asset Asset) error {
 		as.Assets = map[string]Asset{}
 		as.Assets = map[string]Asset{}
 	}
 	}
 
 
+	// need a label config
+
 	// Determine key into which to Insert the Asset.
 	// Determine key into which to Insert the Asset.
-	k, err := key(asset, as.AggregationKeys)
+	k, err := key(asset, as.AggregationKeys, labelConfig)
 	if err != nil {
 	if err != nil {
 		return err
 		return err
 	}
 	}
@@ -3038,14 +3072,14 @@ func (as *AssetSet) Resolution() time.Duration {
 	return as.Window.Duration()
 	return as.Window.Duration()
 }
 }
 
 
-func (as *AssetSet) Set(asset Asset, aggregateBy []string) error {
+func (as *AssetSet) Set(asset Asset, aggregateBy []string, labelConfig *LabelConfig) error {
 	if as.IsEmpty() {
 	if as.IsEmpty() {
 		as.Assets = map[string]Asset{}
 		as.Assets = map[string]Asset{}
 	}
 	}
 
 
 	// Expand the window to match the AssetSet, then set it
 	// Expand the window to match the AssetSet, then set it
 	asset.ExpandWindow(as.Window)
 	asset.ExpandWindow(as.Window)
-	k, err := key(asset, aggregateBy)
+	k, err := key(asset, aggregateBy, labelConfig)
 	if err != nil {
 	if err != nil {
 		return err
 		return err
 	}
 	}
@@ -3113,14 +3147,14 @@ func (as *AssetSet) accumulate(that *AssetSet) (*AssetSet, error) {
 	acc.AggregationKeys = as.AggregationKeys
 	acc.AggregationKeys = as.AggregationKeys
 
 
 	for _, asset := range as.Assets {
 	for _, asset := range as.Assets {
-		err := acc.Insert(asset)
+		err := acc.Insert(asset, nil)
 		if err != nil {
 		if err != nil {
 			return nil, err
 			return nil, err
 		}
 		}
 	}
 	}
 
 
 	for _, asset := range that.Assets {
 	for _, asset := range that.Assets {
-		err := acc.Insert(asset)
+		err := acc.Insert(asset, nil)
 		if err != nil {
 		if err != nil {
 			return nil, err
 			return nil, err
 		}
 		}
@@ -3240,6 +3274,7 @@ func (asr *AssetSetRange) NewAccumulation() (*AssetSet, error) {
 type AssetAggregationOptions struct {
 type AssetAggregationOptions struct {
 	SharedHourlyCosts map[string]float64
 	SharedHourlyCosts map[string]float64
 	FilterFuncs       []AssetMatchFunc
 	FilterFuncs       []AssetMatchFunc
+	LabelConfig       *LabelConfig
 }
 }
 
 
 func (asr *AssetSetRange) AggregateBy(aggregateBy []string, opts *AssetAggregationOptions) error {
 func (asr *AssetSetRange) AggregateBy(aggregateBy []string, opts *AssetAggregationOptions) error {
@@ -3326,7 +3361,7 @@ func (asr *AssetSetRange) InsertRange(that *AssetSetRange) error {
 
 
 		// Insert each Asset from the given set
 		// Insert each Asset from the given set
 		for _, asset := range thatAS.Assets {
 		for _, asset := range thatAS.Assets {
-			err = as.Insert(asset)
+			err = as.Insert(asset, nil)
 			if err != nil {
 			if err != nil {
 				err = fmt.Errorf("error inserting asset: %s", err)
 				err = fmt.Errorf("error inserting asset: %s", err)
 				continue
 				continue
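The recurring change in this file is threading a `*LabelConfig` into `key()`. A minimal sketch of the motivation, under the assumption that a LabelConfig maps aggregation concepts such as "team" to a configurable external label (the `teamLabel` field and `__undefined__` fallback below are illustrative, not the package's actual names):

```go
package main

import "fmt"

// labelConfig is a stand-in for kubecost's LabelConfig: it records which
// external label carries a given aggregation concept.
type labelConfig struct {
	teamLabel string
}

// keyFor resolves one aggregation property against an asset's labels,
// illustrating why key() needs the label config: properties like "team"
// are backed by a configurable label rather than a fixed struct field.
func keyFor(prop string, labels map[string]string, lc *labelConfig) string {
	if lc != nil && prop == "team" {
		if v, ok := labels[lc.teamLabel]; ok {
			return v
		}
	}
	return "__undefined__"
}

func main() {
	lc := &labelConfig{teamLabel: "owning_team"}
	fmt.Println(keyFor("team", map[string]string{"owning_team": "infra"}, lc))
}
```

Call sites that aggregate only on enumerated properties (e.g. reconciliation on Category/ProviderID) can safely pass `nil`, which is why so many of the hunks above do exactly that.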

+ 5 - 5
pkg/kubecost/asset_test.go

@@ -786,7 +786,7 @@ func TestAssetSet_FindMatch(t *testing.T) {
 	// Assert success of a simple match of Type and ProviderID
 	as = GenerateMockAssetSet(startYesterday)
 	query = NewNode("", "", "gcp-node3", s, e, w)
-	match, err = as.FindMatch(query, []string{string(AssetTypeProp), string(AssetProviderIDProp)})
+	match, err = as.FindMatch(query, []string{string(AssetTypeProp), string(AssetProviderIDProp)}, nil)
 	if err != nil {
 		t.Fatalf("AssetSet.FindMatch: unexpected error: %s", err)
 	}
@@ -794,7 +794,7 @@ func TestAssetSet_FindMatch(t *testing.T) {
 	// Assert error of a simple non-match of Type and ProviderID
 	as = GenerateMockAssetSet(startYesterday)
 	query = NewNode("", "", "aws-node3", s, e, w)
-	match, err = as.FindMatch(query, []string{string(AssetTypeProp), string(AssetProviderIDProp)})
+	match, err = as.FindMatch(query, []string{string(AssetTypeProp), string(AssetProviderIDProp)}, nil)
 	if err == nil {
 		t.Fatalf("AssetSet.FindMatch: expected error (no match); found %s", match)
 	}
@@ -802,7 +802,7 @@ func TestAssetSet_FindMatch(t *testing.T) {
 	// Assert error of matching ProviderID, but not Type
 	as = GenerateMockAssetSet(startYesterday)
 	query = NewCloud(ComputeCategory, "gcp-node3", s, e, w)
-	match, err = as.FindMatch(query, []string{string(AssetTypeProp), string(AssetProviderIDProp)})
+	match, err = as.FindMatch(query, []string{string(AssetTypeProp), string(AssetProviderIDProp)}, nil)
 	if err == nil {
 		t.Fatalf("AssetSet.FindMatch: expected error (no match); found %s", match)
 	}
@@ -833,8 +833,8 @@ func TestAssetSet_InsertMatchingWindow(t *testing.T) {
 	a2.Window = NewClosedWindow(a2WindowStart, a2WindowEnd)

 	as := NewAssetSet(setStart, setEnd)
-	as.Insert(a1)
-	as.Insert(a2)
+	as.Insert(a1, nil)
+	as.Insert(a2, nil)

 	if as.Length() != 2 {
 		t.Errorf("AS length got %d, expected %d", as.Length(), 2)

+ 28 - 0
pkg/kubecost/assetprops.go

@@ -41,6 +41,21 @@ const (

 	// AssetTypeProp describes the type of the Asset
 	AssetTypeProp AssetProperty = "type"
+
+	// AssetDepartmentProp describes the department of the Asset
+	AssetDepartmentProp AssetProperty = "department"
+
+	// AssetEnvironmentProp describes the environment of the Asset
+	AssetEnvironmentProp AssetProperty = "environment"
+
+	// AssetOwnerProp describes the owner of the Asset
+	AssetOwnerProp AssetProperty = "owner"
+
+	// AssetProductProp describes the product of the Asset
+	AssetProductProp AssetProperty = "product"
+
+	// AssetTeamProp describes the team of the Asset
+	AssetTeamProp AssetProperty = "team"
 )

 // ParseAssetProperty attempts to parse a string into an AssetProperty
@@ -64,6 +79,16 @@ func ParseAssetProperty(text string) (AssetProperty, error) {
 		return AssetServiceProp, nil
 	case "type":
 		return AssetTypeProp, nil
+	case "department":
+		return AssetDepartmentProp, nil
+	case "environment":
+		return AssetEnvironmentProp, nil
+	case "owner":
+		return AssetOwnerProp, nil
+	case "product":
+		return AssetProductProp, nil
+	case "team":
+		return AssetTeamProp, nil
 	}
 	return AssetNilProp, fmt.Errorf("invalid asset property: %s", text)
 }
@@ -105,6 +130,9 @@ const AlibabaProvider = "Alibaba"
 // CSVProvider describes the provider a CSV
 const CSVProvider = "CSV"

+// CustomProvider describes a custom provider
+const CustomProvider = "custom"
+
 // ScalewayProvider describes the provider Scaleway
 const ScalewayProvider = "Scaleway"


+ 19 - 1
pkg/kubecost/bingen.go

@@ -22,6 +22,8 @@ package kubecost

 // Default Version Set (uses -version flag passed) includes shared resources
 // @bingen:generate:Window
+// @bingen:generate:Coverage
+// @bingen:generate:CoverageSet

 // Asset Version Set: Includes Asset pipeline specific resources
 // @bingen:set[name=Assets,version=18]
@@ -71,4 +73,20 @@ package kubecost
 // @bingen:generate:AuditSetRange
 // @bingen:end

-//go:generate bingen -package=kubecost -version=15 -buffer=github.com/opencost/opencost/pkg/util
+// @bingen:set[name=CloudCostAggregate,version=1]
+// @bingen:generate:CloudCostAggregate
+// @bingen:generate[stringtable]:CloudCostAggregateSet
+// @bingen:generate:CloudCostAggregateSetRange
+// @bingen:generate:CloudCostAggregateProperties
+// @bingen:generate:CloudCostAggregateLabels
+// @bingen:end
+
+// @bingen:set[name=CloudCostItem,version=1]
+// @bingen:generate:CloudCostItem
+// @bingen:generate[stringtable]:CloudCostItemSet
+// @bingen:generate:CloudCostItemSetRange
+// @bingen:generate:CloudCostItemProperties
+// @bingen:generate:CloudCostItemLabels
+// @bingen:end
+
+//go:generate bingen -package=kubecost -version=17 -buffer=github.com/opencost/opencost/pkg/util

+ 422 - 0
pkg/kubecost/cloudcostaggregate.go

@@ -0,0 +1,422 @@
+package kubecost
+
+import (
+	"errors"
+	"fmt"
+	"strings"
+	"time"
+
+	"github.com/opencost/opencost/pkg/filter"
+	"github.com/opencost/opencost/pkg/log"
+)
+
+const (
+	CloudCostAccountProp  string = "account"
+	CloudCostProjectProp  string = "project"
+	CloudCostProviderProp string = "provider"
+	CloudCostServiceProp  string = "service"
+	CloudCostLabelProp    string = "label"
+)
+
+// CloudCostAggregateProperties unique property set for CloudCostAggregate within a window
+type CloudCostAggregateProperties struct {
+	Provider   string `json:"provider"`
+	Account    string `json:"account"`
+	Project    string `json:"project"`
+	Service    string `json:"service"`
+	LabelValue string `json:"label"`
+}
+
+func (ccap CloudCostAggregateProperties) Equal(that CloudCostAggregateProperties) bool {
+	return ccap.Provider == that.Provider &&
+		ccap.Account == that.Account &&
+		ccap.Project == that.Project &&
+		ccap.Service == that.Service &&
+		ccap.LabelValue == that.LabelValue
+}
+
+func (ccap CloudCostAggregateProperties) Key(props []string) string {
+	if len(props) == 0 {
+		return fmt.Sprintf("%s/%s/%s/%s/%s", ccap.Provider, ccap.Account, ccap.Project, ccap.Service, ccap.LabelValue)
+	}
+
+	keys := make([]string, len(props))
+	for i, prop := range props {
+		key := UnallocatedSuffix
+
+		switch prop {
+		case CloudCostProviderProp:
+			if ccap.Provider != "" {
+				key = ccap.Provider
+			}
+		case CloudCostAccountProp:
+			if ccap.Account != "" {
+				key = ccap.Account
+			}
+		case CloudCostProjectProp:
+			if ccap.Project != "" {
+				key = ccap.Project
+			}
+		case CloudCostServiceProp:
+			if ccap.Service != "" {
+				key = ccap.Service
+			}
+		case CloudCostLabelProp:
+			if ccap.LabelValue != "" {
+				key = ccap.LabelValue
+			}
+		}
+
+		keys[i] = key
+	}
+
+	return strings.Join(keys, "/")
+}
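For illustration, the Key logic above can be sketched standalone: the requested properties are joined with "/", and any empty value falls back to an unallocated placeholder. The `__unallocated__` literal below is an assumption for the sketch; the real `UnallocatedSuffix` constant is defined elsewhere in the package.

```go
package main

import (
	"fmt"
	"strings"
)

// unallocated stands in for kubecost's UnallocatedSuffix constant.
const unallocated = "__unallocated__"

type props struct{ provider, account, project, service, label string }

// keyOf mirrors CloudCostAggregateProperties.Key: join the requested
// properties with "/", substituting a placeholder for empty values.
func keyOf(p props, requested []string) string {
	byName := map[string]string{
		"provider": p.provider, "account": p.account,
		"project": p.project, "service": p.service, "label": p.label,
	}
	keys := make([]string, len(requested))
	for i, name := range requested {
		if v := byName[name]; v != "" {
			keys[i] = v
		} else {
			keys[i] = unallocated
		}
	}
	return strings.Join(keys, "/")
}

func main() {
	p := props{provider: "AWS", service: "AmazonEC2"}
	fmt.Println(keyOf(p, []string{"provider", "account", "service"}))
}
```

Because keys are position-sensitive joins of the requested properties, two aggregates collide (and are summed) exactly when they agree on every requested property.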
+
+// CloudCostAggregate represents an aggregation of Billing Integration data on
+// the properties listed:
+//   - KubernetesPercent is the percent of the CloudCostAggregate's cost which
+//     came from an item that could be identified as coming from a Kubernetes
+//     resource.
+//   - Cost is the sum of the cost of each item in the CloudCostAggregate
+//   - Credit is the sum of credits applied to each item in the CloudCostAggregate
+type CloudCostAggregate struct {
+	Properties        CloudCostAggregateProperties `json:"properties"`
+	KubernetesPercent float64                      `json:"kubernetesPercent"`
+	Cost              float64                      `json:"cost"`
+	Credit            float64                      `json:"credit"`
+}
+
+func (cca *CloudCostAggregate) Clone() *CloudCostAggregate {
+	return &CloudCostAggregate{
+		Properties:        cca.Properties,
+		KubernetesPercent: cca.KubernetesPercent,
+		Cost:              cca.Cost,
+		Credit:            cca.Credit,
+	}
+}
+
+func (cca *CloudCostAggregate) Equal(that *CloudCostAggregate) bool {
+	if that == nil {
+		return false
+	}
+
+	return cca.Cost == that.Cost &&
+		cca.Credit == that.Credit &&
+		cca.Properties.Equal(that.Properties)
+}
+
+func (cca *CloudCostAggregate) Key(props []string) string {
+	return cca.Properties.Key(props)
+}
+
+func (cca *CloudCostAggregate) StringProperty(prop string) (string, error) {
+	if cca == nil {
+		return "", nil
+	}
+
+	switch prop {
+	case CloudCostAccountProp:
+		return cca.Properties.Account, nil
+	case CloudCostProjectProp:
+		return cca.Properties.Project, nil
+	case CloudCostProviderProp:
+		return cca.Properties.Provider, nil
+	case CloudCostServiceProp:
+		return cca.Properties.Service, nil
+	case CloudCostLabelProp:
+		return cca.Properties.LabelValue, nil
+	default:
+		return "", fmt.Errorf("invalid property name: %s", prop)
+	}
+}
+
+func (cca *CloudCostAggregate) add(that *CloudCostAggregate) {
+	if cca == nil {
+		log.Warnf("cannot add to nil CloudCostAggregate")
+		return
+	}
+
+	// Compute KubernetesPercent for sum
+	k8sPct := 0.0
+	sumCost := cca.Cost + that.Cost
+	if sumCost > 0.0 {
+		thisK8sCost := cca.Cost * cca.KubernetesPercent
+		thatK8sCost := that.Cost * that.KubernetesPercent
+		k8sPct = (thisK8sCost + thatK8sCost) / sumCost
+	}
+
+	cca.Cost = sumCost
+	cca.Credit += that.Credit
+	cca.KubernetesPercent = k8sPct
+}
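The KubernetesPercent arithmetic in `add` is a cost-weighted average: each side's percent is weighted by its cost, and the zero-cost case is guarded to avoid dividing by zero. A minimal sketch of just that computation:

```go
package main

import "fmt"

// mergeK8sPercent returns the cost-weighted average of two Kubernetes
// percentages, mirroring the weighting used by CloudCostAggregate.add.
func mergeK8sPercent(costA, pctA, costB, pctB float64) float64 {
	sum := costA + costB
	if sum <= 0.0 {
		// No cost on either side: the weighted percent is undefined, so 0.
		return 0.0
	}
	return (costA*pctA + costB*pctB) / sum
}

func main() {
	// $80 of cost at 50% k8s plus $20 at 100% k8s -> 60% overall.
	fmt.Println(mergeK8sPercent(80, 0.5, 20, 1.0))
}
```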
+
+type CloudCostAggregateSet struct {
+	CloudCostAggregates   map[string]*CloudCostAggregate `json:"items"`
+	AggregationProperties []string                       `json:"-"`
+	Integration           string                         `json:"-"`
+	LabelName             string                         `json:"labelName,omitempty"`
+	Window                Window                         `json:"window"`
+}
+
+func NewCloudCostAggregateSet(start, end time.Time, cloudCostAggregates ...*CloudCostAggregate) *CloudCostAggregateSet {
+	ccas := &CloudCostAggregateSet{
+		CloudCostAggregates: map[string]*CloudCostAggregate{},
+		Window:              NewWindow(&start, &end),
+	}
+
+	for _, cca := range cloudCostAggregates {
+		ccas.insertByProperty(cca, nil)
+	}
+
+	return ccas
+}
+
+func (ccas *CloudCostAggregateSet) Aggregate(props []string) (*CloudCostAggregateSet, error) {
+	if ccas == nil {
+		return nil, errors.New("cannot aggregate a nil CloudCostAggregateSet")
+	}
+
+	if ccas.Window.IsOpen() {
+		return nil, fmt.Errorf("cannot aggregate a CloudCostAggregateSet with an open window: %s", ccas.Window)
+	}
+
+	// Create a new result set, with the given aggregation property
+	result := NewCloudCostAggregateSet(*ccas.Window.Start(), *ccas.Window.End())
+	result.AggregationProperties = props
+	result.LabelName = ccas.LabelName
+	result.Integration = ccas.Integration
+
+	// Insert clones of each item in the set, keyed by the given property.
+	// The underlying insert logic will add binned items together.
+	for name, cca := range ccas.CloudCostAggregates {
+		ccaClone := cca.Clone()
+		err := result.insertByProperty(ccaClone, props)
+		if err != nil {
+			return nil, fmt.Errorf("error aggregating %s by %v: %s", name, props, err)
+		}
+	}
+
+	return result, nil
+}
+
+func (ccas *CloudCostAggregateSet) Filter(filters filter.Filter[*CloudCostAggregate]) *CloudCostAggregateSet {
+	if ccas == nil {
+		return nil
+	}
+
+	result := ccas.Clone()
+	result.filter(filters)
+
+	return result
+}
+
+func (ccas *CloudCostAggregateSet) filter(filters filter.Filter[*CloudCostAggregate]) {
+	if ccas == nil {
+		return
+	}
+
+	if filters == nil {
+		return
+	}
+
+	for name, cca := range ccas.CloudCostAggregates {
+		if !filters.Matches(cca) {
+			delete(ccas.CloudCostAggregates, name)
+		}
+	}
+}
+
+func (ccas *CloudCostAggregateSet) Insert(that *CloudCostAggregate) error {
+	// Publicly, only allow Inserting as a basic operation (i.e. without causing
+	// an aggregation on a property).
+	return ccas.insertByProperty(that, nil)
+}
+
+func (ccas *CloudCostAggregateSet) insertByProperty(that *CloudCostAggregate, props []string) error {
+	if ccas == nil {
+		return fmt.Errorf("cannot insert into nil CloudCostAggregateSet")
+	}
+
+	if ccas.CloudCostAggregates == nil {
+		ccas.CloudCostAggregates = map[string]*CloudCostAggregate{}
+	}
+
+	// Add the given CloudCostAggregate to the existing entry, if there is one;
+	// otherwise just set it directly into the map
+	if _, ok := ccas.CloudCostAggregates[that.Key(props)]; !ok {
+		ccas.CloudCostAggregates[that.Key(props)] = that
+	} else {
+		ccas.CloudCostAggregates[that.Key(props)].add(that)
+	}
+
+	return nil
+}
+
+func (ccas *CloudCostAggregateSet) Clone() *CloudCostAggregateSet {
+	aggs := make(map[string]*CloudCostAggregate, len(ccas.CloudCostAggregates))
+	for k, v := range ccas.CloudCostAggregates {
+		aggs[k] = v.Clone()
+	}
+
+	return &CloudCostAggregateSet{
+		CloudCostAggregates: aggs,
+		Integration:         ccas.Integration,
+		LabelName:           ccas.LabelName,
+		Window:              ccas.Window.Clone(),
+	}
+}
+
+func (ccas *CloudCostAggregateSet) Equal(that *CloudCostAggregateSet) bool {
+	if ccas.Integration != that.Integration {
+		return false
+	}
+
+	if ccas.LabelName != that.LabelName {
+		return false
+	}
+
+	if !ccas.Window.Equal(that.Window) {
+		return false
+	}
+
+	if len(ccas.CloudCostAggregates) != len(that.CloudCostAggregates) {
+		return false
+	}
+
+	for k, cca := range ccas.CloudCostAggregates {
+		tcca, ok := that.CloudCostAggregates[k]
+		if !ok {
+			return false
+		}
+		if !cca.Equal(tcca) {
+			return false
+		}
+	}
+
+	return true
+}
+
+func (ccas *CloudCostAggregateSet) IsEmpty() bool {
+	if ccas == nil {
+		return true
+	}
+
+	if len(ccas.CloudCostAggregates) == 0 {
+		return true
+	}
+
+	return false
+}
+
+func (ccas *CloudCostAggregateSet) Length() int {
+	if ccas == nil {
+		return 0
+	}
+	return len(ccas.CloudCostAggregates)
+}
+
+func (ccas *CloudCostAggregateSet) GetWindow() Window {
+	return ccas.Window
+}
+
+func (ccas *CloudCostAggregateSet) Merge(that *CloudCostAggregateSet) (*CloudCostAggregateSet, error) {
+	if ccas == nil || that == nil {
+		return nil, fmt.Errorf("cannot merge nil CloudCostAggregateSets")
+	}
+
+	if that.IsEmpty() {
+		return ccas.Clone(), nil
+	}
+
+	if !ccas.Window.Equal(that.Window) {
+		return nil, fmt.Errorf("cannot merge CloudCostAggregateSets with different windows")
+	}
+
+	if ccas.LabelName != that.LabelName {
+		return nil, fmt.Errorf("cannot merge CloudCostAggregateSets with different label names: '%s' != '%s'", ccas.LabelName, that.LabelName)
+	}
+
+	start, end := *ccas.Window.Start(), *ccas.Window.End()
+	result := NewCloudCostAggregateSet(start, end)
+	result.LabelName = ccas.LabelName
+
+	for _, cca := range ccas.CloudCostAggregates {
+		result.insertByProperty(cca, nil)
+	}
+
+	for _, cca := range that.CloudCostAggregates {
+		result.insertByProperty(cca, nil)
+	}
+
+	return result, nil
+}
+
+func GetCloudCostAggregateSets(start, end time.Time, windowDuration time.Duration, integration string, labelName string) ([]*CloudCostAggregateSet, error) {
+	windows, err := GetWindows(start, end, windowDuration)
+	if err != nil {
+		return nil, err
+	}
+
+	// Build slice of CloudCostAggregateSet to cover the range
+	CloudCostAggregateSets := []*CloudCostAggregateSet{}
+	for _, w := range windows {
+		ccas := NewCloudCostAggregateSet(*w.Start(), *w.End())
+		ccas.Integration = integration
+		ccas.LabelName = labelName
+		CloudCostAggregateSets = append(CloudCostAggregateSets, ccas)
+	}
+	return CloudCostAggregateSets, nil
+}
+
+// LoadCloudCostAggregateSets creates and loads CloudCostAggregates into the provided CloudCostAggregateSets.
+// This allows the input windows to differ from the one-day frame of the Athena queries. CloudCostAggregates
+// generated from CUR line items which may be identical except for the pricing model used (default, RI or
+// savings plan) are accumulated here so that the resulting CloudCostAggregate with the 1d window has the
+// correct price for the entire day.
+func LoadCloudCostAggregateSets(itemStart time.Time, itemEnd time.Time, properties CloudCostAggregateProperties, K8sPercent, cost, credit float64, CloudCostAggregateSets []*CloudCostAggregateSet) {
+	// Disperse the cost of the current item across one or more CloudCostAggregates,
+	// one in each relevant CloudCostAggregateSet, prorated by the percent of the
+	// item's window that falls within each set's window.
+	for _, ccas := range CloudCostAggregateSets {
+		pct := ccas.GetWindow().GetPercentInWindow(itemStart, itemEnd)
+
+		// Insert a CloudCostAggregate with the prorated cost into the current CloudCostAggregateSet
+		cca := &CloudCostAggregate{
+			Properties:        properties,
+			KubernetesPercent: K8sPercent * pct,
+			Cost:              cost * pct,
+			Credit:            credit * pct,
+		}
+		err := ccas.insertByProperty(cca, nil)
+		if err != nil {
+			log.Errorf("LoadCloudCostAggregateSets: failed to load CloudCostAggregate with key %s and window %s", cca.Key(nil), ccas.GetWindow().String())
+		}
+	}
+}
+
+type CloudCostAggregateSetRange struct {
+	CloudCostAggregateSets []*CloudCostAggregateSet `json:"sets"`
+	Window                 Window                   `json:"window"`
+}
+
+func (ccasr *CloudCostAggregateSetRange) Accumulate() (*CloudCostAggregateSet, error) {
+	if ccasr == nil {
+		return nil, errors.New("cannot accumulate a nil CloudCostAggregateSetRange")
+	}
+
+	if ccasr.Window.IsOpen() {
+		return nil, fmt.Errorf("cannot accumulate a CloudCostAggregateSetRange with an open window: %s", ccasr.Window)
+	}
+
+	result := NewCloudCostAggregateSet(*ccasr.Window.Start(), *ccasr.Window.End())
+
+	for _, ccas := range ccasr.CloudCostAggregateSets {
+		for name, cca := range ccas.CloudCostAggregates {
+			err := result.insertByProperty(cca.Clone(), ccas.AggregationProperties)
+			if err != nil {
+				return nil, fmt.Errorf("error accumulating CloudCostAggregateSetRange[%s][%s]: %s", ccas.Window.String(), name, err)
+			}
+		}
+	}
+
+	return result, nil
+}

+ 321 - 0
pkg/kubecost/cloudcostitem.go

@@ -0,0 +1,321 @@
+package kubecost
+
+import (
+	"fmt"
+	"time"
+
+	"github.com/opencost/opencost/pkg/filter"
+	"github.com/opencost/opencost/pkg/log"
+)
+
+type CloudCostItemLabels map[string]string
+
+func (ccil CloudCostItemLabels) Clone() CloudCostItemLabels {
+	result := make(map[string]string, len(ccil))
+	for k, v := range ccil {
+		result[k] = v
+	}
+	return result
+}
+
+func (ccil CloudCostItemLabels) Equal(that CloudCostItemLabels) bool {
+	if len(ccil) != len(that) {
+		return false
+	}
+
+	// Maps are of equal length, so if all keys are in both maps, we don't
+	// have to check the keys of the other map.
+	for k, v := range ccil {
+		if tv, ok := that[k]; !ok || v != tv {
+			return false
+		}
+	}
+
+	return true
+}
+
+type CloudCostItemProperties struct {
+	ProviderID string              `json:"providerID,omitempty"`
+	Provider   string              `json:"provider,omitempty"`
+	Account    string              `json:"account,omitempty"`
+	Project    string              `json:"project,omitempty"`
+	Service    string              `json:"service,omitempty"`
+	Category   string              `json:"category,omitempty"`
+	Labels     CloudCostItemLabels `json:"labels,omitempty"`
+}
+
+func (ccip CloudCostItemProperties) Equal(that CloudCostItemProperties) bool {
+	return ccip.ProviderID == that.ProviderID &&
+		ccip.Provider == that.Provider &&
+		ccip.Account == that.Account &&
+		ccip.Project == that.Project &&
+		ccip.Service == that.Service &&
+		ccip.Category == that.Category &&
+		ccip.Labels.Equal(that.Labels)
+}
+
+func (ccip CloudCostItemProperties) Clone() CloudCostItemProperties {
+	return CloudCostItemProperties{
+		ProviderID: ccip.ProviderID,
+		Provider:   ccip.Provider,
+		Account:    ccip.Account,
+		Project:    ccip.Project,
+		Service:    ccip.Service,
+		Category:   ccip.Category,
+		Labels:     ccip.Labels.Clone(),
+	}
+}
+
+func (ccip CloudCostItemProperties) Key() string {
+	return fmt.Sprintf("%s/%s/%s/%s/%s/%s", ccip.Provider, ccip.Account, ccip.Project, ccip.Category, ccip.Service, ccip.ProviderID)
+}
+
+// CloudCostItem represents a CUR line item, identifying a cloud resource and
+// its cost over some period of time.
+type CloudCostItem struct {
+	Properties   CloudCostItemProperties
+	IsKubernetes bool
+	Window       Window
+	Cost         float64
+	Credit       float64
+}
+
+func (cci *CloudCostItem) Clone() *CloudCostItem {
+	return &CloudCostItem{
+		Properties:   cci.Properties.Clone(),
+		IsKubernetes: cci.IsKubernetes,
+		Window:       cci.Window.Clone(),
+		Cost:         cci.Cost,
+		Credit:       cci.Credit,
+	}
+}
+
+func (cci *CloudCostItem) Equal(that *CloudCostItem) bool {
+	if that == nil {
+		return false
+	}
+
+	return cci.Properties.Equal(that.Properties) &&
+		cci.IsKubernetes == that.IsKubernetes &&
+		cci.Window.Equal(that.Window) &&
+		cci.Cost == that.Cost &&
+		cci.Credit == that.Credit
+}
+
+func (cci *CloudCostItem) Key() string {
+	return cci.Properties.Key()
+}
+
+func (cci *CloudCostItem) add(that *CloudCostItem) {
+	if cci == nil {
+		log.Warnf("cannot add to nil CloudCostItem")
+		return
+	}
+
+	cci.Cost += that.Cost
+	cci.Credit += that.Credit
+	cci.Window = cci.Window.Expand(that.Window)
+}
+
+type CloudCostItemSet struct {
+	CloudCostItems map[string]*CloudCostItem
+	Window         Window
+	Integration    string
+}
+
+// NewCloudCostItemSet instantiates a new CloudCostItemSet and, optionally,
+// inserts the given list of CloudCostItems
+func NewCloudCostItemSet(start, end time.Time, cloudCostItems ...*CloudCostItem) *CloudCostItemSet {
+	ccis := &CloudCostItemSet{
+		CloudCostItems: map[string]*CloudCostItem{},
+		Window:         NewWindow(&start, &end),
+	}
+
+	for _, cci := range cloudCostItems {
+		ccis.Insert(cci)
+	}
+
+	return ccis
+}
+
+func (ccis *CloudCostItemSet) Equal(that *CloudCostItemSet) bool {
+	if ccis.Integration != that.Integration {
+		return false
+	}
+
+	if !ccis.Window.Equal(that.Window) {
+		return false
+	}
+
+	if len(ccis.CloudCostItems) != len(that.CloudCostItems) {
+		return false
+	}
+
+	for k, cci := range ccis.CloudCostItems {
+		tcci, ok := that.CloudCostItems[k]
+		if !ok {
+			return false
+		}
+		if !cci.Equal(tcci) {
+			return false
+		}
+	}
+
+	return true
+}
+
+func (ccis *CloudCostItemSet) Filter(filters filter.Filter[*CloudCostItem]) *CloudCostItemSet {
+	if ccis == nil {
+		return nil
+	}
+
+	if filters == nil {
+		return ccis.Clone()
+	}
+
+	result := NewCloudCostItemSet(*ccis.Window.start, *ccis.Window.end)
+
+	for _, cci := range ccis.CloudCostItems {
+		if filters.Matches(cci) {
+			result.Insert(cci.Clone())
+		}
+	}
+
+	return result
+}
+
+func (ccis *CloudCostItemSet) Insert(that *CloudCostItem) error {
+	if ccis == nil {
+		return fmt.Errorf("cannot insert into nil CloudCostItemSet")
+	}
+
+	if that == nil {
+		return fmt.Errorf("cannot insert nil CloudCostItem into CloudCostItemSet")
+	}
+
+	if ccis.CloudCostItems == nil {
+		ccis.CloudCostItems = map[string]*CloudCostItem{}
+	}
+
+	// Add the given CloudCostItem to the existing entry, if there is one;
+	// otherwise just set it directly into the map
+	if _, ok := ccis.CloudCostItems[that.Key()]; !ok {
+		ccis.CloudCostItems[that.Key()] = that.Clone()
+	} else {
+		ccis.CloudCostItems[that.Key()].add(that)
+	}
+
+	return nil
+}
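Insert's accumulate-on-key-collision behavior can be shown with a stripped-down map of costs (the `item` type and field names here are illustrative, not the package's):

```go
package main

import "fmt"

type item struct {
	key  string
	cost float64
}

// insertItem mirrors the insert-or-add pattern used by CloudCostItemSet.Insert:
// items sharing a key are summed rather than overwritten.
func insertItem(set map[string]*item, it *item) {
	if existing, ok := set[it.key]; ok {
		existing.cost += it.cost
		return
	}
	set[it.key] = it
}

func main() {
	set := map[string]*item{}
	// Two CUR line items for the same resource (e.g. on-demand vs. RI rows)
	// collapse into one entry with their costs summed.
	insertItem(set, &item{"AWS/acct/proj/Compute/EC2/i-123", 1.5})
	insertItem(set, &item{"AWS/acct/proj/Compute/EC2/i-123", 2.5})
	fmt.Println(len(set), set["AWS/acct/proj/Compute/EC2/i-123"].cost)
}
```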
+
+func (ccis *CloudCostItemSet) Clone() *CloudCostItemSet {
+	items := make(map[string]*CloudCostItem, len(ccis.CloudCostItems))
+	for k, v := range ccis.CloudCostItems {
+		items[k] = v.Clone()
+	}
+
+	return &CloudCostItemSet{
+		CloudCostItems: items,
+		Integration:    ccis.Integration,
+		Window:         ccis.Window.Clone(),
+	}
+}
+
+func (ccis *CloudCostItemSet) IsEmpty() bool {
+	if ccis == nil {
+		return true
+	}
+
+	if len(ccis.CloudCostItems) == 0 {
+		return true
+	}
+
+	return false
+}
+
+func (ccis *CloudCostItemSet) Length() int {
+	if ccis == nil {
+		return 0
+	}
+	return len(ccis.CloudCostItems)
+}
+
+func (ccis *CloudCostItemSet) GetWindow() Window {
+	return ccis.Window
+}
+
+func (ccis *CloudCostItemSet) Merge(that *CloudCostItemSet) (*CloudCostItemSet, error) {
+	if ccis == nil {
+		return nil, fmt.Errorf("cannot merge nil CloudCostItemSets")
+	}
+
+	if that.IsEmpty() {
+		return ccis.Clone(), nil
+	}
+
+	if !ccis.Window.Equal(that.Window) {
+		return nil, fmt.Errorf("cannot merge CloudCostItemSets with different windows")
+	}
+
+	start, end := *ccis.Window.Start(), *ccis.Window.End()
+	result := NewCloudCostItemSet(start, end)
+
+	for _, cci := range ccis.CloudCostItems {
+		result.Insert(cci)
+	}
+
+	for _, cci := range that.CloudCostItems {
+		result.Insert(cci)
+	}
+
+	return result, nil
+}
+
+// GetCloudCostItemSets returns a slice of CloudCostItemSets with windows covering the given range
+func GetCloudCostItemSets(start time.Time, end time.Time, window time.Duration, integration string) ([]*CloudCostItemSet, error) {
+	windows, err := GetWindows(start, end, window)
+	if err != nil {
+		return nil, err
+	}
+
+	// Build slice of CloudCostItemSet to cover the range
+	CloudCostItemSets := []*CloudCostItemSet{}
+	for _, w := range windows {
+		ccis := NewCloudCostItemSet(*w.Start(), *w.End())
+		ccis.Integration = integration
+		CloudCostItemSets = append(CloudCostItemSets, ccis)
+	}
+	return CloudCostItemSets, nil
+}
+
+// LoadCloudCostItemSets creates and loads CloudCostItems into the provided CloudCostItemSets. This allows
+// the input windows to differ from the one-day frame of the Athena queries. CloudCostItems generated from
+// CUR line items which may be identical except for the pricing model used (default, RI or savings plan)
+// are accumulated here so that the resulting CloudCostItem with the 1d window has the correct price for
+// the entire day.
+func LoadCloudCostItemSets(itemStart time.Time, itemEnd time.Time, properties CloudCostItemProperties, isK8s bool, cost, credit float64, CloudCostItemSets []*CloudCostItemSet) {
+
+	// Disperse the cost of the current item across one or more CloudCostItems
+	// in each relevant CloudCostItemSet. Stop when the end of the current
+	// block reaches the item's end time or the end of the range.
+	for _, ccis := range CloudCostItemSets {
+		pct := ccis.GetWindow().GetPercentInWindow(itemStart, itemEnd)
+
+		// Insert a CloudCostItem with the prorated cost into this CloudCostItemSet
+		cci := &CloudCostItem{
+			Properties:   properties,
+			IsKubernetes: isK8s,
+			Window:       ccis.GetWindow(),
+			Cost:         cost * pct,
+			Credit:       credit * pct,
+		}
+		err := ccis.Insert(cci)
+		if err != nil {
+			log.Errorf("LoadCloudCostItemSets: failed to load CloudCostItem with key %s and window %s: %s", cci.Key(), ccis.GetWindow().String(), err.Error())
+		}
+	}
+}
+
+type CloudCostItemSetRange struct {
+	CloudCostItemSets []*CloudCostItemSet `json:"sets"`
+	Window            Window              `json:"window"`
+}

+ 118 - 0
pkg/kubecost/coverage.go

@@ -0,0 +1,118 @@
+package kubecost
+
+import (
+	"time"
+
+	"github.com/opencost/opencost/pkg/filter"
+)
+
+// Coverage is a placeholder struct which can be replaced by a more specific implementation later
+type Coverage struct {
+	Window   Window    `json:"window"`
+	Type     string    `json:"type"`
+	Count    int       `json:"count"`
+	Updated  time.Time `json:"updated"`
+	Errors   []string  `json:"errors"`
+	Warnings []string  `json:"warnings"`
+}
+
+func (c *Coverage) GetWindow() Window {
+	return c.Window
+}
+
+func (c *Coverage) Key() string {
+	return c.Type
+}
+
+func (c *Coverage) IsEmpty() bool {
+	return c.Type == "" && c.Count == 0 && len(c.Errors) == 0 && len(c.Warnings) == 0 && c.Updated == time.Time{}
+}
+
+func (c *Coverage) Clone() *Coverage {
+	var errors []string
+	if len(c.Errors) > 0 {
+		errors = make([]string, len(c.Errors))
+		copy(errors, c.Errors)
+	}
+	var warnings []string
+	if len(c.Warnings) > 0 {
+		warnings = make([]string, len(c.Warnings))
+		copy(warnings, c.Warnings)
+	}
+	return &Coverage{
+		Window:   c.Window.Clone(),
+		Type:     c.Type,
+		Count:    c.Count,
+		Updated:  c.Updated,
+		Errors:   errors,
+		Warnings: warnings,
+	}
+}
+
+// CoverageSet is a placeholder struct which can be replaced by a more specific implementation later
+type CoverageSet struct {
+	Window Window               `json:"window"`
+	Items  map[string]*Coverage `json:"items"`
+}
+
+func NewCoverageSet(start, end time.Time) *CoverageSet {
+	return &CoverageSet{
+		Window: NewWindow(&start, &end),
+		Items:  map[string]*Coverage{},
+	}
+}
+
+func (cs *CoverageSet) GetWindow() Window {
+	return cs.Window
+}
+
+func (cs *CoverageSet) IsEmpty() bool {
+	for _, item := range cs.Items {
+		if !item.IsEmpty() {
+			return false
+		}
+	}
+	return true
+}
+
+func (cs *CoverageSet) Clone() *CoverageSet {
+	var items map[string]*Coverage
+	if cs.Items != nil {
+		items = make(map[string]*Coverage, len(cs.Items))
+		for k, item := range cs.Items {
+			items[k] = item.Clone()
+		}
+
+	}
+	return &CoverageSet{
+		Window: cs.Window.Clone(),
+		Items:  items,
+	}
+}
+
+func (cs *CoverageSet) Insert(coverage *Coverage) {
+	if cs.Items == nil {
+		cs.Items = map[string]*Coverage{}
+	}
+	cs.Items[coverage.Key()] = coverage
+}
+
+func (cs *CoverageSet) Filter(filters filter.Filter[*Coverage]) *CoverageSet {
+	if cs == nil {
+		return nil
+	}
+
+	if filters == nil {
+		return cs.Clone()
+	}
+
+	result := NewCoverageSet(*cs.Window.start, *cs.Window.end)
+
+	for _, c := range cs.Items {
+		if filters.Matches(c) {
+			result.Insert(c.Clone())
+		}
+	}
+
+	return result
+}
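The generic `filter.Filter[T]` contract that `CoverageSet.Filter` consumes reduces to a `Matches` predicate over the element type. A minimal standalone sketch, with `typeIs` as a hypothetical predicate (the real filter package also composes and/or/not filters) and a local `Coverage` stand-in:

```go
package main

import "fmt"

// Filter mirrors the predicate contract used by CoverageSet.Filter:
// anything with a Matches method over the element type.
type Filter[T any] interface {
	Matches(T) bool
}

// Coverage is a local stand-in with just the fields needed here.
type Coverage struct {
	Type  string
	Count int
}

// typeIs is a hypothetical predicate filter keeping coverages of one type.
type typeIs string

func (t typeIs) Matches(c *Coverage) bool { return c.Type == string(t) }

func main() {
	items := map[string]*Coverage{
		"aws": {Type: "aws", Count: 3},
		"gcp": {Type: "gcp", Count: 1},
	}
	// Keep only items the filter matches, as CoverageSet.Filter does
	// (cloning matches into a fresh set).
	var f Filter[*Coverage] = typeIs("aws")
	kept := map[string]*Coverage{}
	for k, c := range items {
		if f.Matches(c) {
			kept[k] = c
		}
	}
	fmt.Println(len(kept)) // 1
}
```

A nil filter in the real method means "match everything", which is why `Filter` clones the whole set in that case.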

+ 6 - 6
pkg/kubecost/diff_test.go

@@ -16,20 +16,20 @@ func TestDiff(t *testing.T) {
 	node1.CPUCost = 10
 	node1b := node1.Clone().(*Node)
 	node1b.CPUCost = 20
-	node1Key, _ := key(node1, nil)
+	node1Key, _ := key(node1, nil, nil)
 	node2 := NewNode("node2", "cluster1", "123abc", start, end, window1)
 	node2.CPUCost = 100
 	node2b := node2.Clone().(*Node)
 	node2b.CPUCost = 105
-	node2Key, _ := key(node2, nil)
+	node2Key, _ := key(node2, nil, nil)
 	node3 := NewNode("node3", "cluster1", "123abc", start, end, window1)
-	node3Key, _ := key(node3, nil)
+	node3Key, _ := key(node3, nil, nil)
 	node4 := NewNode("node4", "cluster1", "123abc", start, end, window1)
-	node4Key, _ := key(node4, nil)
+	node4Key, _ := key(node4, nil, nil)
 	disk1 := NewDisk("disk1", "cluster1", "123abc", start, end, window1)
-	disk1Key, _ := key(disk1, nil)
+	disk1Key, _ := key(disk1, nil, nil)
 	disk2 := NewDisk("disk2", "cluster1", "123abc", start, end, window1)
-	disk2Key, _ := key(disk2, nil)
+	disk2Key, _ := key(disk2, nil, nil)
 
 	cases := map[string]struct {
 		inputAssetsBefore []Asset

+ 2108 - 125
pkg/kubecost/kubecost_codecs.go

@@ -13,12 +13,11 @@ package kubecost
 
 import (
 	"fmt"
 	"fmt"
+	util "github.com/opencost/opencost/pkg/util"
 	"reflect"
 	"reflect"
 	"strings"
 	"strings"
 	"sync"
 	"sync"
 	"time"
 	"time"
-
-	util "github.com/opencost/opencost/pkg/util"
 )
 
 const (
@@ -34,17 +33,23 @@ const (
 )
 
 const (
+	// DefaultCodecVersion is used for any resources listed in the Default version set
+	DefaultCodecVersion uint8 = 17
+
+	// AssetsCodecVersion is used for any resources listed in the Assets version set
+	AssetsCodecVersion uint8 = 18
+
 	// AllocationCodecVersion is used for any resources listed in the Allocation version set
 	AllocationCodecVersion uint8 = 15
 
 	// AuditCodecVersion is used for any resources listed in the Audit version set
 	AuditCodecVersion uint8 = 1
 
-	// DefaultCodecVersion is used for any resources listed in the Default version set
-	DefaultCodecVersion uint8 = 15
+	// CloudCostAggregateCodecVersion is used for any resources listed in the CloudCostAggregate version set
+	CloudCostAggregateCodecVersion uint8 = 1
 
-	// AssetsCodecVersion is used for any resources listed in the Assets version set
-	AssetsCodecVersion uint8 = 18
+	// CloudCostItemCodecVersion is used for any resources listed in the CloudCostItem version set
+	CloudCostItemCodecVersion uint8 = 1
 )
 
 //--------------------------------------------------------------------------
@@ -71,7 +76,17 @@ var typeMap map[string]reflect.Type = map[string]reflect.Type{
 	"AuditSetRange":                 reflect.TypeOf((*AuditSetRange)(nil)).Elem(),
 	"AuditSetRange":                 reflect.TypeOf((*AuditSetRange)(nil)).Elem(),
 	"Breakdown":                     reflect.TypeOf((*Breakdown)(nil)).Elem(),
 	"Breakdown":                     reflect.TypeOf((*Breakdown)(nil)).Elem(),
 	"Cloud":                         reflect.TypeOf((*Cloud)(nil)).Elem(),
 	"Cloud":                         reflect.TypeOf((*Cloud)(nil)).Elem(),
+	"CloudCostAggregate":            reflect.TypeOf((*CloudCostAggregate)(nil)).Elem(),
+	"CloudCostAggregateProperties":  reflect.TypeOf((*CloudCostAggregateProperties)(nil)).Elem(),
+	"CloudCostAggregateSet":         reflect.TypeOf((*CloudCostAggregateSet)(nil)).Elem(),
+	"CloudCostAggregateSetRange":    reflect.TypeOf((*CloudCostAggregateSetRange)(nil)).Elem(),
+	"CloudCostItem":                 reflect.TypeOf((*CloudCostItem)(nil)).Elem(),
+	"CloudCostItemProperties":       reflect.TypeOf((*CloudCostItemProperties)(nil)).Elem(),
+	"CloudCostItemSet":              reflect.TypeOf((*CloudCostItemSet)(nil)).Elem(),
+	"CloudCostItemSetRange":         reflect.TypeOf((*CloudCostItemSetRange)(nil)).Elem(),
 	"ClusterManagement":             reflect.TypeOf((*ClusterManagement)(nil)).Elem(),
 	"ClusterManagement":             reflect.TypeOf((*ClusterManagement)(nil)).Elem(),
+	"Coverage":                      reflect.TypeOf((*Coverage)(nil)).Elem(),
+	"CoverageSet":                   reflect.TypeOf((*CoverageSet)(nil)).Elem(),
 	"Disk":                          reflect.TypeOf((*Disk)(nil)).Elem(),
 	"Disk":                          reflect.TypeOf((*Disk)(nil)).Elem(),
 	"EqualityAudit":                 reflect.TypeOf((*EqualityAudit)(nil)).Elem(),
 	"EqualityAudit":                 reflect.TypeOf((*EqualityAudit)(nil)).Elem(),
 	"LoadBalancer":                  reflect.TypeOf((*LoadBalancer)(nil)).Elem(),
 	"LoadBalancer":                  reflect.TypeOf((*LoadBalancer)(nil)).Elem(),
@@ -4621,12 +4636,12 @@ func (target *Cloud) UnmarshalBinaryWithContext(ctx *DecodingContext) (err error
 }
 
 //--------------------------------------------------------------------------
-//  ClusterManagement
+//  CloudCostAggregate
 //--------------------------------------------------------------------------
 
-// MarshalBinary serializes the internal properties of this ClusterManagement instance
+// MarshalBinary serializes the internal properties of this CloudCostAggregate instance
 // into a byte array
-func (target *ClusterManagement) MarshalBinary() (data []byte, err error) {
+func (target *CloudCostAggregate) MarshalBinary() (data []byte, err error) {
 	ctx := &EncodingContext{
 		Buffer: util.NewBuffer(),
 		Table:  nil,
@@ -4641,9 +4656,9 @@ func (target *ClusterManagement) MarshalBinary() (data []byte, err error) {
 	return encBytes, nil
 }
 
-// MarshalBinaryWithContext serializes the internal properties of this ClusterManagement instance
+// MarshalBinaryWithContext serializes the internal properties of this CloudCostAggregate instance
 // into a byte array leveraging a predefined context.
-func (target *ClusterManagement) MarshalBinaryWithContext(ctx *EncodingContext) (err error) {
+func (target *CloudCostAggregate) MarshalBinaryWithContext(ctx *EncodingContext) (err error) {
 	// panics are recovered and propagated as errors
 	defer func() {
 		if r := recover(); r != nil {
@@ -4658,65 +4673,25 @@ func (target *ClusterManagement) MarshalBinaryWithContext(ctx *EncodingContext)
 	}()
 
 	buff := ctx.Buffer
-	buff.WriteUInt8(AssetsCodecVersion) // version
-
-	// --- [begin][write][alias](AssetLabels) ---
-	if map[string]string(target.Labels) == nil {
-		buff.WriteUInt8(uint8(0)) // write nil byte
-	} else {
-		buff.WriteUInt8(uint8(1)) // write non-nil byte
-
-		// --- [begin][write][map](map[string]string) ---
-		buff.WriteInt(len(map[string]string(target.Labels))) // map length
-		for v, z := range map[string]string(target.Labels) {
-			if ctx.IsStringTable() {
-				a := ctx.Table.AddOrGet(v)
-				buff.WriteInt(a) // write table index
-			} else {
-				buff.WriteString(v) // write string
-			}
-			if ctx.IsStringTable() {
-				b := ctx.Table.AddOrGet(z)
-				buff.WriteInt(b) // write table index
-			} else {
-				buff.WriteString(z) // write string
-			}
-		}
-		// --- [end][write][map](map[string]string) ---
-
-	}
-	// --- [end][write][alias](AssetLabels) ---
-
-	if target.Properties == nil {
-		buff.WriteUInt8(uint8(0)) // write nil byte
-	} else {
-		buff.WriteUInt8(uint8(1)) // write non-nil byte
-
-		// --- [begin][write][struct](AssetProperties) ---
-		buff.WriteInt(0) // [compatibility, unused]
-		errA := target.Properties.MarshalBinaryWithContext(ctx)
-		if errA != nil {
-			return errA
-		}
-		// --- [end][write][struct](AssetProperties) ---
+	buff.WriteUInt8(CloudCostAggregateCodecVersion) // version
 
-	}
-	// --- [begin][write][struct](Window) ---
+	// --- [begin][write][struct](CloudCostAggregateProperties) ---
 	buff.WriteInt(0) // [compatibility, unused]
-	errB := target.Window.MarshalBinaryWithContext(ctx)
-	if errB != nil {
-		return errB
+	errA := target.Properties.MarshalBinaryWithContext(ctx)
+	if errA != nil {
+		return errA
 	}
-	// --- [end][write][struct](Window) ---
+	// --- [end][write][struct](CloudCostAggregateProperties) ---
 
-	buff.WriteFloat64(target.Cost)       // write float64
-	buff.WriteFloat64(target.Adjustment) // write float64
+	buff.WriteFloat64(target.KubernetesPercent) // write float64
+	buff.WriteFloat64(target.Cost)              // write float64
+	buff.WriteFloat64(target.Credit)            // write float64
 	return nil
 }
 
 // UnmarshalBinary uses the data passed byte array to set all the internal properties of
-// the ClusterManagement type
-func (target *ClusterManagement) UnmarshalBinary(data []byte) error {
+// the CloudCostAggregate type
+func (target *CloudCostAggregate) UnmarshalBinary(data []byte) error {
 	var table []string
 	buff := util.NewBufferFromBytes(data)
 
@@ -4746,8 +4721,8 @@ func (target *ClusterManagement) UnmarshalBinary(data []byte) error {
 }
 
 // UnmarshalBinaryWithContext uses the context containing a string table and binary buffer to set all the internal properties of
-// the ClusterManagement type
-func (target *ClusterManagement) UnmarshalBinaryWithContext(ctx *DecodingContext) (err error) {
+// the CloudCostAggregate type
+func (target *CloudCostAggregate) UnmarshalBinaryWithContext(ctx *DecodingContext) (err error) {
 	// panics are recovered and propagated as errors
 	defer func() {
 		if r := recover(); r != nil {
@@ -4764,86 +4739,2094 @@ func (target *ClusterManagement) UnmarshalBinaryWithContext(ctx *DecodingContext
 	buff := ctx.Buffer
 	version := buff.ReadUInt8()
 
-	if version > AssetsCodecVersion {
-		return fmt.Errorf("Invalid Version Unmarshaling ClusterManagement. Expected %d or less, got %d", AssetsCodecVersion, version)
+	if version > CloudCostAggregateCodecVersion {
+		return fmt.Errorf("Invalid Version Unmarshaling CloudCostAggregate. Expected %d or less, got %d", CloudCostAggregateCodecVersion, version)
 	}
 
-	// --- [begin][read][alias](AssetLabels) ---
-	var a map[string]string
-	if buff.ReadUInt8() == uint8(0) {
-		a = nil
-	} else {
-		// --- [begin][read][map](map[string]string) ---
-		c := buff.ReadInt() // map len
-		b := make(map[string]string, c)
-		for i := 0; i < c; i++ {
-			var v string
-			var e string
-			if ctx.IsStringTable() {
-				f := buff.ReadInt() // read string index
-				e = ctx.Table[f]
-			} else {
-				e = buff.ReadString() // read string
-			}
-			d := e
-			v = d
+	// --- [begin][read][struct](CloudCostAggregateProperties) ---
+	a := &CloudCostAggregateProperties{}
+	buff.ReadInt() // [compatibility, unused]
+	errA := a.UnmarshalBinaryWithContext(ctx)
+	if errA != nil {
+		return errA
+	}
+	target.Properties = *a
+	// --- [end][read][struct](CloudCostAggregateProperties) ---
 
-			var z string
-			var h string
-			if ctx.IsStringTable() {
-				k := buff.ReadInt() // read string index
-				h = ctx.Table[k]
+	b := buff.ReadFloat64() // read float64
+	target.KubernetesPercent = b
+
+	c := buff.ReadFloat64() // read float64
+	target.Cost = c
+
+	d := buff.ReadFloat64() // read float64
+	target.Credit = d
+
+	return nil
+}
+
+//--------------------------------------------------------------------------
+//  CloudCostAggregateProperties
+//--------------------------------------------------------------------------
+
+// MarshalBinary serializes the internal properties of this CloudCostAggregateProperties instance
+// into a byte array
+func (target *CloudCostAggregateProperties) MarshalBinary() (data []byte, err error) {
+	ctx := &EncodingContext{
+		Buffer: util.NewBuffer(),
+		Table:  nil,
+	}
+
+	e := target.MarshalBinaryWithContext(ctx)
+	if e != nil {
+		return nil, e
+	}
+
+	encBytes := ctx.Buffer.Bytes()
+	return encBytes, nil
+}
+
+// MarshalBinaryWithContext serializes the internal properties of this CloudCostAggregateProperties instance
+// into a byte array leveraging a predefined context.
+func (target *CloudCostAggregateProperties) MarshalBinaryWithContext(ctx *EncodingContext) (err error) {
+	// panics are recovered and propagated as errors
+	defer func() {
+		if r := recover(); r != nil {
+			if e, ok := r.(error); ok {
+				err = e
+			} else if s, ok := r.(string); ok {
+				err = fmt.Errorf("Unexpected panic: %s", s)
 			} else {
-				h = buff.ReadString() // read string
+				err = fmt.Errorf("Unexpected panic: %+v", r)
 			}
-			g := h
-			z = g
-
-			b[v] = z
 		}
-		a = b
-		// --- [end][read][map](map[string]string) ---
+	}()
 
-	}
-	target.Labels = AssetLabels(a)
-	// --- [end][read][alias](AssetLabels) ---
+	buff := ctx.Buffer
+	buff.WriteUInt8(CloudCostAggregateCodecVersion) // version
 
-	if buff.ReadUInt8() == uint8(0) {
-		target.Properties = nil
+	if ctx.IsStringTable() {
+		a := ctx.Table.AddOrGet(target.Provider)
+		buff.WriteInt(a) // write table index
 	} else {
-		// --- [begin][read][struct](AssetProperties) ---
-		l := &AssetProperties{}
-		buff.ReadInt() // [compatibility, unused]
-		errA := l.UnmarshalBinaryWithContext(ctx)
-		if errA != nil {
-			return errA
+		buff.WriteString(target.Provider) // write string
+	}
+	if ctx.IsStringTable() {
+		b := ctx.Table.AddOrGet(target.Account)
+		buff.WriteInt(b) // write table index
+	} else {
+		buff.WriteString(target.Account) // write string
+	}
+	if ctx.IsStringTable() {
+		c := ctx.Table.AddOrGet(target.Project)
+		buff.WriteInt(c) // write table index
+	} else {
+		buff.WriteString(target.Project) // write string
+	}
+	if ctx.IsStringTable() {
+		d := ctx.Table.AddOrGet(target.Service)
+		buff.WriteInt(d) // write table index
+	} else {
+		buff.WriteString(target.Service) // write string
+	}
+	if ctx.IsStringTable() {
+		e := ctx.Table.AddOrGet(target.LabelValue)
+		buff.WriteInt(e) // write table index
+	} else {
+		buff.WriteString(target.LabelValue) // write string
+	}
+	return nil
+}
+
+// UnmarshalBinary uses the data passed byte array to set all the internal properties of
+// the CloudCostAggregateProperties type
+func (target *CloudCostAggregateProperties) UnmarshalBinary(data []byte) error {
+	var table []string
+	buff := util.NewBufferFromBytes(data)
+
+	// string table header validation
+	if isBinaryTag(data, BinaryTagStringTable) {
+		buff.ReadBytes(len(BinaryTagStringTable)) // strip tag length
+		tl := buff.ReadInt()                      // table length
+		if tl > 0 {
+			table = make([]string, tl, tl)
+			for i := 0; i < tl; i++ {
+				table[i] = buff.ReadString()
+			}
 		}
-		target.Properties = l
-		// --- [end][read][struct](AssetProperties) ---
+	}
 
+	ctx := &DecodingContext{
+		Buffer: buff,
+		Table:  table,
 	}
-	// --- [begin][read][struct](Window) ---
-	m := &Window{}
-	buff.ReadInt() // [compatibility, unused]
-	errB := m.UnmarshalBinaryWithContext(ctx)
-	if errB != nil {
-		return errB
+
+	err := target.UnmarshalBinaryWithContext(ctx)
+	if err != nil {
+		return err
 	}
-	target.Window = *m
-	// --- [end][read][struct](Window) ---
 
-	n := buff.ReadFloat64() // read float64
-	target.Cost = n
+	return nil
+}
 
-	// field version check
-	if uint8(16) <= version {
-		o := buff.ReadFloat64() // read float64
-		target.Adjustment = o
+// UnmarshalBinaryWithContext uses the context containing a string table and binary buffer to set all the internal properties of
+// the CloudCostAggregateProperties type
+func (target *CloudCostAggregateProperties) UnmarshalBinaryWithContext(ctx *DecodingContext) (err error) {
+	// panics are recovered and propagated as errors
+	defer func() {
+		if r := recover(); r != nil {
+			if e, ok := r.(error); ok {
+				err = e
+			} else if s, ok := r.(string); ok {
+				err = fmt.Errorf("Unexpected panic: %s", s)
+			} else {
+				err = fmt.Errorf("Unexpected panic: %+v", r)
+			}
+		}
+	}()
+
+	buff := ctx.Buffer
+	version := buff.ReadUInt8()
+
+	if version > CloudCostAggregateCodecVersion {
+		return fmt.Errorf("Invalid Version Unmarshaling CloudCostAggregateProperties. Expected %d or less, got %d", CloudCostAggregateCodecVersion, version)
+	}
 
+	var b string
+	if ctx.IsStringTable() {
+		c := buff.ReadInt() // read string index
+		b = ctx.Table[c]
 	} else {
-		target.Adjustment = float64(0) // default
+		b = buff.ReadString() // read string
+	}
+	a := b
+	target.Provider = a
+
+	var e string
+	if ctx.IsStringTable() {
+		f := buff.ReadInt() // read string index
+		e = ctx.Table[f]
+	} else {
+		e = buff.ReadString() // read string
 	}
+	d := e
+	target.Account = d
 
+	var h string
+	if ctx.IsStringTable() {
+		k := buff.ReadInt() // read string index
+		h = ctx.Table[k]
+	} else {
+		h = buff.ReadString() // read string
+	}
+	g := h
+	target.Project = g
+
+	var m string
+	if ctx.IsStringTable() {
+		n := buff.ReadInt() // read string index
+		m = ctx.Table[n]
+	} else {
+		m = buff.ReadString() // read string
+	}
+	l := m
+	target.Service = l
+
+	var p string
+	if ctx.IsStringTable() {
+		q := buff.ReadInt() // read string index
+		p = ctx.Table[q]
+	} else {
+		p = buff.ReadString() // read string
+	}
+	o := p
+	target.LabelValue = o
+
+	return nil
+}
+
+//--------------------------------------------------------------------------
+//  CloudCostAggregateSet
+//--------------------------------------------------------------------------
+
+// MarshalBinary serializes the internal properties of this CloudCostAggregateSet instance
+// into a byte array
+func (target *CloudCostAggregateSet) MarshalBinary() (data []byte, err error) {
+	ctx := &EncodingContext{
+		Buffer: util.NewBuffer(),
+		Table:  NewStringTable(),
+	}
+
+	e := target.MarshalBinaryWithContext(ctx)
+	if e != nil {
+		return nil, e
+	}
+
+	encBytes := ctx.Buffer.Bytes()
+	sTableBytes := ctx.Table.ToBytes()
+	merged := appendBytes(sTableBytes, encBytes)
+	return merged, nil
+}
+
+// MarshalBinaryWithContext serializes the internal properties of this CloudCostAggregateSet instance
+// into a byte array leveraging a predefined context.
+func (target *CloudCostAggregateSet) MarshalBinaryWithContext(ctx *EncodingContext) (err error) {
+	// panics are recovered and propagated as errors
+	defer func() {
+		if r := recover(); r != nil {
+			if e, ok := r.(error); ok {
+				err = e
+			} else if s, ok := r.(string); ok {
+				err = fmt.Errorf("Unexpected panic: %s", s)
+			} else {
+				err = fmt.Errorf("Unexpected panic: %+v", r)
+			}
+		}
+	}()
+
+	buff := ctx.Buffer
+	buff.WriteUInt8(CloudCostAggregateCodecVersion) // version
+
+	if target.CloudCostAggregates == nil {
+		buff.WriteUInt8(uint8(0)) // write nil byte
+	} else {
+		buff.WriteUInt8(uint8(1)) // write non-nil byte
+
+		// --- [begin][write][map](map[string]*CloudCostAggregate) ---
+		buff.WriteInt(len(target.CloudCostAggregates)) // map length
+		for v, z := range target.CloudCostAggregates {
+			if ctx.IsStringTable() {
+				a := ctx.Table.AddOrGet(v)
+				buff.WriteInt(a) // write table index
+			} else {
+				buff.WriteString(v) // write string
+			}
+			if z == nil {
+				buff.WriteUInt8(uint8(0)) // write nil byte
+			} else {
+				buff.WriteUInt8(uint8(1)) // write non-nil byte
+
+				// --- [begin][write][struct](CloudCostAggregate) ---
+				buff.WriteInt(0) // [compatibility, unused]
+				errA := z.MarshalBinaryWithContext(ctx)
+				if errA != nil {
+					return errA
+				}
+				// --- [end][write][struct](CloudCostAggregate) ---
+
+			}
+		}
+		// --- [end][write][map](map[string]*CloudCostAggregate) ---
+
+	}
+	if target.AggregationProperties == nil {
+		buff.WriteUInt8(uint8(0)) // write nil byte
+	} else {
+		buff.WriteUInt8(uint8(1)) // write non-nil byte
+
+		// --- [begin][write][slice]([]string) ---
+		buff.WriteInt(len(target.AggregationProperties)) // array length
+		for i := 0; i < len(target.AggregationProperties); i++ {
+			if ctx.IsStringTable() {
+				b := ctx.Table.AddOrGet(target.AggregationProperties[i])
+				buff.WriteInt(b) // write table index
+			} else {
+				buff.WriteString(target.AggregationProperties[i]) // write string
+			}
+		}
+		// --- [end][write][slice]([]string) ---
+
+	}
+	if ctx.IsStringTable() {
+		c := ctx.Table.AddOrGet(target.Integration)
+		buff.WriteInt(c) // write table index
+	} else {
+		buff.WriteString(target.Integration) // write string
+	}
+	if ctx.IsStringTable() {
+		d := ctx.Table.AddOrGet(target.LabelName)
+		buff.WriteInt(d) // write table index
+	} else {
+		buff.WriteString(target.LabelName) // write string
+	}
+	// --- [begin][write][struct](Window) ---
+	buff.WriteInt(0) // [compatibility, unused]
+	errB := target.Window.MarshalBinaryWithContext(ctx)
+	if errB != nil {
+		return errB
+	}
+	// --- [end][write][struct](Window) ---
+
+	return nil
+}
+
+// UnmarshalBinary uses the data passed byte array to set all the internal properties of
+// the CloudCostAggregateSet type
+func (target *CloudCostAggregateSet) UnmarshalBinary(data []byte) error {
+	var table []string
+	buff := util.NewBufferFromBytes(data)
+
+	// string table header validation
+	if isBinaryTag(data, BinaryTagStringTable) {
+		buff.ReadBytes(len(BinaryTagStringTable)) // strip tag length
+		tl := buff.ReadInt()                      // table length
+		if tl > 0 {
+			table = make([]string, tl, tl)
+			for i := 0; i < tl; i++ {
+				table[i] = buff.ReadString()
+			}
+		}
+	}
+
+	ctx := &DecodingContext{
+		Buffer: buff,
+		Table:  table,
+	}
+
+	err := target.UnmarshalBinaryWithContext(ctx)
+	if err != nil {
+		return err
+	}
+
+	return nil
+}
+
+// UnmarshalBinaryWithContext uses the context containing a string table and binary buffer to set all the internal properties of
+// the CloudCostAggregateSet type
+func (target *CloudCostAggregateSet) UnmarshalBinaryWithContext(ctx *DecodingContext) (err error) {
+	// panics are recovered and propagated as errors
+	defer func() {
+		if r := recover(); r != nil {
+			if e, ok := r.(error); ok {
+				err = e
+			} else if s, ok := r.(string); ok {
+				err = fmt.Errorf("Unexpected panic: %s", s)
+			} else {
+				err = fmt.Errorf("Unexpected panic: %+v", r)
+			}
+		}
+	}()
+
+	buff := ctx.Buffer
+	version := buff.ReadUInt8()
+
+	if version > CloudCostAggregateCodecVersion {
+		return fmt.Errorf("Invalid Version Unmarshaling CloudCostAggregateSet. Expected %d or less, got %d", CloudCostAggregateCodecVersion, version)
+	}
+
+	if buff.ReadUInt8() == uint8(0) {
+		target.CloudCostAggregates = nil
+	} else {
+		// --- [begin][read][map](map[string]*CloudCostAggregate) ---
+		b := buff.ReadInt() // map len
+		a := make(map[string]*CloudCostAggregate, b)
+		for i := 0; i < b; i++ {
+			var v string
+			var d string
+			if ctx.IsStringTable() {
+				e := buff.ReadInt() // read string index
+				d = ctx.Table[e]
+			} else {
+				d = buff.ReadString() // read string
+			}
+			c := d
+			v = c
+
+			var z *CloudCostAggregate
+			if buff.ReadUInt8() == uint8(0) {
+				z = nil
+			} else {
+				// --- [begin][read][struct](CloudCostAggregate) ---
+				f := &CloudCostAggregate{}
+				buff.ReadInt() // [compatibility, unused]
+				errA := f.UnmarshalBinaryWithContext(ctx)
+				if errA != nil {
+					return errA
+				}
+				z = f
+				// --- [end][read][struct](CloudCostAggregate) ---
+
+			}
+			a[v] = z
+		}
+		target.CloudCostAggregates = a
+		// --- [end][read][map](map[string]*CloudCostAggregate) ---
+
+	}
+	if buff.ReadUInt8() == uint8(0) {
+		target.AggregationProperties = nil
+	} else {
+		// --- [begin][read][slice]([]string) ---
+		h := buff.ReadInt() // array len
+		g := make([]string, h)
+		for j := 0; j < h; j++ {
+			var k string
+			var m string
+			if ctx.IsStringTable() {
+				n := buff.ReadInt() // read string index
+				m = ctx.Table[n]
+			} else {
+				m = buff.ReadString() // read string
+			}
+			l := m
+			k = l
+
+			g[j] = k
+		}
+		target.AggregationProperties = g
+		// --- [end][read][slice]([]string) ---
+
+	}
+	var p string
+	if ctx.IsStringTable() {
+		q := buff.ReadInt() // read string index
+		p = ctx.Table[q]
+	} else {
+		p = buff.ReadString() // read string
+	}
+	o := p
+	target.Integration = o
+
+	var s string
+	if ctx.IsStringTable() {
+		t := buff.ReadInt() // read string index
+		s = ctx.Table[t]
+	} else {
+		s = buff.ReadString() // read string
+	}
+	r := s
+	target.LabelName = r
+
+	// --- [begin][read][struct](Window) ---
+	u := &Window{}
+	buff.ReadInt() // [compatibility, unused]
+	errB := u.UnmarshalBinaryWithContext(ctx)
+	if errB != nil {
+		return errB
+	}
+	target.Window = *u
+	// --- [end][read][struct](Window) ---
+
+	return nil
+}
+
+//--------------------------------------------------------------------------
+//  CloudCostAggregateSetRange
+//--------------------------------------------------------------------------
+
+// MarshalBinary serializes the internal properties of this CloudCostAggregateSetRange instance
+// into a byte array
+func (target *CloudCostAggregateSetRange) MarshalBinary() (data []byte, err error) {
+	ctx := &EncodingContext{
+		Buffer: util.NewBuffer(),
+		Table:  nil,
+	}
+
+	e := target.MarshalBinaryWithContext(ctx)
+	if e != nil {
+		return nil, e
+	}
+
+	encBytes := ctx.Buffer.Bytes()
+	return encBytes, nil
+}
+
+// MarshalBinaryWithContext serializes the internal properties of this CloudCostAggregateSetRange instance
+// into a byte array leveraging a predefined context.
+func (target *CloudCostAggregateSetRange) MarshalBinaryWithContext(ctx *EncodingContext) (err error) {
+	// panics are recovered and propagated as errors
+	defer func() {
+		if r := recover(); r != nil {
+			if e, ok := r.(error); ok {
+				err = e
+			} else if s, ok := r.(string); ok {
+				err = fmt.Errorf("Unexpected panic: %s", s)
+			} else {
+				err = fmt.Errorf("Unexpected panic: %+v", r)
+			}
+		}
+	}()
+
+	buff := ctx.Buffer
+	buff.WriteUInt8(CloudCostAggregateCodecVersion) // version
+
+	if target.CloudCostAggregateSets == nil {
+		buff.WriteUInt8(uint8(0)) // write nil byte
+	} else {
+		buff.WriteUInt8(uint8(1)) // write non-nil byte
+
+		// --- [begin][write][slice]([]*CloudCostAggregateSet) ---
+		buff.WriteInt(len(target.CloudCostAggregateSets)) // array length
+		for i := 0; i < len(target.CloudCostAggregateSets); i++ {
+			if target.CloudCostAggregateSets[i] == nil {
+				buff.WriteUInt8(uint8(0)) // write nil byte
+			} else {
+				buff.WriteUInt8(uint8(1)) // write non-nil byte
+
+				// --- [begin][write][struct](CloudCostAggregateSet) ---
+				buff.WriteInt(0) // [compatibility, unused]
+				errA := target.CloudCostAggregateSets[i].MarshalBinaryWithContext(ctx)
+				if errA != nil {
+					return errA
+				}
+				// --- [end][write][struct](CloudCostAggregateSet) ---
+
+			}
+		}
+		// --- [end][write][slice]([]*CloudCostAggregateSet) ---
+
+	}
+	// --- [begin][write][struct](Window) ---
+	buff.WriteInt(0) // [compatibility, unused]
+	errB := target.Window.MarshalBinaryWithContext(ctx)
+	if errB != nil {
+		return errB
+	}
+	// --- [end][write][struct](Window) ---
+
+	return nil
+}
+
+// UnmarshalBinary uses the passed byte array to set all the internal properties of
+// the CloudCostAggregateSetRange type
+func (target *CloudCostAggregateSetRange) UnmarshalBinary(data []byte) error {
+	var table []string
+	buff := util.NewBufferFromBytes(data)
+
+	// string table header validation
+	if isBinaryTag(data, BinaryTagStringTable) {
+		buff.ReadBytes(len(BinaryTagStringTable)) // strip tag length
+		tl := buff.ReadInt()                      // table length
+		if tl > 0 {
+			table = make([]string, tl)
+			for i := 0; i < tl; i++ {
+				table[i] = buff.ReadString()
+			}
+		}
+	}
+
+	ctx := &DecodingContext{
+		Buffer: buff,
+		Table:  table,
+	}
+
+	err := target.UnmarshalBinaryWithContext(ctx)
+	if err != nil {
+		return err
+	}
+
+	return nil
+}
+
+// UnmarshalBinaryWithContext uses the context containing a string table and binary buffer to set all the internal properties of
+// the CloudCostAggregateSetRange type
+func (target *CloudCostAggregateSetRange) UnmarshalBinaryWithContext(ctx *DecodingContext) (err error) {
+	// panics are recovered and propagated as errors
+	defer func() {
+		if r := recover(); r != nil {
+			if e, ok := r.(error); ok {
+				err = e
+			} else if s, ok := r.(string); ok {
+				err = fmt.Errorf("Unexpected panic: %s", s)
+			} else {
+				err = fmt.Errorf("Unexpected panic: %+v", r)
+			}
+		}
+	}()
+
+	buff := ctx.Buffer
+	version := buff.ReadUInt8()
+
+	if version > CloudCostAggregateCodecVersion {
+		return fmt.Errorf("Invalid Version Unmarshaling CloudCostAggregateSetRange. Expected %d or less, got %d", CloudCostAggregateCodecVersion, version)
+	}
+
+	if buff.ReadUInt8() == uint8(0) {
+		target.CloudCostAggregateSets = nil
+	} else {
+		// --- [begin][read][slice]([]*CloudCostAggregateSet) ---
+		b := buff.ReadInt() // array len
+		a := make([]*CloudCostAggregateSet, b)
+		for i := 0; i < b; i++ {
+			var c *CloudCostAggregateSet
+			if buff.ReadUInt8() == uint8(0) {
+				c = nil
+			} else {
+				// --- [begin][read][struct](CloudCostAggregateSet) ---
+				d := &CloudCostAggregateSet{}
+				buff.ReadInt() // [compatibility, unused]
+				errA := d.UnmarshalBinaryWithContext(ctx)
+				if errA != nil {
+					return errA
+				}
+				c = d
+				// --- [end][read][struct](CloudCostAggregateSet) ---
+
+			}
+			a[i] = c
+		}
+		target.CloudCostAggregateSets = a
+		// --- [end][read][slice]([]*CloudCostAggregateSet) ---
+
+	}
+	// --- [begin][read][struct](Window) ---
+	e := &Window{}
+	buff.ReadInt() // [compatibility, unused]
+	errB := e.UnmarshalBinaryWithContext(ctx)
+	if errB != nil {
+		return errB
+	}
+	target.Window = *e
+	// --- [end][read][struct](Window) ---
+
+	return nil
+}
+
+//--------------------------------------------------------------------------
+//  CloudCostItem
+//--------------------------------------------------------------------------
+
+// MarshalBinary serializes the internal properties of this CloudCostItem instance
+// into a byte array
+func (target *CloudCostItem) MarshalBinary() (data []byte, err error) {
+	ctx := &EncodingContext{
+		Buffer: util.NewBuffer(),
+		Table:  nil,
+	}
+
+	e := target.MarshalBinaryWithContext(ctx)
+	if e != nil {
+		return nil, e
+	}
+
+	encBytes := ctx.Buffer.Bytes()
+	return encBytes, nil
+}
+
+// MarshalBinaryWithContext serializes the internal properties of this CloudCostItem instance
+// into a byte array leveraging a predefined context.
+func (target *CloudCostItem) MarshalBinaryWithContext(ctx *EncodingContext) (err error) {
+	// panics are recovered and propagated as errors
+	defer func() {
+		if r := recover(); r != nil {
+			if e, ok := r.(error); ok {
+				err = e
+			} else if s, ok := r.(string); ok {
+				err = fmt.Errorf("Unexpected panic: %s", s)
+			} else {
+				err = fmt.Errorf("Unexpected panic: %+v", r)
+			}
+		}
+	}()
+
+	buff := ctx.Buffer
+	buff.WriteUInt8(CloudCostItemCodecVersion) // version
+
+	// --- [begin][write][struct](CloudCostItemProperties) ---
+	buff.WriteInt(0) // [compatibility, unused]
+	errA := target.Properties.MarshalBinaryWithContext(ctx)
+	if errA != nil {
+		return errA
+	}
+	// --- [end][write][struct](CloudCostItemProperties) ---
+
+	buff.WriteBool(target.IsKubernetes) // write bool
+	// --- [begin][write][struct](Window) ---
+	buff.WriteInt(0) // [compatibility, unused]
+	errB := target.Window.MarshalBinaryWithContext(ctx)
+	if errB != nil {
+		return errB
+	}
+	// --- [end][write][struct](Window) ---
+
+	buff.WriteFloat64(target.Cost)   // write float64
+	buff.WriteFloat64(target.Credit) // write float64
+	return nil
+}
+
+// UnmarshalBinary uses the passed byte array to set all the internal properties of
+// the CloudCostItem type
+func (target *CloudCostItem) UnmarshalBinary(data []byte) error {
+	var table []string
+	buff := util.NewBufferFromBytes(data)
+
+	// string table header validation
+	if isBinaryTag(data, BinaryTagStringTable) {
+		buff.ReadBytes(len(BinaryTagStringTable)) // strip tag length
+		tl := buff.ReadInt()                      // table length
+		if tl > 0 {
+			table = make([]string, tl)
+			for i := 0; i < tl; i++ {
+				table[i] = buff.ReadString()
+			}
+		}
+	}
+
+	ctx := &DecodingContext{
+		Buffer: buff,
+		Table:  table,
+	}
+
+	err := target.UnmarshalBinaryWithContext(ctx)
+	if err != nil {
+		return err
+	}
+
+	return nil
+}
+
+// UnmarshalBinaryWithContext uses the context containing a string table and binary buffer to set all the internal properties of
+// the CloudCostItem type
+func (target *CloudCostItem) UnmarshalBinaryWithContext(ctx *DecodingContext) (err error) {
+	// panics are recovered and propagated as errors
+	defer func() {
+		if r := recover(); r != nil {
+			if e, ok := r.(error); ok {
+				err = e
+			} else if s, ok := r.(string); ok {
+				err = fmt.Errorf("Unexpected panic: %s", s)
+			} else {
+				err = fmt.Errorf("Unexpected panic: %+v", r)
+			}
+		}
+	}()
+
+	buff := ctx.Buffer
+	version := buff.ReadUInt8()
+
+	if version > CloudCostItemCodecVersion {
+		return fmt.Errorf("Invalid Version Unmarshaling CloudCostItem. Expected %d or less, got %d", CloudCostItemCodecVersion, version)
+	}
+
+	// --- [begin][read][struct](CloudCostItemProperties) ---
+	a := &CloudCostItemProperties{}
+	buff.ReadInt() // [compatibility, unused]
+	errA := a.UnmarshalBinaryWithContext(ctx)
+	if errA != nil {
+		return errA
+	}
+	target.Properties = *a
+	// --- [end][read][struct](CloudCostItemProperties) ---
+
+	b := buff.ReadBool() // read bool
+	target.IsKubernetes = b
+
+	// --- [begin][read][struct](Window) ---
+	c := &Window{}
+	buff.ReadInt() // [compatibility, unused]
+	errB := c.UnmarshalBinaryWithContext(ctx)
+	if errB != nil {
+		return errB
+	}
+	target.Window = *c
+	// --- [end][read][struct](Window) ---
+
+	d := buff.ReadFloat64() // read float64
+	target.Cost = d
+
+	e := buff.ReadFloat64() // read float64
+	target.Credit = e
+
+	return nil
+}
+
+//--------------------------------------------------------------------------
+//  CloudCostItemProperties
+//--------------------------------------------------------------------------
+
+// MarshalBinary serializes the internal properties of this CloudCostItemProperties instance
+// into a byte array
+func (target *CloudCostItemProperties) MarshalBinary() (data []byte, err error) {
+	ctx := &EncodingContext{
+		Buffer: util.NewBuffer(),
+		Table:  nil,
+	}
+
+	e := target.MarshalBinaryWithContext(ctx)
+	if e != nil {
+		return nil, e
+	}
+
+	encBytes := ctx.Buffer.Bytes()
+	return encBytes, nil
+}
+
+// MarshalBinaryWithContext serializes the internal properties of this CloudCostItemProperties instance
+// into a byte array leveraging a predefined context.
+func (target *CloudCostItemProperties) MarshalBinaryWithContext(ctx *EncodingContext) (err error) {
+	// panics are recovered and propagated as errors
+	defer func() {
+		if r := recover(); r != nil {
+			if e, ok := r.(error); ok {
+				err = e
+			} else if s, ok := r.(string); ok {
+				err = fmt.Errorf("Unexpected panic: %s", s)
+			} else {
+				err = fmt.Errorf("Unexpected panic: %+v", r)
+			}
+		}
+	}()
+
+	buff := ctx.Buffer
+	buff.WriteUInt8(CloudCostItemCodecVersion) // version
+
+	if ctx.IsStringTable() {
+		a := ctx.Table.AddOrGet(target.ProviderID)
+		buff.WriteInt(a) // write table index
+	} else {
+		buff.WriteString(target.ProviderID) // write string
+	}
+	if ctx.IsStringTable() {
+		b := ctx.Table.AddOrGet(target.Provider)
+		buff.WriteInt(b) // write table index
+	} else {
+		buff.WriteString(target.Provider) // write string
+	}
+	if ctx.IsStringTable() {
+		c := ctx.Table.AddOrGet(target.Account)
+		buff.WriteInt(c) // write table index
+	} else {
+		buff.WriteString(target.Account) // write string
+	}
+	if ctx.IsStringTable() {
+		d := ctx.Table.AddOrGet(target.Project)
+		buff.WriteInt(d) // write table index
+	} else {
+		buff.WriteString(target.Project) // write string
+	}
+	if ctx.IsStringTable() {
+		e := ctx.Table.AddOrGet(target.Service)
+		buff.WriteInt(e) // write table index
+	} else {
+		buff.WriteString(target.Service) // write string
+	}
+	if ctx.IsStringTable() {
+		f := ctx.Table.AddOrGet(target.Category)
+		buff.WriteInt(f) // write table index
+	} else {
+		buff.WriteString(target.Category) // write string
+	}
+	// --- [begin][write][alias](CloudCostItemLabels) ---
+	if map[string]string(target.Labels) == nil {
+		buff.WriteUInt8(uint8(0)) // write nil byte
+	} else {
+		buff.WriteUInt8(uint8(1)) // write non-nil byte
+
+		// --- [begin][write][map](map[string]string) ---
+		buff.WriteInt(len(map[string]string(target.Labels))) // map length
+		for v, z := range map[string]string(target.Labels) {
+			if ctx.IsStringTable() {
+				g := ctx.Table.AddOrGet(v)
+				buff.WriteInt(g) // write table index
+			} else {
+				buff.WriteString(v) // write string
+			}
+			if ctx.IsStringTable() {
+				h := ctx.Table.AddOrGet(z)
+				buff.WriteInt(h) // write table index
+			} else {
+				buff.WriteString(z) // write string
+			}
+		}
+		// --- [end][write][map](map[string]string) ---
+
+	}
+	// --- [end][write][alias](CloudCostItemLabels) ---
+
+	return nil
+}
+
+// UnmarshalBinary uses the passed byte array to set all the internal properties of
+// the CloudCostItemProperties type
+func (target *CloudCostItemProperties) UnmarshalBinary(data []byte) error {
+	var table []string
+	buff := util.NewBufferFromBytes(data)
+
+	// string table header validation
+	if isBinaryTag(data, BinaryTagStringTable) {
+		buff.ReadBytes(len(BinaryTagStringTable)) // strip tag length
+		tl := buff.ReadInt()                      // table length
+		if tl > 0 {
+			table = make([]string, tl)
+			for i := 0; i < tl; i++ {
+				table[i] = buff.ReadString()
+			}
+		}
+	}
+
+	ctx := &DecodingContext{
+		Buffer: buff,
+		Table:  table,
+	}
+
+	err := target.UnmarshalBinaryWithContext(ctx)
+	if err != nil {
+		return err
+	}
+
+	return nil
+}
+
+// UnmarshalBinaryWithContext uses the context containing a string table and binary buffer to set all the internal properties of
+// the CloudCostItemProperties type
+func (target *CloudCostItemProperties) UnmarshalBinaryWithContext(ctx *DecodingContext) (err error) {
+	// panics are recovered and propagated as errors
+	defer func() {
+		if r := recover(); r != nil {
+			if e, ok := r.(error); ok {
+				err = e
+			} else if s, ok := r.(string); ok {
+				err = fmt.Errorf("Unexpected panic: %s", s)
+			} else {
+				err = fmt.Errorf("Unexpected panic: %+v", r)
+			}
+		}
+	}()
+
+	buff := ctx.Buffer
+	version := buff.ReadUInt8()
+
+	if version > CloudCostItemCodecVersion {
+		return fmt.Errorf("Invalid Version Unmarshaling CloudCostItemProperties. Expected %d or less, got %d", CloudCostItemCodecVersion, version)
+	}
+
+	var b string
+	if ctx.IsStringTable() {
+		c := buff.ReadInt() // read string index
+		b = ctx.Table[c]
+	} else {
+		b = buff.ReadString() // read string
+	}
+	a := b
+	target.ProviderID = a
+
+	var e string
+	if ctx.IsStringTable() {
+		f := buff.ReadInt() // read string index
+		e = ctx.Table[f]
+	} else {
+		e = buff.ReadString() // read string
+	}
+	d := e
+	target.Provider = d
+
+	var h string
+	if ctx.IsStringTable() {
+		k := buff.ReadInt() // read string index
+		h = ctx.Table[k]
+	} else {
+		h = buff.ReadString() // read string
+	}
+	g := h
+	target.Account = g
+
+	var m string
+	if ctx.IsStringTable() {
+		n := buff.ReadInt() // read string index
+		m = ctx.Table[n]
+	} else {
+		m = buff.ReadString() // read string
+	}
+	l := m
+	target.Project = l
+
+	var p string
+	if ctx.IsStringTable() {
+		q := buff.ReadInt() // read string index
+		p = ctx.Table[q]
+	} else {
+		p = buff.ReadString() // read string
+	}
+	o := p
+	target.Service = o
+
+	var s string
+	if ctx.IsStringTable() {
+		t := buff.ReadInt() // read string index
+		s = ctx.Table[t]
+	} else {
+		s = buff.ReadString() // read string
+	}
+	r := s
+	target.Category = r
+
+	// --- [begin][read][alias](CloudCostItemLabels) ---
+	var u map[string]string
+	if buff.ReadUInt8() == uint8(0) {
+		u = nil
+	} else {
+		// --- [begin][read][map](map[string]string) ---
+		x := buff.ReadInt() // map len
+		w := make(map[string]string, x)
+		for i := 0; i < x; i++ {
+			var v string
+			var aa string
+			if ctx.IsStringTable() {
+				bb := buff.ReadInt() // read string index
+				aa = ctx.Table[bb]
+			} else {
+				aa = buff.ReadString() // read string
+			}
+			y := aa
+			v = y
+
+			var z string
+			var dd string
+			if ctx.IsStringTable() {
+				ee := buff.ReadInt() // read string index
+				dd = ctx.Table[ee]
+			} else {
+				dd = buff.ReadString() // read string
+			}
+			cc := dd
+			z = cc
+
+			w[v] = z
+		}
+		u = w
+		// --- [end][read][map](map[string]string) ---
+
+	}
+	target.Labels = CloudCostItemLabels(u)
+	// --- [end][read][alias](CloudCostItemLabels) ---
+
+	return nil
+}
+
+//--------------------------------------------------------------------------
+//  CloudCostItemSet
+//--------------------------------------------------------------------------
+
+// MarshalBinary serializes the internal properties of this CloudCostItemSet instance
+// into a byte array
+func (target *CloudCostItemSet) MarshalBinary() (data []byte, err error) {
+	ctx := &EncodingContext{
+		Buffer: util.NewBuffer(),
+		Table:  NewStringTable(),
+	}
+
+	e := target.MarshalBinaryWithContext(ctx)
+	if e != nil {
+		return nil, e
+	}
+
+	encBytes := ctx.Buffer.Bytes()
+	sTableBytes := ctx.Table.ToBytes()
+	merged := appendBytes(sTableBytes, encBytes)
+	return merged, nil
+}
+
+// MarshalBinaryWithContext serializes the internal properties of this CloudCostItemSet instance
+// into a byte array leveraging a predefined context.
+func (target *CloudCostItemSet) MarshalBinaryWithContext(ctx *EncodingContext) (err error) {
+	// panics are recovered and propagated as errors
+	defer func() {
+		if r := recover(); r != nil {
+			if e, ok := r.(error); ok {
+				err = e
+			} else if s, ok := r.(string); ok {
+				err = fmt.Errorf("Unexpected panic: %s", s)
+			} else {
+				err = fmt.Errorf("Unexpected panic: %+v", r)
+			}
+		}
+	}()
+
+	buff := ctx.Buffer
+	buff.WriteUInt8(CloudCostItemCodecVersion) // version
+
+	if target.CloudCostItems == nil {
+		buff.WriteUInt8(uint8(0)) // write nil byte
+	} else {
+		buff.WriteUInt8(uint8(1)) // write non-nil byte
+
+		// --- [begin][write][map](map[string]*CloudCostItem) ---
+		buff.WriteInt(len(target.CloudCostItems)) // map length
+		for v, z := range target.CloudCostItems {
+			if ctx.IsStringTable() {
+				a := ctx.Table.AddOrGet(v)
+				buff.WriteInt(a) // write table index
+			} else {
+				buff.WriteString(v) // write string
+			}
+			if z == nil {
+				buff.WriteUInt8(uint8(0)) // write nil byte
+			} else {
+				buff.WriteUInt8(uint8(1)) // write non-nil byte
+
+				// --- [begin][write][struct](CloudCostItem) ---
+				buff.WriteInt(0) // [compatibility, unused]
+				errA := z.MarshalBinaryWithContext(ctx)
+				if errA != nil {
+					return errA
+				}
+				// --- [end][write][struct](CloudCostItem) ---
+
+			}
+		}
+		// --- [end][write][map](map[string]*CloudCostItem) ---
+
+	}
+	// --- [begin][write][struct](Window) ---
+	buff.WriteInt(0) // [compatibility, unused]
+	errB := target.Window.MarshalBinaryWithContext(ctx)
+	if errB != nil {
+		return errB
+	}
+	// --- [end][write][struct](Window) ---
+
+	if ctx.IsStringTable() {
+		b := ctx.Table.AddOrGet(target.Integration)
+		buff.WriteInt(b) // write table index
+	} else {
+		buff.WriteString(target.Integration) // write string
+	}
+	return nil
+}
+
+// UnmarshalBinary uses the passed byte array to set all the internal properties of
+// the CloudCostItemSet type
+func (target *CloudCostItemSet) UnmarshalBinary(data []byte) error {
+	var table []string
+	buff := util.NewBufferFromBytes(data)
+
+	// string table header validation
+	if isBinaryTag(data, BinaryTagStringTable) {
+		buff.ReadBytes(len(BinaryTagStringTable)) // strip tag length
+		tl := buff.ReadInt()                      // table length
+		if tl > 0 {
+			table = make([]string, tl)
+			for i := 0; i < tl; i++ {
+				table[i] = buff.ReadString()
+			}
+		}
+	}
+
+	ctx := &DecodingContext{
+		Buffer: buff,
+		Table:  table,
+	}
+
+	err := target.UnmarshalBinaryWithContext(ctx)
+	if err != nil {
+		return err
+	}
+
+	return nil
+}
+
+// UnmarshalBinaryWithContext uses the context containing a string table and binary buffer to set all the internal properties of
+// the CloudCostItemSet type
+func (target *CloudCostItemSet) UnmarshalBinaryWithContext(ctx *DecodingContext) (err error) {
+	// panics are recovered and propagated as errors
+	defer func() {
+		if r := recover(); r != nil {
+			if e, ok := r.(error); ok {
+				err = e
+			} else if s, ok := r.(string); ok {
+				err = fmt.Errorf("Unexpected panic: %s", s)
+			} else {
+				err = fmt.Errorf("Unexpected panic: %+v", r)
+			}
+		}
+	}()
+
+	buff := ctx.Buffer
+	version := buff.ReadUInt8()
+
+	if version > CloudCostItemCodecVersion {
+		return fmt.Errorf("Invalid Version Unmarshaling CloudCostItemSet. Expected %d or less, got %d", CloudCostItemCodecVersion, version)
+	}
+
+	if buff.ReadUInt8() == uint8(0) {
+		target.CloudCostItems = nil
+	} else {
+		// --- [begin][read][map](map[string]*CloudCostItem) ---
+		b := buff.ReadInt() // map len
+		a := make(map[string]*CloudCostItem, b)
+		for i := 0; i < b; i++ {
+			var v string
+			var d string
+			if ctx.IsStringTable() {
+				e := buff.ReadInt() // read string index
+				d = ctx.Table[e]
+			} else {
+				d = buff.ReadString() // read string
+			}
+			c := d
+			v = c
+
+			var z *CloudCostItem
+			if buff.ReadUInt8() == uint8(0) {
+				z = nil
+			} else {
+				// --- [begin][read][struct](CloudCostItem) ---
+				f := &CloudCostItem{}
+				buff.ReadInt() // [compatibility, unused]
+				errA := f.UnmarshalBinaryWithContext(ctx)
+				if errA != nil {
+					return errA
+				}
+				z = f
+				// --- [end][read][struct](CloudCostItem) ---
+
+			}
+			a[v] = z
+		}
+		target.CloudCostItems = a
+		// --- [end][read][map](map[string]*CloudCostItem) ---
+
+	}
+	// --- [begin][read][struct](Window) ---
+	g := &Window{}
+	buff.ReadInt() // [compatibility, unused]
+	errB := g.UnmarshalBinaryWithContext(ctx)
+	if errB != nil {
+		return errB
+	}
+	target.Window = *g
+	// --- [end][read][struct](Window) ---
+
+	var k string
+	if ctx.IsStringTable() {
+		l := buff.ReadInt() // read string index
+		k = ctx.Table[l]
+	} else {
+		k = buff.ReadString() // read string
+	}
+	h := k
+	target.Integration = h
+
+	return nil
+}
+
+//--------------------------------------------------------------------------
+//  CloudCostItemSetRange
+//--------------------------------------------------------------------------
+
+// MarshalBinary serializes the internal properties of this CloudCostItemSetRange instance
+// into a byte array
+func (target *CloudCostItemSetRange) MarshalBinary() (data []byte, err error) {
+	ctx := &EncodingContext{
+		Buffer: util.NewBuffer(),
+		Table:  nil,
+	}
+
+	e := target.MarshalBinaryWithContext(ctx)
+	if e != nil {
+		return nil, e
+	}
+
+	encBytes := ctx.Buffer.Bytes()
+	return encBytes, nil
+}
+
+// MarshalBinaryWithContext serializes the internal properties of this CloudCostItemSetRange instance
+// into a byte array leveraging a predefined context.
+func (target *CloudCostItemSetRange) MarshalBinaryWithContext(ctx *EncodingContext) (err error) {
+	// panics are recovered and propagated as errors
+	defer func() {
+		if r := recover(); r != nil {
+			if e, ok := r.(error); ok {
+				err = e
+			} else if s, ok := r.(string); ok {
+				err = fmt.Errorf("Unexpected panic: %s", s)
+			} else {
+				err = fmt.Errorf("Unexpected panic: %+v", r)
+			}
+		}
+	}()
+
+	buff := ctx.Buffer
+	buff.WriteUInt8(CloudCostItemCodecVersion) // version
+
+	if target.CloudCostItemSets == nil {
+		buff.WriteUInt8(uint8(0)) // write nil byte
+	} else {
+		buff.WriteUInt8(uint8(1)) // write non-nil byte
+
+		// --- [begin][write][slice]([]*CloudCostItemSet) ---
+		buff.WriteInt(len(target.CloudCostItemSets)) // array length
+		for i := 0; i < len(target.CloudCostItemSets); i++ {
+			if target.CloudCostItemSets[i] == nil {
+				buff.WriteUInt8(uint8(0)) // write nil byte
+			} else {
+				buff.WriteUInt8(uint8(1)) // write non-nil byte
+
+				// --- [begin][write][struct](CloudCostItemSet) ---
+				buff.WriteInt(0) // [compatibility, unused]
+				errA := target.CloudCostItemSets[i].MarshalBinaryWithContext(ctx)
+				if errA != nil {
+					return errA
+				}
+				// --- [end][write][struct](CloudCostItemSet) ---
+
+			}
+		}
+		// --- [end][write][slice]([]*CloudCostItemSet) ---
+
+	}
+	// --- [begin][write][struct](Window) ---
+	buff.WriteInt(0) // [compatibility, unused]
+	errB := target.Window.MarshalBinaryWithContext(ctx)
+	if errB != nil {
+		return errB
+	}
+	// --- [end][write][struct](Window) ---
+
+	return nil
+}
+
+// UnmarshalBinary uses the passed byte array to set all the internal properties of
+// the CloudCostItemSetRange type
+func (target *CloudCostItemSetRange) UnmarshalBinary(data []byte) error {
+	var table []string
+	buff := util.NewBufferFromBytes(data)
+
+	// string table header validation
+	if isBinaryTag(data, BinaryTagStringTable) {
+		buff.ReadBytes(len(BinaryTagStringTable)) // strip tag length
+		tl := buff.ReadInt()                      // table length
+		if tl > 0 {
+			table = make([]string, tl)
+			for i := 0; i < tl; i++ {
+				table[i] = buff.ReadString()
+			}
+		}
+	}
+
+	ctx := &DecodingContext{
+		Buffer: buff,
+		Table:  table,
+	}
+
+	err := target.UnmarshalBinaryWithContext(ctx)
+	if err != nil {
+		return err
+	}
+
+	return nil
+}
+
+// UnmarshalBinaryWithContext uses the context containing a string table and binary buffer to set all the internal properties of
+// the CloudCostItemSetRange type
+func (target *CloudCostItemSetRange) UnmarshalBinaryWithContext(ctx *DecodingContext) (err error) {
+	// panics are recovered and propagated as errors
+	defer func() {
+		if r := recover(); r != nil {
+			if e, ok := r.(error); ok {
+				err = e
+			} else if s, ok := r.(string); ok {
+				err = fmt.Errorf("Unexpected panic: %s", s)
+			} else {
+				err = fmt.Errorf("Unexpected panic: %+v", r)
+			}
+		}
+	}()
+
+	buff := ctx.Buffer
+	version := buff.ReadUInt8()
+
+	if version > CloudCostItemCodecVersion {
+		return fmt.Errorf("Invalid Version Unmarshaling CloudCostItemSetRange. Expected %d or less, got %d", CloudCostItemCodecVersion, version)
+	}
+
+	if buff.ReadUInt8() == uint8(0) {
+		target.CloudCostItemSets = nil
+	} else {
+		// --- [begin][read][slice]([]*CloudCostItemSet) ---
+		b := buff.ReadInt() // array len
+		a := make([]*CloudCostItemSet, b)
+		for i := 0; i < b; i++ {
+			var c *CloudCostItemSet
+			if buff.ReadUInt8() == uint8(0) {
+				c = nil
+			} else {
+				// --- [begin][read][struct](CloudCostItemSet) ---
+				d := &CloudCostItemSet{}
+				buff.ReadInt() // [compatibility, unused]
+				errA := d.UnmarshalBinaryWithContext(ctx)
+				if errA != nil {
+					return errA
+				}
+				c = d
+				// --- [end][read][struct](CloudCostItemSet) ---
+
+			}
+			a[i] = c
+		}
+		target.CloudCostItemSets = a
+		// --- [end][read][slice]([]*CloudCostItemSet) ---
+
+	}
+	// --- [begin][read][struct](Window) ---
+	e := &Window{}
+	buff.ReadInt() // [compatibility, unused]
+	errB := e.UnmarshalBinaryWithContext(ctx)
+	if errB != nil {
+		return errB
+	}
+	target.Window = *e
+	// --- [end][read][struct](Window) ---
+
+	return nil
+}
+
+//--------------------------------------------------------------------------
+//  ClusterManagement
+//--------------------------------------------------------------------------
+
+// MarshalBinary serializes the internal properties of this ClusterManagement instance
+// into a byte array
+func (target *ClusterManagement) MarshalBinary() (data []byte, err error) {
+	ctx := &EncodingContext{
+		Buffer: util.NewBuffer(),
+		Table:  nil,
+	}
+
+	e := target.MarshalBinaryWithContext(ctx)
+	if e != nil {
+		return nil, e
+	}
+
+	encBytes := ctx.Buffer.Bytes()
+	return encBytes, nil
+}
+
+// MarshalBinaryWithContext serializes the internal properties of this ClusterManagement instance
+// into a byte array leveraging a predefined context.
+func (target *ClusterManagement) MarshalBinaryWithContext(ctx *EncodingContext) (err error) {
+	// panics are recovered and propagated as errors
+	defer func() {
+		if r := recover(); r != nil {
+			if e, ok := r.(error); ok {
+				err = e
+			} else if s, ok := r.(string); ok {
+				err = fmt.Errorf("Unexpected panic: %s", s)
+			} else {
+				err = fmt.Errorf("Unexpected panic: %+v", r)
+			}
+		}
+	}()
+
+	buff := ctx.Buffer
+	buff.WriteUInt8(AssetsCodecVersion) // version
+
+	// --- [begin][write][alias](AssetLabels) ---
+	if map[string]string(target.Labels) == nil {
+		buff.WriteUInt8(uint8(0)) // write nil byte
+	} else {
+		buff.WriteUInt8(uint8(1)) // write non-nil byte
+
+		// --- [begin][write][map](map[string]string) ---
+		buff.WriteInt(len(map[string]string(target.Labels))) // map length
+		for v, z := range map[string]string(target.Labels) {
+			if ctx.IsStringTable() {
+				a := ctx.Table.AddOrGet(v)
+				buff.WriteInt(a) // write table index
+			} else {
+				buff.WriteString(v) // write string
+			}
+			if ctx.IsStringTable() {
+				b := ctx.Table.AddOrGet(z)
+				buff.WriteInt(b) // write table index
+			} else {
+				buff.WriteString(z) // write string
+			}
+		}
+		// --- [end][write][map](map[string]string) ---
+
+	}
+	// --- [end][write][alias](AssetLabels) ---
+
+	if target.Properties == nil {
+		buff.WriteUInt8(uint8(0)) // write nil byte
+	} else {
+		buff.WriteUInt8(uint8(1)) // write non-nil byte
+
+		// --- [begin][write][struct](AssetProperties) ---
+		buff.WriteInt(0) // [compatibility, unused]
+		errA := target.Properties.MarshalBinaryWithContext(ctx)
+		if errA != nil {
+			return errA
+		}
+		// --- [end][write][struct](AssetProperties) ---
+
+	}
+	// --- [begin][write][struct](Window) ---
+	buff.WriteInt(0) // [compatibility, unused]
+	errB := target.Window.MarshalBinaryWithContext(ctx)
+	if errB != nil {
+		return errB
+	}
+	// --- [end][write][struct](Window) ---
+
+	buff.WriteFloat64(target.Cost)       // write float64
+	buff.WriteFloat64(target.Adjustment) // write float64
+	return nil
+}
+
+// UnmarshalBinary uses the passed byte array to set all the internal properties of
+// the ClusterManagement type
+func (target *ClusterManagement) UnmarshalBinary(data []byte) error {
+	var table []string
+	buff := util.NewBufferFromBytes(data)
+
+	// string table header validation
+	if isBinaryTag(data, BinaryTagStringTable) {
+		buff.ReadBytes(len(BinaryTagStringTable)) // strip tag length
+		tl := buff.ReadInt()                      // table length
+		if tl > 0 {
+			table = make([]string, tl)
+			for i := 0; i < tl; i++ {
+				table[i] = buff.ReadString()
+			}
+		}
+	}
+
+	ctx := &DecodingContext{
+		Buffer: buff,
+		Table:  table,
+	}
+
+	err := target.UnmarshalBinaryWithContext(ctx)
+	if err != nil {
+		return err
+	}
+
+	return nil
+}
+
+// UnmarshalBinaryWithContext uses the context containing a string table and binary buffer to set all the internal properties of
+// the ClusterManagement type
+func (target *ClusterManagement) UnmarshalBinaryWithContext(ctx *DecodingContext) (err error) {
+	// panics are recovered and propagated as errors
+	defer func() {
+		if r := recover(); r != nil {
+			if e, ok := r.(error); ok {
+				err = e
+			} else if s, ok := r.(string); ok {
+				err = fmt.Errorf("Unexpected panic: %s", s)
+			} else {
+				err = fmt.Errorf("Unexpected panic: %+v", r)
+			}
+		}
+	}()
+
+	buff := ctx.Buffer
+	version := buff.ReadUInt8()
+
+	if version > AssetsCodecVersion {
+		return fmt.Errorf("Invalid Version Unmarshaling ClusterManagement. Expected %d or less, got %d", AssetsCodecVersion, version)
+	}
+
+	// --- [begin][read][alias](AssetLabels) ---
+	var a map[string]string
+	if buff.ReadUInt8() == uint8(0) {
+		a = nil
+	} else {
+		// --- [begin][read][map](map[string]string) ---
+		c := buff.ReadInt() // map len
+		b := make(map[string]string, c)
+		for i := 0; i < c; i++ {
+			var v string
+			var e string
+			if ctx.IsStringTable() {
+				f := buff.ReadInt() // read string index
+				e = ctx.Table[f]
+			} else {
+				e = buff.ReadString() // read string
+			}
+			d := e
+			v = d
+
+			var z string
+			var h string
+			if ctx.IsStringTable() {
+				k := buff.ReadInt() // read string index
+				h = ctx.Table[k]
+			} else {
+				h = buff.ReadString() // read string
+			}
+			g := h
+			z = g
+
+			b[v] = z
+		}
+		a = b
+		// --- [end][read][map](map[string]string) ---
+
+	}
+	target.Labels = AssetLabels(a)
+	// --- [end][read][alias](AssetLabels) ---
+
+	if buff.ReadUInt8() == uint8(0) {
+		target.Properties = nil
+	} else {
+		// --- [begin][read][struct](AssetProperties) ---
+		l := &AssetProperties{}
+		buff.ReadInt() // [compatibility, unused]
+		errA := l.UnmarshalBinaryWithContext(ctx)
+		if errA != nil {
+			return errA
+		}
+		target.Properties = l
+		// --- [end][read][struct](AssetProperties) ---
+
+	}
+	// --- [begin][read][struct](Window) ---
+	m := &Window{}
+	buff.ReadInt() // [compatibility, unused]
+	errB := m.UnmarshalBinaryWithContext(ctx)
+	if errB != nil {
+		return errB
+	}
+	target.Window = *m
+	// --- [end][read][struct](Window) ---
+
+	n := buff.ReadFloat64() // read float64
+	target.Cost = n
+
+	// field version check
+	if uint8(16) <= version {
+		o := buff.ReadFloat64() // read float64
+		target.Adjustment = o
+
+	} else {
+		target.Adjustment = float64(0) // default
+	}
+
+	return nil
+}
+
+//--------------------------------------------------------------------------
+//  Coverage
+//--------------------------------------------------------------------------
+
+// MarshalBinary serializes the internal properties of this Coverage instance
+// into a byte array
+func (target *Coverage) MarshalBinary() (data []byte, err error) {
+	ctx := &EncodingContext{
+		Buffer: util.NewBuffer(),
+		Table:  nil,
+	}
+
+	e := target.MarshalBinaryWithContext(ctx)
+	if e != nil {
+		return nil, e
+	}
+
+	encBytes := ctx.Buffer.Bytes()
+	return encBytes, nil
+}
+
+// MarshalBinaryWithContext serializes the internal properties of this Coverage instance
+// into a byte array leveraging a predefined context.
+func (target *Coverage) MarshalBinaryWithContext(ctx *EncodingContext) (err error) {
+	// panics are recovered and propagated as errors
+	defer func() {
+		if r := recover(); r != nil {
+			if e, ok := r.(error); ok {
+				err = e
+			} else if s, ok := r.(string); ok {
+				err = fmt.Errorf("Unexpected panic: %s", s)
+			} else {
+				err = fmt.Errorf("Unexpected panic: %+v", r)
+			}
+		}
+	}()
+
+	buff := ctx.Buffer
+	buff.WriteUInt8(DefaultCodecVersion) // version
+
+	// --- [begin][write][struct](Window) ---
+	buff.WriteInt(0) // [compatibility, unused]
+	errA := target.Window.MarshalBinaryWithContext(ctx)
+	if errA != nil {
+		return errA
+	}
+	// --- [end][write][struct](Window) ---
+
+	if ctx.IsStringTable() {
+		a := ctx.Table.AddOrGet(target.Type)
+		buff.WriteInt(a) // write table index
+	} else {
+		buff.WriteString(target.Type) // write string
+	}
+	buff.WriteInt(target.Count) // write int
+	// --- [begin][write][reference](time.Time) ---
+	b, errB := target.Updated.MarshalBinary()
+	if errB != nil {
+		return errB
+	}
+	buff.WriteInt(len(b))
+	buff.WriteBytes(b)
+	// --- [end][write][reference](time.Time) ---
+
+	if target.Errors == nil {
+		buff.WriteUInt8(uint8(0)) // write nil byte
+	} else {
+		buff.WriteUInt8(uint8(1)) // write non-nil byte
+
+		// --- [begin][write][slice]([]string) ---
+		buff.WriteInt(len(target.Errors)) // array length
+		for i := 0; i < len(target.Errors); i++ {
+			if ctx.IsStringTable() {
+				c := ctx.Table.AddOrGet(target.Errors[i])
+				buff.WriteInt(c) // write table index
+			} else {
+				buff.WriteString(target.Errors[i]) // write string
+			}
+		}
+		// --- [end][write][slice]([]string) ---
+
+	}
+	if target.Warnings == nil {
+		buff.WriteUInt8(uint8(0)) // write nil byte
+	} else {
+		buff.WriteUInt8(uint8(1)) // write non-nil byte
+
+		// --- [begin][write][slice]([]string) ---
+		buff.WriteInt(len(target.Warnings)) // array length
+		for j := 0; j < len(target.Warnings); j++ {
+			if ctx.IsStringTable() {
+				d := ctx.Table.AddOrGet(target.Warnings[j])
+				buff.WriteInt(d) // write table index
+			} else {
+				buff.WriteString(target.Warnings[j]) // write string
+			}
+		}
+		// --- [end][write][slice]([]string) ---
+
+	}
+	return nil
+}
+
+// UnmarshalBinary uses the data passed byte array to set all the internal properties of
+// the Coverage type
+func (target *Coverage) UnmarshalBinary(data []byte) error {
+	var table []string
+	buff := util.NewBufferFromBytes(data)
+
+	// string table header validation
+	if isBinaryTag(data, BinaryTagStringTable) {
+		buff.ReadBytes(len(BinaryTagStringTable)) // strip tag length
+		tl := buff.ReadInt()                      // table length
+		if tl > 0 {
+			table = make([]string, tl, tl)
+			for i := 0; i < tl; i++ {
+				table[i] = buff.ReadString()
+			}
+		}
+	}
+
+	ctx := &DecodingContext{
+		Buffer: buff,
+		Table:  table,
+	}
+
+	err := target.UnmarshalBinaryWithContext(ctx)
+	if err != nil {
+		return err
+	}
+
+	return nil
+}
+
+// UnmarshalBinaryWithContext uses the context containing a string table and binary buffer to set all the internal properties of
+// the Coverage type
+func (target *Coverage) UnmarshalBinaryWithContext(ctx *DecodingContext) (err error) {
+	// panics are recovered and propagated as errors
+	defer func() {
+		if r := recover(); r != nil {
+			if e, ok := r.(error); ok {
+				err = e
+			} else if s, ok := r.(string); ok {
+				err = fmt.Errorf("Unexpected panic: %s", s)
+			} else {
+				err = fmt.Errorf("Unexpected panic: %+v", r)
+			}
+		}
+	}()
+
+	buff := ctx.Buffer
+	version := buff.ReadUInt8()
+
+	if version > DefaultCodecVersion {
+		return fmt.Errorf("Invalid Version Unmarshaling Coverage. Expected %d or less, got %d", DefaultCodecVersion, version)
+	}
+
+	// --- [begin][read][struct](Window) ---
+	a := &Window{}
+	buff.ReadInt() // [compatibility, unused]
+	errA := a.UnmarshalBinaryWithContext(ctx)
+	if errA != nil {
+		return errA
+	}
+	target.Window = *a
+	// --- [end][read][struct](Window) ---
+
+	var c string
+	if ctx.IsStringTable() {
+		d := buff.ReadInt() // read string index
+		c = ctx.Table[d]
+	} else {
+		c = buff.ReadString() // read string
+	}
+	b := c
+	target.Type = b
+
+	e := buff.ReadInt() // read int
+	target.Count = e
+
+	// --- [begin][read][reference](time.Time) ---
+	f := &time.Time{}
+	g := buff.ReadInt()    // byte array length
+	h := buff.ReadBytes(g) // byte array
+	errB := f.UnmarshalBinary(h)
+	if errB != nil {
+		return errB
+	}
+	target.Updated = *f
+	// --- [end][read][reference](time.Time) ---
+
+	if buff.ReadUInt8() == uint8(0) {
+		target.Errors = nil
+	} else {
+		// --- [begin][read][slice]([]string) ---
+		l := buff.ReadInt() // array len
+		k := make([]string, l)
+		for i := 0; i < l; i++ {
+			var m string
+			var o string
+			if ctx.IsStringTable() {
+				p := buff.ReadInt() // read string index
+				o = ctx.Table[p]
+			} else {
+				o = buff.ReadString() // read string
+			}
+			n := o
+			m = n
+
+			k[i] = m
+		}
+		target.Errors = k
+		// --- [end][read][slice]([]string) ---
+
+	}
+	if buff.ReadUInt8() == uint8(0) {
+		target.Warnings = nil
+	} else {
+		// --- [begin][read][slice]([]string) ---
+		r := buff.ReadInt() // array len
+		q := make([]string, r)
+		for j := 0; j < r; j++ {
+			var s string
+			var u string
+			if ctx.IsStringTable() {
+				w := buff.ReadInt() // read string index
+				u = ctx.Table[w]
+			} else {
+				u = buff.ReadString() // read string
+			}
+			t := u
+			s = t
+
+			q[j] = s
+		}
+		target.Warnings = q
+		// --- [end][read][slice]([]string) ---
+
+	}
+	return nil
+}
+
+//--------------------------------------------------------------------------
+//  CoverageSet
+//--------------------------------------------------------------------------
+
+// MarshalBinary serializes the internal properties of this CoverageSet instance
+// into a byte array
+func (target *CoverageSet) MarshalBinary() (data []byte, err error) {
+	ctx := &EncodingContext{
+		Buffer: util.NewBuffer(),
+		Table:  nil,
+	}
+
+	e := target.MarshalBinaryWithContext(ctx)
+	if e != nil {
+		return nil, e
+	}
+
+	encBytes := ctx.Buffer.Bytes()
+	return encBytes, nil
+}
+
+// MarshalBinaryWithContext serializes the internal properties of this CoverageSet instance
+// into a byte array leveraging a predefined context.
+func (target *CoverageSet) MarshalBinaryWithContext(ctx *EncodingContext) (err error) {
+	// panics are recovered and propagated as errors
+	defer func() {
+		if r := recover(); r != nil {
+			if e, ok := r.(error); ok {
+				err = e
+			} else if s, ok := r.(string); ok {
+				err = fmt.Errorf("Unexpected panic: %s", s)
+			} else {
+				err = fmt.Errorf("Unexpected panic: %+v", r)
+			}
+		}
+	}()
+
+	buff := ctx.Buffer
+	buff.WriteUInt8(DefaultCodecVersion) // version
+
+	// --- [begin][write][struct](Window) ---
+	buff.WriteInt(0) // [compatibility, unused]
+	errA := target.Window.MarshalBinaryWithContext(ctx)
+	if errA != nil {
+		return errA
+	}
+	// --- [end][write][struct](Window) ---
+
+	if target.Items == nil {
+		buff.WriteUInt8(uint8(0)) // write nil byte
+	} else {
+		buff.WriteUInt8(uint8(1)) // write non-nil byte
+
+		// --- [begin][write][map](map[string]*Coverage) ---
+		buff.WriteInt(len(target.Items)) // map length
+		for v, z := range target.Items {
+			if ctx.IsStringTable() {
+				a := ctx.Table.AddOrGet(v)
+				buff.WriteInt(a) // write table index
+			} else {
+				buff.WriteString(v) // write string
+			}
+			if z == nil {
+				buff.WriteUInt8(uint8(0)) // write nil byte
+			} else {
+				buff.WriteUInt8(uint8(1)) // write non-nil byte
+
+				// --- [begin][write][struct](Coverage) ---
+				buff.WriteInt(0) // [compatibility, unused]
+				errB := z.MarshalBinaryWithContext(ctx)
+				if errB != nil {
+					return errB
+				}
+				// --- [end][write][struct](Coverage) ---
+
+			}
+		}
+		// --- [end][write][map](map[string]*Coverage) ---
+
+	}
+	return nil
+}
+
+// UnmarshalBinary uses the data passed byte array to set all the internal properties of
+// the CoverageSet type
+func (target *CoverageSet) UnmarshalBinary(data []byte) error {
+	var table []string
+	buff := util.NewBufferFromBytes(data)
+
+	// string table header validation
+	if isBinaryTag(data, BinaryTagStringTable) {
+		buff.ReadBytes(len(BinaryTagStringTable)) // strip tag length
+		tl := buff.ReadInt()                      // table length
+		if tl > 0 {
+			table = make([]string, tl, tl)
+			for i := 0; i < tl; i++ {
+				table[i] = buff.ReadString()
+			}
+		}
+	}
+
+	ctx := &DecodingContext{
+		Buffer: buff,
+		Table:  table,
+	}
+
+	err := target.UnmarshalBinaryWithContext(ctx)
+	if err != nil {
+		return err
+	}
+
+	return nil
+}
+
+// UnmarshalBinaryWithContext uses the context containing a string table and binary buffer to set all the internal properties of
+// the CoverageSet type
+func (target *CoverageSet) UnmarshalBinaryWithContext(ctx *DecodingContext) (err error) {
+	// panics are recovered and propagated as errors
+	defer func() {
+		if r := recover(); r != nil {
+			if e, ok := r.(error); ok {
+				err = e
+			} else if s, ok := r.(string); ok {
+				err = fmt.Errorf("Unexpected panic: %s", s)
+			} else {
+				err = fmt.Errorf("Unexpected panic: %+v", r)
+			}
+		}
+	}()
+
+	buff := ctx.Buffer
+	version := buff.ReadUInt8()
+
+	if version > DefaultCodecVersion {
+		return fmt.Errorf("Invalid Version Unmarshaling CoverageSet. Expected %d or less, got %d", DefaultCodecVersion, version)
+	}
+
+	// --- [begin][read][struct](Window) ---
+	a := &Window{}
+	buff.ReadInt() // [compatibility, unused]
+	errA := a.UnmarshalBinaryWithContext(ctx)
+	if errA != nil {
+		return errA
+	}
+	target.Window = *a
+	// --- [end][read][struct](Window) ---
+
+	if buff.ReadUInt8() == uint8(0) {
+		target.Items = nil
+	} else {
+		// --- [begin][read][map](map[string]*Coverage) ---
+		c := buff.ReadInt() // map len
+		b := make(map[string]*Coverage, c)
+		for i := 0; i < c; i++ {
+			var v string
+			var e string
+			if ctx.IsStringTable() {
+				f := buff.ReadInt() // read string index
+				e = ctx.Table[f]
+			} else {
+				e = buff.ReadString() // read string
+			}
+			d := e
+			v = d
+
+			var z *Coverage
+			if buff.ReadUInt8() == uint8(0) {
+				z = nil
+			} else {
+				// --- [begin][read][struct](Coverage) ---
+				g := &Coverage{}
+				buff.ReadInt() // [compatibility, unused]
+				errB := g.UnmarshalBinaryWithContext(ctx)
+				if errB != nil {
+					return errB
+				}
+				z = g
+				// --- [end][read][struct](Coverage) ---
+
+			}
+			b[v] = z
+		}
+		target.Items = b
+		// --- [end][read][map](map[string]*Coverage) ---
+
+	}
 	return nil
 }

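The generated codecs above all follow the same layout: a version byte, an optional string-table header, then fields in declaration order, with each string replaced by a table index when a string table is in use. A minimal, self-contained sketch of that string-interning scheme (the `addOrGet` helper is hypothetical, not the actual `util.Buffer`/`ctx.Table` API):

```go
package main

import "fmt"

// addOrGet interns s in the table and returns its index, mirroring the role
// of ctx.Table.AddOrGet in the generated codecs above.
func addOrGet(table *[]string, index map[string]int, s string) int {
	if i, ok := index[s]; ok {
		return i
	}
	i := len(*table)
	*table = append(*table, s)
	index[s] = i
	return i
}

func main() {
	var table []string
	index := map[string]int{}

	// Writing repeated strings stores each unique value only once;
	// the payload carries small integer indices instead.
	ids := []int{}
	for _, s := range []string{"cluster1", "node", "cluster1"} {
		ids = append(ids, addOrGet(&table, index, s))
	}
	fmt.Println(ids)   // [0 1 0]
	fmt.Println(table) // [cluster1 node]

	// A reader resolves indices back through the decoded table,
	// as the IsStringTable branches do when unmarshaling.
	fmt.Println(table[ids[2]]) // cluster1
}
```

This is why the readers check `ctx.IsStringTable()` before deciding whether to read an index or an inline string.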

+ 2 - 0
pkg/kubecost/query.go

@@ -67,6 +67,7 @@ type AssetQueryOptions struct {
 	IncludeCloud            bool
 	SharedHourlyCosts       map[string]float64
 	Step                    time.Duration
+	LabelConfig             *LabelConfig
 }

 // CloudUsageQueryOptions define optional parameters for querying a Store
@@ -76,6 +77,7 @@ type CloudUsageQueryOptions struct {
 	Compute      bool
 	FilterFuncs  []CloudUsageMatchFunc
 	FilterValues CloudUsageFilter
+	LabelConfig  *LabelConfig
 }

 type CloudUsageFilter struct {

+ 24 - 19
pkg/kubecost/summaryallocation.go

@@ -1179,20 +1179,23 @@ func (sas *SummaryAllocationSet) RAMEfficiency() float64 {
 	sas.RLock()
 	defer sas.RUnlock()

-	totalRAMBytesUsage := 0.0
-	totalRAMBytesRequest := 0.0
+	totalRAMBytesMinutesUsage := 0.0
+	totalRAMBytesMinutesRequest := 0.0
 	totalRAMCost := 0.0
 	for _, sa := range sas.SummaryAllocations {
-		totalRAMBytesUsage += sa.RAMBytesUsageAverage
-		totalRAMBytesRequest += sa.RAMBytesRequestAverage
+		if sa.IsIdle() {
+			continue
+		}
+		totalRAMBytesMinutesUsage += sa.RAMBytesUsageAverage * sa.Minutes()
+		totalRAMBytesMinutesRequest += sa.RAMBytesRequestAverage * sa.Minutes()
 		totalRAMCost += sa.RAMCost
 	}

-	if totalRAMBytesRequest > 0 {
-		return totalRAMBytesUsage / totalRAMBytesRequest
+	if totalRAMBytesMinutesRequest > 0 {
+		return totalRAMBytesMinutesUsage / totalRAMBytesMinutesRequest
 	}

-	if totalRAMBytesUsage == 0.0 || totalRAMCost == 0.0 {
+	if totalRAMBytesMinutesUsage == 0.0 || totalRAMCost == 0.0 {
 		return 0.0
 	}

@@ -1208,20 +1211,23 @@ func (sas *SummaryAllocationSet) CPUEfficiency() float64 {
 	sas.RLock()
 	defer sas.RUnlock()

-	totalCPUCoreUsage := 0.0
-	totalCPUCoreRequest := 0.0
+	totalCPUCoreMinutesUsage := 0.0
+	totalCPUCoreMinutesRequest := 0.0
 	totalCPUCost := 0.0
 	for _, sa := range sas.SummaryAllocations {
-		totalCPUCoreUsage += sa.CPUCoreUsageAverage
-		totalCPUCoreRequest += sa.CPUCoreRequestAverage
+		if sa.IsIdle() {
+			continue
+		}
+		totalCPUCoreMinutesUsage += sa.CPUCoreUsageAverage * sa.Minutes()
+		totalCPUCoreMinutesRequest += sa.CPUCoreRequestAverage * sa.Minutes()
 		totalCPUCost += sa.CPUCost
 	}

-	if totalCPUCoreRequest > 0 {
-		return totalCPUCoreUsage / totalCPUCoreRequest
+	if totalCPUCoreMinutesRequest > 0 {
+		return totalCPUCoreMinutesUsage / totalCPUCoreMinutesRequest
 	}

-	if totalCPUCoreUsage == 0.0 || totalCPUCost == 0.0 {
+	if totalCPUCoreMinutesUsage == 0.0 || totalCPUCost == 0.0 {
 		return 0.0
 	}

@@ -1237,19 +1243,18 @@ func (sas *SummaryAllocationSet) TotalEfficiency() float64 {
 	sas.RLock()
 	defer sas.RUnlock()

-	totalRAMCostEff := 0.0
-	totalCPUCostEff := 0.0
 	totalRAMCost := 0.0
 	totalCPUCost := 0.0
 	for _, sa := range sas.SummaryAllocations {
-		totalRAMCostEff += sa.RAMEfficiency() * sa.RAMCost
-		totalCPUCostEff += sa.CPUEfficiency() * sa.CPUCost
+		if sa.IsIdle() {
+			continue
+		}
 		totalRAMCost += sa.RAMCost
 		totalCPUCost += sa.CPUCost
 	}

 	if totalRAMCost+totalCPUCost > 0 {
-		return (totalRAMCostEff + totalCPUCostEff) / (totalRAMCost + totalCPUCost)
+		return (totalRAMCost*sas.RAMEfficiency() + totalCPUCost*sas.CPUEfficiency()) / (totalRAMCost + totalCPUCost)
 	}

 	return 0.0
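The hunks above change the set-level efficiency from a plain sum of per-allocation averages to a runtime-weighted (byte/core-minutes) ratio, and skip `__idle__` allocations. A minimal sketch of why minute-weighting matters, using hypothetical values and a simplified `alloc` struct rather than the real `SummaryAllocation` type:

```go
package main

import "fmt"

type alloc struct {
	usageAvg, requestAvg, minutes float64
	idle                          bool
}

// efficiency weights each allocation's usage and request by its runtime
// minutes and skips idle entries, as RAMEfficiency/CPUEfficiency now do.
func efficiency(allocs []alloc) float64 {
	var usageMin, requestMin float64
	for _, a := range allocs {
		if a.idle {
			continue
		}
		usageMin += a.usageAvg * a.minutes
		requestMin += a.requestAvg * a.minutes
	}
	if requestMin > 0 {
		return usageMin / requestMin
	}
	return 0.0
}

func main() {
	allocs := []alloc{
		{usageAvg: 0.5, requestAvg: 1.0, minutes: 60},   // ran 1h at 50% of request
		{usageAvg: 1.0, requestAvg: 1.0, minutes: 1440}, // ran 24h at 100% of request
		{usageAvg: 0, requestAvg: 0, minutes: 1500, idle: true},
	}
	// A naive average of averages would give (0.5+1.0)/(1.0+1.0) = 0.75;
	// minute-weighting reflects that the efficient allocation ran far longer.
	fmt.Printf("%.3f\n", efficiency(allocs)) // 0.980
}
```

The `TotalEfficiency` change follows from the same idea: recomputing the set-level RAM/CPU efficiencies and weighting them by cost, instead of summing per-allocation `efficiency * cost` terms.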

+ 65 - 5
pkg/kubecost/summaryallocation_test.go

@@ -213,10 +213,10 @@ func TestSummaryAllocation_Add(t *testing.T) {

 func TestSummaryAllocationSet_RAMEfficiency(t *testing.T) {
 	// Generating 6 sample summary allocations for testing
-	var sa1, sa2, sa3, sa4, sa5, sa6 *SummaryAllocation
+	var sa1, sa2, sa3, sa4, sa5, sa6, idlesa *SummaryAllocation

 	// Generating accumulated summary allocation sets for testing
-	var sas1, sas2, sas3, sas4, sas5 *SummaryAllocationSet
+	var sas1, sas2, sas3, sas4, sas5, sas6 *SummaryAllocationSet

 	window, _ := ParseWindowUTC("7d")

@@ -314,6 +314,20 @@ func TestSummaryAllocationSet_RAMEfficiency(t *testing.T) {
 		RAMCost:                0.10,
 	}

+	idlesa = &SummaryAllocation{
+		Name: IdleSuffix,
+		Properties: &AllocationProperties{
+			Cluster:   "cluster1",
+			Namespace: "namespace1",
+			Pod:       "pod1",
+			Container: "container7",
+		},
+		Start:   saStart,
+		End:     saEnd,
+		CPUCost: 1.0,
+		RAMCost: 1.0,
+	}
+
 	testcase1Map := map[string]*SummaryAllocation{
 		"cluster1/namespace1/pod1/container1": sa1,
 		"cluster1/namespace1/pod1/container2": sa2,
@@ -340,6 +354,12 @@ func TestSummaryAllocationSet_RAMEfficiency(t *testing.T) {
 		"cluster1/namespace1/pod1/container6": sa6,
 	}

+	testcase6Map := map[string]*SummaryAllocation{
+		"cluster1/namespace1/pod1/container1": sa1,
+		"cluster1/namespace1/pod1/container2": sa2,
+		"cluster1/__idle__":                   idlesa,
+	}
+
 	sas1 = &SummaryAllocationSet{
 		SummaryAllocations: testcase1Map,
 		Window:             window,
@@ -365,6 +385,11 @@ func TestSummaryAllocationSet_RAMEfficiency(t *testing.T) {
 		Window:             window,
 	}

+	sas6 = &SummaryAllocationSet{
+		SummaryAllocations: testcase6Map,
+		Window:             window,
+	}
+
 	cases := []struct {
 		name               string
 		testsas            *SummaryAllocationSet
@@ -395,6 +420,11 @@ func TestSummaryAllocationSet_RAMEfficiency(t *testing.T) {
 			testsas:            sas5,
 			expectedEfficiency: 0.65,
 		},
+		{
+			name:               "Check RAMEfficiency in presence of an idle allocation",
+			testsas:            sas6,
+			expectedEfficiency: 0.25,
+		},
 	}

 	for _, c := range cases {
@@ -410,10 +440,10 @@ func TestSummaryAllocationSet_RAMEfficiency(t *testing.T) {

 func TestSummaryAllocationSet_CPUEfficiency(t *testing.T) {
 	// Generating 6 sample summary allocations for testing
-	var sa1, sa2, sa3, sa4, sa5, sa6 *SummaryAllocation
+	var sa1, sa2, sa3, sa4, sa5, sa6, idlesa *SummaryAllocation

 	// Generating accumulated summary allocation sets for testing
-	var sas1, sas2, sas3, sas4, sas5 *SummaryAllocationSet
+	var sas1, sas2, sas3, sas4, sas5, sas6 *SummaryAllocationSet

 	window, _ := ParseWindowUTC("7d")

@@ -511,6 +541,20 @@ func TestSummaryAllocationSet_CPUEfficiency(t *testing.T) {
 		CPUCost:               0.2,
 	}

+	idlesa = &SummaryAllocation{
+		Name: IdleSuffix,
+		Properties: &AllocationProperties{
+			Cluster:   "cluster1",
+			Namespace: "namespace1",
+			Pod:       "pod1",
+			Container: "container7",
+		},
+		Start:   saStart,
+		End:     saEnd,
+		CPUCost: 1.0,
+		RAMCost: 1.0,
+	}
+
 	testcase1Map := map[string]*SummaryAllocation{
 		"cluster1/namespace1/pod1/container1": sa1,
 		"cluster1/namespace1/pod1/container2": sa2,
@@ -537,6 +581,12 @@ func TestSummaryAllocationSet_CPUEfficiency(t *testing.T) {
 		"cluster1/namespace1/pod1/container6": sa6,
 	}

+	testcase6Map := map[string]*SummaryAllocation{
+		"cluster1/namespace1/pod1/container1": sa1,
+		"cluster1/namespace1/pod1/container2": sa2,
+		"cluster1/__idle__":                   idlesa,
+	}
+
 	sas1 = &SummaryAllocationSet{
 		SummaryAllocations: testcase1Map,
 		Window:             window,
@@ -562,6 +612,11 @@ func TestSummaryAllocationSet_CPUEfficiency(t *testing.T) {
 		Window:             window,
 	}

+	sas6 = &SummaryAllocationSet{
+		SummaryAllocations: testcase6Map,
+		Window:             window,
+	}
+
 	cases := []struct {
 		name               string
 		testsas            *SummaryAllocationSet
@@ -592,6 +647,11 @@ func TestSummaryAllocationSet_CPUEfficiency(t *testing.T) {
 			testsas:            sas5,
 			expectedEfficiency: 0.50,
 		},
+		{
+			name:               "Check CPUEfficiency in presence of an idle allocation",
+			testsas:            sas6,
+			expectedEfficiency: 0.30,
+		},
 	}

 	for _, c := range cases {
@@ -803,7 +863,7 @@ func TestSummaryAllocationSet_TotalEfficiency(t *testing.T) {
 		{
 			name:               "Check TotalEfficiency with idle cost",
 			testsas:            sas4,
-			expectedEfficiency: 0.20,
+			expectedEfficiency: 0.30,
 		},
 	}


+ 116 - 4
pkg/kubecost/window.go

@@ -458,22 +458,22 @@ func (w Window) Hours() float64 {
 	return w.end.Sub(*w.start).Hours()
 }

-//IsEmpty a Window is empty if it does not have a start and an end
+// IsEmpty a Window is empty if it does not have a start and an end
 func (w Window) IsEmpty() bool {
 	return w.start == nil && w.end == nil
 }

-//HasDuration a Window has duration if neither start and end are not nil and not equal
+// HasDuration a Window has duration if start and end are both non-nil and not equal
 func (w Window) HasDuration() bool {
 	return !w.IsOpen() && !w.end.Equal(*w.Start())
 }

-//IsNegative a Window is negative if start and end are not null and end is before start
+// IsNegative a Window is negative if start and end are non-nil and end is before start
 func (w Window) IsNegative() bool {
 	return !w.IsOpen() && w.end.Before(*w.Start())
 }

-//IsOpen a Window is open if it has a nil start or end
+// IsOpen a Window is open if it has a nil start or end
 func (w Window) IsOpen() bool {
 	return w.start == nil || w.end == nil
 }
@@ -689,6 +689,118 @@ func (w Window) DurationOffsetStrings() (string, string) {
 	return timeutil.DurationOffsetStrings(dur, off)
 }

+// GetPercentInWindow determines the percentage of the item's time that falls
+// within the window, based on the overlap of the item's start/end with the
+// given window; the overlap is negative if there is none. If there is
+// positive overlap, it is compared with the item's total minutes.
+//
+// e.g. here are the two possible scenarios as simplified
+// 10m windows with dashes representing item's time running:
+//
+//  1. item falls entirely within one CloudCostItemSet window
+//     |     ---- |          |          |
+//     totalMins = 4.0
+//     pct := 4.0 / 4.0 = 1.0 for window 1
+//     pct := 0.0 / 4.0 = 0.0 for window 2
+//     pct := 0.0 / 4.0 = 0.0 for window 3
+//
+//  2. item overlaps multiple CloudCostItemSet windows
+//     |      ----|----------|--        |
+//     totalMins = 16.0
+//     pct :=  4.0 / 16.0 = 0.250 for window 1
+//     pct := 10.0 / 16.0 = 0.625 for window 2
+//     pct :=  2.0 / 16.0 = 0.125 for window 3
+func (w Window) GetPercentInWindow(itemStart time.Time, itemEnd time.Time) float64 {
+
+	s := itemStart
+	if s.Before(*w.Start()) {
+		s = *w.Start()
+	}
+
+	e := itemEnd
+	if e.After(*w.End()) {
+		e = *w.End()
+	}
+
+	mins := e.Sub(s).Minutes()
+	if mins <= 0.0 {
+		return 0.0
+	}
+
+	totalMins := itemEnd.Sub(itemStart).Minutes()
+
+	pct := mins / totalMins
+	return pct
+}
+
+// GetWindows returns a slice of Window with equal size between the given start and end. If windowSize does not evenly
+// divide the period between start and end, the last window is not added
+func GetWindows(start time.Time, end time.Time, windowSize time.Duration) ([]Window, error) {
+	// Ensure the range is evenly divisible into windows of the given duration
+	dur := end.Sub(start)
+	if int(dur.Minutes())%int(windowSize.Minutes()) != 0 {
+		return nil, fmt.Errorf("range not divisible by window: [%s, %s] by %s", start, end, windowSize)
+	}
+
+	// Ensure that provided times are multiples of the provided windowSize (e.g. midnight for daily windows, on the hour for hourly windows)
+	if start != start.Truncate(windowSize) {
+		return nil, fmt.Errorf("provided times are not divisible by provided window: [%s, %s] by %s", start, end, windowSize)
+	}
+
+	// Ensure timezones match
+	_, sz := start.Zone()
+	_, ez := end.Zone()
+	if sz != ez {
+		return nil, fmt.Errorf("range has mismatched timezones: %s, %s", start, end)
+	}
+	if sz != int(env.GetParsedUTCOffset().Seconds()) {
+		return nil, fmt.Errorf("range timezone doesn't match configured timezone: expected %s; found %ds", env.GetParsedUTCOffset(), sz)
+	}
+
+	// Build array of windows to cover the CloudCostItemSetRange
+	windows := []Window{}
+	s, e := start, start.Add(windowSize)
+	for !e.After(end) {
+		ws := s
+		we := e
+		windows = append(windows, NewWindow(&ws, &we))
+
+		s = s.Add(windowSize)
+		e = e.Add(windowSize)
+	}
+	return windows, nil
+}
+
+// GetWindowsForQueryWindow breaks up a window into an array of windows with a max size of queryWindow
+func GetWindowsForQueryWindow(start time.Time, end time.Time, queryWindow time.Duration) ([]Window, error) {
+	// Ensure timezones match
+	_, sz := start.Zone()
+	_, ez := end.Zone()
+	if sz != ez {
+		return nil, fmt.Errorf("range has mismatched timezones: %s, %s", start, end)
+	}
+	if sz != int(env.GetParsedUTCOffset().Seconds()) {
+		return nil, fmt.Errorf("range timezone doesn't match configured timezone: expected %s; found %ds", env.GetParsedUTCOffset(), sz)
+	}
+
+	// Build array of windows to cover the CloudCostItemSetRange
+	windows := []Window{}
+	s, e := start, start.Add(queryWindow)
+	for s.Before(end) {
+		ws := s
+		we := e
+		windows = append(windows, NewWindow(&ws, &we))
+
+		s = s.Add(queryWindow)
+		e = e.Add(queryWindow)
+		if e.After(end) {
+			e = end
+		}
+	}
+
+	return windows, nil
+}
+
 type BoundaryError struct {
 type BoundaryError struct {
 	Requested Window
 	Supported Window
+ 300 - 0
pkg/kubecost/window_test.go

@@ -2,6 +2,7 @@ package kubecost

 import (
 	"fmt"
+	"github.com/opencost/opencost/pkg/util/timeutil"
 	"strings"
 	"testing"
 	"time"
@@ -845,3 +846,302 @@

 // TODO
 // func TestWindow_String(t *testing.T) {}
+
+func TestWindow_GetPercentInWindow(t *testing.T) {
+	dayStart := time.Date(2022, 12, 6, 0, 0, 0, 0, time.UTC)
+	dayEnd := dayStart.Add(timeutil.Day)
+	window := NewClosedWindow(dayStart, dayEnd)
+
+	testcases := map[string]struct {
+		window    Window
+		itemStart time.Time
+		itemEnd   time.Time
+		expected  float64
+	}{
+		"matching start/matching end": {
+			window:    window,
+			itemStart: dayStart,
+			itemEnd:   dayEnd,
+			expected:  1.0,
+		},
+		"matching start/contained end": {
+			window:    window,
+			itemStart: dayStart,
+			itemEnd:   dayEnd.Add(-time.Hour * 6),
+			expected:  1.0,
+		},
+		"contained start/matching end": {
+			window:    window,
+			itemStart: dayStart.Add(time.Hour * 6),
+			itemEnd:   dayEnd,
+			expected:  1.0,
+		},
+		"contained start/contained end": {
+			window:    window,
+			itemStart: dayStart.Add(time.Hour * 6),
+			itemEnd:   dayEnd.Add(-time.Hour * 6),
+			expected:  1.0,
+		},
+		"before start/contained end": {
+			window:    window,
+			itemStart: dayStart.Add(-time.Hour * 12),
+			itemEnd:   dayEnd.Add(-time.Hour * 12),
+			expected:  0.5,
+		},
+		"before start/before end": {
+			window:    window,
+			itemStart: dayStart.Add(-time.Hour * 24),
+			itemEnd:   dayEnd.Add(-time.Hour * 24),
+			expected:  0.0,
+		},
+		"contained start/after end": {
+			window:    window,
+			itemStart: dayStart.Add(time.Hour * 12),
+			itemEnd:   dayEnd.Add(time.Hour * 12),
+			expected:  0.5,
+		},
+		"after start/after end": {
+			window:    window,
+			itemStart: dayStart.Add(time.Hour * 24),
+			itemEnd:   dayEnd.Add(time.Hour * 24),
+			expected:  0.0,
+		},
+		"before start/after end": {
+			window:    window,
+			itemStart: dayStart.Add(-time.Hour * 12),
+			itemEnd:   dayEnd.Add(time.Hour * 12),
+			expected:  0.5,
+		},
+	}
+	for name, tc := range testcases {
+		t.Run(name, func(t *testing.T) {
+			if actual := tc.window.GetPercentInWindow(tc.itemStart, tc.itemEnd); actual != tc.expected {
+				t.Errorf("GetPercentInWindow() = %v, want %v", actual, tc.expected)
+			}
+		})
+	}
+}
+
+func TestWindow_GetWindows(t *testing.T) {
+	dayStart := time.Date(2022, 12, 6, 0, 0, 0, 0, time.UTC)
+	dayEnd := dayStart.Add(timeutil.Day)
+	loc, _ := time.LoadLocation("America/Vancouver")
+	testCases := map[string]struct {
+		start       time.Time
+		end         time.Time
+		windowSize  time.Duration
+		expected    []Window
+		expectedErr bool
+	}{
+		"mismatching tz": {
+			start:       dayStart,
+			end:         dayEnd.In(loc),
+			windowSize:  time.Hour,
+			expected:    nil,
+			expectedErr: true,
+		},
+		"hour windows over 1 hour": {
+			start:      dayStart,
+			end:        dayStart.Add(time.Hour),
+			windowSize: time.Hour,
+			expected: []Window{
+				NewClosedWindow(dayStart, dayStart.Add(time.Hour)),
+			},
+			expectedErr: false,
+		},
+		"hour windows over 3 hours": {
+			start:      dayStart,
+			end:        dayStart.Add(time.Hour * 3),
+			windowSize: time.Hour,
+			expected: []Window{
+				NewClosedWindow(dayStart, dayStart.Add(time.Hour)),
+				NewClosedWindow(dayStart.Add(time.Hour), dayStart.Add(time.Hour*2)),
+				NewClosedWindow(dayStart.Add(time.Hour*2), dayStart.Add(time.Hour*3)),
+			},
+			expectedErr: false,
+		},
+		"hour windows off hour grid": {
+			start:       dayStart.Add(time.Minute),
+			end:         dayEnd.Add(time.Minute),
+			windowSize:  time.Hour,
+			expected:    nil,
+			expectedErr: true,
+		},
+		"hour windows range not divisible by hour": {
+			start:       dayStart,
+			end:         dayStart.Add(time.Minute * 90),
+			windowSize:  time.Hour,
+			expected:    nil,
+			expectedErr: true,
+		},
+		"day windows over 1 day": {
+			start:      dayStart,
+			end:        dayEnd,
+			windowSize: timeutil.Day,
+			expected: []Window{
+				NewClosedWindow(dayStart, dayEnd),
+			},
+			expectedErr: false,
+		},
+		"day windows over 3 days": {
+			start:      dayStart,
+			end:        dayStart.Add(timeutil.Day * 3),
+			windowSize: timeutil.Day,
+			expected: []Window{
+				NewClosedWindow(dayStart, dayStart.Add(timeutil.Day)),
+				NewClosedWindow(dayStart.Add(timeutil.Day), dayStart.Add(timeutil.Day*2)),
+				NewClosedWindow(dayStart.Add(timeutil.Day*2), dayStart.Add(timeutil.Day*3)),
+			},
+			expectedErr: false,
+		},
+		"day windows off day grid": {
+			start:       dayStart.Add(time.Hour),
+			end:         dayEnd.Add(time.Hour),
+			windowSize:  timeutil.Day,
+			expected:    nil,
+			expectedErr: true,
+		},
+		"day windows range not divisible by day": {
+			start:       dayStart,
+			end:         dayEnd.Add(time.Hour),
+			windowSize:  timeutil.Day,
+			expected:    nil,
+			expectedErr: true,
+		},
+	}
+	for name, tc := range testCases {
+		t.Run(name, func(t *testing.T) {
+			actual, err := GetWindows(tc.start, tc.end, tc.windowSize)
+			if (err != nil) != tc.expectedErr {
+				t.Errorf("GetWindows() error = %v, expectedErr %v", err, tc.expectedErr)
+				return
+			}
+			if len(tc.expected) != len(actual) {
+				t.Errorf("GetWindows() []window has incorrect length expected: %d, actual: %d", len(tc.expected), len(actual))
+			}
+			for i, actualWindow := range actual {
+				expectedWindow := tc.expected[i]
+				if !actualWindow.Equal(expectedWindow) {
+					t.Errorf("GetWindows() window at index %d was not equal; expected: %s, actual: %s", i, expectedWindow.String(), actualWindow)
+				}
+			}
+		})
+	}
+}
+
+func TestWindow_GetWindowsForQueryWindow(t *testing.T) {
+	dayStart := time.Date(2022, 12, 6, 0, 0, 0, 0, time.UTC)
+	dayEnd := dayStart.Add(timeutil.Day)
+	loc, _ := time.LoadLocation("America/Vancouver")
+	testCases := map[string]struct {
+		start       time.Time
+		end         time.Time
+		windowSize  time.Duration
+		expected    []Window
+		expectedErr bool
+	}{
+		"mismatching tz": {
+			start:       dayStart,
+			end:         dayEnd.In(loc),
+			windowSize:  time.Hour,
+			expected:    nil,
+			expectedErr: true,
+		},
+		"hour windows over 1 hours": {
+			start:      dayStart,
+			end:        dayStart.Add(time.Hour),
+			windowSize: time.Hour,
+			expected: []Window{
+				NewClosedWindow(dayStart, dayStart.Add(time.Hour)),
+			},
+			expectedErr: false,
+		},
+		"hour windows over 3 hours": {
+			start:      dayStart,
+			end:        dayStart.Add(time.Hour * 3),
+			windowSize: time.Hour,
+			expected: []Window{
+				NewClosedWindow(dayStart, dayStart.Add(time.Hour)),
+				NewClosedWindow(dayStart.Add(time.Hour), dayStart.Add(time.Hour*2)),
+				NewClosedWindow(dayStart.Add(time.Hour*2), dayStart.Add(time.Hour*3)),
+			},
+			expectedErr: false,
+		},
+		"hour windows off hour grid": {
+			start:      dayStart.Add(time.Minute),
+			end:        dayStart.Add(time.Minute * 61),
+			windowSize: time.Hour,
+			expected: []Window{
+				NewClosedWindow(dayStart.Add(time.Minute), dayStart.Add(time.Minute*61)),
+			},
+			expectedErr: false,
+		},
+		"hour windows range not divisible by hour": {
+			start:      dayStart,
+			end:        dayStart.Add(time.Minute * 90),
+			windowSize: time.Hour,
+			expected: []Window{
+				NewClosedWindow(dayStart, dayStart.Add(time.Hour)),
+				NewClosedWindow(dayStart.Add(time.Hour), dayStart.Add(time.Minute*90)),
+			},
+			expectedErr: false,
+		},
+		"day windows over 1 day": {
+			start:      dayStart,
+			end:        dayEnd,
+			windowSize: timeutil.Day,
+			expected: []Window{
+				NewClosedWindow(dayStart, dayEnd),
+			},
+			expectedErr: false,
+		},
+		"day windows over 3 days": {
+			start:      dayStart,
+			end:        dayStart.Add(timeutil.Day * 3),
+			windowSize: timeutil.Day,
+			expected: []Window{
+				NewClosedWindow(dayStart, dayStart.Add(timeutil.Day)),
+				NewClosedWindow(dayStart.Add(timeutil.Day), dayStart.Add(timeutil.Day*2)),
+				NewClosedWindow(dayStart.Add(timeutil.Day*2), dayStart.Add(timeutil.Day*3)),
+			},
+			expectedErr: false,
+		},
+		"day windows off day grid": {
+			start:      dayStart.Add(time.Hour),
+			end:        dayEnd.Add(time.Hour),
+			windowSize: timeutil.Day,
+			expected: []Window{
+				NewClosedWindow(dayStart.Add(time.Hour), dayEnd.Add(time.Hour)),
+			},
+			expectedErr: false,
+		},
+		"day windows range not divisible by day": {
+			start:      dayStart,
+			end:        dayEnd.Add(time.Hour),
+			windowSize: timeutil.Day,
+			expected: []Window{
+				NewClosedWindow(dayStart, dayEnd),
+				NewClosedWindow(dayEnd, dayEnd.Add(time.Hour)),
+			},
+			expectedErr: false,
+		},
+	}
+	for name, tc := range testCases {
+		t.Run(name, func(t *testing.T) {
+			actual, err := GetWindowsForQueryWindow(tc.start, tc.end, tc.windowSize)
+			if (err != nil) != tc.expectedErr {
+				t.Errorf("GetWindowsForQueryWindow() error = %v, expectedErr %v", err, tc.expectedErr)
+				return
+			}
+			if len(tc.expected) != len(actual) {
+				t.Errorf("GetWindowsForQueryWindow() []window has incorrect length; expected: %d, actual: %d", len(tc.expected), len(actual))
+			}
+			for i, actualWindow := range actual {
+				expectedWindow := tc.expected[i]
+				if !actualWindow.Equal(expectedWindow) {
+					t.Errorf("GetWindowsForQueryWindow() window at index %d was not equal; expected: %s, actual: %s", i, expectedWindow.String(), actualWindow)
+				}
+			}
+		})
+	}
+}

+ 4 - 3
pkg/storage/gcsstorage.go

@@ -270,10 +270,11 @@ func (gs *GCSStorage) ListDirectories(path string) ([]*StorageInfo, error) {
 			continue
 		}
 
-		// If trim removes the entire name, it's a directory, ergo we list it
-		if trimName(attrs.Name) == "" {
+		// We filter directories using DirDelim, so a nameless entry is a dir
+		// See gcs.ObjectAttrs Prefix property
+		if attrs.Name == "" {
 			stats = append(stats, &StorageInfo{
-				Name:    attrs.Name,
+				Name:    attrs.Prefix,
 				Size:    attrs.Size,
 				ModTime: attrs.Updated,
 			})
+ 15 - 0
pkg/util/mathutil/mathutil.go

@@ -0,0 +1,15 @@
+package mathutil
+
+import "math"
+
+func Approximately(exp, act float64) bool {
+	return ApproximatelyPct(exp, act, 0.0001) // within 0.01%
+}
+
+func ApproximatelyPct(exp, act, pct float64) bool {
+	delta := (exp * pct)
+	if delta < 0.00001 {
+		delta = 0.00001
+	}
+	return math.Abs(exp-act) < delta
+}
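The helper above compares within a relative tolerance (`pct` of the expected value), with an absolute floor of `0.00001` so near-zero expectations don't demand an impossibly small delta. A local sketch of the same logic with example comparisons (`approximatelyPct` here mirrors, but is not, the package function):

```go
package main

import (
	"fmt"
	"math"
)

// approximatelyPct mirrors mathutil.ApproximatelyPct: the allowed
// delta is pct of exp, floored at an absolute 1e-5.
func approximatelyPct(exp, act, pct float64) bool {
	delta := exp * pct
	if delta < 0.00001 {
		delta = 0.00001
	}
	return math.Abs(exp-act) < delta
}

func main() {
	// 0.0001 is a relative tolerance of 0.01%: 100 allows a delta of 0.01.
	fmt.Println(approximatelyPct(100, 100.005, 0.0001)) // true
	// With exp == 0 the relative delta is 0, so the 1e-5 floor applies.
	fmt.Println(approximatelyPct(0, 0.000005, 0.0001)) // true
	// 0.5 is well outside 0.01% of 100.
	fmt.Println(approximatelyPct(100, 100.5, 0.0001)) // false
}
```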

+ 3 - 0
pkg/util/timeutil/timeutil.go

@@ -34,6 +34,9 @@ const (
 
 	// DaysPerMonth expresses the amount of days in a month
 	DaysPerMonth = 30.42
+
+	// Day expresses 24 hours
+	Day = time.Hour * 24.0
 )
 
 // DurationString converts a duration to a Prometheus-compatible string in

+ 1 - 1
ui/Dockerfile

@@ -1,4 +1,4 @@
-FROM node:16-alpine as builder
+FROM node:18.3.0 as builder
 ADD package*.json /opt/ui/
 WORKDIR /opt/ui
 RUN npm install

+ 3 - 3
ui/README.md

@@ -1,7 +1,7 @@
-# Kubecost Open Source UI
+# OpenCost UI
 The preferred install path for Kubecost is via Helm chart, and is explained [here](http://docs.kubecost.com/install)
 
-To manually run an open source demo UI, follow the steps below.
+To manually run the OpenCost UI, follow the steps below.
 
 ## Requirements
 
@@ -15,7 +15,7 @@ To run the UI, open a terminal to the `opencost/ui/` directory (where this READM
 npm install
 ```
 
-This will install required depndencies and build tools. To launch the UI, run
+This will install required dependencies and build tools. To launch the UI, run
 
 ```
 npx parcel src/index.html

+ 2 - 1
ui/src/components/Header.js

@@ -23,9 +23,10 @@ const useStyles = makeStyles({
 const Header = (props) => {
   const classes = useStyles()
   const { title, breadcrumbs } = props
-  
+
   return (
     <div className={classes.root}>
+      <img src={ require('../images/logo.png') } alt="OpenCost" />
       <div className={classes.context}>
         {title && <Typography variant="h4" className={classes.title}>{props.title}</Typography>}
         {breadcrumbs && breadcrumbs.length > 0 && (

BIN
ui/src/images/logo.png


BIN
ui/src/opencost-ui.png