
Merge develop into master (#688)

* Split window.ToDurationOffset into DurationOffset and DurationOffsetStrings

* Check and register annotations collectors if enabled.

* add csv fallback (#641) (#646)

* add csv fallback

* log class match

* add counts by source

* add test for pricing source counter and make the names of sources public

* Fix the issue with empty pod name on annotation metric.

* Only Emit label_ and annotation_ metrics if they have values!

* Simplify label and annotation metric to labels.

* Add annotations costmodel

* Additional annotations additions

* Ajay tripathy pvc error fix (#644)

* add csv fallback

* log class match

* filter empty volumenames out

* Ajay tripathy fix e2custom (#623)

* pass offset to ccdr

* remove conflict

* add e2custom support

* revert context background change

* fix improperly named constant for govcloud lookup (#651)

* Use compatibility region implementation. (#653)

* Add filter by annotations to AggregateCostModelHandler

* WIP AWS idle investigation

* Fix bug with multiple filters on label and annotations

* Fix idle allocation bug for windows < 1h; improve DurationOffset string conversion

* Commit missing test file

* aggregate on annotations

* Merge master into develop. (#658)

* add csv fallback (#641)

* add csv fallback

* log class match

* add counts by source

* add test for pricing source counter and make the names of sources public

* Bump to version 1.71.1

* Bump version to 1.72.0 (#659)

* Aggregation by label

Updated AssetSet and AssetSetRange to aggregate by a new property aggStrings []string.
The props []AssetProperty value is still maintained on a given AssetSet, but rather than use this for aggregation,
we now use aggStrings, which can include values other than enumerated props. Specifically, strings prefixed with "label:"
are interpreted as labels and can be grouped on.

Strings in aggStrings which match the enumerated AssetProperty strings are stored in AssetSet.props, as before.

Also updated the relevant asset_test tests to call the Aggregation funcs with []string rather than []AssetProperty
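The aggregation-string handling described above can be sketched as follows. This is an illustrative reconstruction, not the actual kubecost implementation: the function name `classifyAggKey` and the property set are assumptions; the real logic lives on `AssetSet` and uses the enumerated `AssetProperty` values.

```go
package main

import (
	"fmt"
	"strings"
)

// assetProps stands in for the enumerated AssetProperty strings
// (illustrative subset only).
var assetProps = map[string]bool{
	"cluster": true,
	"type":    true,
	"account": true,
}

// classifyAggKey decides whether an aggregation string names an
// enumerated property or a label. Strings prefixed with "label:"
// are interpreted as labels and can be grouped on; anything else
// that is not an enumerated property is an error.
func classifyAggKey(agg string) (kind, name string, err error) {
	if assetProps[agg] {
		return "prop", agg, nil
	}
	if strings.HasPrefix(agg, "label:") {
		label := strings.TrimPrefix(agg, "label:")
		if label == "" {
			return "", "", fmt.Errorf("cannot group on empty label")
		}
		return "label", label, nil
	}
	return "", "", fmt.Errorf("illegal aggregation key: %s", agg)
}

func main() {
	for _, agg := range []string{"cluster", "label:team", "label:", "bogus"} {
		kind, name, err := classifyAggKey(agg)
		fmt.Println(agg, "->", kind, name, err)
	}
}
```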

* WIP add labels to nodes

* Now copying all labels into node objects

* Updates:

- Factored out `AssetSet.properties []AssetProperty`, which has been replaced by `AssetSet.aggregateBy []string`.

- Function `key()` now uses reserved word `__unallocated__` in keys for assets that do not have the given prop defined. Previous behavior was to omit that part of the key.

- Function `key()` now emits errors for given aggregation keys that are not either an `AssetProperty` or a string prefixed with `"label:"`.

- Updated tests to expect `__unallocated__` in relevant parts of keys.
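The reserved-word behavior of `key()` described above might look roughly like this sketch. The function `buildKey` and its map-based asset representation are hypothetical; the real `key()` operates on AssetSet internals and also performs the `label:` validation.

```go
package main

import (
	"fmt"
	"strings"
)

const unallocated = "__unallocated__"

// buildKey joins the values of the requested aggregation fields,
// substituting the reserved word for any field the asset does not
// define, rather than omitting that part of the key.
func buildKey(fields map[string]string, aggregateBy []string) string {
	parts := make([]string, 0, len(aggregateBy))
	for _, agg := range aggregateBy {
		if v, ok := fields[agg]; ok && v != "" {
			parts = append(parts, v)
		} else {
			parts = append(parts, unallocated)
		}
	}
	return strings.Join(parts, "/")
}

func main() {
	asset := map[string]string{"cluster": "cluster-one"}
	fmt.Println(buildKey(asset, []string{"cluster", "label:team"}))
}
```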

* Fixes from Niko's comments

* Implement kubecost.AllocationSetRange.InsertRange and test; refactor custom approx implementations into util.IsApproximately for testing

* Returning single Error from key() func and disallowing grouping on empty label

* Capacity Optimizations (#664)

* Update to latest bingen read optimizations by allocating the required map space ahead of time. Apply to Properties as well.

* A few more smaller optimizations on Clone()

* simple pvfix (#663)

* Add annotation to allocation key

* go fmt

* Adds the concept of an AssetCredit to pkg/kubecost/asset

* undo go fmt

* Checking for errors from key() everywhere and made __undefined__ a constant

* process annotations in etl

* Simplified Cloud.Credit down to a simple float64 rather than objects containing credit metadata

* Allocation ETL: on-demand idle cost with unit testing

* Make AllocationSet.insert safer

* Ajay tripathy fix pvcalls (#667)

* simple pvfix

* Update to latest bingen read optimizations by allocating the required map space ahead of time. Apply to Properties as well.

* A few more smaller optimizations on Clone()

* fix extra PVlookups

Co-authored-by: Matt Bolt <mbolt35@gmail.com>

* Re-implemented 'propsEqual' check in accumulate(). Can't be exactly propsEqual() because they aren't props anymore...but wrote equivalent functions for []string

* Allocation ETL: on-demand external cost

* Allocation ETL: on-demand external cost; implement Properties.AggregationStrings

* Allocation ETL: on-demand external cost: fix Properties

* camel case json property

* refactor map merging function

* WIP logging for Allocation ETL: on-demand external cost

* Added a unit test for Aggregating by label.

* Added a comment explaining the nature of Credit

* Changed the label-key format to match allocation. Format for a label's key entry is now '/key=value/' rather than '/value/'

* undefined labels don't list key= before the __undefined__ value

* Now treating labels with value '' as unset labels
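The three label-key commits above combine into behavior that can be sketched like this (the helper name `labelKeyPart` is illustrative; only the '/key=value/' format, the bare `__undefined__` value, and the empty-value-means-unset rule come from the commits):

```go
package main

import "fmt"

const undefinedVal = "__undefined__"

// labelKeyPart renders one label's contribution to an aggregation key:
// defined labels become "key=value"; unset labels and labels with value
// "" collapse to the reserved value with no "key=" prefix.
func labelKeyPart(labels map[string]string, key string) string {
	if v, ok := labels[key]; ok && v != "" {
		return fmt.Sprintf("%s=%s", key, v)
	}
	return undefinedVal
}

func main() {
	labels := map[string]string{"team": "infra", "env": ""}
	fmt.Println(labelKeyPart(labels, "team"))
	fmt.Println(labelKeyPart(labels, "env"))
	fmt.Println(labelKeyPart(labels, "app"))
}
```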

* Fix test broken in merge

* Ajay tripathy remove 2d cache (#668)

* simple pvfix

* Update to latest bingen read optimizations by allocating the required map space ahead of time. Apply to Properties as well.

* A few more smaller optimizations on Clone()

* fix extra PVlookups

* remove 2d cache

Co-authored-by: Matt Bolt <mbolt35@gmail.com>

* Active minutes query includes provider id

* Refactor ClusterNodes and use provider ID

See comments for more detailed explanation.

* Detailed comment describing buildNodeMap

* Unit test for mergeTypeMaps

* Initial simple test for buildNodeMap

* More buildNodeMap tests

* Renamed test file

* Removed check made obsolete by refactor

* Grammar and spelling fix in a comment

* Moved labels map into helper function

Also updated tests and followed the new query name
for preemptible, resNodeLabels -> resIsSpot.
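A `mergeTypeMaps` along the lines tested above could be sketched as follows; the last-writer-wins semantics shown here are an assumption, as the commit log only names the function.

```go
package main

import "fmt"

// mergeTypeMaps combines two maps of provider ID to node type, with
// entries in b overriding entries in a (assumed semantics).
func mergeTypeMaps(a, b map[string]string) map[string]string {
	out := make(map[string]string, len(a)+len(b))
	for k, v := range a {
		out[k] = v
	}
	for k, v := range b {
		out[k] = v
	}
	return out
}

func main() {
	merged := mergeTypeMaps(
		map[string]string{"id-1": "n1-standard-2"},
		map[string]string{"id-1": "n2-standard-2", "id-2": "e2-medium"},
	)
	fmt.Println(len(merged), merged["id-1"])
}
```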

* Allocation ETL: on-demand external cost: fix naming convention

* Fix bugs with external cost AggregateBy; fix test funcs; remove logs

* Update TODOs; add log

* Unit tests for E2 manual cost adjustment (#676)

* e2 fixes

* Added tests for CPU cost adjustment for e2

Co-authored-by: Ajay Tripathy <4tripathy@gmail.com>

* Bump version (#677)

* fix n2 prices (#679)

* Include adjustment in idle cost: write failing test

* Add adjustmentRate to ComputeIdleAllocations

* Add tests

* ComputeCostData gets CPU+RAM requests from k8s API

The intended result of this change is reducing load on
Prometheus for things it is not needed for. The one caveat
of this change is a modification of the output of data
for pods that stop existing in the cluster during the
query lookback window (which is currently 2 minutes).
Since we will be querying the k8s API at a time where
the pod does not exist, it will not have any information
about it. We choose to handle this by not outputting
request information, only usage information.

* Commented request emission special case better

* Better comment explaining units of memory and cpu

* K8s request stats include current timestamp

Necessary for the maxing op that occurs in
getContainerAllocation. The timestamps for
request and usage must be roughly equal.
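The two behaviors described above — emitting no request data for pods that have left the cluster, and stamping request samples with the current time so the max against usage lines up — can be sketched as follows. `requestSamples` and the `sample` struct are hypothetical; the real code queries the k8s API and merges with Prometheus series in getContainerAllocation.

```go
package main

import (
	"fmt"
	"time"
)

type sample struct {
	value float64
	ts    time.Time
}

// requestSamples returns CPU-request samples for a pod, stamped with
// the current time so they align with usage samples in the maxing op.
// A pod absent from the live cluster yields no request samples at all;
// only its usage is reported.
func requestSamples(liveRequests map[string]float64, pod string) []sample {
	req, ok := liveRequests[pod]
	if !ok {
		return nil // pod gone during the lookback window: usage only
	}
	return []sample{{value: req, ts: time.Now()}}
}

func main() {
	live := map[string]float64{"pod-a": 0.5}
	fmt.Println(len(requestSamples(live, "pod-a")))
	fmt.Println(len(requestSamples(live, "pod-b")))
}
```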

* Read secret and save values to env variable

* logical fix

* Comment explaining manual timestamp

* OOC for azure on details

* add service account checks to Azure provider to notify front end of storage configuration

* fix bool name

* Update costmodelenv.go

Co-authored-by: Niko Kovacevic <nikovacevic@gmail.com>
Co-authored-by: Matt Bolt <mbolt35@gmail.com>
Co-authored-by: Sean Holcomb <seanholcomb@gmail.com>
Co-authored-by: Neal Ormsbee <neal.ormsbee@gmail.com>
Co-authored-by: Sean Holcomb <sean@kubecost.com>
Co-authored-by: Michael Dresser <michael@kubecost.com>
Ajay Tripathy 5 years ago
parent
commit
72c326c22b

+ 8 - 5
go.mod

@@ -4,14 +4,16 @@ replace github.com/golang/lint => golang.org/x/lint v0.0.0-20180702182130-06c868
 
 require (
 	cloud.google.com/go v0.34.0
-	contrib.go.opencensus.io/exporter/ocagent v0.5.0 // indirect
-	github.com/Azure/azure-sdk-for-go v24.1.0+incompatible
-	github.com/Azure/go-autorest v11.3.2+incompatible
+	github.com/Azure/azure-sdk-for-go v51.1.0+incompatible
+	github.com/Azure/azure-storage-blob-go v0.13.0
+	github.com/Azure/go-autorest/autorest v0.11.17
+	github.com/Azure/go-autorest/autorest/azure/auth v0.5.6
+	github.com/Azure/go-autorest/autorest/to v0.4.0 // indirect
+	github.com/Azure/go-autorest/autorest/validation v0.3.1 // indirect
 	github.com/aws/aws-sdk-go v1.28.9
 	github.com/davecgh/go-spew v1.1.1
-	github.com/dimchansky/utfbom v1.1.0 // indirect
 	github.com/getsentry/sentry-go v0.6.1
-	github.com/google/martian v2.1.0+incompatible
+	github.com/google/martian v2.1.0+incompatible // indirect
 	github.com/google/uuid v1.1.1
 	github.com/googleapis/gax-go v2.0.2+incompatible // indirect
 	github.com/gophercloud/gophercloud v0.2.0 // indirect
@@ -28,6 +30,7 @@ require (
 	golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45
 	golang.org/x/sync v0.0.0-20190423024810-112230192c58
 	google.golang.org/api v0.4.0
+	google.golang.org/grpc v1.20.1 // indirect
 	gopkg.in/yaml.v2 v2.2.4
 	k8s.io/api v0.0.0-20190913080256-21721929cffa
 	k8s.io/apimachinery v0.0.0-20190913075812-e119e5e154b6

+ 60 - 17
go.sum

@@ -1,14 +1,45 @@
 cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
 cloud.google.com/go v0.34.0 h1:eOI3/cP2VTU6uZLDYAoic+eyzzB9YyGmJ7eIjl8rOPg=
 cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
-contrib.go.opencensus.io/exporter/ocagent v0.5.0 h1:TKXjQSRS0/cCDrP7KvkgU6SmILtF/yV2TOs/02K/WZQ=
-contrib.go.opencensus.io/exporter/ocagent v0.5.0/go.mod h1:ImxhfLRpxoYiSq891pBrLVhN+qmP8BTVvdH2YLs7Gl0=
 github.com/AndreasBriese/bbloom v0.0.0-20190306092124-e2d15f34fcf9/go.mod h1:bOvUY6CB00SOBii9/FifXqc0awNKxLFCL/+pkDPuyl8=
+github.com/Azure/azure-pipeline-go v0.2.3 h1:7U9HBg1JFK3jHl5qmo4CTZKFTVgMwdFHMVtCdfBE21U=
+github.com/Azure/azure-pipeline-go v0.2.3/go.mod h1:x841ezTBIMG6O3lAcl8ATHnsOPVl2bqk7S3ta6S6u4k=
 github.com/Azure/azure-sdk-for-go v24.1.0+incompatible h1:P7GocB7bhkyGbRL1tCy0m9FDqb1V/dqssch3jZieUHk=
 github.com/Azure/azure-sdk-for-go v24.1.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
+github.com/Azure/azure-sdk-for-go v51.1.0+incompatible h1:7uk6GWtUqKg6weLv2dbKnzwb0ml1Qn70AdtRccZ543w=
+github.com/Azure/azure-sdk-for-go v51.1.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
+github.com/Azure/azure-storage-blob-go v0.13.0 h1:lgWHvFh+UYBNVQLFHXkvul2f6yOPA9PIH82RTG2cSwc=
+github.com/Azure/azure-storage-blob-go v0.13.0/go.mod h1:pA9kNqtjUeQF2zOSu4s//nUdBD+e64lEuc4sVnuOfNs=
 github.com/Azure/go-autorest v11.1.2+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
-github.com/Azure/go-autorest v11.3.2+incompatible h1:2bRmoaLvtIXW5uWpZVoIkc0C1z7c84rVGnP+3mpyCRg=
-github.com/Azure/go-autorest v11.3.2+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
+github.com/Azure/go-autorest v14.2.0+incompatible h1:V5VMDjClD3GiElqLWO7mz2MxNAK/vTfRHdAubSIPRgs=
+github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
+github.com/Azure/go-autorest/autorest v0.11.17 h1:2zCdHwNgRH+St1J+ZMf66xI8aLr/5KMy+wWLH97zwYM=
+github.com/Azure/go-autorest/autorest v0.11.17/go.mod h1:eipySxLmqSyC5s5k1CLupqet0PSENBEDP93LQ9a8QYw=
+github.com/Azure/go-autorest/autorest/adal v0.9.2 h1:Aze/GQeAN1RRbGmnUJvUj+tFGBzFdIg3293/A9rbxC4=
+github.com/Azure/go-autorest/autorest/adal v0.9.2/go.mod h1:/3SMAM86bP6wC9Ev35peQDUeqFZBMH07vvUOmg4z/fE=
+github.com/Azure/go-autorest/autorest/adal v0.9.5/go.mod h1:B7KF7jKIeC9Mct5spmyCB/A8CG/sEz1vwIRGv/bbw7A=
+github.com/Azure/go-autorest/autorest/adal v0.9.10 h1:r6fZHMaHD8B6LDCn0o5vyBFHIHrM6Ywwx7mb49lPItI=
+github.com/Azure/go-autorest/autorest/adal v0.9.10/go.mod h1:B7KF7jKIeC9Mct5spmyCB/A8CG/sEz1vwIRGv/bbw7A=
+github.com/Azure/go-autorest/autorest/adal v0.9.11 h1:L4/pmq7poLdsy41Bj1FayKvBhayuWRYkx9HU5i4Ybl0=
+github.com/Azure/go-autorest/autorest/adal v0.9.11/go.mod h1:nBKAnTomx8gDtl+3ZCJv2v0KACFHWTB2drffI1B68Pk=
+github.com/Azure/go-autorest/autorest/azure/auth v0.5.6 h1:cgiBtUxatlt/e3qY6fQJioqbocWHr5osz259MomF5M0=
+github.com/Azure/go-autorest/autorest/azure/auth v0.5.6/go.mod h1:nYlP+G+n8MhD5CjIi6W8nFTIJn/PnTHes5nUbK6BxD0=
+github.com/Azure/go-autorest/autorest/azure/auth v0.5.7 h1:8DQB8yl7aLQuP+nuR5e2RO6454OvFlSTXXaNHshc16s=
+github.com/Azure/go-autorest/autorest/azure/auth v0.5.7/go.mod h1:AkzUsqkrdmNhfP2i54HqINVQopw0CLDnvHpJ88Zz1eI=
+github.com/Azure/go-autorest/autorest/azure/cli v0.4.2 h1:dMOmEJfkLKW/7JsokJqkyoYSgmR08hi9KrhjZb+JALY=
+github.com/Azure/go-autorest/autorest/azure/cli v0.4.2/go.mod h1:7qkJkT+j6b+hIpzMOwPChJhTqS8VbsqqgULzMNRugoM=
+github.com/Azure/go-autorest/autorest/date v0.3.0 h1:7gUk1U5M/CQbp9WoqinNzJar+8KY+LPI6wiWrP/myHw=
+github.com/Azure/go-autorest/autorest/date v0.3.0/go.mod h1:BI0uouVdmngYNUzGWeSYnokU+TrmwEsOqdt8Y6sso74=
+github.com/Azure/go-autorest/autorest/mocks v0.4.1 h1:K0laFcLE6VLTOwNgSxaGbUcLPuGXlNkbVvq4cW4nIHk=
+github.com/Azure/go-autorest/autorest/mocks v0.4.1/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k=
+github.com/Azure/go-autorest/autorest/to v0.4.0 h1:oXVqrxakqqV1UZdSazDOPOLvOIz+XA683u8EctwboHk=
+github.com/Azure/go-autorest/autorest/to v0.4.0/go.mod h1:fE8iZBn7LQR7zH/9XU2NcPR4o9jEImooCeWJcYV/zLE=
+github.com/Azure/go-autorest/autorest/validation v0.3.1 h1:AgyqjAd94fwNAoTjl/WQXg4VvFeRFpO+UhNyRXqF1ac=
+github.com/Azure/go-autorest/autorest/validation v0.3.1/go.mod h1:yhLgjC0Wda5DYXl6JAsWyUe4KVNffhoDhG0zVzUMo3E=
+github.com/Azure/go-autorest/logger v0.2.0 h1:e4RVHVZKC5p6UANLJHkM4OfR1UKZPj8Wt8Pcx+3oqrE=
+github.com/Azure/go-autorest/logger v0.2.0/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZmbF5NWuPV8+WeEW8=
+github.com/Azure/go-autorest/tracing v0.6.0 h1:TYi4+3m5t6K48TGI9AUdb+IzbnSxvnvUMfuitfgcfuo=
+github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBpUA79WCAKPPZVC2DeU=
 github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
 github.com/CloudyKit/fastprinter v0.0.0-20170127035650-74b38d55f37a/go.mod h1:EFZQ978U7x8IRnstaskI3IysnWY5Ao3QgZUKOXlsAdw=
 github.com/CloudyKit/jet v2.1.3-0.20180809161101-62edd43e4f88+incompatible/go.mod h1:HPYO+50pSWkPoj9Q/eq0aRGByCL6ScRlUmiEX5Zgm+w=
@@ -29,8 +60,6 @@ github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973 h1:xJ4a3vCFaGF/jqvzLM
 github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
 github.com/beorn7/perks v1.0.0 h1:HWo1m869IqiPhD389kmkxeTalrjNbbJTC8LXupb+sl0=
 github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
-github.com/census-instrumentation/opencensus-proto v0.2.0 h1:LzQXZOgg4CQfE6bFvXGM30YZL1WW/M337pXml+GrcZ4=
-github.com/census-instrumentation/opencensus-proto v0.2.0/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
 github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
 github.com/codegangsta/inject v0.0.0-20150114235600-33e0aa1cb7c0/go.mod h1:4Zcjuz89kmFXt9morQgcfYZAYZ5n8WHjt81YYWIwtTM=
 github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
@@ -49,6 +78,8 @@ github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZm
 github.com/dgryski/go-farm v0.0.0-20190423205320-6a90982ecee2/go.mod h1:SqUrOPUnsFjfmXRMNPybcSiG0BgUW2AuFH8PAnS2iTw=
 github.com/dimchansky/utfbom v1.1.0 h1:FcM3g+nofKgUteL8dm/UpdRXNC9KmADgTpLKsu0TRo4=
 github.com/dimchansky/utfbom v1.1.0/go.mod h1:rO41eb7gLfo8SF1jd9F8HplJm1Fewwi4mQvIirEdv+8=
+github.com/dimchansky/utfbom v1.1.1 h1:vV6w1AhK4VMnhBno/TPVCoK9U/LP0PkLCS9tbxHdi/U=
+github.com/dimchansky/utfbom v1.1.1/go.mod h1:SxdoEBH5qIqFocHMyGOXVAybYJdr71b1Q/j0mACtrfE=
 github.com/docker/spdystream v0.0.0-20160310174837-449fdfce4d96/go.mod h1:Qh8CwZgvJUkLughtfhJv5dyTYa91l1fOUCrgjqmcifM=
 github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
 github.com/eknkc/amber v0.0.0-20171010120322-cdade1c07385/go.mod h1:0vRUJqYpeSZifjYj7uP3BG/gKcuzL9xWVV/Y+cK33KM=
@@ -61,13 +92,14 @@ github.com/evanphx/json-patch v4.2.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLi
 github.com/fasthttp-contrib/websocket v0.0.0-20160511215533-1f3b11f56072/go.mod h1:duJ4Jxv5lDcvg4QuQr0oowTf7dz4/CR8NtyCooz9HL8=
 github.com/fatih/structs v1.1.0/go.mod h1:9NiDSp5zOcgEDl+j00MP/WkGVPOlPRLejGD8Ga6PJ7M=
 github.com/flosch/pongo2 v0.0.0-20190707114632-bbf5a6c351f4/go.mod h1:T9YF2M40nIgbVgp3rreNmTged+9HrbNTIQf1PsaIiTA=
+github.com/form3tech-oss/jwt-go v3.2.2+incompatible h1:TcekIExNqud5crz4xD2pavyTgWiPvpYe4Xau31I0PRk=
+github.com/form3tech-oss/jwt-go v3.2.2+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
 github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
 github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
 github.com/gavv/httpexpect v2.0.0+incompatible/go.mod h1:x+9tiU1YnrOvnB725RkpoLv1M62hOWzwo5OXotisrKc=
 github.com/getsentry/sentry-go v0.6.1 h1:K84dY1/57OtWhdyr5lbU78Q/+qgzkEyGc/ud+Sipi5k=
 github.com/getsentry/sentry-go v0.6.1/go.mod h1:0yZBuzSvbZwBnvaF9VwZIMen3kXscY8/uasKtAX1qG8=
 github.com/ghodss/yaml v0.0.0-20150909031657-73d445a93680/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
-github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
 github.com/gin-contrib/sse v0.0.0-20190301062529-5545eab6dad3/go.mod h1:VJ0WA2NBN22VlZ2dKZQPAPnyWw5XTlK1KymzLKsr59s=
 github.com/gin-gonic/gin v1.4.0/go.mod h1:OW2EZn3DO8Ln9oIKOvM++LBO+5UPHJJDH72/q/3rZdM=
 github.com/go-check/check v0.0.0-20180628173108-788fd7840127/go.mod h1:9ES+weclKsC9YodN5RgxqK/VD9HM9JsCSh7rNhMZE98=
@@ -129,8 +161,6 @@ github.com/gophercloud/gophercloud v0.2.0/go.mod h1:vxM41WHh5uqHVBMZHzuwNOHh8XEo
 github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
 github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
 github.com/gregjones/httpcache v0.0.0-20170728041850-787624de3eb7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
-github.com/grpc-ecosystem/grpc-gateway v1.8.5 h1:2+KSC78XiO6Qy0hIjfc1OD9H+hsaJdJlb8Kqsd41CTE=
-github.com/grpc-ecosystem/grpc-gateway v1.8.5/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
 github.com/hashicorp/go-version v1.2.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
 github.com/hashicorp/golang-lru v0.5.0 h1:CL2msUPvZTLb5O648aiLNJw3hnBxN2+1Jq8rCOH9wdo=
 github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
@@ -189,6 +219,8 @@ github.com/lib/pq v1.2.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
 github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
 github.com/mailru/easyjson v0.0.0-20160728113105-d5b7844b561a/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
 github.com/mattn/go-colorable v0.1.2/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
+github.com/mattn/go-ieproxy v0.0.1 h1:qiyop7gCflfhwCzGyeT0gro3sF9AIg9HU98JORTkqfI=
+github.com/mattn/go-ieproxy v0.0.1/go.mod h1:pYabZ6IHcRpFh7vIaLfK7rdcWgFEb3SFJ6/gNWuh88E=
 github.com/mattn/go-isatty v0.0.7/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
 github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
 github.com/mattn/go-isatty v0.0.9/go.mod h1:YNRxwqDuOph6SZLI9vUUz6OYw3QyUt7WiY2yME+cCiQ=
@@ -217,6 +249,8 @@ github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+
 github.com/nats-io/nats.go v1.8.1/go.mod h1:BrFz9vVn0fU3AcH9Vn4Kd7W0NpJ651tD5omQ3M8LwxM=
 github.com/nats-io/nkeys v0.0.2/go.mod h1:dab7URMsZm6Z/jp9Z5UGa87Uutgc2mVpXLC4B7TDb/4=
 github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OSON2c=
+github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e h1:fD57ERR4JtEqsWbfPhv4DMiApHyliiK5xCTNVSPiaAs=
+github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
 github.com/onsi/ginkgo v0.0.0-20170829012221-11459a886d9c/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
 github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
 github.com/onsi/ginkgo v1.8.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
@@ -236,6 +270,8 @@ github.com/pkg/errors v0.8.0 h1:WdK/asTD0HN+q6hsWO3/vpuAkAr+tw6aNJNDFFf0+qw=
 github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
 github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I=
 github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
+github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
+github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
 github.com/pmezard/go-difflib v0.0.0-20151028094244-d8ed2627bdf0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
 github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
@@ -250,7 +286,6 @@ github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y8
 github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
 github.com/prometheus/procfs v0.0.2 h1:6LJUbpNm42llc4HRCuvApCSWB/WfhuNo9K98Q9sNGfs=
 github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
-github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
 github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
 github.com/ryanuber/columnize v2.1.0+incompatible/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
 github.com/satori/go.uuid v1.2.0 h1:0uYX9dsZ2yD7q2RtLRtPSdGDWzjeM3TbMJP9utgA0ww=
@@ -313,6 +348,11 @@ golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2 h1:VklqNMn3ovrHsnt90Pveol
 golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
 golang.org/x/crypto v0.0.0-20190701094942-4def268fd1a4 h1:HuIa8hRrWRSrqYzx1qI49NNxhdi2PrY7gxVSq1JjLDc=
 golang.org/x/crypto v0.0.0-20190701094942-4def268fd1a4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9 h1:psW17arqaxU48Z5kZ0CQnkZWQJsqcURM6tKiBApRjXI=
+golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
+golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
+golang.org/x/crypto v0.0.0-20201221181555-eec23a3978ad h1:DN0cp81fZ3njFcrLCytUHRSUkqBjfTo4Tx9RJTWs0EY=
+golang.org/x/crypto v0.0.0-20201221181555-eec23a3978ad/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
 golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
 golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
 golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
@@ -337,6 +377,8 @@ golang.org/x/net v0.0.0-20190812203447-cdfb69ac37fc h1:gkKoSkUmnU6bpS/VhkuO27bzQ
 golang.org/x/net v0.0.0-20190812203447-cdfb69ac37fc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
 golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297 h1:k7pJ2yAPLPgbskkFdhRCsA77k2fySZ1zf2zCjvQCiIM=
 golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20191112182307-2180aed22343 h1:00ohfJ4K98s3m6BGUoBd8nyfp4Yl0GoIKvw5abItTjI=
+golang.org/x/net v0.0.0-20191112182307-2180aed22343/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
 golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
 golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
 golang.org/x/oauth2 v0.0.0-20190402181905-9f3314589c9a h1:tImsplftrFpALCYumobsd0K86vlAs/eXGFms2txfJfA=
@@ -350,12 +392,10 @@ golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6 h1:bjcUS9ztw9kFmmIxJInhon/0
 golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.0.0-20190423024810-112230192c58 h1:8gQV6CLnAEikrhgkHFbMAEhagSSnXWGV915qUMm9mrU=
 golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
-golang.org/x/sync v0.0.0-20201207232520-09787c993a3a h1:DcqTD9SDLc+1P/r1EmRBwnVsrOwW+kk2vWf9n+1sGhs=
 golang.org/x/sys v0.0.0-20170830134202-bb24a47a89ea/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
-golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20190209173611-3b5209105503/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -369,8 +409,14 @@ golang.org/x/sys v0.0.0-20190616124812-15dcb6c0061f/go.mod h1:h1NjWce9XRLGQEsW7w
 golang.org/x/sys v0.0.0-20190626221950-04f50cda93cb/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20190813064441-fde4db37ae7a h1:aYOabOQFp6Vj6W1F80affTUvO9UxmJRx8K0gsfABByQ=
 golang.org/x/sys v0.0.0-20190813064441-fde4db37ae7a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191112214154-59a1497f0cea/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5 h1:LfCXLvNmTYH9kEmVgqbnsWfruoXZIrh4YBgqVHtDvw0=
 golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200828194041-157a740278f4 h1:kCCpuwSAoYJPkNc6x0xT9yTtV4oKtARo4RGBQWOfg9E=
+golang.org/x/sys v0.0.0-20200828194041-157a740278f4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/term v0.0.0-20201117132131-f5c789dd3221 h1:/ZHdbVpdR/jk3g30/d4yUL0JU9kksj8+F/bnQUVLGDM=
+golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
 golang.org/x/text v0.0.0-20160726164857-2910a502d2bf/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
 golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg=
 golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -410,15 +456,15 @@ gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLks
 gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
 gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
 gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f h1:BLraFXnmrev5lT+xlilqcH8XK9/i0At2xKjWk4p6zsU=
+gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
 gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
 gopkg.in/go-playground/assert.v1 v1.2.1/go.mod h1:9RXL0bg/zibRAgZUYszZSwO/z8Y/a8bDuhia5mkpMnE=
 gopkg.in/go-playground/validator.v8 v8.18.2/go.mod h1:RX2a/7Ha8BgOhfk7j780h4/u/RRjR0eouCJSH80/M2Y=
 gopkg.in/inf.v0 v0.9.0 h1:3zYtXIO92bvsdS3ggAdA8Gb4Azj0YU+TVY1uGYNFA8o=
 gopkg.in/inf.v0 v0.9.0/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
 gopkg.in/mgo.v2 v2.0.0-20180705113604-9856a29383ce/go.mod h1:yeKp02qBN3iKW1OzL3MGk2IdtZzaj7SFntXj72NppTA=
-gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
 gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
-gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
 gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
 gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
 gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
@@ -433,11 +479,8 @@ k8s.io/apimachinery v0.0.0-20190612205821-1799e75a0719 h1:uV4S5IB5g4Nvi+TBVNf3e9
 k8s.io/apimachinery v0.0.0-20190612205821-1799e75a0719/go.mod h1:I4A+glKBHiTgiEjQiCCQfCAIcIMFGt291SmsvcrFzJA=
 k8s.io/apimachinery v0.0.0-20190913075812-e119e5e154b6 h1:tGU1C/vMoUV2ZakSH6wQq2shk9KiFtjoH2vDDHlhpA4=
 k8s.io/apimachinery v0.0.0-20190913075812-e119e5e154b6/go.mod h1:nL6pwRT8NgfF8TT68DBI8uEePRt89cSvoXUVqbkWHq4=
-k8s.io/apimachinery v0.20.1 h1:LAhz8pKbgR8tUwn7boK+b2HZdt7MiTu2mkYtFMUjTRQ=
 k8s.io/client-go v0.0.0-20190620085101-78d2af792bab h1:E8Fecph0qbNsAbijJJQryKu4Oi9QTp5cVpjTE+nqg6g=
 k8s.io/client-go v0.0.0-20190620085101-78d2af792bab/go.mod h1:E95RaSlHr79aHaX0aGSwcPNfygDiPKOVXdmivCIZT0k=
-k8s.io/client-go v1.5.1 h1:XaX/lo2/u3/pmFau8HN+sB5C/b4dc4Dmm2eXjBH4p1E=
-k8s.io/client-go v11.0.0+incompatible h1:LBbX2+lOwY9flffWlJM7f1Ct8V2SRNiMRDFeiwnJo9o=
 k8s.io/gengo v0.0.0-20190128074634-0689ccc1d7d6/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
 k8s.io/klog v0.0.0-20181102134211-b9b56d5dfc92/go.mod h1:Gq+BEi5rUBO/HRz0bTSXDUcqjScdoY3a9IHpCEIOOfk=
 k8s.io/klog v0.3.1/go.mod h1:Gq+BEi5rUBO/HRz0bTSXDUcqjScdoY3a9IHpCEIOOfk=

+ 240 - 3
pkg/cloud/azureprovider.go

@@ -2,14 +2,17 @@ package cloud
 
 import (
 	"context"
+	"encoding/csv"
 	"encoding/json"
 	"fmt"
+	"github.com/kubecost/cost-model/pkg/kubecost"
 	"io"
 	"io/ioutil"
 	"regexp"
 	"strconv"
 	"strings"
 	"sync"
+	"time"
 
 	"github.com/kubecost/cost-model/pkg/clustercache"
 	"github.com/kubecost/cost-model/pkg/env"
@@ -63,8 +66,12 @@ var (
 	mtStandardN, _ = regexp.Compile(`^Standard_N[C|D|V]\d+r?[_v\d]*[_Promo]*$`)
 )
 
+const AzureLayout = "2006-01-02"
+
 var loadedAzureSecret bool = false
 var azureSecret *AzureServiceKey = nil
+var loadedAzureStorageConfigSecret bool = false
+var azureStorageConfig *AzureStorageConfig = nil
 
 type regionParts []string
 
@@ -184,6 +191,7 @@ type Azure struct {
 	DownloadPricingDataLock sync.RWMutex
 	Clientset               clustercache.ClusterCache
 	Config                  *ProviderConfig
+	ServiceAccountChecks        map[string]*ServiceAccountCheck
 }
 
 type azureKey struct {
@@ -211,6 +219,13 @@ func (k *azureKey) ID() string {
 	return ""
 }
 
+// Represents an azure storage config
+type AzureStorageConfig struct {
+	AccountName   string `json:"azureStorageAccount"`
+	AccessKey     string `json:"azureStorageAccessKey"`
+	ContainerName string `json:"azureStorageContainer"`
+}
+
 // Represents an azure app key
 type AzureAppKey struct {
 	AppID       string `json:"appId"`
@@ -226,6 +241,7 @@ type AzureServiceKey struct {
 	ServiceKey     *AzureAppKey `json:"serviceKey"`
 }
 
+
 // Validity check on service key
 func (ask *AzureServiceKey) IsValid() bool {
 	return ask.SubscriptionID != "" &&
@@ -260,6 +276,59 @@ func (az *Azure) getAzureAuth(forceReload bool, cp *CustomPricing) (subscription
 	return "", "", "", ""
 }
 
+func (az *Azure) ConfigureAzureStorage() error {
+	accessKey, accountName, containerName := az.getAzureStorageConfig(false)
+	if accessKey != "" && accountName != "" && containerName != "" {
+		err := env.Set(env.AzureStorageAccessKeyEnvVar, accessKey)
+		if err != nil {
+			return err
+		}
+		err = env.Set(env.AzureStorageAccountNameEnvVar, accountName)
+		if err != nil {
+			return err
+		}
+		err = env.Set(env.AzureStorageContainerNameEnvVar, containerName)
+		if err != nil {
+			return err
+		}
+	}
+	return nil
+}
+
+func (az *Azure) getAzureStorageConfig(forceReload bool) (accessKey, accountName, containerName string) {
+	if az.ServiceAccountChecks == nil {
+		az.ServiceAccountChecks = make(map[string]*ServiceAccountCheck)
+	}
+	// 1. Check for secret
+	s, _ := az.loadAzureStorageConfig(forceReload)
+	if s != nil && s.AccessKey != "" && s.AccountName != "" && s.ContainerName != "" {
+		az.ServiceAccountChecks["hasStorage"] = &ServiceAccountCheck{
+			Message: "Azure Storage Config exists",
+			Status:  true,
+		}
+
+		accessKey = s.AccessKey
+		accountName = s.AccountName
+		containerName = s.ContainerName
+		return
+	}
+
+	// 2. Fall back to env vars
+	accessKey, accountName, containerName = env.GetAzureStorageAccessKey(), env.GetAzureStorageAccountName(), env.GetAzureStorageContainerName()
+	az.ServiceAccountChecks["hasStorage"] = &ServiceAccountCheck{
+		Message: "Azure Storage Config exists",
+		Status:  accessKey != "" && accountName != "" && containerName != "",
+	}
+	return
+}
+
 // Load once and cache the result (even on failure). This is an install time secret, so
 // we don't expect the secret to change. If it does, however, we can force reload using
 // the input parameter.
@@ -289,6 +358,35 @@ func (az *Azure) loadAzureAuthSecret(force bool) (*AzureServiceKey, error) {
 	return azureSecret, nil
 }
 
+// Load once and cache the result (even on failure). This is an install time secret, so
+// we don't expect the secret to change. If it does, however, we can force reload using
+// the input parameter.
+func (az *Azure) loadAzureStorageConfig(force bool) (*AzureStorageConfig, error) {
+	if !force && loadedAzureStorageConfigSecret {
+		return azureStorageConfig, nil
+	}
+	loadedAzureStorageConfigSecret = true
+
+	exists, err := util.FileExists(storageConfigSecretPath)
+	if !exists || err != nil {
+		return nil, fmt.Errorf("failed to locate azure storage config file: %s", storageConfigSecretPath)
+	}
+
+	result, err := ioutil.ReadFile(storageConfigSecretPath)
+	if err != nil {
+		return nil, err
+	}
+
+	var asc AzureStorageConfig
+	err = json.Unmarshal(result, &asc)
+	if err != nil {
+		return nil, err
+	}
+
+	azureStorageConfig = &asc
+	return azureStorageConfig, nil
+}
+
 func (az *Azure) GetKey(labels map[string]string, n *v1.Node) Key {
 	cfg, err := az.GetConfig()
 	if err != nil {
@@ -807,8 +905,143 @@ func (az *Azure) GetConfig() (*CustomPricing, error) {
 	return c, nil
 }
 
-func (az *Azure) ExternalAllocations(string, string, []string, string, string, bool) ([]*OutOfClusterAllocation, error) {
-	return nil, nil
+// ExternalAllocations represents tagged assets outside the scope of kubernetes.
+// "start" and "end" are dates of the format YYYY-MM-DD
+// "aggregators" are the tags used to determine how to allocate those assets, e.g. namespace, pod, etc.
+func (az *Azure) ExternalAllocations(start string, end string, aggregators []string, filterType string, filterValue string, crossCluster bool) ([]*OutOfClusterAllocation, error) {
+	var csvRetriever CSVRetriever = AzureCSVRetriever{}
+	err := az.ConfigureAzureStorage() // load Azure Storage config
+	if err != nil {
+		return nil, err
+	}
+	return GetExternalAllocations(start, end, aggregators, filterType, filterValue, crossCluster, csvRetriever)
+}
+
+func GetExternalAllocations(start string, end string, aggregators []string, filterType string, filterValue string, crossCluster bool, csvRetriever CSVRetriever) ([]*OutOfClusterAllocation, error) {
+	startTime, err := time.Parse(AzureLayout, start)
+	if err != nil {
+		return nil, err
+	}
+	endTime, err := time.Parse(AzureLayout, end)
+	if err != nil {
+		return nil, err
+	}
+	readers, err := csvRetriever.GetCSVReaders(startTime, endTime)
+	if err != nil {
+		return nil, err
+	}
+	oocAllocs := make(map[string]*OutOfClusterAllocation)
+	for _, reader := range readers {
+		err = ParseCSV(reader, startTime, endTime, oocAllocs, aggregators, filterType, filterValue, crossCluster)
+		if err != nil {
+			return nil, err
+		}
+	}
+	var oocAllocsArr []*OutOfClusterAllocation
+	for _, alloc := range oocAllocs {
+		oocAllocsArr = append(oocAllocsArr, alloc)
+	}
+	return oocAllocsArr, nil
+}
+
+func ParseCSV(reader *csv.Reader, start, end time.Time, oocAllocs map[string]*OutOfClusterAllocation, aggregators []string, filterType string, filterValue string, crossCluster bool) error {
+	headers, err := reader.Read()
+	if err != nil {
+		return err
+	}
+
+	headerMap := map[string]int{}
+	for i, header := range headers {
+		headerMap[header] = i
+	}
+
+	for {
+		record, err := reader.Read()
+		if err == io.EOF {
+			break
+		}
+		if err != nil {
+			return err
+		}
+
+		meterCategory := record[headerMap["MeterCategory"]]
+		category := selectCategory(meterCategory)
+		usageDateTime, err := time.Parse(AzureLayout, record[headerMap["UsageDateTime"]])
+		if err != nil {
+			klog.Errorf("failed to parse usage date: '%s'", record[headerMap["UsageDateTime"]])
+			continue
+		}
+		// Ignore VMs and Storage items for now
+		if category == kubecost.ComputeCategory || category == kubecost.StorageCategory || !isValidUsageDateTime(start, end, usageDateTime) {
+			continue
+		}
+
+		itemCost, err := strconv.ParseFloat(record[headerMap["PreTaxCost"]], 64)
+		if err != nil {
+			klog.Infof("failed to parse cost: '%s'", record[headerMap["PreTaxCost"]])
+			continue
+		}
+
+		itemTags := make(map[string]string)
+		itemTagJSON := record[headerMap["Tags"]]
+		if itemTagJSON != "" {
+			err = json.Unmarshal([]byte(itemTagJSON), &itemTags)
+			if err != nil {
+				klog.Infof("Could not parse item tags: %v", err)
+			}
+		}
+
+		if filterType != "kubernetes_" {
+			if value, ok := itemTags[filterType]; !ok || value != filterValue {
+				continue
+			}
+		}
+		environment := ""
+		for _, agg := range aggregators {
+			if tag, ok := itemTags[agg]; ok {
+				environment = tag // just set to the first nonempty match
+				break
+			}
+		}
+		key := environment + record[headerMap["ConsumedService"]]
+		if alloc, ok := oocAllocs[key]; ok {
+			alloc.Cost += itemCost
+		} else {
+			ooc := &OutOfClusterAllocation{
+				Aggregator:  strings.Join(aggregators, ","),
+				Environment: environment,
+				Service:     record[headerMap["ConsumedService"]],
+				Cost:        itemCost,
+			}
+			oocAllocs[key] = ooc
+		}
+	}
+	return nil
+}
+
+// UsageDateTime contains only date information, no time component. Because of
+// this, filtering on usageDateTime is inclusive on start and exclusive on end.
+func isValidUsageDateTime(start, end, usageDateTime time.Time) bool {
+	return (usageDateTime.After(start) || usageDateTime.Equal(start)) && usageDateTime.Before(end)
+}
+
+func getStartAndEndTimes(usageDateTime time.Time) (time.Time, time.Time) {
+	start := time.Date(usageDateTime.Year(), usageDateTime.Month(), usageDateTime.Day(), 0, 0, 0, 0, usageDateTime.Location())
+	end := time.Date(usageDateTime.Year(), usageDateTime.Month(), usageDateTime.Day(), 23, 59, 59, 999999999, usageDateTime.Location())
+	return start, end
+}
+
+func selectCategory(meterCategory string) string {
+	switch meterCategory {
+	case "Virtual Machines":
+		return kubecost.ComputeCategory
+	case "Storage":
+		return kubecost.StorageCategory
+	case "Load Balancer", "Bandwidth":
+		return kubecost.NetworkCategory
+	default:
+		return kubecost.OtherCategory
+	}
 }
 
 func (az *Azure) ApplyReservedInstancePricing(nodes map[string]*Node) {
@@ -832,8 +1065,12 @@ func (az *Azure) GetLocalStorageQuery(window, offset string, rate bool, used boo
 }
 
 func (az *Azure) ServiceAccountStatus() *ServiceAccountStatus {
+	checks := []*ServiceAccountCheck{}
+	for _, v := range az.ServiceAccountChecks {
+		checks = append(checks, v)
+	}
 	return &ServiceAccountStatus{
-		Checks: []*ServiceAccountCheck{},
+		Checks: checks,
 	}
 }
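ParseCSV above indexes each billing record by column name rather than by position, by building a header-name-to-index map from the first row. A minimal standalone sketch of that header-map pattern (the sample columns and values here are hypothetical, not a real Azure export):

```go
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

// headerIndex builds a column-name -> column-index map from a CSV header
// row, so later records can be addressed by name instead of position.
func headerIndex(headers []string) map[string]int {
	m := make(map[string]int, len(headers))
	for i, h := range headers {
		m[h] = i
	}
	return m
}

func main() {
	// Hypothetical export with a few of the columns ParseCSV reads.
	data := "MeterCategory,PreTaxCost,ConsumedService\nBandwidth,0.42,Microsoft.Network\n"
	r := csv.NewReader(strings.NewReader(data))

	headers, _ := r.Read()
	hm := headerIndex(headers)

	record, _ := r.Read()
	fmt.Println(record[hm["MeterCategory"]], record[hm["PreTaxCost"]]) // Bandwidth 0.42
}
```

This keeps the parser robust to column reordering in the export, at the cost of a panic if a named column is missing (a lookup miss returns index 0).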
 

+ 155 - 0
pkg/cloud/csvretriever.go

@@ -0,0 +1,155 @@
+package cloud
+
+import (
+	"bytes"
+	"context"
+	"encoding/csv"
+	"fmt"
+	"net/url"
+	"strings"
+	"time"
+
+	"github.com/Azure/azure-storage-blob-go/azblob"
+	"github.com/kubecost/cost-model/pkg/env"
+)
+
+type CSVRetriever interface {
+	GetCSVReaders(start, end time.Time) ([]*csv.Reader, error)
+}
+
+type AzureCSVRetriever struct {
+}
+
+func (acr AzureCSVRetriever) GetCSVReaders(start, end time.Time) ([]*csv.Reader, error) {
+
+	containerURL, err := acr.getContainer()
+	if err != nil {
+		return nil, err
+	}
+	return acr.getMostRecentFiles(start, end, containerURL)
+}
+
+func (acr AzureCSVRetriever) getMostRecentFiles(start, end time.Time, containerURL *azblob.ContainerURL) ([]*csv.Reader, error) {
+	ctx := context.Background()
+	blobNames, err := acr.getMostResentBlobNames(start, end, ctx, containerURL)
+	if err != nil {
+		return nil, err
+	}
+	var readers []*csv.Reader
+	for _, blobName := range blobNames {
+		blobURL := containerURL.NewBlobURL(blobName)
+
+		downloadResponse, err := blobURL.Download(ctx, 0, azblob.CountToEnd, azblob.BlobAccessConditions{}, false, azblob.ClientProvidedKeyOptions{})
+		if err != nil {
+			return nil, err
+		}
+		// NOTE: retries are performed automatically if the connection fails
+		bodyStream := downloadResponse.Body(azblob.RetryReaderOptions{MaxRetryRequests: 20})
+
+		// read the body into a buffer
+		downloadedData := bytes.Buffer{}
+		_, err = downloadedData.ReadFrom(bodyStream)
+		if err != nil {
+			return nil, err
+		}
+		reader := csv.NewReader(bytes.NewReader(downloadedData.Bytes()))
+		readers = append(readers, reader)
+	}
+	return readers, nil
+}
+
+func (acr AzureCSVRetriever) getContainer() (*azblob.ContainerURL, error) {
+	accountKey := env.GetAzureStorageAccessKey()
+	accountName := env.GetAzureStorageAccountName()
+	containerName := env.GetAzureStorageContainerName()
+	if accountName == "" || accountKey == "" || containerName == "" {
+		return nil, fmt.Errorf("set up Azure storage config to access out of cluster costs")
+	}
+
+	// Create a default request pipeline using your storage account name and account key.
+	credential, err := azblob.NewSharedKeyCredential(accountName, accountKey)
+	if err != nil {
+		return nil, err
+	}
+
+	p := azblob.NewPipeline(credential, azblob.PipelineOptions{})
+
+	// From the Azure portal, get your storage account blob service URL endpoint.
+	URL, _ := url.Parse(
+		fmt.Sprintf("https://%s.blob.core.windows.net/%s", accountName, containerName))
+
+	// Create a ContainerURL object that wraps the container URL and a request
+	// pipeline to make requests.
+	containerURL := azblob.NewContainerURL(*URL, p)
+	return &containerURL, nil
+}
+
+func (acr AzureCSVRetriever) getMostResentBlobNames(start, end time.Time, ctx context.Context, containerURL *azblob.ContainerURL) ([]string, error) {
+	// Get list of month substrings for months contained in the start to end range
+	monthStrs, err := acr.getMonthStrings(start, end)
+	if err != nil {
+		return nil, err
+	}
+	mostResentBlobs := make(map[string]azblob.BlobItemInternal)
+	for marker := (azblob.Marker{}); marker.NotDone(); {
+		// Get a result segment starting with the blob indicated by the current Marker.
+		listBlob, err := containerURL.ListBlobsFlatSegment(ctx, marker, azblob.ListBlobsSegmentOptions{})
+		if err != nil {
+			return nil, err
+		}
+
+		// ListBlobs returns the start of the next segment; you MUST use this to get
+		// the next segment (after processing the current result segment).
+		marker = listBlob.NextMarker
+
+		// Using the list of month strings, find the most recent blob for each month in the range
+		for _, blobInfo := range listBlob.Segment.BlobItems {
+			for _, month := range monthStrs {
+				if strings.Contains(blobInfo.Name, month) {
+					if prevBlob, ok := mostResentBlobs[month]; ok {
+						if prevBlob.Properties.CreationTime.After(*blobInfo.Properties.CreationTime) {
+							continue
+						}
+					}
+					mostResentBlobs[month] = blobInfo
+				}
+			}
+		}
+	}
+
+	// move the blob names from the map into an ordered list of blob names
+	var blobNames []string
+	for _, month := range monthStrs {
+		if blob, ok := mostResentBlobs[month]; ok {
+			blobNames = append(blobNames, blob.Name)
+		}
+	}
+	return blobNames, nil
+}
+
+func (acr AzureCSVRetriever) getMonthStrings(start, end time.Time) ([]string, error) {
+	if end.After(time.Now()) {
+		end = time.Now()
+	}
+	if start.After(end) {
+		return []string{}, fmt.Errorf("start date must be before end date")
+	}
+
+	var monthStrs []string
+	monthStr := acr.timeToMonthString(start)
+	endStr := acr.timeToMonthString(end)
+	monthStrs = append(monthStrs, monthStr)
+	currMonth := start.AddDate(0, 0, -start.Day()+1)
+	for monthStr != endStr {
+		currMonth = currMonth.AddDate(0, 1, 0)
+		monthStr = acr.timeToMonthString(currMonth)
+		monthStrs = append(monthStrs, monthStr)
+	}
+
+	return monthStrs, nil
+}
+
+func (acr AzureCSVRetriever) timeToMonthString(input time.Time) string {
+	format := "20060102"
+	startOfMonth := input.AddDate(0, 0, -input.Day()+1)
+	endOfMonth := input.AddDate(0, 1, -input.Day())
+	return startOfMonth.Format(format) + "-" + endOfMonth.Format(format)
+}

+ 1 - 1
pkg/cloud/gcpprovider.go

@@ -686,7 +686,7 @@ func (gcp *GCP) parsePage(r io.Reader, inputKeys map[string]Key, pvKeys map[stri
 					instanceType = "custom"
 				}
 
-				if (instanceType == "ram" || instanceType == "cpu") && strings.Contains(strings.ToUpper(product.Description), "N2") {
+				if (instanceType == "ram" || instanceType == "cpu") && strings.Contains(strings.ToUpper(product.Description), "N2") && !strings.Contains(strings.ToUpper(product.Description), "PREMIUM") {
 					if (instanceType == "ram" || instanceType == "cpu") && strings.Contains(strings.ToUpper(product.Description), "N2D AMD") {
 						instanceType = "n2dstandard"
 					} else {

+ 1 - 0
pkg/cloud/provider.go

@@ -18,6 +18,7 @@ import (
 )
 
 const authSecretPath = "/var/secrets/service-key.json"
+const storageConfigSecretPath = "/var/secrets/azure-storage-config.json"
 
 var createTableStatements = []string{
 	`CREATE TABLE IF NOT EXISTS names (

+ 42 - 40
pkg/costmodel/costmodel.go

@@ -226,9 +226,7 @@ const (
 )
 
 func (cm *CostModel) ComputeCostData(cli prometheusClient.Client, cp costAnalyzerCloud.Provider, window string, offset string, filterNamespace string) (map[string]*CostData, error) {
-	queryRAMRequests := fmt.Sprintf(queryRAMRequestsStr, window, offset, window, offset)
 	queryRAMUsage := fmt.Sprintf(queryRAMUsageStr, window, offset, window, offset)
-	queryCPURequests := fmt.Sprintf(queryCPURequestsStr, window, offset, window, offset)
 	queryCPUUsage := fmt.Sprintf(queryCPUUsageStr, window, offset)
 	queryGPURequests := fmt.Sprintf(queryGPURequestsStr, window, offset, window, offset, 1.0, window, offset)
 	queryPVRequests := fmt.Sprintf(queryPVRequestsStr)
@@ -242,9 +240,7 @@ func (cm *CostModel) ComputeCostData(cli prometheusClient.Client, cp costAnalyze
 
 	// Submit all Prometheus queries asynchronously
 	ctx := prom.NewContext(cli)
-	resChRAMRequests := ctx.Query(queryRAMRequests)
 	resChRAMUsage := ctx.Query(queryRAMUsage)
-	resChCPURequests := ctx.Query(queryCPURequests)
 	resChCPUUsage := ctx.Query(queryCPUUsage)
 	resChGPURequests := ctx.Query(queryGPURequests)
 	resChPVRequests := ctx.Query(queryPVRequests)
@@ -277,9 +273,7 @@ func (cm *CostModel) ComputeCostData(cli prometheusClient.Client, cp costAnalyze
 	}
 
 	// Process Prometheus query results. Handle errors using ctx.Errors.
-	resRAMRequests, _ := resChRAMRequests.Await()
 	resRAMUsage, _ := resChRAMUsage.Await()
-	resCPURequests, _ := resChCPURequests.Await()
 	resCPUUsage, _ := resChCPUUsage.Await()
 	resGPURequests, _ := resChGPURequests.Await()
 	resPVRequests, _ := resChPVRequests.Await()
@@ -345,14 +339,6 @@ func (cm *CostModel) ComputeCostData(cli prometheusClient.Client, cp costAnalyze
 	containerNameCost := make(map[string]*CostData)
 	containers := make(map[string]bool)
 
-	RAMReqMap, err := GetContainerMetricVector(resRAMRequests, true, normalizationValue, clusterID)
-	if err != nil {
-		return nil, err
-	}
-	for key := range RAMReqMap {
-		containers[key] = true
-	}
-
 	RAMUsedMap, err := GetContainerMetricVector(resRAMUsage, true, normalizationValue, clusterID)
 	if err != nil {
 		return nil, err
@@ -360,13 +346,6 @@ func (cm *CostModel) ComputeCostData(cli prometheusClient.Client, cp costAnalyze
 	for key := range RAMUsedMap {
 		containers[key] = true
 	}
-	CPUReqMap, err := GetContainerMetricVector(resCPURequests, true, normalizationValue, clusterID)
-	if err != nil {
-		return nil, err
-	}
-	for key := range CPUReqMap {
-		containers[key] = true
-	}
 	GPUReqMap, err := GetContainerMetricVector(resGPURequests, true, normalizationValue, clusterID)
 	if err != nil {
 		return nil, err
@@ -401,6 +380,9 @@ func (cm *CostModel) ComputeCostData(cli prometheusClient.Client, cp costAnalyze
 		if _, ok := containerNameCost[key]; ok {
 			continue // because ordering is important for the allocation model (all PV's applied to the first), just dedupe if it's already been added.
 		}
+		// The _else_ case for this statement is the case in which the container has been
+		// deleted so we have usage information but not request information. In that case,
+		// we return partial data for CPU and RAM: only usage and not requests.
 		if pod, ok := currentContainers[key]; ok {
 			podName := pod.GetObjectMeta().GetName()
 			ns := pod.GetObjectMeta().GetNamespace()
@@ -487,21 +469,40 @@ func (cm *CostModel) ComputeCostData(cli prometheusClient.Client, cp costAnalyze
 				// recreate the key and look up data for this container
 				newKey := NewContainerMetricFromValues(ns, podName, containerName, pod.Spec.NodeName, clusterID).Key()
 
-				RAMReqV, ok := RAMReqMap[newKey]
-				if !ok {
-					klog.V(4).Info("no RAM requests for " + newKey)
-					RAMReqV = []*util.Vector{{}}
+				// See k8s.io/apimachinery/pkg/api/resource/amount.go and
+				// k8s.io/apimachinery/pkg/api/resource/quantity.go for
+				// details on the "amount" API, and
+				// https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-types
+				// for the units of memory and CPU.
+				ramRequestBytes := container.Resources.Requests.Memory().Value()
+
+				// Because RAM (and CPU) information isn't coming from Prometheus, it won't
+				// have a timestamp associated with it. We need to provide a timestamp,
+				// otherwise the vector op that gets applied to take the max of usage
+				// and request won't work properly and will only take into account
+				// usage.
+				RAMReqV := []*util.Vector{
+					{
+						Value:     float64(ramRequestBytes),
+						Timestamp: float64(time.Now().UTC().Unix()),
+					},
+				}
+
+				// use millicores so we can convert to cores in a float64 format
+				cpuRequestMilliCores := container.Resources.Requests.Cpu().MilliValue()
+				CPUReqV := []*util.Vector{
+					{
+						Value:     float64(cpuRequestMilliCores) / 1000,
+						Timestamp: float64(time.Now().UTC().Unix()),
+					},
 				}
+
 				RAMUsedV, ok := RAMUsedMap[newKey]
 				if !ok {
 					klog.V(4).Info("no RAM usage for " + newKey)
 					RAMUsedV = []*util.Vector{{}}
 				}
-				CPUReqV, ok := CPUReqMap[newKey]
-				if !ok {
-					klog.V(4).Info("no CPU requests for " + newKey)
-					CPUReqV = []*util.Vector{{}}
-				}
+
 				GPUReqV, ok := GPUReqMap[newKey]
 				if !ok {
 					klog.V(4).Info("no GPU requests for " + newKey)
@@ -559,21 +560,22 @@ func (cm *CostModel) ComputeCostData(cli prometheusClient.Client, cp costAnalyze
 			if err != nil {
 				return nil, err
 			}
-			RAMReqV, ok := RAMReqMap[key]
-			if !ok {
-				klog.V(4).Info("no RAM requests for " + key)
-				RAMReqV = []*util.Vector{{}}
-			}
+
+			// CPU and RAM requests are obtained from the Kubernetes API.
+			// If this case has been reached, the Kubernetes API will not
+			// have information about the pod because it no longer exists.
+			//
+			// The case where this matters is minimal, mainly in environments
+			// with very short-lived pods that over-request resources.
+			RAMReqV := []*util.Vector{{}}
+			CPUReqV := []*util.Vector{{}}
+
 			RAMUsedV, ok := RAMUsedMap[key]
 			if !ok {
 				klog.V(4).Info("no RAM usage for " + key)
 				RAMUsedV = []*util.Vector{{}}
 			}
-			CPUReqV, ok := CPUReqMap[key]
-			if !ok {
-				klog.V(4).Info("no CPU requests for " + key)
-				CPUReqV = []*util.Vector{{}}
-			}
+
 			GPUReqV, ok := GPUReqMap[key]
 			if !ok {
 				klog.V(4).Info("no GPU requests for " + key)

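The hunks above replace Prometheus-derived CPU/RAM request vectors with values read straight from the Kubernetes API, converting millicores to cores and attaching a current timestamp so the max-of-usage-and-request vector op can align them. A minimal sketch of that conversion (the `vector` type here is a stand-in for `util.Vector`, not the package's actual type):

```go
package main

import (
	"fmt"
	"time"
)

// vector is a stand-in for util.Vector: a value paired with a unix
// timestamp, so timestamp-aligned ops (e.g. max of usage and request)
// can match it against Prometheus-derived vectors.
type vector struct {
	Value     float64
	Timestamp float64
}

// coresFromMilli converts a millicore CPU request (e.g. 1500m) to cores
// as a float64, as done for CPU requests in the cost model.
func coresFromMilli(milli int64) float64 {
	return float64(milli) / 1000
}

func main() {
	req := vector{
		Value:     coresFromMilli(1500),
		Timestamp: float64(time.Now().UTC().Unix()),
	}
	fmt.Println(req.Value) // 1.5
}
```

Without the timestamp, the vector would be silently dropped when joined against timestamped usage samples, which is exactly the bug the comment in the hunk warns about.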
+ 17 - 1
pkg/env/costmodelenv.go

@@ -15,6 +15,10 @@ const (
 	AWSAccessKeySecretEnvVar = "AWS_SECRET_ACCESS_KEY"
 	AWSClusterIDEnvVar       = "AWS_CLUSTER_ID"
 
+	AzureStorageAccessKeyEnvVar     = "AZURE_STORAGE_ACCESS_KEY"
+	AzureStorageAccountNameEnvVar   = "AZURE_STORAGE_ACCOUNT"
+	AzureStorageContainerNameEnvVar = "AZURE_STORAGE_CONTAINER"
+
 	KubecostNamespaceEnvVar        = "KUBECOST_NAMESPACE"
 	ClusterIDEnvVar                = "CLUSTER_ID"
 	ClusterProfileEnvVar           = "CLUSTER_PROFILE"
@@ -64,7 +68,7 @@ const (
 // GetAWSAccessKeyID returns the environment variable value for AWSAccessKeyIDEnvVar which represents
 // the AWS access key for authentication
 func GetAppVersion() string {
-	return Get(AppVersionEnvVar, "1.73.0")
+	return Get(AppVersionEnvVar, "1.74.0")
 }
 
 // IsEmitNamespaceAnnotationsMetric returns true if cost-model is configured to emit the kube_namespace_annotations metric
@@ -97,6 +101,18 @@ func GetAWSClusterID() string {
 	return Get(AWSClusterIDEnvVar, "")
 }
 
+// GetAzureStorageAccessKey returns the environment variable value for
+// AzureStorageAccessKeyEnvVar, the Azure storage account access key.
+func GetAzureStorageAccessKey() string {
+	return Get(AzureStorageAccessKeyEnvVar, "")
+}
+
+// GetAzureStorageAccountName returns the environment variable value for
+// AzureStorageAccountNameEnvVar, the Azure storage account name.
+func GetAzureStorageAccountName() string {
+	return Get(AzureStorageAccountNameEnvVar, "")
+}
+
+// GetAzureStorageContainerName returns the environment variable value for
+// AzureStorageContainerNameEnvVar, the Azure storage container name.
+func GetAzureStorageContainerName() string {
+	return Get(AzureStorageContainerNameEnvVar, "")
+}
+
 // GetKubecostNamespace returns the environment variable value for KubecostNamespaceEnvVar which
 // represents the namespace the cost model exists in.
 func GetKubecostNamespace() string {

+ 31 - 9
pkg/kubecost/allocation.go

@@ -1108,17 +1108,15 @@ func (as *AllocationSet) Clone() *AllocationSet {
 // ComputeIdleAllocations computes the idle allocations for the AllocationSet,
 // given a set of Assets. Ideally, assetSet should contain only Nodes, but if
 // it contains other Assets, they will be ignored; only CPU, GPU and RAM are
-// considered for idle allocation. One idle allocation per-cluster will be
-// computed and returned, keyed by cluster_id.
+// considered for idle allocation. If the Nodes have adjustments, then apply
+// the adjustments proportionally to each of the resources so that total
+// allocation with idle reflects the adjusted node costs. One idle allocation
+// per-cluster will be computed and returned, keyed by cluster_id.
 func (as *AllocationSet) ComputeIdleAllocations(assetSet *AssetSet) (map[string]*Allocation, error) {
 	if as == nil {
 		return nil, fmt.Errorf("cannot compute idle allocation for nil AllocationSet")
 	}
 
-	// TODO: external allocation: remove after testing and benchmarking
-	profStart := time.Now()
-	defer log.Profile(profStart, fmt.Sprintf("ComputeIdleAllocations: %s", as.Window))
-
 	if assetSet == nil {
 		return nil, fmt.Errorf("cannot compute idle allocation with nil AssetSet")
 	}
@@ -1137,9 +1135,33 @@ func (as *AllocationSet) ComputeIdleAllocations(assetSet *AssetSet) (map[string]
 			if _, ok := assetClusterResourceCosts[node.Properties().Cluster]; !ok {
 				assetClusterResourceCosts[node.Properties().Cluster] = map[string]float64{}
 			}
-			assetClusterResourceCosts[node.Properties().Cluster]["cpu"] += node.CPUCost * (1.0 - node.Discount)
-			assetClusterResourceCosts[node.Properties().Cluster]["gpu"] += node.GPUCost * (1.0 - node.Discount)
-			assetClusterResourceCosts[node.Properties().Cluster]["ram"] += node.RAMCost * (1.0 - node.Discount)
+
+			// adjustmentRate is used to scale resource costs proportionally
+			// by the adjustment. This is necessary because we only get one
+			// adjustment per Node, not one per-resource-per-Node.
+			//
+			// e.g. total cost = $90, adjustment = -$10 => 0.9
+			// e.g. total cost = $150, adjustment = -$300 => 0.3333
+			// e.g. total cost = $150, adjustment = $50 => 1.5
+			adjustmentRate := 1.0
+			if node.TotalCost()-node.Adjustment() == 0 {
+				// If (totalCost - adjustment) is 0.0 then adjustment cancels
+				// the entire node cost and we should make everything 0
+				// without dividing by 0.
+				adjustmentRate = 0.0
+			} else if node.Adjustment() != 0.0 {
+				// adjustmentRate is the ratio of cost-with-adjustment (i.e. TotalCost)
+				// to cost-without-adjustment (i.e. TotalCost - Adjustment).
+				adjustmentRate = node.TotalCost() / (node.TotalCost() - node.Adjustment())
+			}
+
+			cpuCost := node.CPUCost * (1.0 - node.Discount) * adjustmentRate
+			gpuCost := node.GPUCost * (1.0 - node.Discount) * adjustmentRate
+			ramCost := node.RAMCost * (1.0 - node.Discount) * adjustmentRate
+
+			assetClusterResourceCosts[node.Properties().Cluster]["cpu"] += cpuCost
+			assetClusterResourceCosts[node.Properties().Cluster]["gpu"] += gpuCost
+			assetClusterResourceCosts[node.Properties().Cluster]["ram"] += ramCost
 		}
 	})
 

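The adjustment-rate rule added above can be checked with a few lines of arithmetic; this sketch mirrors the logic in ComputeIdleAllocations, including the guard for an adjustment that cancels the entire node cost:

```go
package main

import "fmt"

// adjustmentRate scales per-resource node costs so they sum to the
// adjusted total: rate = total / (total - adjustment), with a guard
// for the case where the adjustment cancels the entire node cost.
func adjustmentRate(totalCost, adjustment float64) float64 {
	if totalCost-adjustment == 0 {
		// adjustment cancels the node cost entirely; avoid dividing by 0
		return 0.0
	}
	if adjustment == 0.0 {
		return 1.0
	}
	return totalCost / (totalCost - adjustment)
}

func main() {
	fmt.Println(adjustmentRate(90, -10)) // 0.9
	fmt.Println(adjustmentRate(150, 50)) // 1.5
}
```

These reproduce the worked examples in the comment: $90 total with a -$10 adjustment yields 0.9, $150 with +$50 yields 1.5, and $150 with -$300 yields one third.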
+ 120 - 23
pkg/kubecost/allocation_test.go

@@ -901,34 +901,35 @@ func TestAllocationSet_ComputeIdleAllocations(t *testing.T) {
 	// NOTE: we're re-using generateAllocationSet so this has to line up with
 	// the allocated node costs from that function. See table above.
 
-	// | Hierarchy                               | Cost |  CPU |  RAM |  GPU |
-	// +-----------------------------------------+------+------+------+------+
+	// | Hierarchy                               | Cost |  CPU |  RAM |  GPU | Adjustment |
+	// +-----------------------------------------+------+------+------+------+------------+
 	//   cluster1:
-	//     nodes                                  100.00  50.00  40.00  10.00
-	// +-----------------------------------------+------+------+------+------+
-	//   cluster1 subtotal                        100.00  50.00  40.00  10.00
-	// +-----------------------------------------+------+------+------+------+
-	//   cluster1 allocated                        48.00   6.00  16.00   6.00
-	// +-----------------------------------------+------+------+------+------+
-	//   cluster1 idle                             72.00  44.00  24.00   4.00
-	// +-----------------------------------------+------+------+------+------+
+	//     nodes                                  100.00  55.00  44.00  11.00      -10.00
+	// +-----------------------------------------+------+------+------+------+------------+
+	//   cluster1 subtotal (adjusted)             100.00  50.00  40.00  10.00        0.00
+	// +-----------------------------------------+------+------+------+------+------------+
+	//   cluster1 allocated                        48.00   6.00  16.00   6.00        0.00
+	// +-----------------------------------------+------+------+------+------+------------+
+	//   cluster1 idle                             72.00  44.00  24.00   4.00        0.00
+	// +-----------------------------------------+------+------+------+------+------------+
 	//   cluster2:
-	//     node1                                   35.00  20.00  15.00   0.00
-	//     node2                                   35.00  20.00  15.00   0.00
-	//     node3                                   30.00  10.00  10.00  10.00
+	//     node1                                   35.00  20.00  15.00   0.00        0.00
+	//     node2                                   35.00  20.00  15.00   0.00        0.00
+	//     node3                                   30.00  10.00  10.00  10.00        0.00
 	//     (disks should not matter for idle)
-	// +-----------------------------------------+------+------+------+------+
-	//   cluster2 subtotal                        100.00  50.00  40.00  10.00
-	// +-----------------------------------------+------+------+------+------+
-	//   cluster2 allocated                        28.00   6.00   6.00   6.00
-	// +-----------------------------------------+------+------+------+------+
-	//   cluster2 idle                             82.00  44.00  34.00   4.00
-	// +-----------------------------------------+------+------+------+------+
+	// +-----------------------------------------+------+------+------+------+------------+
+	//   cluster2 subtotal                        100.00  50.00  40.00  10.00        0.00
+	// +-----------------------------------------+------+------+------+------+------------+
+	//   cluster2 allocated                        28.00   6.00   6.00   6.00        0.00
+	// +-----------------------------------------+------+------+------+------+------------+
+	//   cluster2 idle                             82.00  44.00  34.00   4.00        0.00
+	// +-----------------------------------------+------+------+------+------+------------+
 
 	cluster1Nodes := NewNode("", "cluster1", "", start, end, NewWindow(&start, &end))
-	cluster1Nodes.CPUCost = 50.0
-	cluster1Nodes.RAMCost = 40.0
-	cluster1Nodes.GPUCost = 10.0
+	cluster1Nodes.CPUCost = 55.0
+	cluster1Nodes.RAMCost = 44.0
+	cluster1Nodes.GPUCost = 11.0
+	cluster1Nodes.adjustment = -10.00
 
 	cluster2Node1 := NewNode("node1", "cluster2", "node1", start, end, NewWindow(&start, &end))
 	cluster2Node1.CPUCost = 20.0
@@ -966,6 +967,102 @@ func TestAllocationSet_ComputeIdleAllocations(t *testing.T) {
 			t.Fatalf("%s idle: expected total cost %f; got total cost %f", "cluster1", 72.0, idle.TotalCost)
 		}
 	}
+	if !util.IsApproximately(idles["cluster1"].CPUCost, 44.0) {
+		t.Fatalf("expected idle CPU cost for %s to be %.2f; got %.2f", "cluster1", 44.0, idles["cluster1"].CPUCost)
+	}
+	if !util.IsApproximately(idles["cluster1"].RAMCost, 24.0) {
+		t.Fatalf("expected idle RAM cost for %s to be %.2f; got %.2f", "cluster1", 24.0, idles["cluster1"].RAMCost)
+	}
+	if !util.IsApproximately(idles["cluster1"].GPUCost, 4.0) {
+		t.Fatalf("expected idle GPU cost for %s to be %.2f; got %.2f", "cluster1", 4.0, idles["cluster1"].GPUCost)
+	}
+
+	if idle, ok := idles["cluster2"]; !ok {
+		t.Fatalf("expected idle cost for %s", "cluster2")
+	} else {
+		if !util.IsApproximately(idle.TotalCost, 82.0) {
+			t.Fatalf("%s idle: expected total cost %f; got total cost %f", "cluster2", 82.0, idle.TotalCost)
+		}
+	}
+
+	// NOTE: we're re-using generateAllocationSet so this has to line up with
+	// the allocated node costs from that function. See table above.
+
+	// | Hierarchy                               | Cost |  CPU |  RAM |  GPU | Adjustment |
+	// +-----------------------------------------+------+------+------+------+------------+
+	//   cluster1:
+	//     nodes                                  100.00   5.00   4.00   1.00       90.00
+	// +-----------------------------------------+------+------+------+------+------------+
+	//   cluster1 subtotal (adjusted)             100.00  50.00  40.00  10.00        0.00
+	// +-----------------------------------------+------+------+------+------+------------+
+	//   cluster1 allocated                        48.00   6.00  16.00   6.00        0.00
+	// +-----------------------------------------+------+------+------+------+------------+
+	//   cluster1 idle                             72.00  44.00  24.00   4.00        0.00
+	// +-----------------------------------------+------+------+------+------+------------+
+	//   cluster2:
+	//     node1                                   35.00  20.00  15.00   0.00        0.00
+	//     node2                                   35.00  20.00  15.00   0.00        0.00
+	//     node3                                   30.00  10.00  10.00  10.00        0.00
+	//     (disks should not matter for idle)
+	// +-----------------------------------------+------+------+------+------+------------+
+	//   cluster2 subtotal                        100.00  50.00  40.00  10.00        0.00
+	// +-----------------------------------------+------+------+------+------+------------+
+	//   cluster2 allocated                        28.00   6.00   6.00   6.00        0.00
+	// +-----------------------------------------+------+------+------+------+------------+
+	//   cluster2 idle                             82.00  44.00  34.00   4.00        0.00
+	// +-----------------------------------------+------+------+------+------+------------+
+
+	cluster1Nodes = NewNode("", "cluster1", "", start, end, NewWindow(&start, &end))
+	cluster1Nodes.CPUCost = 5.0
+	cluster1Nodes.RAMCost = 4.0
+	cluster1Nodes.GPUCost = 1.0
+	cluster1Nodes.adjustment = 90.00
+
+	cluster2Node1 = NewNode("node1", "cluster2", "node1", start, end, NewWindow(&start, &end))
+	cluster2Node1.CPUCost = 20.0
+	cluster2Node1.RAMCost = 15.0
+	cluster2Node1.GPUCost = 0.0
+
+	cluster2Node2 = NewNode("node2", "cluster2", "node2", start, end, NewWindow(&start, &end))
+	cluster2Node2.CPUCost = 20.0
+	cluster2Node2.RAMCost = 15.0
+	cluster2Node2.GPUCost = 0.0
+
+	cluster2Node3 = NewNode("node3", "cluster2", "node3", start, end, NewWindow(&start, &end))
+	cluster2Node3.CPUCost = 10.0
+	cluster2Node3.RAMCost = 10.0
+	cluster2Node3.GPUCost = 10.0
+
+	cluster2Disk1 = NewDisk("disk1", "cluster2", "disk1", start, end, NewWindow(&start, &end))
+	cluster2Disk1.Cost = 5.0
+
+	assetSet = NewAssetSet(start, end, cluster1Nodes, cluster2Node1, cluster2Node2, cluster2Node3, cluster2Disk1)
+
+	idles, err = as.ComputeIdleAllocations(assetSet)
+	if err != nil {
+		t.Fatalf("unexpected error: %s", err)
+	}
+
+	if len(idles) != 2 {
+		t.Fatalf("idles: expected length %d; got length %d", 2, len(idles))
+	}
+
+	if idle, ok := idles["cluster1"]; !ok {
+		t.Fatalf("expected idle cost for %s", "cluster1")
+	} else {
+		if !util.IsApproximately(idle.TotalCost, 72.0) {
+			t.Fatalf("%s idle: expected total cost %f; got total cost %f", "cluster1", 72.0, idle.TotalCost)
+		}
+	}
+	if !util.IsApproximately(idles["cluster1"].CPUCost, 44.0) {
+		t.Fatalf("expected idle CPU cost for %s to be %.2f; got %.2f", "cluster1", 44.0, idles["cluster1"].CPUCost)
+	}
+	if !util.IsApproximately(idles["cluster1"].RAMCost, 24.0) {
+		t.Fatalf("expected idle RAM cost for %s to be %.2f; got %.2f", "cluster1", 24.0, idles["cluster1"].RAMCost)
+	}
+	if !util.IsApproximately(idles["cluster1"].GPUCost, 4.0) {
+		t.Fatalf("expected idle GPU cost for %s to be %.2f; got %.2f", "cluster1", 4.0, idles["cluster1"].GPUCost)
+	}
 
 	if idle, ok := idles["cluster2"]; !ok {
 		t.Fatalf("expected idle cost for %s", "cluster2")

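The arithmetic the cost table in this test encodes — a cluster's idle cost per resource is its (adjusted) node subtotal minus the allocated cost for that resource — can be sketched in isolation. This is an illustrative reconstruction only: the `resourceCosts` type and `idle` function below are hypothetical and are not the actual `ComputeIdleAllocations` implementation from the kubecost package.

```go
package main

import "fmt"

// resourceCosts holds per-resource cost totals for one cluster.
type resourceCosts struct {
	CPU, RAM, GPU float64
}

// idle computes idle cost per resource as the node subtotal minus the
// allocated cost, matching the table rows "subtotal", "allocated", "idle".
func idle(subtotal, allocated resourceCosts) resourceCosts {
	return resourceCosts{
		CPU: subtotal.CPU - allocated.CPU,
		RAM: subtotal.RAM - allocated.RAM,
		GPU: subtotal.GPU - allocated.GPU,
	}
}

func main() {
	// cluster2 from the table: subtotal 50/40/10, allocated 6/6/6.
	got := idle(resourceCosts{CPU: 50, RAM: 40, GPU: 10}, resourceCosts{CPU: 6, RAM: 6, GPU: 6})
	fmt.Printf("idle CPU=%.2f RAM=%.2f GPU=%.2f total=%.2f\n",
		got.CPU, got.RAM, got.GPU, got.CPU+got.RAM+got.GPU)
}
```

Running this reproduces the cluster2 idle row (44.00 / 34.00 / 4.00, total 82.00) that the assertions above check with `util.IsApproximately`. Note that disks deliberately play no part: only node costs enter the idle computation.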
pkg/kubecost/asset_test.go

@@ -837,7 +837,6 @@ func TestAssetSet_AggregateBy(t *testing.T) {
 	if err != nil {
 		t.Fatalf("AssetSet.AggregateBy: unexpected error: %s", err)
 	}
-	fmt.Println(as.assets)
 	assertAssetSet(t, as, "1e", window, map[string]float64{
 		"__undefined__": 53.00,
 		"test=test":     7.00,