Alexander Belanger 5 years ago
Parent
Commit
a5794e7642

+ 5 - 1
CONTRIBUTING.md

@@ -94,7 +94,11 @@ SQL_LITE=false
 
 Once you've done this, go to the root of the repository, and run `docker-compose -f docker-compose.dev.yaml up`. You should see postgres, webpack, and porter containers spin up. When the webpack and porter containers have finished compiling and have spun up successfully (this will take 5-10 minutes after the containers start), you can navigate to `localhost:8080` and you should be greeted with the "Log In" screen. 
 
-At this point, you can make a change to any `.go` file to trigger a backend rebuild, and any file in `/dashboard/src` to trigger a hot reload. Happy developing!
+At this point, you can make a change to any `.go` file to trigger a backend rebuild, and any file in `/dashboard/src` to trigger a hot reload. 
+
+For a more detailed development guide, [go here](/docs/developing/setup.md). 
+
+Happy developing!
 
 ### Testing 
 

+ 0 - 103
docs/GETTING_STARTED.md

@@ -1,103 +0,0 @@
-## Getting Started
-
-- [Prerequisites](#prerequisites)
-- [Installing](#installing)
-  - [Mac Installation](#mac-installation)
-  - [Linux Installation](#linux-installation)
-  - [Windows Installation](#windows-installation)
-- [Local Setup](#local-setup)
-  - [Connecting to a Cluster](#connecting-to-a-cluster)
-
-## Prerequisites
-
-You must have access to a Kubernetes cluster with Helm charts installed and the Docker engine must be running on your machine. To quickly get a local Kubernetes cluster set up, following the instructions for [installing minikube](https://minikube.sigs.k8s.io/docs/start/), and make sure that minikube is set as the current context by ensuring the output of `kubectl config current-context` is `minikube`.
-
-## Installing
-
-### Mac Installation
-
-Run the following command to grab the latest binary:
-
-```sh
-{
-name=$(curl -s https://api.github.com/repos/porter-dev/porter/releases/latest | grep "browser_download_url.*_Darwin_x86_64\.zip" | cut -d ":" -f 2,3 | tr -d \")
-name=$(basename $name)
-curl -L https://github.com/porter-dev/porter/releases/latest/download/$name --output $name
-unzip -a $name
-rm $name
-}
-```
-
-Then move the file into your bin:
-
-```sh
-chmod +x ./porter
-sudo mv ./porter /usr/local/bin/porter
-```
-
-### Linux Installation
-
-Run the following command to grab the latest binary:
-
-```sh
-{
-name=$(curl -s https://api.github.com/repos/porter-dev/porter/releases/latest | grep "browser_download_url.*_Linux_x86_64\.zip" | cut -d ":" -f 2,3 | tr -d \")
-name=$(basename $name)
-curl -L https://github.com/porter-dev/porter/releases/latest/download/$name --output $name
-unzip -a $name
-rm $name
-}
-```
-
-Then move the file into your bin:
-
-```sh
-chmod +x ./porter
-sudo mv ./porter /usr/local/bin/porter
-```
-
-### Windows Installation
-
-Go [here](https://github.com/porter-dev/porter/releases/latest/download/porter_0.1.0-beta.1_Windows_x86_64.zip) to download the Windows executable and add the binary to your `PATH`.
-
-## Local Setup
-
-> **Note:** the local setup process is tracked in [issue #60](https://github.com/porter-dev/porter/issues/60), while the overall onboarding flow is tracked in [issue #50](https://github.com/porter-dev/porter/issues/50).
-
-To view Porter locally, you must have access to a Kubernetes cluster with Helm charts installed. The simplest way to run Porter is via `porter server start`. After doing this, you can go to `http://localhost:8080` to register an account and create a project manually. Alternatively, you can run the following commands:
-
-```sh
-porter auth register
-porter project create porter-test
-```
-
-### Connecting to a Cluster
-
-In the case of local setup, you will have to connect to a cluster using the CLI command `porter connect kubeconfig`. By default, this command will read the `current-context` that's set in your default `kubeconfig` (either by reading the `$KUBECONFIG` env variable or reading from `$HOME/.kube/config`). You can also pass a path to a kubeconfig file explicitly (see below).
-
-The Porter CLI will attempt to generate a working kubeconfig for many types of cluster configurations and auth mechanisms, even though the necessary commands and/or certificates will not be present in the Porter container. The CLI will attempt the following resolutions:
-
-1. If a kubeconfig requires cluster CA data via the `certificate-authority` field, the CA data will be automatically populated.
-2. If a kubeconfig requires client cert data via the `client-certificate` field, the certificate data will be automatically populated.
-3. If a kubeconfig requires client key data via the `client-key` field, the key data will be automatically populated.
-4. If a kubeconfig requires a custom `oidc` auth mechanism, and this mechanism requires OIDC issuer CA data via the `idp-certificate-authority` field, the CA data will be automatically populated.
-5. If a kubeconfig requires a bearer token to be read from a `token-file` field, the token data will be automatically populated.
-6. If a kubeconfig requires a custom `gcp` auth mechanism (for connecting with GKE clusters), the CLI will require a GCP `service-account` that has permissions to read from the GKE cluster. The CLI will ask the user if it can set this up automatically: if so, it will automatically detect the correct GCP project ID and will create a service account and download a key file. If the user does not wish the CLI to set this up automatically, the user will need to provide a file path to a service account key file that was downloaded from GCloud.
-
-> **Note:** AWS EKS support coming soon.
-
-#### Passing `kubeconfig` explicitly
-
-You can pass a path to a `kubeconfig` file explicitly via:
-
-```sh
-porter connect kubeconfig --kubeconfig path/to/kubeconfig
-```
-
-#### Passing a context list
-
-You can initialize Porter with a set of contexts by passing a context list to start. The contexts that Porter will be able to access are the same as `kubectl config get-contexts`. For example, if there are two contexts named `minikube` and `staging`, you could connect both of them via:
-
-```sh
-porter connect kubeconfig --context minikube --context staging
-```

+ 7 - 0
docs/deploy/addons/overview.md

@@ -0,0 +1,7 @@
+Deployments that do not fall into the three application types (i.e. web service, worker, and cron job) can be deployed as add-ons on Porter. Below is the list of add-ons that are currently supported. 
+
+If you have requests for add-ons you'd like us to support, please let us know in the #suggestions channel of our [community](https://discord.gg/mmGAw5nNjr).
+
+- [PostgresDB](doc:postgresdb)
+- [Redis](doc:redis)
+- [MongoDB](doc:mongodb)

+ 42 - 0
docs/deploy/addons/postgres.md

@@ -0,0 +1,42 @@
+# Deployment
+To deploy a PostgresDB instance on Porter, head to the **Community Add-ons** tab. Specify a username and password you'd like for the instance. You can optionally configure the amount of resources (i.e. CPU and RAM) assigned to the database instance.
+
+PostgresDB instances deployed on Porter have persistent volumes attached to them to prevent data loss in the case of accidents. See [Persistent Volumes](#persistent-volumes) for a guide on how to manage these volumes in your cloud provider.
+
+![Postgres](https://files.readme.io/2ddb8a2-Screen_Shot_2021-03-18_at_2.48.50_PM.png "Screen Shot 2021-03-18 at 2.48.50 PM.png")
+
+# Connecting to the Database
+
+PostgresDB on Porter is by default only exposed to internal traffic - only applications and add-ons that are deployed in the same Kubernetes cluster can connect to the database. The DNS name for the instance can be found on the deployment view as shown below. Note that Postgres listens on port 5432 by default.
+
+![Internal URI](https://files.readme.io/857e0ed-Screen_Shot_2021-03-18_at_2.58.57_PM.png "Screen Shot 2021-03-18 at 2.58.57 PM.png")
+
+Note that the connection URI for the PostgresDB instance follows this format: 
+
+```
+postgres://${USERNAME}:${PASSWORD}@${DNS_NAME}:5432/${DATABASE_NAME}
+```
+
+For the example above, the connection string would be:
+
+```
+postgres://postgres@force-double-snake-postgresql.default.svc.cluster.local:5432/postgres
+```
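As a sketch, you could assemble this URI in a shell script from its parts. The values below are placeholders matching the example above, with a made-up password - substitute your instance's actual settings:

```sh
# Placeholder values for illustration -- substitute your instance's settings.
DB_USERNAME=postgres
DB_PASSWORD=mypassword       # hypothetical; use the password you configured
DNS_NAME=force-double-snake-postgresql.default.svc.cluster.local
DATABASE_NAME=postgres

# Assemble the connection URI in the format shown above (port 5432).
CONN_URI="postgres://${DB_USERNAME}:${DB_PASSWORD}@${DNS_NAME}:5432/${DATABASE_NAME}"
echo "$CONN_URI"
```

From a pod running in the same cluster, you can then verify connectivity with `psql "$CONN_URI"`.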
+
+# Deletion
+To delete this add-on, navigate to the **Settings** tab of the deployment. Note that deleting from the Porter dashboard will not delete the persistent volumes that have been attached to your PostgresDB instance. To delete these dangling volumes, see the next section.
+
+# Persistent Volumes
+
+## AWS
+By default, Porter creates [EBS volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volumes.html) of type **gp2** (general purpose SSD) that are attached to the database. To view the volumes attached to your cluster, navigate to the **EC2 > Volumes** tab in your AWS console.
+
+> ❗️
+> 
+> The unnamed 100GB volumes are attached to your EKS cluster itself. Do not delete them - doing so will make your cluster non-functional.
+
+![AWS Volumes](https://files.readme.io/c9b77c7-Screen_Shot_2021-03-18_at_3.11.11_PM.png "Screen Shot 2021-03-18 at 3.11.11 PM.png")
+
+Click on the volume and navigate to the **Tags** tab to see which deployment the volume belongs to. You can modify, delete, and make a snapshot of this volume from the AWS console.
+
+![AWS DB Volume](https://files.readme.io/d2b93d2-Screen_Shot_2021-03-18_at_3.17.19_PM.png "Screen Shot 2021-03-18 at 3.17.19 PM.png")

+ 26 - 0
docs/deploy/applications/deploying-from-docker-registry.md

@@ -0,0 +1,26 @@
+Porter lets you deploy a service from a public or private Docker image registry. You can update your service after it has been deployed by triggering a generated webhook or by manually redeploying with a specific image tag.
+
+> 📘 Prerequisites
+>
+> - A public Docker image or private container registry linked to Porter. See how to [link a registry to Porter](https://docs.getporter.dev/docs/linking-an-existing-docker-container-registry)
+> - A Kubernetes cluster connected to Porter (linked by default if you provisioned through Porter). 
+>
+> **Note:** If you didn't provision through Porter, you can still [link an existing cluster](). 
+
+Let's get started!
+
+1. On the Porter dashboard, navigate to the **Launch** tab in the sidebar and select **Web Service** -> **Launch Template**.
+
+2. Select the **Docker Registry** option. If you have not linked a registry, you can do so from the **Integrations** tab ([learn more](https://docs.getporter.dev/docs/linking-an-existing-docker-container-registry)). 
+
+3. Indicate the image repo and image tag you would like to use.
+
+![Image repo selection](https://files.readme.io/9d796f4-Screen_Shot_2021-03-18_at_11.26.45_AM.png "Screen Shot 2021-03-18 at 11.26.45 AM.png")
+
+4. Further down under **Additional Settings**, you can configure remaining options like your service's port and computing resources. Once you're ready, click the **Deploy** button to launch. You will be redirected to the cluster dashboard where you should see your newly deployed service.
+
+5. To programmatically redeploy your service (for instance, from a CI pipeline), you will need to call your service's custom webhook. You can find your webhook by expanding your deployed service and going to the **Settings** tab.
+
+![Webhook](https://files.readme.io/23e217a-Screen_Shot_2021-03-18_at_11.29.16_AM.png "Screen Shot 2021-03-18 at 11.29.16 AM.png")
+
+Make sure to replace the `YOUR_COMMIT_HASH` and `IMAGE_REPOSITORY_URL` fields in the generated webhook.
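For instance, a CI step might substitute the current commit hash into the copied webhook before calling it. The URL below is a stand-in, not Porter's actual webhook format - copy the real one from the **Settings** tab:

```sh
# Stand-in webhook URL for illustration; use the one generated for your service.
WEBHOOK="https://example.com/api/webhooks/deploy?commit=YOUR_COMMIT_HASH"
COMMIT_HASH="abc1234"   # in CI, this would be e.g. $(git rev-parse HEAD)

# Substitute the placeholder with the real commit hash.
RESOLVED=$(printf '%s' "$WEBHOOK" | sed "s/YOUR_COMMIT_HASH/$COMMIT_HASH/")
echo "$RESOLVED"
# curl -X POST "$RESOLVED"   # uncomment to actually trigger the redeploy
```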

+ 35 - 0
docs/deploy/applications/deploying-from-git-repo.md

@@ -0,0 +1,35 @@
+Porter lets you deploy a service directly from a Git repository. By default, services on Porter automatically update whenever there is a push to the primary branch (usually `main` or `master`) of the connected repo.
+
+> 📘 Prerequisites
+> 
+> - A repository (public or private) hosted on GitHub
+> - A Kubernetes cluster and container registry linked to Porter (linked by default if you provisioned through Porter). **Note:** If you didn't provision through Porter, consult the docs to link an existing cluster or registry. 
+> - (Optional) A Dockerfile that generates the Docker image you would like to run on Porter
+
+Let's get started!
+
+1. On the Porter dashboard, navigate to the **Launch** tab in the sidebar and select **Web Service** -> **Launch Template**.
+
+2. Select the **Git Repository** option. If you have not linked your GitHub account, click "log in with GitHub" to authorize Porter to access your repositories.
+
+> 📘
+> 
+> Porter will set up CI/CD with [GitHub Actions](https://github.com/features/actions) to automatically build and deploy new versions of your code. You can learn more about how Porter uses GitHub Actions [here](https://docs.getporter.dev/docs/auto-deploy-requirements#cicd-with-github-actions).
+
+![Github Actions](https://files.readme.io/0660e91-Screen_Shot_2021-03-17_at_7.20.44_PM.png "Screen Shot 2021-03-17 at 7.20.44 PM.png")
+
+3. After returning to the **Launch** tab, you will be prompted to select a repository and source folder. Select the root folder of your service (this is usually where you run a start command like `npm start` or `python -m flask run`) and click **Continue**. If you have an existing Dockerfile, you can select it directly instead of using a folder. 
+
+> 📘
+> 
+> If you specify a folder in your repo to use as source, Porter will autodetect the language runtime and build your application using Cloud Native Buildpacks. For more details refer to our guide on [requirements for auto build](https://docs.getporter.dev/docs/auto-deploy-requirements).
+
+4. Further down under **Additional Settings**, you can configure remaining options like your service's port and computing resources. Once you're ready, click the **Deploy** button to launch. You will be redirected to the cluster dashboard where you should see your newly deployed service.
+
+![Deployed service](https://files.readme.io/4f731ca-Screen_Shot_2021-03-17_at_7.53.40_PM.png "Screen Shot 2021-03-17 at 7.53.40 PM.png")
+
+5. The first time your service is built, your deployment will use a placeholder Docker image until the GitHub Action has completed. You can monitor the status of the generated GitHub Action by checking the **Actions** tab in your linked repository.
+
+![Actions tab](https://files.readme.io/ffe7b14-d1046ba-Screen_Shot_2021-02-26_at_11.33.55_AM.png "Screen_Shot_2021-02-26_at_11.33.55_AM.png")
+
+After the GitHub Action has finished running, you can refresh the Porter dashboard. The new version of your service should have been successfully deployed.

+ 17 - 0
docs/deploy/applications/overview.md

@@ -0,0 +1,17 @@
+There are three types of applications you can deploy on Porter: web services, workers, and cron jobs. Below is an overview of each application type as well as the use cases each one is best suited for.
+
+# Web Service
+
+Web services are processes that are constantly running and are exposed to either external or internal traffic. This includes any servers or web applications - most of your deployments should fall into this category.
+
+You can choose to expose your application to external traffic on a custom domain - Porter will automatically [secure your endpoints with SSL certificates](https://dash.readme.com/project/porter-dev/v1.0/docs/https-and-custom-domains). Alternatively, you can expose your web service to only internal traffic (i.e. accessible only by other deployments in the same cluster). 
+
+# Worker
+
+Worker processes are constantly running processes that are exposed to neither external nor internal traffic. Workers have no URLs or ports - they're best suited for background processes, queuing systems, etc.
+
+# Cron Job
+
+Jobs are one-off processes that run to completion. They're best suited for ephemeral tasks such as database migrations or cleanup scripts.
+
+Cron jobs run periodically on a schedule specified as a cron expression. Please see [this article](https://en.wikipedia.org/wiki/Cron#Overview) for a quick guide on cron expressions. To create cron expressions more easily, try [this online editor](https://crontab.guru/) to generate cron schedule expressions.
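For reference, a cron expression consists of five fields. For example, `0 2 * * 1` runs every Monday at 2:00 AM:

```
┌───────── minute (0-59)
│ ┌─────── hour (0-23)
│ │ ┌───── day of month (1-31)
│ │ │ ┌─── month (1-12)
│ │ │ │ ┌─ day of week (0-6, Sunday = 0)
│ │ │ │ │
0 2 * * 1
```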

+ 25 - 0
docs/developing/setup.md

@@ -0,0 +1,25 @@
+# Getting Started
+
+After forking and cloning the repo, you should create two `.env` files in the repo. 
+
+First, in `/dashboard/.env`:
+
+```
+NODE_ENV=development
+API_SERVER=localhost:8080
+```
+
+Next, in `/docker/.env`:
+
+```
+SERVER_URL=http://localhost:8080
+SERVER_PORT=8080
+DB_HOST=postgres
+DB_PORT=5432
+DB_USER=porter
+DB_PASS=porter
+DB_NAME=porter
+SQL_LITE=false
+```
+
+Once you've done this, go to the root of the repository, and run `docker-compose -f docker-compose.dev.yaml up`. You should see postgres, webpack, and porter containers spin up. When the webpack and porter containers have finished compiling and have spun up successfully (this will take 5-10 minutes after the containers start), you can navigate to `localhost:8080` and you should be greeted with the "Log In" screen. At this point, you can make a change to any `.go` file to trigger a backend rebuild, and any file in `/dashboard/src` to trigger a hot reload.

+ 213 - 0
docs/getting-started/aws.md

@@ -0,0 +1,213 @@
+# Quick Installation
+Porter runs on a Kubernetes cluster in your own AWS account. You can provision a cluster through Porter by inputting the credentials of your AWS IAM account. You can also delete all resources provisioned by Porter with one click.
+
+> 🚧
+> 
+> Quick Installation uses **AdministratorAccess** permissions to set up Porter. You can optionally specify the minimum IAM policies for provisioning a cluster and registry.
+
+<br />
+
+1. To create a new user, go to your <a href="https://console.aws.amazon.com" target="_blank">AWS console</a> and navigate to **IAM** -> **Users** and select **Add user**:
+
+![AWS add user](https://files.readme.io/5b8d083-Screen_Shot_2020-12-30_at_1.01.49_PM.png "Screen Shot 2020-12-30 at 1.01.49 PM.png")
+
+<br />
+
+2. Give your user a name and select **Programmatic access**. After selecting **Next**, you will be prompted to set permissions for your user, choose **Attach existing policies directly** and select the **AdministratorAccess** policy:
+
+![AdministratorAccess policy attachment](https://files.readme.io/6dfb722-Screen_Shot_2020-12-30_at_1.08.07_PM.png "Screen Shot 2020-12-30 at 1.08.07 PM.png")
+
+Optionally, if you don't want to grant Porter **AdministratorAccess**, you can follow these additional steps to configure the minimum required policy **(otherwise, skip to step 3).**
+
+To instead specify the minimum required policy, select **Attach existing policies directly**, and click on **Create Policy**. 
+
+![Minimum required policy attachment](https://files.readme.io/a1901d1-Screen_Shot_2021-02-16_at_4.55.06_PM.png "Screen Shot 2021-02-16 at 4.55.06 PM.png")
+
+You will be prompted to enter your custom policy. Click on the **JSON** tab.
+
+![Custom policy JSON](https://files.readme.io/c9b4d96-Screen_Shot_2021-02-16_at_5.00.00_PM.png "Screen Shot 2021-02-16 at 5.00.00 PM.png")
+
+Copy and paste the below JSON to the field. 
+
+```json
+{
+    "Version": "2012-10-17",
+    "Statement": [
+        {
+            "Sid": "VisualEditor0",
+            "Effect": "Allow",
+            "Action": [
+                "autoscaling:AttachInstances",
+                "autoscaling:CreateAutoScalingGroup",
+                "autoscaling:CreateLaunchConfiguration",
+                "autoscaling:CreateOrUpdateTags",
+                "autoscaling:DeleteAutoScalingGroup",
+                "autoscaling:DeleteLaunchConfiguration",
+                "autoscaling:DeleteTags",
+                "autoscaling:Describe*",
+                "autoscaling:DetachInstances",
+                "autoscaling:SetDesiredCapacity",
+                "autoscaling:UpdateAutoScalingGroup",
+                "autoscaling:SuspendProcesses",
+                "ec2:AllocateAddress",
+                "ec2:AssignPrivateIpAddresses",
+                "ec2:Associate*",
+                "ec2:AttachInternetGateway",
+                "ec2:AttachNetworkInterface",
+                "ec2:AuthorizeSecurityGroupEgress",
+                "ec2:AuthorizeSecurityGroupIngress",
+                "ec2:CreateDefaultSubnet",
+                "ec2:CreateDhcpOptions",
+                "ec2:CreateEgressOnlyInternetGateway",
+                "ec2:CreateInternetGateway",
+                "ec2:CreateNatGateway",
+                "ec2:CreateNetworkInterface",
+                "ec2:CreateRoute",
+                "ec2:CreateRouteTable",
+                "ec2:CreateSecurityGroup",
+                "ec2:CreateSubnet",
+                "ec2:CreateTags",
+                "ec2:CreateVolume",
+                "ec2:CreateVpc",
+                "ec2:CreateVpcEndpoint",
+                "ec2:DeleteDhcpOptions",
+                "ec2:DeleteEgressOnlyInternetGateway",
+                "ec2:DeleteInternetGateway",
+                "ec2:DeleteNatGateway",
+                "ec2:DeleteNetworkInterface",
+                "ec2:DeleteRoute",
+                "ec2:DeleteRouteTable",
+                "ec2:DeleteSecurityGroup",
+                "ec2:DeleteSubnet",
+                "ec2:DeleteTags",
+                "ec2:DeleteVolume",
+                "ec2:DeleteVpc",
+                "ec2:DeleteVpnGateway",
+                "ec2:Describe*",
+                "ec2:DetachInternetGateway",
+                "ec2:DetachNetworkInterface",
+                "ec2:DetachVolume",
+                "ec2:Disassociate*",
+                "ec2:ModifySubnetAttribute",
+                "ec2:ModifyVpcAttribute",
+                "ec2:ModifyVpcEndpoint",
+                "ec2:ReleaseAddress",
+                "ec2:RevokeSecurityGroupEgress",
+                "ec2:RevokeSecurityGroupIngress",
+                "ec2:UpdateSecurityGroupRuleDescriptionsEgress",
+                "ec2:UpdateSecurityGroupRuleDescriptionsIngress",
+                "ec2:CreateLaunchTemplate",
+                "ec2:CreateLaunchTemplateVersion",
+                "ec2:DeleteLaunchTemplate",
+                "ec2:DeleteLaunchTemplateVersions",
+                "ec2:DescribeLaunchTemplates",
+                "ec2:DescribeLaunchTemplateVersions",
+                "ec2:GetLaunchTemplateData",
+                "ec2:ModifyLaunchTemplate",
+                "ec2:RunInstances",
+                "eks:CreateCluster",
+                "eks:DeleteCluster",
+                "eks:DescribeCluster",
+                "eks:ListClusters",
+                "eks:UpdateClusterConfig",
+                "eks:UpdateClusterVersion",
+                "eks:DescribeUpdate",
+                "eks:TagResource",
+                "eks:UntagResource",
+                "eks:ListTagsForResource",
+                "eks:CreateFargateProfile",
+                "eks:DeleteFargateProfile",
+                "eks:DescribeFargateProfile",
+                "eks:ListFargateProfiles",
+                "eks:CreateNodegroup",
+                "eks:DeleteNodegroup",
+                "eks:DescribeNodegroup",
+                "eks:ListNodegroups",
+                "eks:UpdateNodegroupConfig",
+                "eks:UpdateNodegroupVersion",
+                "iam:AddRoleToInstanceProfile",
+                "iam:AttachRolePolicy",
+                "iam:CreateInstanceProfile",
+                "iam:CreateOpenIDConnectProvider",
+                "iam:CreateServiceLinkedRole",
+                "iam:CreatePolicy",
+                "iam:CreatePolicyVersion",
+                "iam:CreateRole",
+                "iam:DeleteInstanceProfile",
+                "iam:DeleteOpenIDConnectProvider",
+                "iam:DeletePolicy",
+                "iam:DeleteRole",
+                "iam:DeleteRolePolicy",
+                "iam:DeleteServiceLinkedRole",
+                "iam:DetachRolePolicy",
+                "iam:GetInstanceProfile",
+                "iam:GetOpenIDConnectProvider",
+                "iam:GetPolicy",
+                "iam:GetPolicyVersion",
+                "iam:GetRole",
+                "iam:GetRolePolicy",
+                "iam:List*",
+                "iam:PassRole",
+                "iam:PutRolePolicy",
+                "iam:RemoveRoleFromInstanceProfile",
+                "iam:TagRole",
+                "iam:UntagRole",
+                "iam:UpdateAssumeRolePolicy",
+                "logs:CreateLogGroup",
+                "logs:DescribeLogGroups",
+                "logs:DeleteLogGroup",
+                "logs:ListTagsLogGroup",
+                "logs:PutRetentionPolicy",
+                "kms:CreateGrant",
+                "kms:CreateKey",
+                "kms:DescribeKey",
+                "kms:GetKeyPolicy",
+                "kms:GetKeyRotationStatus",
+                "kms:ListResourceTags",
+                "kms:ScheduleKeyDeletion"
+            ],
+            "Resource": "*"
+        }
+    ]
+}
+```
+
+Give the policy a name and click **Create policy** to finish creating the custom policy.
+
+Once you've created the custom policy, attach this policy to your IAM user along with the `AmazonEC2ContainerRegistryFullAccess` policy. Permission policies for your IAM user should look like the image below. In this example, the custom policy has been named **porter-minimum-permissions**.
+
+![Minimum permissions](https://files.readme.io/cca043e-Screen_Shot_2021-02-16_at_5.05.24_PM.png "Screen Shot 2021-02-16 at 5.05.24 PM.png")
+
+<br />
+
+3. After creating the user, you will be shown an **Access key ID** and **Secret access key** for your new user. Copy both of these directly into Porter's AWS Credentials form along with your preferred AWS region:
+
+> 📘
+>
+> You can find your default AWS region by navigating to [console.aws.amazon.com](https://console.aws.amazon.com). After being automatically redirected, your region will appear at the start of the URL.
+
+![New project AWS](https://files.readme.io/02c9537-Screen_Shot_2020-12-30_at_2.03.38_PM.png "Screen Shot 2020-12-30 at 2.03.38 PM.png")
+
+<br />
+
+After clicking **Create Project** from Porter, installation will begin automatically.
+
+# Deleting Provisioned Resources
+
+> 🚧 AWS Deletion Instability
+> 
+> Deleting resources on AWS via Porter may result in dangling resources. After clicking delete, please make sure to check your AWS console to see if all resources have properly been removed. You can remove any dangling resources via either the AWS console or the CLI.
+
+Because it is difficult to keep track of all the resources created by Porter, we recommend that you delete all provisioned resources through Porter. This will ensure that you do not get charged on AWS for lingering resources.
+
+To delete resources, click on **Cluster Settings** from the **Cluster Dashboard**.
+
+![Delete cluster](https://files.readme.io/c1ed31a-Screen_Shot_2021-01-09_at_2.59.49_PM.png "Screen Shot 2021-01-09 at 2.59.49 PM.png")
+
+Click **Delete Cluster** to remove the cluster from Porter and delete resources in your AWS console. It may take up to 30 minutes for these resources to be deleted from your AWS console. 
+
+**Note that you can only delete cluster resources that have been provisioned via Porter.** 
+
+![Delete cluster confirmation](https://files.readme.io/a7b36fc-Screen_Shot_2021-01-09_at_3.02.07_PM.png "Screen Shot 2021-01-09 at 3.02.07 PM.png")
+
+For a guide on how to delete the dangling resources, see [Deleting Dangling Resources](deleting-dangling-resources).

+ 12 - 0
docs/getting-started/digitalocean.md

@@ -0,0 +1,12 @@
+# Quick Installation
+Porter runs on a Kubernetes cluster in your own DigitalOcean account. DigitalOcean is by far the easiest cloud provider to get set up on. You can provision a cluster through Porter by choosing DigitalOcean on Porter, then simply logging into your DigitalOcean account.
+
+# Provisioning resources on DigitalOcean
+
+After you select DigitalOcean, you'll see the screen below. Select which resources you'd like to provision in your account. Once you click **Submit**, you'll be redirected to DigitalOcean's login page.
+
+![DigitalOcean redirect](https://files.readme.io/1722d09-Screen_Shot_2021-02-12_at_5.27.27_PM.png "Screen Shot 2021-02-12 at 5.27.27 PM.png")
+
+After you log in, you'll see a message that says resources are being provisioned. This will take 15 minutes on average. Once the resources have been provisioned, refresh the page and you'll see a cluster connected to Porter. Before you start deploying, you need to first set up HTTPS and custom domain support for the cluster to expose your applications to external traffic. 
+
+Follow the next guide to start deploying on your own domain, secured with HTTPS!

+ 84 - 0
docs/getting-started/gcp.md

@@ -0,0 +1,84 @@
+# Quick Installation
+Porter runs on a Kubernetes cluster in your own Google Cloud account. You can provision a cluster through Porter by providing the credentials of a GCP service account.
+
+> 🚧
+> 
+> Quick Installation uses **Owner** permissions to set up Porter. You can optionally specify the minimum IAM policies for provisioning both a cluster and registry.
+
+<br />
+
+# Prerequisites
+
+To use Porter on GCP, you must first enable some APIs on your project.
+
+1. Navigate to the **APIs & Services** tab of your project.
+
+![APIs and services](https://files.readme.io/210337a-Screen_Shot_2021-05-06_at_6.23.07_PM.png "Screen Shot 2021-05-06 at 6.23.07 PM.png")
+
+2. Click on the **Enable APIs and Services** button at the top. This will bring up a catalog of APIs that you can enable on GCP. Enable the following four APIs:
+- Compute Engine API
+- Kubernetes Engine API
+- Cloud Resource Manager API
+- Container Registry API
+
+It might take a few minutes for each of these APIs to be enabled. Once you can confirm that all four APIs are enabled from the **APIs & Services** tab, proceed to the next section.
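If you prefer the command line, the same four APIs can also be enabled with `gcloud services enable`. The service IDs below are assumed to be the standard GCP names for these APIs - double-check them against your console:

```sh
# Assumed standard service IDs for the four APIs listed above.
APIS="compute.googleapis.com container.googleapis.com cloudresourcemanager.googleapis.com containerregistry.googleapis.com"

for api in $APIS; do
  echo "enabling: $api"
  # gcloud services enable "$api"   # uncomment once gcloud is authenticated
done
```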
+
+# Provisioning the Resources
+
+1. First, go to your [Google Cloud console](https://console.cloud.google.com/) and navigate to **IAM & Admin** -> **Service Accounts**:
+
+![Service accounts](https://files.readme.io/f0f2b69-Screen_Shot_2021-04-15_at_6.41.26_PM.png "Screen Shot 2021-04-15 at 6.41.26 PM.png")
+
+<br />
+
+2. Select **Create Service Account**:
+
+![Create service account](https://files.readme.io/38dd34a-Screen_Shot_2021-04-15_at_6.45.42_PM.png "Screen Shot 2021-04-15 at 6.45.42 PM.png")
+
+<br />
+
+3. After naming your service account, grant the service account these four permissions: **Cloud Storage > Storage Admin**, **Compute Engine > Compute Admin**, **Kubernetes Engine > Kubernetes Engine Admin**, and **Service Accounts > Service Account User**. Select **Done** to create the service account.
+
+![Create service account confirmation](https://files.readme.io/15b1d28-Screen_Shot_2021-01-28_at_4.34.21_PM.png "Screen Shot 2021-01-28 at 4.34.21 PM.png")
+
+<br />
+
+4. Once the service account has been created, under **Actions** select **Manage keys**.
+
+![Manage keys](https://files.readme.io/b94a4ef-Screen_Shot_2021-04-15_at_6.51.25_PM.png "Screen Shot 2021-04-15 at 6.51.25 PM.png")
+
+<br />
+
+5. Select **ADD KEY** -> **Create new key** and then choose **JSON** as your key type. After creation, your JSON key will automatically be downloaded as a file.
+
+![Download JSON](https://files.readme.io/ebeb5c2-Screen_Shot_2021-04-15_at_6.56.30_PM.png "Screen Shot 2021-04-15 at 6.56.30 PM.png")
+
+<br />
+
+6. Copy the contents of your JSON key file into Porter's GCP Credentials form along with your preferred GCP region and project ID:
+
+> 📘
+> 
+> You can find your GCP project ID by navigating to [console.cloud.google.com](https://console.cloud.google.com). After being automatically redirected, your project ID will appear at the end of the URL as well as under **Project Info** on the dashboard.
+
+![Project ID location](https://files.readme.io/8a89fea-Screen_Shot_2021-01-25_at_4.53.00_PM.png "Screen Shot 2021-01-25 at 4.53.00 PM.png")
+
+After clicking **Submit** from Porter, installation will begin automatically.
+
+# Deleting Provisioned Resources
+
+> 🚧 GCP Deletion Instability
+> 
+> Deleting resources on GCP via Porter may result in dangling resources. After clicking delete, please make sure to check your GCP console to see if all resources have properly been removed. You can remove any dangling resources via either the GCP console or the gcloud CLI.
+
+We recommend deleting all provisioned resources through Porter and then confirming in the GCP console that they have been removed. This ensures that you do not get charged on GCP for lingering resources.
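+
+If resources do linger, a few `gcloud` commands can help you find and remove them from the command line. The cluster and address names below are hypothetical - substitute the names and region from your own project:
+
+```sh
+# List any GKE clusters and reserved static IP addresses still present in the project
+gcloud container clusters list
+gcloud compute addresses list
+
+# Delete a lingering cluster and release a reserved static IP (hypothetical names)
+gcloud container clusters delete my-porter-cluster --region us-central1
+gcloud compute addresses delete my-porter-cluster-lb-ip --region us-central1
+```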
+
+To delete resources, click on **Cluster Settings** from the **Cluster Dashboard**.
+
+![Cluster settings](https://files.readme.io/c1ed31a-Screen_Shot_2021-01-09_at_2.59.49_PM.png "Screen Shot 2021-01-09 at 2.59.49 PM.png")
+
+Click **Delete Cluster** to remove the cluster from Porter and delete the underlying resources in GCP. It may take up to 30 minutes for these resources to be fully removed from your GCP console.
+
+**Note that you can only delete cluster resources that have been provisioned via Porter from the dashboard.** 
+
+![Cluster settings delete modal](https://files.readme.io/a7b36fc-Screen_Shot_2021-01-09_at_3.02.07_PM.png "Screen Shot 2021-01-09 at 3.02.07 PM.png")

+ 27 - 0
docs/guides/connecting-to-cloud-sql.md

@@ -0,0 +1,27 @@
+Porter supports connecting to a Google Cloud SQL database using the [Cloud SQL Auth proxy](https://cloud.google.com/sql/docs/mysql/sql-proxy). This connection method provides Google Cloud users with strong encryption and IAM-based authentication when accessing a MySQL, PostgreSQL, or SQL Server instance hosted on Cloud SQL.
+
+If you don't already have a Cloud SQL instance, please refer to the official docs for [creating a Cloud SQL instance](https://cloud.google.com/sql/docs/mysql/create-instance). 
+
+> 📘
+>
+> This guide will demonstrate how to securely connect to a PostgreSQL instance hosted on Cloud SQL. That said, the steps for connecting to MySQL or a generic SQL Server on Cloud SQL are virtually identical.
+
+1. First, navigate to the **Launch** tab from the Porter dashboard and choose to create either a **Web Service** or **Worker** (depending on whether you would like to expose your service to external traffic).
+
+2. After naming your service and configuring any desired application settings, navigate to the **Advanced** tab under **Additional Settings** and select **Enable Google Cloud SQL Proxy**:
+
+![Cloud SQL proxy](https://files.readme.io/5e3c9b7-Screen_Shot_2021-04-19_at_10.23.18_PM.png "Screen Shot 2021-04-19 at 10.23.18 PM.png")
+
+3. You will be prompted for an **Instance Connection Name**, **Database Port**, and **Service Account JSON**. First, go to your [Cloud SQL dashboard](https://console.cloud.google.com/sql/instances) and copy your database's **Instance Connection Name** into Porter:
+
+![Instance connection name](https://files.readme.io/ca9a00f-Screen_Shot_2021-04-19_at_10.38.36_PM.png "Screen Shot 2021-04-19 at 10.38.36 PM.png")
+
+4. Next, specify your database's port on the Porter dashboard. The defaults are 5432 for PostgreSQL, 3306 for MySQL, and 1433 for SQL Server.
+
+5. Finally, copy the raw JSON of your Cloud SQL Service Account into the **Service Account JSON** field. If you don't already have a Cloud SQL Service Account, you should [create a Service Account with Cloud SQL access permissions](https://cloud.google.com/sql/docs/mysql/connect-admin-proxy#create-service-account):
+
+![Service Account JSON](https://files.readme.io/b22baf5-Screen_Shot_2021-04-19_at_10.41.48_PM.png "Screen Shot 2021-04-19 at 10.41.48 PM.png")
+
+6. After deploying your template, your service should be able to connect to your Cloud SQL database via `localhost`.
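+
+For example, a PostgreSQL service could read a connection string such as the following from an environment variable (the username, password, and database name here are placeholders; the port must match the **Database Port** you entered above):
+
+```sh
+# Hypothetical connection string - substitute your own credentials and database name
+DATABASE_URL=postgres://myuser:mypassword@localhost:5432/mydb
+```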
+
+If you would like to learn more about connecting to Cloud SQL via the Auth proxy, please refer to the [official Google Cloud guide](https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine) for additional information.

+ 31 - 0
docs/guides/deleting-dangling-resources.md

@@ -0,0 +1,31 @@
+When you delete your project or clusters, Porter automatically destroys your resources so you don't get charged for anything unused. Sometimes this automatic destruction completes only partially and leaves dangling resources behind. This guide provides specific steps for deleting these resources on each cloud provider.
+
+# AWS
+
+## Deleting VPC
+
+Navigate to the **VPC** section in your AWS console to see the VPCs that are currently in use. Select the VPC that belongs to the cluster you've provisioned, then click **Actions > Delete VPC**.
+
+![Delete VPC](https://files.readme.io/a33b774-Screen_Shot_2021-03-26_at_4.05.16_PM.png "Screen Shot 2021-03-26 at 4.05.16 PM.png")
+
+AWS might warn you that the VPC cannot be deleted due to existing NAT gateways or network interfaces that are in use. Click on the text **NAT Gateways** to view the NAT gateway in use. Once you delete the NAT gateway, you'll be able to delete the VPC.
+
+![NAT gateway](https://files.readme.io/61a972f-Screen_Shot_2021-03-26_at_4.09.51_PM.png "Screen Shot 2021-03-26 at 4.09.51 PM.png")
+
+## Deleting Elastic IP
+
+Head to the **Elastic IP addresses** section in the VPC tab. Select the EIP of the cluster you have deleted and click **Actions > Release Elastic IP Addresses**.
+
+## Deleting EKS 
+
+Head to the **EKS** tab, select the cluster, and click **Delete**.
+
+![EKS](https://files.readme.io/0d4635a-Screen_Shot_2021-03-26_at_4.16.01_PM.png "Screen Shot 2021-03-26 at 4.16.01 PM.png")
+
+## Deleting Nodes
+
+Sometimes, even after the EKS cluster has been deleted, the **EC2 instances** comprising the cluster do not get deleted properly. Navigate to **EC2 > Autoscaling Groups** and ensure that any autoscaling group attached to your cluster has been deleted. **If this autoscaling group persists, AWS will continue to spin your nodes back up even after manual deletion.**
+
+Once you've deleted the **Autoscaling Group**, head to **EC2 > Instances** and delete all the nodes. 
+
+![Autoscaling group](https://files.readme.io/fad5baf-Screen_Shot_2021-03-26_at_4.24.08_PM.png "Screen Shot 2021-03-26 at 4.24.08 PM.png")
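+
+If you prefer the command line, the same cleanup can be sketched with the AWS CLI (the group name and instance ID below are hypothetical):
+
+```sh
+# Delete the autoscaling group first so terminated nodes are not recreated (hypothetical name)
+aws autoscaling delete-auto-scaling-group --auto-scaling-group-name my-cluster-asg --force-delete
+
+# Then terminate any remaining worker nodes (hypothetical instance ID)
+aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
+```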

+ 143 - 0
docs/guides/https-and-custom-domains.md

@@ -0,0 +1,143 @@
+Porter secures application endpoints with HTTPS and sets up custom domains using [cert-manager](https://cert-manager.io/) and [Let's Encrypt](https://letsencrypt.org/). Below are the steps to set up custom domains on each cloud provider.
+
+# Amazon Web Services (AWS)
+
+Porter provisions an EKS cluster and an ECR registry in your AWS account by default. Along with these resources, it also deploys both the `nginx-ingress` controller and cert-manager on the provisioned cluster - there is no need to install these components separately.
+
+## Setting up HTTPS Issuer
+
+1. Navigate to the **Templates** tab and select the HTTPS issuer.
+
+![HTTPS Issuer Template](https://files.readme.io/35f8f69-Screen_Shot_2021-01-18_at_6.22.17_PM.png "Screen Shot 2021-01-18 at 6.22.17 PM.png")
+
+2. From the **Launch Template** view, select the `cert-manager` namespace. Enter the email you'd like to be contacted for HTTPS certificate related notifications, then hit **Deploy**. Now the cluster is ready to issue certificates for your endpoints. 
+
+![Deploy HTTPS Issuer](https://files.readme.io/b733753-Screen_Shot_2021-01-18_at_7.14.26_PM.png "Screen Shot 2021-01-18 at 7.14.26 PM.png")
+
+Follow the next section to start deploying with HTTPS and custom domains.
+
+## Managing DNS
+
+Before you can secure Docker containers with HTTPS, you need to first set up the appropriate DNS records with your DNS provider. When Porter creates a Kubernetes cluster on AWS, it also creates a load balancer. We will create either a CNAME or an ALIAS record that points to the DNS name of that load balancer.
+
+### Using Route 53
+
+To set up HTTPS via Porter on a **domain apex** (i.e. a root domain that is not a subdomain, e.g. `getporter.dev`), we recommend using Route 53 to manage DNS because it supports ALIAS records. Load balancers on AWS are not assigned a static IP, which means your DNS record must point to a DNS name rather than an IP address. Route 53's ALIAS records let you create an A record that points to another domain instead of an IP address. Some other DNS providers also support this feature, so please check with your DNS provider first.
+
+If you've purchased your domain through another service like GoDaddy or Namecheap, you can still manage your DNS with Route 53 by simply changing the nameservers of your purchased domain. Please follow [this guide](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/MigratingDNS.html) to manage your existing domains with Route 53.
+
+> 📘 ALIAS records are not necessary for subdomains
+> 
+> It is not necessary to use Route 53 or any DNS provider that supports ALIAS records if you only want to host subdomains on Porter. ALIAS records are only necessary for the domain apex.
+
+### Set up DNS
+
+1. You must first find the DNS name assigned to your cluster's load balancer. Navigate to the **EC2** page in your AWS console, then select **Load Balancing > Load Balancers** from the sidebar. Click on the load balancer to view its DNS name. Check the **Instances** tab to ensure the load balancer is assigned to the correct Kubernetes cluster.
+
+![Load balancer instance](https://files.readme.io/21a5c96-Screen_Shot_2021-02-16_at_11.09.20_AM.png "Screen Shot 2021-02-16 at 11.09.20 AM.png")
+
+2. Set up a DNS record that points to the DNS name copied above. If you are setting up a subdomain, follow step 3; if you're setting up a domain apex (i.e. root domain), follow step 4. This tutorial uses Amazon's Route 53 as the example DNS provider.
+
+3. (**For subdomains**) Click on **Define simple record** and create a CNAME record that points a subdomain to the DNS name you copied in step 1. Make sure you exclude the protocol `http://` and any trailing `/` from the string.
+
+![CNAME record](https://files.readme.io/88b8b8a-Screen_Shot_2021-01-18_at_6.53.16_PM.png "Screen Shot 2021-01-18 at 6.53.16 PM.png")
+
+4. (**For the domain apex - the root domain that is not a subdomain**) Leave the record name empty and select the **Alias to Network Load Balancer** option. After you choose the region your EKS cluster is provisioned in, you will be able to select the DNS name you copied in step 1 from the dropdown menu. Set the record type to **A record** and create the record.
+
+![A Record](https://files.readme.io/bdbd78d-Screen_Shot_2021-01-18_at_6.56.04_PM.png "Screen Shot 2021-01-18 at 6.56.04 PM.png")
+
+> 🚧 It may take up to 30 minutes for DNS records to propagate
+> 
+> After you complete the previous steps, it might take up to 30 minutes for DNS records to fully propagate. Please wait until DNS propagation is complete before deploying your applications. You can check the status using tools like [dnschecker.org](https://dnschecker.org).
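+
+You can also check propagation from your terminal, for example with `dig` (the subdomain below is hypothetical):
+
+```sh
+# Should print the load balancer's DNS name once the CNAME has propagated
+dig +short CNAME app.example.com
+```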
+
+5. Almost there! Now return to the Porter dashboard and deploy the Docker template with a custom domain. Click on the **Configure Custom Domain** option, enter the **Domain Name** you made the DNS record for, and hit **Deploy**. It may take a few minutes for the certificate to be approved. Once it has been approved, you will see your application running on the custom domain, secured with HTTPS.
+
+![Custom domain deployment](https://files.readme.io/e3fcb37-Screen_Shot_2021-02-16_at_11.16.20_AM.png "Screen Shot 2021-02-16 at 11.16.20 AM.png")
+
+6. **Optional.** To point the `www` subdomain to the deployed container along with the domain apex, you need to create a CNAME record for the `www` subdomain just like you did in step 3, then configure the Ingress of the deployed container to accept both the root domain and the `www` subdomain.
+
+To do this, toggle **DevOps Mode** on your deployed container and select the **Raw Values** tab. Add the `www` subdomain to the `ingress.hosts` field as shown below, then hit **Deploy**. Again, it may take up to 15 minutes for the change to be reflected.
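+
+For reference, the relevant portion of the raw values might look like the following sketch (`example.com` is a placeholder; the surrounding structure may differ slightly between chart versions):
+
+```yaml
+ingress:
+  hosts:
+    - example.com
+    - www.example.com
+```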
+
+![WWW subdomain](https://files.readme.io/b76c57d-Screen_Shot_2021-01-18_at_7.09.37_PM.png "Screen Shot 2021-02-16 at 11.16.20 AM.png")
+
+# DigitalOcean
+
+DigitalOcean Kubernetes clusters automatically assign a load balancer with a static IP to all ingresses on the cluster. You simply have to create an A record that points to this load balancer's static IP.
+
+1. Once Porter has provisioned the cluster on DigitalOcean, you will see a load balancer created on the DigitalOcean dashboard. Copy the static IP of this load balancer.
+
+![Load balancer IP](https://files.readme.io/5270a2f-Screen_Shot_2021-01-19_at_10.03.05_AM.png "Screen Shot 2021-01-19 at 10.03.05 AM.png")
+
+2. Go to your DNS provider and create an **A record** that points your domain to the static IP copied above. It may take around 15 minutes for DNS propagation to complete. You can use the [DNS checker](https://dnschecker.org/) to view progress.
+
+3. Once DNS propagation is complete, deploy the **HTTPS Issuer** template to the `cert-manager` namespace from the Porter Dashboard. Enter the email at which you'd like to receive updates about the issued certificate (e.g. expiry notices).
+
+![Email address](https://files.readme.io/17ef5b6-Screen_Shot_2021-01-18_at_6.22.17_PM.png "Screen Shot 2021-01-18 at 6.22.17 PM.png")
+
+4. Deploy a Docker template from the Porter dashboard with the **Configure Custom Domain** option. Type in **"digitalocean"** as your provider and your domain name without the protocol (i.e. `https://` or `http://`), then hit **Deploy**.
+
+![DigitalOcean HTTPS provider](https://files.readme.io/4086a19-Screen_Shot_2021-01-19_at_10.08.06_AM.png "Screen Shot 2021-01-19 at 10.08.06 AM.png")
+
+5. **Optional.** To point the `www` subdomain to the deployed container along with the domain apex, you need to create an A record for the `www` subdomain just like you did in step 2, then configure the Ingress of the deployed container to accept both the root domain and the `www` subdomain.
+
+To do this, toggle **DevOps Mode** on your deployed container and select the **Raw Values** tab. Add the `www` subdomain to the `ingress.hosts` field as shown below, then hit **Deploy**. Again, it may take around 15 minutes for the change to be reflected.
+
+![WWW subdomain](https://files.readme.io/cbeb2da-Screen_Shot_2021-01-18_at_7.09.37_PM.png "Screen Shot 2021-01-18 at 7.09.37 PM.png")
+
+# Google Cloud Platform (GCP)
+
+During cluster provisioning, Porter automatically reserves a static IP and assigns it to a load balancer that forwards traffic to the nginx-ingress controller. To configure custom domains and HTTPS, you simply need to create an A record that points your domain to the static IP that has been reserved.
+
+1. Visit the **External IP addresses** section on Google Cloud Console. You'll see an IP with a name that looks like `k8s-${cluster_name}-cluster-lb`. Copy this IP address.
+
+2. Go to your DNS provider and create an **A record** that points your domain to the static IP you have copied from step 1. It may take around 15 minutes for DNS propagation to complete. You can use the [DNS checker](https://dnschecker.org/) to view progress.
+
+3. Once DNS propagation is complete, deploy the **HTTPS Issuer** template to the `cert-manager` namespace from the Porter Dashboard. Enter the email at which you'd like to receive updates about the issued certificate (e.g. expiry notices).
+
+![Email HTTPS issuer](https://files.readme.io/7f0c594-Screen_Shot_2021-05-07_at_8.18.06_PM.png "Screen Shot 2021-05-07 at 8.18.06 PM.png")
+
+4. After you've deployed the **HTTPS Issuer**, deploy a Docker template from the Porter dashboard with the **Configure Custom Domain** option. Type in **"gcp"** as your provider and your domain name without the protocol (i.e. `https://` or `http://`), then hit **Deploy**.
+
+![GCP configure custom domain](https://files.readme.io/dadeb05-Screen_Shot_2021-01-29_at_12.44.23_AM.png "Screen Shot 2021-01-29 at 12.44.23 AM.png")
+
+5. **Optional.** To point the `www` subdomain to the deployed container along with the domain apex, you need to create an A record for the `www` subdomain just like you did in step 2, then configure the Ingress of the deployed container to accept both the root domain and the `www` subdomain.
+
+To do this, toggle **DevOps Mode** on your deployed container and select the **Raw Values** tab. Add the `www` subdomain to the `ingress.hosts` field as shown below, then hit **Deploy**. Again, it may take around 15 minutes for the change to be reflected.
+
+![WWW domain configuration](https://files.readme.io/d5e286b-Screen_Shot_2021-01-18_at_7.09.37_PM.png "Screen Shot 2021-01-18 at 7.09.37 PM.png")
+
+# Wildcard Domains
+
+It is possible to set up a wildcard domain so that you don't have to keep creating DNS records every time you create a deployment. At the moment, this is only supported on Digital Ocean clusters.
+
+## DigitalOcean
+
+### Prerequisites
+
+1. From your DNS provider, point the nameservers of your domain to Digital Ocean. You can find provider specific ways to do this [here](https://www.digitalocean.com/community/tutorials/how-to-point-to-digitalocean-nameservers-from-common-domain-registrars).
+
+2. Create a personal access token on Digital Ocean. Visit this [direct link](https://cloud.digitalocean.com/account/api/tokens/new) to create a token. If this doesn't work, see this [documentation](https://docs.digitalocean.com/reference/api/create-personal-access-token/).
+
+### Setting up the Wildcard Domain
+
+1. Once the nameservers of your domain have been swapped out, [create an A record for your wildcard domain](https://docs.digitalocean.com/products/networking/dns/how-to/manage-records/#a-records). Make sure that the A record you create points at the load balancer attached to the Kubernetes cluster provisioned through Porter.
+
+2. Once DNS propagation is complete, deploy the **HTTPS Issuer** template to the `cert-manager` namespace from the Porter Dashboard. 
+
+![HTTPS issuer deployment](https://files.readme.io/231ad37-Screen_Shot_2021-05-07_at_8.18.06_PM.png "Screen Shot 2021-05-07 at 8.18.06 PM.png")
+
+3. Enter the email at which you'd like to receive updates about the issued certificate (e.g. expiry notices). Enable the wildcard domain option, paste in your personal access token, and input the wildcard domain you created the A record for in step 1. Then hit the **Deploy** button.
+
+![Deploy HTTPS issuer](https://files.readme.io/3a6e36c-Screen_Shot_2021-05-07_at_8.20.30_PM.png "Screen Shot 2021-05-07 at 8.20.30 PM.png")
+
+It might take a few minutes for the HTTPS Issuer instance to be ready. To be safe, wait 5-10 minutes before you start creating deployments that use the wildcard domain.
+
+### Using the wildcard domain
+
+1. From the **Web Service** view, click **Enable Custom Domains**. Enter the domain you'd like to expose your web service on, making sure it matches the wildcard domain you configured in the previous section. Then toggle the **Use wildcard domain** option.
+
+![Wildcard domain option](https://files.readme.io/8fbcb3f-Screen_Shot_2021-05-07_at_8.26.23_PM.png "Screen Shot 2021-05-07 at 8.26.23 PM.png")
+
+After you hit deploy, it might take a few minutes for the endpoint to be secured with HTTPS. Once that's done, you will be able to access endpoints on the domain you have specified. 
+
+With wildcard domain enabled, you can create deployments and expose them on domains without having to create another DNS record, as long as the domain matches the wildcard domain.

+ 54 - 0
docs/guides/jobs-and-cron-jobs.md

@@ -0,0 +1,54 @@
+You can create one-time jobs or cron jobs on Porter, which can be linked [from your Github repo](https://docs.getporter.dev/docs/applications) or [from an existing Docker image registry](https://docs.getporter.dev/docs/deploying-from-docker-image-registry). Cron jobs are meant to run on a schedule using a specified [cron expression](https://en.wikipedia.org/wiki/Cron#CRON_expression), while one-time jobs are meant to be triggered manually or on every push to your Github repository. Here are some use cases for each type of job:
+
+- Run one-time jobs for database migration scripts, data processing, or generally scripts that are designed to run to completion on an unpredictable schedule
+- Run cron jobs for tasks that should run on a specified schedule, such as scraping data at a specified interval, cleaning up rows in a database, taking backups of a DB, or sending batch notifications at a specified time every day
+
+# Deploying a One-Time Job
+
+To deploy a one-time job on Porter, head to the "Launch" tab and select the "Jobs" template. From this template, you can connect your source ([Github repo](https://docs.getporter.dev/docs/applications) or [Docker image registry](https://docs.getporter.dev/docs/deploying-from-docker-image-registry)), specify the job command, and add environment variables. For example, to create a job that simply prints to the console from an environment variable, we can create a job with the following configuration:
+
+![One-time job](https://files.readme.io/f566850-Screen_Shot_2021-04-16_at_2.54.35_PM.png "Screen Shot 2021-04-16 at 2.54.35 PM.png")
+
+![One-time job additional settings](https://files.readme.io/18a84d4-Screen_Shot_2021-04-16_at_2.55.12_PM.png "Screen Shot 2021-04-16 at 2.55.12 PM.png")
+
+After clicking "Deploy" and waiting for the Job to run successfully, you will be redirected to the "Jobs" tab where your jobs are listed. Click into the job you would like to view (in this case, `ubuntu-job`), and the history of runs for that job will be shown. You can click on each job to view the logs:
+
+![View job logs](https://files.readme.io/1b4f582-Screen_Shot_2021-04-16_at_3.00.11_PM.png "Screen Shot 2021-04-16 at 3.00.11 PM.png")
+
+To re-run the job, simply click the "Rerun job" button in the bottom right corner, which will re-run the job using the exact same configuration as before. You can view the configuration for this job from the "Main" tab, and you can delete the job (along with the history of all runs of the job) from the "Settings" tab. 
+
+> 📘
+>
+> **Note:** as an alternative to one-time jobs, you can also run a command using [remote execution](https://docs.getporter.dev/docs/cli-documentation#remote-execution) from the CLI. This is simpler to do, but lacks the benefit of getting the history of jobs along with logs and status for each job.
+
+## Running One-Time Jobs from Github Repositories
+
+When you set up a one-time job to deploy from a Github repository, the job will automatically be run on each push to a specific branch in the Github repository. There are cases where it is useful to run jobs on each push to your `main` branch: for example, running a schema migration script so that your data schema is always up to date. However, if you do not want the job to run frequently, you should create a branch that you push to only when you want the job to be re-run. 
+
+> 🚧
+> 
+> **Note:** we are working on a better solution for deploying jobs from a Github repository, so that the job only rebuilds when you want it to. This will be addressed in the next release.
+
+# Deploying a Cron Job
+
+Deploying a cron job follows the same pattern as deploying a one-time job, but requires a cron expression that determines the job's schedule. To create cron expressions more easily, see [this online editor](https://crontab.guru/).
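+
+As a quick reference, a cron expression consists of five fields - minute, hour, day of month, month, and day of week. A few hypothetical schedules:
+
+```
+* * * * *      # every minute
+0 3 * * *      # every day at 03:00
+*/15 * * * *   # every 15 minutes
+0 9 * * 1      # every Monday at 09:00
+```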
+
+As an example, we will once again create a job that simply prints to the console from an environment variable, but in this case we will create the job with a cron expression so that the job runs every minute:
+
+![Cron expression](https://files.readme.io/7756ab7-Screen_Shot_2021-04-16_at_3.15.06_PM.png "Screen Shot 2021-04-16 at 3.15.06 PM.png")
+
+![Cron expression additional settings](https://files.readme.io/d4c1bd7-Screen_Shot_2021-04-16_at_3.15.15_PM.png "Screen Shot 2021-04-16 at 3.15.15 PM.png")
+
+After the cron job successfully deploys, you can navigate to the "Jobs" tab and click on your deployed job (in this case, `ubuntu-cron-job`):
+
+![Cron expression list](https://files.readme.io/e7fdb91-Screen_Shot_2021-04-16_at_3.17.17_PM.png "Screen Shot 2021-04-16 at 3.17.17 PM.png")
+
+As you can see, the cron job runs every minute. By default, Porter will keep the history of the last 20 jobs run for that cron schedule. 
+
+## Running Cron Jobs from Github Repositories
+
+When you set up a cron job to deploy from a Github repository, the cron job will automatically rebuild on each push to your Github repository, so that the cron job uses the latest version of your application on each run. If you do not want the cron job to rebuild frequently, you should create a separate branch that you push to only when you want the cron job to rebuild and update. 
+
+> 🚧
+> 
+> **Note:** we are working on a better solution for deploying cron jobs from a Github repository, so that the cron job only rebuilds when you want it to. This will be addressed in an upcoming release.

+ 141 - 0
docs/guides/linking-existing-container-registry.md

@@ -0,0 +1,141 @@
+Porter supports linking a private Docker container registry to your project. This container registry is used to deploy Docker containers onto a cluster and to push new versions of your images from CI/CD workflows. We support the following container registries:
+
+- Amazon Elastic Container Registry (ECR) 
+- Google Container Registry (GCR)
+- DigitalOcean Container Registry 
+- Docker Hub 
+- Other registries which implement the [Registry HTTP API v2](https://docs.docker.com/registry/spec/api/) 
+
+The following guide will show you how to link your container registry, depending on your registry provider. Linking container registries requires the Porter CLI to be installed, so make sure that you've followed the [installation guide for the CLI](cli-documentation#installation).
+
+## Amazon Elastic Container Registry (ECR)
+
+Run the following command on the Porter CLI:
+
+```sh
+porter connect ecr
+```
+
+You will be prompted for the region your ECR instance belongs to. For example:
+
+```sh
+Please provide the AWS region where the ECR instance is located.
+AWS Region: us-east-2
+```
+
+The CLI will then ask if you'd like Porter to set up an IAM user in your AWS account automatically, or if you'd like to enter your credentials manually. If you specify yes, Porter will create a user with the `AmazonEC2ContainerRegistryFullAccess` policy, which can push/pull images from ECR. If you'd like more fine-grained access control, specify no, create an IAM user with custom permissions, generate an access key/secret for that user, and enter this information into the CLI.
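+
+As a sketch, a custom IAM policy scoped to pushing and pulling a single repository might look like the following (the region, account ID, and repository name are placeholders):
+
+```json
+{
+  "Version": "2012-10-17",
+  "Statement": [
+    {
+      "Effect": "Allow",
+      "Action": "ecr:GetAuthorizationToken",
+      "Resource": "*"
+    },
+    {
+      "Effect": "Allow",
+      "Action": [
+        "ecr:BatchCheckLayerAvailability",
+        "ecr:BatchGetImage",
+        "ecr:GetDownloadUrlForLayer",
+        "ecr:InitiateLayerUpload",
+        "ecr:UploadLayerPart",
+        "ecr:CompleteLayerUpload",
+        "ecr:PutImage"
+      ],
+      "Resource": "arn:aws:ecr:us-east-2:123456789012:repository/my-repository"
+    }
+  ]
+}
+```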
+
+Finally, the CLI will prompt you to enter a name for the registry. Here you may enter any name you like.
+
+```sh
+Give this registry a name: my-awesome-registry
+```
+
+That's it! If you navigate to the "Launch" tab in the dashboard, you should see your existing ECR images in the "Registry" section. 
+
+## Google Container Registry (GCR)
+
+In order to connect to an existing GCR instance, you must create a service account with permission to push/pull from GCR. 
+
+1. First, go to your [Google Cloud console](https://console.cloud.google.com/) and navigate to **IAM & Admin** -> **Service Accounts**:
+
+![GCP service accounts](https://files.readme.io/c93b89a-Screen_Shot_2021-02-26_at_8.34.21_AM.png "Screen Shot 2021-02-26 at 8.34.21 AM.png")
+
+2. Select **Create Service Account**:
+
+![Create SA](https://files.readme.io/8480097-Screen_Shot_2021-02-26_at_8.36.48_AM.png "Screen Shot 2021-02-26 at 8.36.48 AM.png")
+
+3. Name your service account (for example, "porter-gcr-access"), grant the service account **Storage Admin** permissions, then select **Done**:
+
+![Storage admin permissions](https://files.readme.io/3357638-Screen_Shot_2021-02-26_at_8.39.58_AM.png "Screen Shot 2021-02-26 at 8.39.58 AM.png")
+
+4. After creating the service account, you will be redirected to the list of service accounts. Find the row with the newly created service account, and select "Manage keys" in the "Actions" column:
+
+![Manage keys SA](https://files.readme.io/55283c8-Screen_Shot_2021-02-26_at_8.44.08_AM.png "Screen Shot 2021-02-26 at 8.44.08 AM.png")
+
+5. Finally, press the "Add key" dropdown and select "Create new key". After choosing the **JSON** key type, your key file will be **automatically** downloaded:
+
+![JSON key download](https://files.readme.io/21c8ec4-Screen_Shot_2021-02-26_at_8.45.48_AM.png "Screen Shot 2021-02-26 at 8.45.48 AM.png")
+
+Now that you have downloaded this key, run the following command on the Porter CLI:
+
+```sh
+porter connect gcr
+```
+
+You will be prompted for the full path to the service account key file that you just downloaded. **Enter the full path (not relative) to the key file location**:
+
+```sh
+Key file location: [PATH]
+```
+
+Finally, you will be prompted to provide the registry URL, in the form `[GCR_DOMAIN]/[GCP_PROJECT_ID]`, and a name for the registry. For most users, the GCR domain will be `gcr.io` (for more information, [click here](https://cloud.google.com/container-registry/docs/overview#registries)).
+
+```sh
+Please provide the registry URL, in the form [GCP_DOMAIN]/[GCP_PROJECT_ID]. For example, gcr.io/my-project-123456.
+Registry URL:
+Give this registry a name:
+```
+
+That's it! If you navigate to the "Launch" tab in the dashboard, you should see your existing GCR images in the "Registry" section. 
+
+## DigitalOcean Container Registry
+
+Run the following command on the Porter CLI:
+
+```sh
+porter connect docr
+```
+
+If you have not yet linked a DigitalOcean account, this command will open a browser window that allows you to link your DigitalOcean account. Authorize the DigitalOcean account that you'd like to give access to. You will then be redirected to the dashboard, at which point you can close the browser tab and go back to the CLI.
+
+The CLI will then prompt you to provide a link to the container registry, in the form `registry.digitalocean.com/[REGISTRY_NAME]`. This can be found by navigating to the "Container Registry" tab in DigitalOcean and copying the registry name:
+
+![Container registry name](https://files.readme.io/c5fc652-Screen_Shot_2021-02-26_at_9.00.08_AM.png "Screen Shot 2021-02-26 at 9.00.08 AM.png")
+
+```sh
+Please provide the registry URL, in the form registry.digitalocean.com/[REGISTRY_NAME]. For example, registry.digitalocean.com/porter-test. 
+Registry URL: registry.digitalocean.com/porter-hi
+```
+
+That's it! If you navigate to the "Launch" tab in the dashboard, you should see your existing DigitalOcean Container Registry images in the "Registry" section. 
+
+## Docker Hub
+
+In order to connect to a Docker Hub image repository, you must first generate a personal access token in the Docker Hub dashboard. Navigate to the ["Security" tab in your account settings](https://hub.docker.com/settings/security), and select "New Access Token":
+
+![Access token](https://files.readme.io/53cce0e-Screen_Shot_2021-02-26_at_9.09.26_AM.png "Screen Shot 2021-02-26 at 9.09.26 AM.png")
+
+Name this access token something like "Porter," and copy the access token to the clipboard. 
+
+> 📘
+> 
+> If you're planning on linking more than one image repository through Docker Hub, you will need to re-use this access token multiple times, so you may want to copy it to a local file on your computer and delete it when you're finished.
+
+Then type `porter connect dockerhub` into the CLI. You will first be prompted to enter the path to the Docker Hub image repository that you would like to link. The image repository path can be found by going to the "Repositories" tab in Docker Hub and copying the username/organization and repository name. For example, for an organization called "porter1" and an image repository name called "porter", the repo name would be "porter1/porter":
+
+```sh
+Provide the Docker Hub image path, in the form of ${org_name}/${repo_name}. For example, porter1/porter.
+Image path: porter1/porter
+```
+
+You should then enter your Docker Hub username and the access token you just copied. 
+
+That's it! If you navigate to the "Launch" tab in the dashboard, you should see your existing Docker Hub image repository in the "Registry" section. 
+
+> 📘 Linking Multiple Docker Hub Repositories
+> 
+> This flow only links a single Docker Hub repository at a time. If you'd like to link multiple repositories, you must run `porter connect dockerhub` for each repository.
+
+## Custom/Private Registries
+
+Other Docker container registries are supported, as long as they implement the [Registry HTTP API v2](https://docs.docker.com/registry/spec/api/) specification. To link one of these, type `porter connect registry` into the CLI. You will then be asked to input the URL of your image registry:
+
+```sh
+Provide the image registry URL (include the protocol). For example, https://my-custom-registry.getporter.dev.
+Image registry URL: https://my-custom-registry.getporter.dev
+```
+
+If your registry is public, you can simply press enter when asked to input the username/password. Otherwise, enter the username/password that you would use for `docker login`. 
+
+That's it! If you navigate to the "Launch" tab in the dashboard, you should see your existing image repositories in the "Registry" section.

+ 67 - 0
docs/guides/preserving-client-ip-addresses.md

@@ -0,0 +1,67 @@
+# AWS
+
+You will need to update your NGINX config so that client IP addresses are forwarded to the applications running on Porter.
+
+In the `ingress-nginx` application, you'll be modifying the following Helm values:
+
+```yaml
+controller:
+  config:
+    use-proxy-protocol: 'true' # <-- CHANGE
+  metrics:
+    annotations:
+      prometheus.io/port: '10254'
+      prometheus.io/scrape: 'true'
+    enabled: true
+  podAnnotations:
+    prometheus.io/port: '10254'
+    prometheus.io/scrape: 'true'
+  service:
+    annotations:
+      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'  # <-- CHANGE
+      service.beta.kubernetes.io/aws-load-balancer-type: nlb-ip
+```
+
+![AWS nginx config](https://files.readme.io/26d96cf-Screen_Shot_2021-05-11_at_10.08.32_AM.png "Screen Shot 2021-05-11 at 10.08.32 AM.png")
+
+# GCP 
+
+> 🚧 Prerequisites
+> 
+> You must have a health check endpoint for your application. This endpoint must return a `200 OK` status when it is pinged.
+
+On Porter clusters provisioned through GCP, traffic flows through a regional TCP load balancer by default. These load balancers [do not support the proxy protocol](https://kubernetes.github.io/ingress-nginx/deploy/#gce-gke) (only global TCP load balancers and regional/global HTTP(S) load balancers do), so the client IP cannot be recovered when using the default load balancer. To get client IP addresses to your applications, you must create a new load balancer with a custom IP address. This guide will show you how to do that. 
+
+1. You must first create a static global IP address in the GCP console. You can do this by navigating to [External IP addresses](https://console.cloud.google.com/networking/addresses/list) (**VPC Network > External IP Addresses**), and clicking "Reserve External Address". Name this address something like `porter-ip-address` and select "Global" for the type:
+
+![Global LB config](https://files.readme.io/5e56940-Screen_Shot_2021-05-10_at_2.25.04_PM.png "Screen Shot 2021-05-10 at 2.25.04 PM.png")
+
+Copy the created IP address to the clipboard. 
+
+2. In your DNS provider, configure a custom domain to point to that IP address, which you can do by creating an A record with your domain as the value. Check that the domain is pointing to the IP address through `nslookup <domain>`, where the address in the response should be the IP address you just created. 
+
+3. Install an HTTPS issuer on the Porter dashboard by going to **Launch > Community Addons > HTTPS Issuer**. Toggle the checkbox **Create GCE Ingress**. If you have already installed the HTTPS issuer, you will have to delete your current issuer and create a new one. 
+
+![HTTPS ingress with GCE](https://files.readme.io/a58e975-Screen_Shot_2021-05-10_at_4.12.27_PM.png "Screen Shot 2021-05-10 at 4.12.27 PM.png")
+
+4. Create the web service by going to the Porter dashboard and navigating to **Launch > Web service**. Link up your source, and then configure the following three settings:
+
+- Toggle "Configure Custom Domain" at the bottom of the "Main" tab, and add your custom domain. 
+
+- Go to the "Advanced" tab. In the "Ingress Custom Annotations" section, add the following three parameters:
+
+```yaml
+cert-manager.io/cluster-issuer: letsencrypt-prod-gce
+kubernetes.io/ingress.class: gce
+kubernetes.io/ingress.global-static-ip-name: porter-ip-address # IMPORTANT: replace this with the name of your static ip address!
+```
+
+It should look something like this:
+
+![Deployment config](https://files.readme.io/acdf9c2-Screen_Shot_2021-05-10_at_4.24.01_PM.png "Screen Shot 2021-05-10 at 4.24.01 PM.png")
+
+- Still in the "Advanced" tab, you must set up a custom health check at an application endpoint. This is by default set to `/healthz`, but you can choose whichever path you'd like. This endpoint must return a `200 OK` status when it is pinged. 
+
+![Healthz config](https://files.readme.io/9b5432a-Screen_Shot_2021-05-10_at_4.24.13_PM.png "Screen Shot 2021-05-10 at 4.24.13 PM.png")
+
+5. Click "Deploy". It will take 10-15 minutes for the load balancer to be created and the certificates to be issued.

+ 19 - 0
docs/guides/running-porter-locally.md

@@ -0,0 +1,19 @@
+While it requires a few additional steps, it is possible to run Porter locally. To start using Porter on your local machine:
+
+1. [Install our CLI](https://docs.getporter.dev/docs/cli-documentation#installation)
+
+2. Run `porter server start`. This will spin up a local Porter instance on port 8080.
+
+By default, GitHub login and deploying from a GitHub repo are disabled on the local version of Porter for security reasons. However, you can enable these features on your local instance by creating your own GitHub OAuth application:
+
+1. [Create a new GitHub OAuth App](https://docs.github.com/en/developers/apps/creating-an-oauth-app). This app should be created with `http://localhost:8080/api/oauth/github/callback` as the callback URL. 
+
+2. Copy the Client ID and Client Secret. Then add these lines to your `.bashrc` file:
+
+```sh
+export GITHUB_CLIENT_ID=YOUR_CLIENT_ID
+export GITHUB_CLIENT_SECRET=YOUR_CLIENT_SECRET
+export GITHUB_ENABLED=true
+```
+
+3. When you run `porter server start`, Porter will automatically read these variables in and enable the GitHub features.
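If you prefer not to edit `.bashrc`, the same variables can be exported for the current shell session only. A minimal sketch (the values below are placeholders for your own Client ID and secret):

```shell
# Placeholders: substitute the Client ID and secret from your GitHub OAuth app.
export GITHUB_CLIENT_ID=your-client-id
export GITHUB_CLIENT_SECRET=your-client-secret
export GITHUB_ENABLED=true

# Running `porter server start` in the same session will pick these up.
```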

+ 63 - 0
docs/guides/using-env-groups.md

@@ -0,0 +1,63 @@
+# Introduction
+
+An **environment group** is a set of environment variables that are meant to be reused across multiple applications. For example, if all web services require a shared set of API and database keys, this could form a `web-service` environment group with all of those keys as a shared configuration. In this guide, we will explain how to create and use environment groups. 
+
+> 📘
+> 
+> Environment groups are stored in **your own cluster** as a Kubernetes [Config Map](https://kubernetes.io/docs/concepts/configuration/configmap/) or a Kubernetes [Secret](https://kubernetes.io/docs/concepts/configuration/secret). The data will be visible to any users or services with access to your cluster, such as Porter.
+
+# Creating and using environment groups
+
+You can create a new environment group in the "Env Groups" tab on the dashboard. Click on "Create Env Group" from this tab:
+
+![Create env group](https://files.readme.io/07c9628-env-groups-0.png "env-groups-0.png")
+
+From this screen, you can name your env group and add your environment variables. In this example, we will simply create an environment group named `web` that will be shared across all web services that we create. When you're finished, press "Create env group". 
+
+![Create env group finished](https://files.readme.io/f795459-env-groups-1.png "env-groups-1.png")
+
+You will be redirected to the list of environment groups, and your new environment group should be listed. At this point, you can use this environment group in a deployment. From the "Launch" tab, you can select "Load from Env Group" in the "Environment" tab:
+
+![Load env group](https://files.readme.io/c909d6a-env-groups-4.png "env-groups-4.png")
+
+You can then select your environment group and click "Load Selected Env Group", which will automatically populate the environment variables that you previously set. You can modify these environment variables in this tab, for example to add variables that aren't currently in the environment group. To view all deployment options, head over to our [application deployment docs](https://docs.getporter.dev/docs/add-ons). 
+
+# 🔒 Creating secret environment variables
+
+Porter supports creating secret environment variables that will not be exposed after creation. At the moment, you must create an environment group in order to create secret environment variables. To create a secret environment variable, click on the lock icon next to the environment variable during creation of the environment variable:
+
+![Lock icon](https://files.readme.io/1d91810-env-groups-5.png "env-groups-5.png")
+
+When you launch a new service, and you select "Load from Env Group" in the "Environment" tab, this sensitive value will be injected into the container before it is mounted:
+
+![Sensitive value](https://files.readme.io/14f07f3-Screen_Shot_2021-04-27_at_9.33.04_AM.png "Screen Shot 2021-04-27 at 9.33.04 AM.png")
+
+> 📘
+> 
+> **Note:** the sensitive value above is not written to the dashboard -- the hidden value is simply a dummy string.
+
+# Updating and deleting environment groups
+
+To update or delete your environment group, navigate back to the "Env Groups" tab, and click on the existing environment group to update or delete. You can make changes to the env group here, and select the "Update" button when finished: 
+
+![Updating env group](https://files.readme.io/d26712e-env-groups-2.png "env-groups-2.png")
+
+To delete the environment group, navigate to the "Settings" tab, and press the "Delete" button:
+
+![Deleting env group](https://files.readme.io/4323089-env-groups-3.png "env-groups-3.png")
+
+# How Secrets are Stored
+
+All env group variables are stored **in your own cluster**, and not on Porter's infrastructure. The entire env group is stored as a Kubernetes [Config Map](https://kubernetes.io/docs/concepts/configuration/configmap/), which is meant for non-sensitive, unstructured data. When you create a secret environment variable, the ConfigMap will contain a reference to a Kubernetes [Secret](https://kubernetes.io/docs/concepts/configuration/secret), which contains the secret data. This secret will be [injected into your container](https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/) as it is mounted, and will not be exposed on the Porter dashboard after creation. To summarize:
+
+- No env group data is stored on Porter's servers: it is all stored on your own cluster. 
+- **Non-sensitive data** in an env group will be read into memory on Porter's servers during deployment, and added directly to the deployment. 
+- **Sensitive data** in an env group will **not** be read into memory on Porter's servers during deployment; it is referenced as a secret during deployment. This sensitive data only exists in memory on Porter's infrastructure during creation/updating of the env group (**not** during deployment). 
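As a rough illustration (the object names and layout below are hypothetical, not Porter's exact schema), an env group named `web` with one plain and one secret variable is stored along these lines:

```yaml
# Hypothetical sketch -- not Porter's exact schema.
apiVersion: v1
kind: ConfigMap
metadata:
  name: web
data:
  API_URL: https://api.example.com   # non-sensitive: stored inline in the ConfigMap
  DB_PASSWORD: <reference-to-secret> # sensitive: the ConfigMap holds only a reference
---
apiVersion: v1
kind: Secret
metadata:
  name: web
data:
  DB_PASSWORD: c3VwZXJzZWNyZXQ=      # base64("supersecret"); resolved only at mount time
```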
+
+## Encryption
+
+We don't do any special encryption beyond what is offered by the managed Kubernetes providers. Some providers will encrypt all secret data in the control plane at rest (in `etcd`), others will store it as base64-encoded data. While you don't have access to this control plane or the `etcd` instance, users with access to your cluster could view the secrets using `kubectl get secrets -o yaml`, for example.
+
+> 📘
+> 
+> **Note:** for secret encryption beyond what is offered in the managed Kubernetes providers, you may want to use a solution such as [sealed-secrets](https://github.com/bitnami-labs/sealed-secrets).

+ 88 - 0
docs/reference/auto-build.md

@@ -0,0 +1,88 @@
+# Auto Build with Cloud Native Buildpacks
+
+Porter uses [Cloud Native Buildpacks](https://buildpacks.io/docs/) to build applications from source when no Dockerfile is present. By default, the `heroku/buildpacks:18` builder is used to provide maximum parity with Heroku's auto build process.
+
+In order for auto build to work correctly, certain basic expectations must be met at the application level (for instance, Node.js apps should define a `start` script in `package.json` so that `npm start` runs the app).
+
+For reference, here is a list of supported language runtimes along with Heroku's language-specific buildpack documentation for the Heroku-18 Stack:
+
+| Buildpack | Requirements |
+|:----------|:-------------|
+| [Ruby](https://elements.heroku.com/buildpacks/heroku/heroku-buildpack-ruby) | [Requirements](https://devcenter.heroku.com/articles/ruby-support) |
+| [Node.js](https://elements.heroku.com/buildpacks/heroku/heroku-buildpack-nodejs) | [Requirements](https://devcenter.heroku.com/articles/nodejs-support) |
+| [Python](https://elements.heroku.com/buildpacks/heroku/heroku-buildpack-python) | [Requirements](https://devcenter.heroku.com/articles/python-support) |
+| [Java](https://elements.heroku.com/buildpacks/heroku/heroku-buildpack-java) | [Requirements](https://devcenter.heroku.com/articles/java-support) |
+| [PHP](https://elements.heroku.com/buildpacks/heroku/heroku-buildpack-php) | [Requirements](https://devcenter.heroku.com/articles/php-support) |
+| [Go](https://elements.heroku.com/buildpacks/heroku/heroku-buildpack-go) | [Requirements](https://devcenter.heroku.com/articles/go-support) |
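Buildpacks choose a runtime by detecting marker files at the root of your repo. The sketch below is only a rough illustration of that detection idea; the real detection logic in each buildpack is considerably richer:

```shell
# Rough illustration of buildpack detection -- real buildpacks check much more.
detect_runtime() {
  dir="$1"
  if [ -f "$dir/package.json" ]; then echo "Node.js"
  elif [ -f "$dir/requirements.txt" ] || [ -f "$dir/setup.py" ]; then echo "Python"
  elif [ -f "$dir/Gemfile" ]; then echo "Ruby"
  elif [ -f "$dir/go.mod" ]; then echo "Go"
  elif [ -f "$dir/pom.xml" ]; then echo "Java"
  elif [ -f "$dir/composer.json" ]; then echo "PHP"
  else echo "unknown"
  fi
}

# Detect the runtime of the current directory.
detect_runtime .
```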
+
+# CI/CD with GitHub Actions
+
+Porter uses [GitHub Actions](https://docs.github.com/en/actions) to automatically set up CI/CD within a connected GitHub repo. By default, services running on Porter will update with each push to the repo's main branch.
+
+Porter creates the following GitHub secrets in a connected repo:
+
+| Secret Name | Secret Value |
+|:------------|:-------------|
+| **ENV_<TEMPLATE_NAME>** | Contains the environment variables created through the Porter dashboard. |
+| **PORTER\_TOKEN\_<PROJECT_ID>** | Porter auth credentials. |
+| **WEBHOOK_<TEMPLATE_NAME>** | Webhook ID for triggering a redeploy of the connected application. |
+
+Porter will also write a file to `.github/workflows/porter_<template_name>.yaml` in your repo to automatically configure a GitHub Actions workflow.
+
+The general structure of this GitHub Actions workflow file is as follows:
+
+```yaml
+name: Deploy to Porter
+on:
+  push:
+    branches:
+    - <MAIN_BRANCH>
+jobs:
+  porter-deploy:
+    runs-on: ubuntu-latest
+    steps:
+    - name: Checkout code
+      uses: actions/checkout@v2.3.4
+    - name: Download Porter
+      id: download_porter
+      run: |2
+        name=$(curl -s https://api.github.com/repos/porter-dev/porter/releases/latest | grep "browser_download_url.*/porter_.*_Linux_x86_64\.zip" | cut -d ":" -f 2,3 | tr -d \")
+        name=$(basename $name)
+        curl -L https://github.com/porter-dev/porter/releases/latest/download/$name --output $name
+        unzip -a $name
+        rm $name
+        chmod +x ./porter
+        sudo mv ./porter /usr/local/bin/porter
+    - name: Configure Porter
+      id: configure_porter
+      run: |2
+        sudo porter auth login --token ${{secrets.PORTER_TOKEN_<PROJECT_ID>}}
+        sudo porter docker configure
+    - name: Docker build, push
+      id: docker_build_push
+      run: |2
+        export $(echo "${{secrets.ENV_<TEMPLATE_NAME>}}" | xargs)
+        sudo add-apt-repository ppa:cncf-buildpacks/pack-cli
+        sudo apt-get update
+        sudo apt-get install pack-cli
+        sudo pack build <IMAGE_REPO>:$(git rev-parse --short HEAD) --path ./ --builder heroku/buildpacks:18
+        sudo docker push <IMAGE_REPO>:$(git rev-parse --short HEAD)
+    - name: Deploy on Porter
+      id: deploy_porter
+      run: |2
+        curl -X POST "https://dashboard.getporter.dev/api/webhooks/deploy/${{secrets.WEBHOOK_<TEMPLATE_NAME>}}?commit=$(git rev-parse --short HEAD)&repository=<IMAGE_REPO>"
+```
+
+**Note:** You can customize CI/CD by editing this file as desired, or by manually triggering the generated redeploy webhook.
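For reference, the redeploy call at the end of the workflow is just a `POST` to a templated URL. A minimal sketch of how that URL is assembled, using placeholder values in place of the generated GitHub secrets:

```shell
# Placeholders -- in the generated workflow these come from GitHub secrets.
WEBHOOK_ID="example-webhook-id"
IMAGE_REPO="porter1/porter"
COMMIT="abc1234"   # in CI this is $(git rev-parse --short HEAD)

URL="https://dashboard.getporter.dev/api/webhooks/deploy/${WEBHOOK_ID}?commit=${COMMIT}&repository=${IMAGE_REPO}"
echo "$URL"
# The workflow then runs: curl -X POST "$URL"
```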
+
+# Language Specific Notes
+
+## Node.js
+
+* To specify the Node.js version, use the `engines` field in your `package.json`. By default, Node.js v12 will be used. You can use ranges or wildcards, but do not include a "v" prefix in the version, e.g.:
+
+```json
+  "engines": {
+    "node": "14.x"
+  },
+```

+ 121 - 0
docs/reference/cli.md

@@ -0,0 +1,121 @@
+# Installation
+## Mac 
+Run the following command to grab the latest binary:
+
+```sh
+{
+name=$(curl -s https://api.github.com/repos/porter-dev/porter/releases/latest | grep "browser_download_url.*/porter_.*_Darwin_x86_64\.zip" | cut -d ":" -f 2,3 | tr -d \")
+name=$(basename $name)
+curl -L https://github.com/porter-dev/porter/releases/latest/download/$name --output $name
+unzip -a $name
+rm $name
+}
+```
+
+Then move the file into your bin:
+
+```sh
+chmod +x ./porter
+sudo mv ./porter /usr/local/bin/porter
+```
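To unpack what the pipeline above does: it pulls the Darwin download URL out of the GitHub API's JSON response and derives the zip filename from it. A standalone illustration on a sample line (the version number in the URL is made up):

```shell
# Sample line as it appears in the GitHub API JSON response (version is made up).
line='    "browser_download_url": "https://github.com/porter-dev/porter/releases/download/v0.1.0/porter_0.1.0_Darwin_x86_64.zip"'

# Same extraction as the install script: keep colon-fields 2-3, strip quotes.
url=$(echo "$line" | grep "browser_download_url.*_Darwin_x86_64\.zip" | cut -d ":" -f 2,3 | tr -d \")

# basename reduces the URL to just the zip filename.
name=$(basename "$url")
echo "$name"   # -> porter_0.1.0_Darwin_x86_64.zip
```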
+
+## Linux
+
+Run the following command to grab the latest binary:
+
+```sh
+{
+name=$(curl -s https://api.github.com/repos/porter-dev/porter/releases/latest | grep "browser_download_url.*/porter_.*_Linux_x86_64\.zip" | cut -d ":" -f 2,3 | tr -d \")
+name=$(basename $name)
+curl -L https://github.com/porter-dev/porter/releases/latest/download/$name --output $name
+unzip -a $name
+rm $name
+}
+```
+
+Then move the file into your bin:
+
+```sh
+chmod +x ./porter
+sudo mv ./porter /usr/local/bin/porter
+```
+
+## Windows
+
+Go [here](https://github.com/porter-dev/porter/releases/latest/download/porter_0.1.0-beta.1_Windows_x86_64.zip) to download the Windows executable and add the binary to your `PATH`.
+
+# Connecting to an existing cluster
+### `porter connect kubeconfig`
+Connects Porter to an existing Kubernetes cluster using the `current-context` in your `kubeconfig`.
+
+# Pushing Docker images to your Porter image registry
+
+> 🚧 You must be logged in before configuring registry access
+> 
+> Please make sure you are logged in by running `porter config set-host https://dashboard.getporter.dev; porter auth login` first.
+
+### `porter docker configure`
+
+Writes to the local Docker `config.json` file to grant push/pull access to the image registries provisioned by Porter. Once you have run this command, you can directly use the `docker` CLI to push to the private image registry.
+
+**Example:**
+
+```sh
+porter docker configure
+docker build . -t gcr.io/project-123456/porter-server:latest
+docker push gcr.io/project-123456/porter-server:latest
+```
+
+> 📘
+>
+> We are working to add support for additional private Docker registries. If you don't see your registry provider, send us an email at [contact@getporter.dev](mailto:contact@getporter.dev) or feel free to contribute to the [repo](https://github.com/porter-dev/porter).
+
+# Connecting the CLI to a locally running instance of Porter
+
+### `porter config set-host [HOST]`
+
+Sets the URL of the Porter API server the CLI will communicate with. HOST is a URL including the protocol and defaults to `https://dashboard.getporter.dev`. 
+
+For locally running Porter instances, run:
+
+```sh
+porter config set-host http://localhost:8080
+```
+
+# Remote Execution
+### `porter run [RELEASE] -- [COMMAND] [args...]`
+
+> 🚧 Prerequisites
+> 
+> **Note:** before running this command, you should make sure your cluster is set in your config. Run `porter clusters list` to view the list of connected clusters, and run `porter config set-cluster [ID]` to set the correct cluster in your config.
+
+Allows users to execute a command on a remote container. The `release` variable is the name of the release on the Porter dashboard (this can be a release in either the "Applications" or the "Jobs" tab). For example, if you have a release called `web` and would like to enter an interactive shell in the container attached to `web`, run:
+
+```sh
+porter run web -- sh
+```
+ 
+To test that remote execution is working, you can run:
+
+```sh
+porter run web -- echo "hello world"
+```
+
+To run in a namespace other than `default`, use the `--namespace` flag:
+
+```sh
+porter run web --namespace other-namespace -- sh
+```
+
+# Commands
+
+Here's a reference table for the CLI documentation:
+
+| Command | Description |
+|:------- |:------------|
+| `porter config set-host [HOST]` | Sets the API server host name that the CLI will communicate with. |
+| `porter auth login` | Logs in via the CLI. |
+| `porter config set-project [PROJECT_ID]` | Sets the current project in config. |
+| `porter connect [INTEGRATION]` | Connects Porter with the given infrastructure. Accepts arguments such as `kubeconfig`, `ecr`, `dockerhub`, and `registry`. |
+| `porter docker configure` | Grants the `docker` CLI access to a provisioned image registry. |
+| `porter run [RELEASE] -- [COMMAND] [args...]` | Executes a command on a remote container, specified by the release name. |