diff --git a/README.md b/README.md
index 117d2782e..cc59bbee3 100644
--- a/README.md
+++ b/README.md
@@ -46,7 +46,7 @@ The suite of modules in this repository is designed for rapid composition and re
All modules share a similar interface where each module tries to stay close to the underlying provider resources, support IAM together with resource creation and modification, offer the option of creating multiple resources where it makes sense (e.g. not for projects), and be completely free of side-effects (e.g. no external commands).
-A well defined naming standard is used across Stellar Engine to ensure adherence to Google Clouds best practices, naming requirements, and naming collision avoidance for global resources. The Google Cloud naming standard documentation is [here](/documentation/naming-standard.md) and will be used before the Stellar Engine deployment begins by choosing a naming standard that will flow through the Google Cloud infrastructure state.
+A well-defined naming standard is used across Stellar Engine to ensure adherence to Google Cloud's best practices and naming requirements, and to avoid naming collisions for global resources. Review the Google Cloud [naming standard documentation](documentation/naming-convention.md) before beginning the Stellar Engine deployment, and choose a naming standard that will flow through the Google Cloud infrastructure state.
The current modules support most of the core foundational and networking components used to design end-to-end infrastructure, with more modules in active development for specialized compute, security, and data scenarios.
@@ -54,14 +54,14 @@ For more information and usage examples see each module's README file, as well a
## End-to-End Blueprints
-Stellar Engine currently offers blueprints that are compliant with [FRH](https://github.com/gcp-stellar-engine/stellar-engine/tree/main/blueprints/fedramp-high) and [IL5](https://github.com/gcp-stellar-engine/stellar-engine/tree/main/blueprints/il5) baselines.
+Stellar Engine currently offers blueprints that are compliant with [FRH](https://github.com/google/stellar-engine/tree/main/blueprints/fedramp-high) and [IL5](https://github.com/google/stellar-engine/tree/main/blueprints/il5) baselines.
These blueprints range from full end-to-end services like a Cloud Native Access Point (CNAP), to ad-hoc services that are designed to be molded to users' individual use cases.
For more information, please look at each blueprint's README file.
## Cybersecurity Documentation
-In addition to the IaC, Stellar Engine provides supporting documentation that maps NIST 800-53r5 controls for users leveraging the IaC. This documentation is designed to streamline achieving ATO by providing generalized templates. All documentation is provided [here](https://drive.google.com/drive/folders/1NeWZcOuxysi7kUNRCFDd8CeHnxF14ywp). For how to utilize these documents, please see the following [Path to Authorization](https://docs.google.com/document/d/1vyrWgLIXWkZO3c5qkqLhltmo4LMrVfDHx0EQCuQMYac/edit?tab=t.0#heading=h.qyoze3epkux8) guide. NOTE: you will need to request permissions for it.
+In addition to the IaC, Stellar Engine provides supporting documentation that maps NIST 800-53r5 controls for users leveraging the IaC. This documentation is designed to streamline achieving ATO by providing generalized templates. All documentation is provided [here](https://drive.google.com/drive/folders/1NeWZcOuxysi7kUNRCFDd8CeHnxF14ywp). For how to utilize these documents, please see the [Path to Authorization](docs/path-to-authorization.md) guide.
## Detailed Deployment Guide
@@ -71,7 +71,7 @@ Key stages include Stage 0 (Bootstrap), which initializes the infrastructure, cr
The guide emphasizes the importance of compliance, providing instructions for enabling Access Transparency and managing IAM roles effectively. Appendices include steps for creating new GCP Organizations and troubleshooting common issues like KMS key errors. Overall, the document serves as a comprehensive manual for deploying compliant, scalable, and secure cloud environments tailored to government and regulated industry requirements.
-For more information, please look at the [DDG](https://docs.google.com/document/d/1UOaHefcxHCl2C4CbYsTl37ZRxB4xmDHbWmfLcF0VY70/edit?pli=1&tab=t.0#heading=h.7axmtvj2exmb). NOTE: you will need to request permissions for it.
+For more information, please look at the [DDG](docs/ddg.md).
## Technical Design Document
@@ -79,7 +79,7 @@ The Stellar Engine Technical Design Document (TDD) outlines a comprehensive fram
The document delves into key aspects such as Identity and Access Management (IAM), Google Cloud Organization configuration, Google Cloud Project hierarchy, networking, and security. It emphasizes principles like least privilege for IAM, structured role group management, and secure service account configurations. The networking section introduces a hub-and-spoke VPC architecture, leveraging shared VPCs and service controls to ensure isolation and secure interservice communications. Additionally, it provides guidelines for implementing encryption at rest and in transit, logging and monitoring strategies, and robust access control mechanisms to meet compliance needs. This document is a vital resource for teams aiming to adopt GCP with a focus on security, compliance, and scalability. The TDD is used in conjunction with the Security Best Practices Guide for hardening the deployment against real-world cyber threats and attacks.
-For more information, please look at the [TDD](https://docs.google.com/document/d/15WMwslyCrkmuI7EutGBd7YXH3K8P3KrwzLOGcv-W4t8/edit?resourcekey=0-mjoA_PGM2MkIMPpr75SQbQ&tab=t.0). NOTE: you will need to request permissions for it.
+For more information, please look at the [TDD](docs/tdd.md).
## Security Best Practices Guide
@@ -91,14 +91,16 @@ Accompanied by the SBPG, the document incorporates recommendations from penetrat
For more information, please look at the [Security Best Practices Guide](https://docs.google.com/document/d/1uv62Fqg73r9oJNP-NPZebpzoBom8rOgLoHkiMZPutbo/edit?usp=sharing). NOTE: you will need to request permissions for it.
-## How to Contribute
-This GitLab repository is for contributing, for view access to the Stellar Engine [GitHub](https://github.com/gcp-stellar-engine/stellar-engine/issues) please fill out this [form](https://docs.google.com/forms/d/e/1FAIpQLScetWXBErWaopYrGa8qKz6vFZOz1-_O0o_HAU4tr4vdhMzWpQ/viewform).
+## Contributing
-For issue reporting please create an issue on the GitHub repository and email [stellar-engine@google.com](mailto:stellar-engine@google.com) with details on the issue you are facing.
+We welcome contributions to Stellar Engine! Since this is an open-source project, you can contribute by forking the repository, making your changes, and submitting a pull request.
-To contribute code to this repository, please email [stellar-engine@google.com](mailto:stellar-engine@google.com) with a request for developer access to Stellar Engine's codebase.
+Please ensure your code adheres to our formatting and security standards.
+
+## Issue Reporting
+
+If you encounter any bugs, have feature requests, or run into deployment issues, please [create an issue](https://github.com/google/stellar-engine/issues) on our GitHub repository. Keep the issue description clear and provide steps to reproduce if applicable.
-We'll review your request and grant you the necessary access as quickly as possible.
## Google’s Open Source Software Vulnerability Rewards Program (OSS VRP)
diff --git a/blueprints/il5/bq-project/README.md b/blueprints/il5/bq-project/README.md
index 86d093778..3fd064e59 100644
--- a/blueprints/il5/bq-project/README.md
+++ b/blueprints/il5/bq-project/README.md
@@ -68,10 +68,6 @@ keyring, for example
```terraform apply``` to apply the infrastructure build
```terraform destroy``` to destroy the built infrastructure
-Verification of a successful deployment?
-The dataset in BigQuery will look like this in your Google Cloud Console.
-
-
It will take a few minutes. When complete, you should see output stating that the command completed successfully, along with a list of the created resources.
The output will look like the following:
```
diff --git a/docs/ddg.md b/docs/ddg.md
new file mode 100644
index 000000000..f854feba1
--- /dev/null
+++ b/docs/ddg.md
@@ -0,0 +1,584 @@
+# **Stellar Engine**
+
+Cloud Foundation Fabric Detailed Deployment Guide
+
+| Created: | June 04, 2023 |
+| :------------------------------------ | :------------ |
+| Updated: | Mar 06, 2025 |
+| Version: | v2.7.1 |
+| Most recent changes: | N/A |
+
+
+## Introduction
+
+### Stellar Engine
+
+Stellar Engine is a project aimed at providing Infrastructure as Code (IaC) for
+Google Cloud Platform (GCP) customers who need to create a landing zone
+environment with the Assured Workloads overlays. The project has been confirmed
+to work with Defense Information Systems Agency (DISA) Impact Level 5 (IL5),
+Impact Level 4 (IL4), and FedRAMP High, but will function as a starting point
+for any other Assured Workloads overlay. In addition to the IaC, there is
+documentation available for both the
+IL5 and FedRAMP High compliance regimes that provide a mapping of National
+Institute of Standards and Technology (NIST) 800-53r5 controls to enable
+projects that leverage the Stellar Engine codebase to accelerate the speed at
+which an Authorization to Operate (ATO) can be attained. These responses are
+provided purely as examples, and should be reviewed in depth after the Stellar
+Engine deployment process. Many of the controls are handled via IaC, so any
+deviations from the outlined systems should be cross-checked with the control
+responses.
+
+### Instructions
+
+**_Note: Record your values in the Variables section below before you begin;
+the commands throughout this guide reference them._**
+
+After completing the **Variables** section, proceed to each stage and complete
+the steps listed in that section. With the variables filled in, the commands in
+each stage are easy to copy and paste into your specific environment.
+
+The deployment process is broken up into stages. During each stage, certain
+variables must be added to the **`terraform.tfvars`** file. Upon completion of
+a stage, the Terraform code writes a **`<stage>.auto.tfvars.json`** file to the
+Google Cloud Storage (GCS) bucket created in the initial **0-bootstrap** stage.
+Subsequent stages use the **gcloud** command line interface (CLI) to copy these
+files into the new stage folder, along with a provider file that impersonates a
+stage-specific service account.
+
+### Conventions
+
+- Code to be executed in a bash-like environment has the following form:
+  - **/bin/my-fun-command.sh --with arguments**
+
+### Deployment Times
+
+Depending on the number of tenants, current deployment of a clean environment
+takes approximately 1 hour.
+
+## Variables
+
+To make using this deployment guide easier, the variables described below need to be populated into specific `terraform.tfvars` files in the repository.
+
+**Most of these variables are configured in the Stage 0 Bootstrap `terraform.tfvars` file (a sample can be found at [`fast/stages-aw/0-bootstrap/terraform.tfvars.sample`](../fast/stages-aw/0-bootstrap/terraform.tfvars.sample)), except for the Tenant names, which are configured in Stage 1 (a sample can be found at [`fast/stages-aw/1-resman/terraform.tfvars.sample`](../fast/stages-aw/1-resman/terraform.tfvars.sample)).**
+
+| VARIABLE | TFVARS LOCATION | DESCRIPTION |
+| :------------------------------------ | :------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Billing Account | `billing_account.id` | The billing account to use for the deployment of the environments. Console Link |
+| Bootstrap Project ID | `bootstrap_project` | The bootstrap project id (created below) |
+| Compliance Regime | `assured_workloads.regime` | The compliance regime for this environment (confirmed working with IL4, IL5, FEDRAMP_HIGH, and NO_COMPLIANCE_REGIME) |
+| Customer ID | `organization.customer_id` | The Google Workspace Directory Customer ID. Run `gcloud organizations list` to view. |
+| Domain Name | `organization.domain` | The primary Fully Qualified Domain Name (FQDN). Run `gcloud organizations list` to view (make sure you have authorized as per the prerequisites below) |
+| Alert Email | `alert_email` | The email address used for logging alert notifications. |
+| Organization ID | `organization.id` | The Organization ID for the GCP Organization. Run `gcloud organizations list` to view. |
+| Prefix | `prefix` | The prefix prepended to the names of deployed projects and resources, selected by you or your organization. Full project names must be globally unique and the prefix must use a maximum of 7 characters. A 409 error will occur if a globally unique project name is not created. |
+| Region | `assured_workloads.location` | This is the (US) based region that we are deploying resources into (Dual regions like “NAM9” or continents are currently not supported) |
+| Tenant Name | `tenants` (Stage 1) | The name for the first tenant that will be deployed via this document. Full project names must be globally unique and the tenant-name must use a maximum of 7 characters. |
+
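The 7-character prefix limit in the table above can be checked before deploying. A minimal sketch, assuming a hypothetical prefix value of `example`:

```shell
#!/bin/sh
# Check a candidate prefix against the 7-character limit described above.
# "example" is a hypothetical value; substitute your own prefix.
prefix="example"
if [ "${#prefix}" -gt 7 ]; then
  echo "prefix too long: ${#prefix} characters (max 7)"
else
  echo "prefix ok: ${#prefix} characters"  # → prefix ok: 7 characters
fi
```

Keeping the prefix short matters because it is prepended to every project name, and project names must stay globally unique.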
+## Prerequisites
+
+In a testing environment, it is possible that one user may have administrator
+roles of all three types of resources. However, in a production environment, it
+is more likely that we need to have multiple administrators involved during the
+initial setup. **Note: If you have access issues either grant the roles for
+yourself on the organization node or have your Administrator grant them for you.
+A hard refresh of the cloud console may be required to be able to use the active
+permissions.**
+
+- A Google Cloud Organization
+ - If creating a new organization, see Appendix below
+  - Log in at least once at admin.google.com
+- Clone [Stellar Engine Github](https://github.com/google/stellar-engine/)
+- [Create a bootstrap project manually in Google
+ Cloud](https://console.cloud.google.com/projectcreate) if you do not already
+ have one
+ - Enable billing on the bootstrap project by going to [Account
+ Management](https://console.cloud.google.com/billing/manage)
+ - [Enable The Cloud Monitoring
+ API](https://console.developers.google.com/apis/api/monitoring.googleapis.com/overview)
+ in the bootstrap project
+- Edit Variables Section above
+- Install [Google Cloud SDK](https://cloud.google.com/sdk/docs/install)
+- **gcloud auth login**
+- **gcloud config set project ``**
+- **gcloud auth application-default login**
+- Navigate to [IAM & Admin](https://console.cloud.google.com/iam-admin/iam) at
+ the Organization level **_(not project-specific)_** in the GCP Console and
+ assign the following IAM roles for the deploying user. See the note at the
+ bottom of this list for a script to automatically assign these permissions.
+ - Access Transparency Admin (roles/axt.admin)
+ - Assured Workloads Administrator (roles/assuredworkloads.admin)
+ - Billing Account Administrator (roles/billing.admin) either on the
+ organization or the billing account (see the following section for
+ details)
+ - Logging Admin (roles/logging.admin)
+ - Organization Administrator (roles/resourcemanager.organizationAdmin)
+ - Organization Policy Admin (roles/orgpolicy.policyAdmin)
+ - Organization Role Administrator (roles/iam.organizationRoleAdmin)
+ - Owner (roles/owner)
+ - Project Creator (roles/resourcemanager.projectCreator)
+ - Service Account Admin (roles/iam.serviceAccountAdmin)
+ - Service Account Token Creator (roles/iam.serviceAccountTokenCreator)
+ - Tag Admin (roles/resourcemanager.tagAdmin)
+ - Note: If you are starting with a brand new organization, the above
+ permissions (excluding billing account admin and super admin) can be
+ automated by running the following script:
+ - **Warning: You will lose all current permissions for your user
+ besides Super User**
+ - **./setIam.sh \ ``** in the
+ fast/stages-aw/0-bootstrap folder.
+
+- Navigate to the [Super Admin](https://admin.google.com/ac/roles) roles
+ section in Google Workspace to ensure that the deploying user is a Super
+ Admin
+
+- Update local
+ [Terraform](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli)
+ to version \>= 1.8.1
+- Follow the [Initial Groups and Administrative Access in Cloud Setup Steps 2
+ and 3](https://console.cloud.google.com/cloud-setup/overview) instructions
+ adding all the below groups.
+ - If prompted, skip the IDP step for now
+- Note: You do not have to complete subsequent steps, but make sure you finish
+  Step 2. Google may change their default group names; you can manually create
+  the [group](https://console.cloud.google.com/iam-admin/groups) if it is not
+  contained in the wizard.
+ - gcp-billing-admins@``
+ - gcp-developers@``
+ - gcp-devops@``
+ - gcp-hybrid-connectivity-admins@``
+ - gcp-logging-monitoring-admins@``
+ - gcp-logging-monitoring-viewers@``
+ - gcp-organization-admins@``
+ - gcp-vpc-network-admins@``
+ - gcp-security-admins@``
+- We need to enable these Google Cloud services by running the following
+  command:
+  - **echo "iam cloudkms pubsub serviceusage cloudresourcemanager bigquery
+    assuredworkloads cloudbilling logging iamcredentials orgpolicy" | xargs
+    -n1 -I {} gcloud services enable "{}.googleapis.com"**
+- [Enable Access
+ Transparency](https://console.cloud.google.com/iam-admin/settings) for your
+ organization
+ - Note: If this is unavailable, make sure you have the Access Transparency
+ Admin role and try again
+- Install [jq binary](https://jqlang.github.io/jq/download/)
+- (Optional) Install OpenSSL 3.3.1 (the macOS built-in LibreSSL version will
+  not work; install via Homebrew on macOS)
+  - For macOS, follow the instructions at https://formulae.brew.sh/formula/openssl@3
+- Request a quota increase to at least 13 projects if your quota is below 13
+ - [View and Manage
+ Quotas](https://cloud.google.com/docs/quotas/view-manage)
+
+## Stage 0 - Bootstrap
+
+### Description
+
+This is the beginning stage where we align the existing parts of our network
+with the Terraform state. It creates the initial IaC bootstrap service accounts
+and projects. It is designed to transition from whatever project the user
+initially has into a newly created “core” project and migrate the Terraform
+state.
+
+### Steps
+
+- Enable [billing](https://console.cloud.google.com/billing/projects) for your
+ bootstrap project if it is not enabled
+- Change directory into **fast/stages-aw/0-bootstrap**
+- Copy file **terraform.tfvars.sample** to **terraform.tfvars**
+ - **cp terraform.tfvars.sample terraform.tfvars**
+- Copy file **providers.tf.tmp** to **0-bootstrap-providers.tf**
+ - **cp providers.tf.tmp 0-bootstrap-providers.tf**
+- Update the information in **terraform.tfvars** as shown below; the
+  variables from the sections above are already included
+
+**`fast/stages-aw/0-bootstrap/terraform.tfvars`**
+
+```hcl
+# use `gcloud beta billing accounts list`
+billing_account = {
+ id = "``" # taken from Google Cloud Console Billing Accounts -> Manage Billing Account
+}
+# locations for GCS, BigQuery, and logging buckets created here
+locations = {
+ bq = "``"
+ gcs = "``"
+ logging = "``"
+ pubsub = ["``"]
+ kms = "``"
+}
+# use `gcloud organizations list`
+organization = {
+ domain = "``" # DISPLAY_NAME
+ id = "``"
+ customer_id = "``"
+}
+outputs_location = "~/fast-config"
+# use something unique and no longer than 9 characters
+prefix = "``" # full project names must be globally unique
+log_sinks = {
+ audit-logs = {
+ filter = "logName:\"/logs/cloudaudit.googleapis.com%2Factivity\" OR logName:\"/logs/cloudaudit.googleapis.com%2Fsystem_event\" OR protoPayload.metadata.@type=\"type.googleapis.com/google.cloud.audit.TransparencyLog\""
+ type = "pubsub"
+ }
+ vpc-sc = {
+ filter = "protoPayload.metadata.@type=\"type.googleapis.com/google.cloud.audit.VpcServiceControlAuditMetadata\""
+ type = "pubsub"
+ }
+ workspace-audit-logs = {
+ filter = "logName:\"/logs/cloudaudit.googleapis.com%2Fdata_access\" and protoPayload.serviceName:\"login.googleapis.com\""
+ type = "pubsub"
+ }
+ empty-audit-logs = {
+ filter = ""
+ type = "pubsub"
+ }
+}
+org_policies_config = {
+ import_defaults = false # handled via import script
+}
+fast_features = {
+ envs = true
+}
+assured_workloads = {
+ regime = "``"
+ location = "us-east4"
+}
+bootstrap_project = "``"
+alert_email = "``"
+```
+
+- Run **terraform init**
+- Run **terraform apply -var bootstrap_user=$(gcloud config list --format
+ 'value(core.account)')**
+ - Type **yes** when prompted
+  - **Note:** You may receive an error in this stage reporting that
+    `bigquery.googleapis.com` is not usable in the Assured Workloads. If
+    you see this error, go to the [Assured
+    Workloads](https://console.cloud.google.com/compliance/assuredworkload)
+    page and, for the StellarEngine-`` folder (and Networking folder, if
+    applicable), click “Review Available Updates”, go to “Allowed Services”,
+    and click “Allow services” to bring in the BigQuery family of APIs. If
+    prompted, say yes to the additional dialog confirming your choice. After
+    making this change, wait ~2 minutes and then re-run
+    **terraform apply -var bootstrap_user=$(gcloud config list --format
+    'value(core.account)')**
+      - Type **yes** when prompted
+ - **Note:** You may encounter a bug where your bootstrap project loses
+ access to your billing account. If so [re-enable billing for your
+ bootstrap project](https://console.cloud.google.com/billing/projects)
+- Switch project to your new project
+ - **gcloud config set project ``-prod-iac-core-0**
+- Copy the new providers file locally
+ - **gcloud alpha storage cp
+ gs://``-prod-iac-core-outputs-0/providers/0-bootstrap-providers.tf
+ .**
+- Migrate the state from local to remote: **terraform init --migrate-state**
+  - Type **yes** when prompted
+- Run **./import.sh**
+- Apply Terraform one more time before moving on to the next stage via
+ **terraform apply**
+ - Type **yes** when prompted
+
+## Stage 1 - Resource Management
+
+### Description
+
+In this stage, we begin to build out the different folders, projects, and
+service accounts that will be used at the organization level for subsequent
+stages. In order to build out the environment, you will have to update the
+**terraform.tfvars** file in **fast/stages-aw/1-resman** to include a tenants
+variable as seen below.
+
+### Steps
+
+- **Note:** If you are using an external billing account, you have to add the
+  Billing Account Administrator role to
+  **``-prod-resman-0@``-prod-iac-core-0.iam.gserviceaccount.com**.
+
+  **Steps to add the external billing account:**
+  - In the Google Cloud console (external billing account), go to the
+    Account management page for the Cloud Billing account, selecting the
+    Organization level.
+  - At the prompt, choose the Cloud Billing account you want to view.
+  - In the Permissions panel, to add new principals and assign permissions,
+    do the following:
+    - Click Add principal.
+    - In the New principals field, enter the email address for the principal
+      you want to add, for example
+      ``-prod-resman-0@``-prod-iac-core-0.iam.gserviceaccount.com
+    - From Select a role, choose “Billing Account Administrator”.
+    - When done, click Save.
+
+- Change directory into **fast/stages-aw/1-resman**
+- Copy file **terraform.tfvars.sample** to **terraform.tfvars**
+ - **cp terraform.tfvars.sample terraform.tfvars**
+- Update information in **terraform.tfvars** as follows
+ - Note: Change “tenant_name(s)” below
+
+**`fast/stages-aw/1-resman/terraform.tfvars`**
+
+```hcl
+tenants = {
+ tenant_name = { ## Change tenant_name here - 7 or less characters
+ admin_principal = "group:gcp-devops@``"
+ descriptive_name = "``" ## Change descriptive_name here
+ locations = {
+ gcs = "us-east4"
+ kms = "us-east4"
+ }
+ },
+ tenant_name-2 = { ## Change tenant_name-2 here - 7 or less characters
+ admin_principal = "group:gcp-devops@``"
+ descriptive_name = "tenant-name-2" ## Change descriptive_name here
+ locations = {
+ gcs = "us-east4"
+ kms = "us-east4"
+ }
+ }
+## You can have “n” number of tenants
+}
+fast_features = {
+ envs = true
+}
+envs_folders = {
+ Prod = {
+ admin = "gcp-organization-admins@``"
+ },
+ Int = {
+ admin = "gcp-organization-admins@``"
+ },
+ Test = {
+ admin = "gcp-organization-admins@``"
+ }
+}
+```
+
+- Copy the tfvars files from the GCS
+  - **gcloud storage cp
+ gs://``-prod-iac-core-outputs-0/providers/1-resman-providers.tf ./**
+ - **gcloud storage cp
+ gs://``-prod-iac-core-outputs-0/tfvars/0-globals.auto.tfvars.json
+ ./**
+ - **gcloud storage cp
+ gs://``-prod-iac-core-outputs-0/tfvars/0-bootstrap.auto.tfvars.json
+ ./**
+- Run **terraform init**
+- Run **terraform apply**
+ - Type **yes** when prompted
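The copy-then-`terraform init` pattern above repeats in every later stage. A dry-run sketch of the Stage 1 file pulls, assuming a hypothetical prefix of `example` (it prints the `gcloud storage cp` commands rather than executing them):

```shell
#!/bin/sh
# Dry-run sketch: print the Stage 1 `gcloud storage cp` commands.
# "example" is a hypothetical prefix; substitute your own.
PREFIX="example"
BUCKET="gs://${PREFIX}-prod-iac-core-outputs-0"
for f in providers/1-resman-providers.tf \
         tfvars/0-globals.auto.tfvars.json \
         tfvars/0-bootstrap.auto.tfvars.json; do
  echo gcloud storage cp "${BUCKET}/${f}" ./
done
```

Each later stage pulls the same `0-globals` and `0-bootstrap` tfvars files plus its own stage-specific providers file.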
+
+## Stage 2 - Network Creation
+
+## FedRAMP High - Stage 2.1 Networking
+
+### Steps
+
+- **Note: If you are using an external billing account, you have to add the
+  Billing Account Administrator role to
+  ``-prod-resman-net-0@``-prod-iac-core-0.iam.gserviceaccount.com from
+  the external account. Steps to add the external billing account:**
+  - In the Google Cloud console (external billing account), go to the
+    Account management page for the Cloud Billing account, selecting the
+    Organization level.
+  - At the prompt, choose the Cloud Billing account you want to view.
+  - In the Permissions panel, to add new principals and assign permissions,
+    do the following:
+    - Click Add principal.
+    - In the New principals field, enter the email address for the principal
+      you want to add, for example
+      ``-prod-resman-0@``-prod-iac-core-0.iam.gserviceaccount.com
+    - From Select a role, choose “Billing Account Administrator”.
+    - When done, click Save.
+- Change directory into **fast/stages-aw/2-networking-a-fedramp-high**
+- Copy the provider and tfvars files from the GCS buckets
+ - **gcloud storage cp
+ gs://``-prod-iac-core-outputs-0/providers/2-networking-providers.tf
+ ./**
+ - **gcloud storage cp
+ gs://``-prod-iac-core-outputs-0/tfvars/0-globals.auto.tfvars.json
+ ./**
+ - **gcloud storage cp
+ gs://``-prod-iac-core-outputs-0/tfvars/0-bootstrap.auto.tfvars.json
+ ./**
+ - **gcloud storage cp
+ gs://``-prod-iac-core-outputs-0/tfvars/1-resman.auto.tfvars.json
+ ./**
+- Run **terraform init**
+- **terraform apply**
+ - Type **yes** when prompted
+
+## IL4/IL5 Stage 2.1 - Networking - WIP
+
+### Description
+
+This step deploys a pair of Palo Alto VM-Series Next-Generation Firewalls
+(NGFWs) into the network account. They use the Bring Your Own License (BYOL)
+deployment image and will require you to use the Palo Alto web console to upload
+a VM code and register them. For more instructions, see the README in the
+**2-networking-b-il5-ngfw** stage folder.
+
+### Steps
+
+- **Note: If you are using an external billing account, you have to add the
+  Billing Account Administrator role to
+  ``-prod-resman-net-0@``-prod-iac-core-0.iam.gserviceaccount.com from
+  the external account. Steps to add the external billing account:**
+  - In the Google Cloud console (external billing account), go to the
+    Account management page for the Cloud Billing account, selecting the
+    Organization level.
+  - At the prompt, choose the Cloud Billing account you want to view.
+  - In the Permissions panel, to add new principals and assign permissions,
+    do the following:
+    - Click Add principal.
+    - In the New principals field, enter the email address for the principal
+      you want to add, for example
+      ``-prod-resman-net-0@``-prod-iac-core-0.iam.gserviceaccount.com
+    - From Select a role, choose “Billing Account Administrator”.
+    - When done, click Save.
+- Change directory into **fast/stages-aw/2-networking-b-il5-ngfw**
+- Copy the provider and tfvars files from the GCS buckets
+ - **gcloud storage cp
+ gs://``-prod-iac-core-outputs-0/providers/2-networking-providers.tf
+ ./**
+ - **gcloud storage cp
+ gs://``-prod-iac-core-outputs-0/tfvars/0-globals.auto.tfvars.json
+ ./**
+ - **gcloud storage cp
+ gs://``-prod-iac-core-outputs-0/tfvars/0-bootstrap.auto.tfvars.json
+ ./**
+ - **gcloud storage cp
+ gs://``-prod-iac-core-outputs-0/tfvars/1-resman.auto.tfvars.json
+ ./**
+- Run **terraform init**
+- Run **terraform apply -target
+ google_project_iam_custom_role.ngfw-custom-role**
+ - Type **yes** when prompted
+- **terraform apply**
+ - Type **yes** when prompted
+  - **Note:** If you receive an error relating to a service account and/or
+    KMS not existing, please click “Settings” in the ``-prod-net-landing-0
+    storage account on the console, and it will generate the service account
+    for you
+
+## Stage 3 - Security and Audit Account Configuration
+
+### Description
+
+This stage configures the security and audit projects. The security project
+contains KMS services that support the CMEK requirements; it can also host the
+Secret Manager service. In IL5, CMEK is enabled by default for Compute,
+Container, Storage, and Cloud SQL, and the following organization policies
+related to CMEK are enforced:
+
+- `gcp.restrictNonCmekServices`:
+ - `denied_values: "compute.googleapis.com"`
+ - `denied_values: "container.googleapis.com"`
+ - `denied_values: "storage.googleapis.com"`
+ - `denied_values: "sqladmin.googleapis.com"`
+- `gcp.restrictCmekCryptoKeyProjects`: list CMEK key projects allowed to be used.
+
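As an illustrative sketch of how one of these constraints can be expressed in Terraform (using the Google provider's `google_organization_policy` resource; the organization ID below is hypothetical, and the stage's own code manages the real policies):

```hcl
# Illustrative only: deny non-CMEK usage for selected services.
# The org_id is hypothetical; Stellar Engine's stages manage the real policy.
resource "google_organization_policy" "restrict_non_cmek" {
  org_id     = "123456789012"
  constraint = "constraints/gcp.restrictNonCmekServices"

  list_policy {
    deny {
      values = [
        "compute.googleapis.com",
        "container.googleapis.com",
        "storage.googleapis.com",
        "sqladmin.googleapis.com",
      ]
    }
  }
}
```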
+In this step, the prod-sec-core-0 project is created to host KMS and
+(optionally) Secret Manager services. A restricted admin role (which can grant
+decrypt permissions to other services) is granted to the KMS restricted admins.
+KMS key rings in different locations are also provisioned. (KMS key ring
+locations must match the service locations; for example, a multi-regional key
+ring cannot be used with a single-region storage bucket, or vice versa.)
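The location-matching rule can be illustrated with a minimal key ring sketch (project and key ring names here are hypothetical):

```hcl
# Minimal sketch: a single-region key ring whose location matches the
# single-region resources it protects. Names are hypothetical.
resource "google_kms_key_ring" "sec_core" {
  project  = "example-prod-sec-core-0"
  name     = "example-keyring-us-east4"
  location = "us-east4" # must match the location of the protected resources
}
```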
+
+TBA: The audit project contains a logging bucket for audit logs.
+
+Security administrators are responsible for the security project, and auditors
+are responsible for the audit project.
+
+### Steps
+
+- **Note: If you are using an external billing account, you have to add the
+  Billing Account Administrator role to
+  ``-security-0@``-prod-iac-core-0.iam.gserviceaccount.com from the
+  external account.**
+  **Steps to add the external billing account:**
+  - In the Google Cloud console (external billing account), go to the
+    Account management page for the Cloud Billing account, selecting the
+    Organization level.
+  - At the prompt, choose the Cloud Billing account you want to view.
+  - In the Permissions panel, to add new principals and assign permissions,
+    do the following:
+    - Click Add principal.
+    - In the New principals field, enter the email address for the principal
+      you want to add, for example
+      ``-security-0@``-prod-iac-core-0.iam.gserviceaccount.com
+    - From Select a role, choose “Billing Account Administrator”.
+    - When done, click Save.
+- Change directory into **fast/stages-aw/3-security**
+- Copy the provider and tfvars files from the GCS buckets
+ - **gcloud storage cp
+ gs://``-prod-iac-core-outputs-0/providers/3-security-providers.tf
+ ./**
+ - **gcloud storage cp
+ gs://``-prod-iac-core-outputs-0/tfvars/0-globals.auto.tfvars.json
+ ./**
+ - **gcloud storage cp
+ gs://``-prod-iac-core-outputs-0/tfvars/0-bootstrap.auto.tfvars.json
+ ./**
+ - **gcloud storage cp
+ gs://``-prod-iac-core-outputs-0/tfvars/1-resman.auto.tfvars.json
+ ./**
+- Run **terraform init**
+- Run **terraform apply**
+ - Type **yes** when prompted
+ - **Note:** Any issues with Service Accounts can be resolved by rerunning
+ **terraform apply**
+- Run **./sa_lockdown.sh** to disable the Service Accounts used during the
+ deployment
+- Delete the `` project by running the following command:
+ - **./delete_gcp_project.sh --project-id=``**
+ - Confirm by re-entering the project-id **``** when prompted
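
The copy / init / apply sequence above can be consolidated into a single script. This is a minimal sketch, not part of the official stage tooling: `FAST_PREFIX` is a hypothetical stand-in for your organization's resource prefix (shown as empty backticks in the steps), and `gcloud` and `terraform` are assumed to be authenticated and on the PATH.

```shell
# Sketch of the 3-security copy/init/apply sequence above.
set -eu

FAST_PREFIX="${FAST_PREFIX:-example}"   # assumption: your org prefix
OUTPUTS_BUCKET="gs://${FAST_PREFIX}-prod-iac-core-outputs-0"

# List the objects the stage needs from the outputs bucket.
stage_inputs() {
  echo "${OUTPUTS_BUCKET}/providers/3-security-providers.tf"
  echo "${OUTPUTS_BUCKET}/tfvars/0-globals.auto.tfvars.json"
  echo "${OUTPUTS_BUCKET}/tfvars/0-bootstrap.auto.tfvars.json"
  echo "${OUTPUTS_BUCKET}/tfvars/1-resman.auto.tfvars.json"
}

# Copy each file into the current directory, then init and apply.
run_stage() {
  stage_inputs | while read -r uri; do
    gcloud storage cp "$uri" ./
  done
  terraform init
  terraform apply
}
```

Running `stage_inputs` on its own previews exactly which objects will be copied before anything touches the working directory; `run_stage` performs the copies and the Terraform run.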
+
+**Congratulations, you have successfully deployed Stellar Engine! For further
+hardening of the environment, please see the** [**Stellar Engine Security Best
+Practices Guide**](security-best-practices.md).
+
+## Appendices
+
+### Creating a new Google Cloud Org
+
+1. Create a basic Cloud Identity account
+   1. You must first log into the Google Admin console, and then the Cloud
+      console, and wait approximately 2 minutes for the org to be provisioned
+2. Complete domain name verification
+   1. This depends on your DNS provider
+3. Enable the account in GCP
+
+### Billing Accounts
+
+1. Create a billing account
+
+
+### Additional Notes
+
+- To add or modify tenants, rerun stage 1 with the updated tenant info
+ - You may need to re-pull the variables files
+- When modifying modules is necessary, please copy the entire module over, and
+ use the naming convention \ to avoid merge conflicts when
+ periodic updates are pulled in from the CFF
+- If you receive KMS key errors during the build, you may have to manually
+  re-enable the keys in [KMS
+  Management](https://console.cloud.google.com/security/kms/keyrings). If you
+  receive additional errors, please wait ~1 minute and rerun
+  **terraform apply**
+
diff --git a/docs/path-to-authorization.md b/docs/path-to-authorization.md
new file mode 100644
index 000000000..d0782fe39
--- /dev/null
+++ b/docs/path-to-authorization.md
@@ -0,0 +1,256 @@
+
+# Path to Authorization
+
+_Impact Level 5 (H-H-X) & FedRAMP High_
+
+**Created Date:**
+
+**Last Modified:**
+
+**Google POC(s):** stellar-engine@google.com
+
+**Version:** 1.0
+
+**Purpose:** The purpose of this document is to discuss the path to
+authorization through the use of the Risk Management Framework along with
+Stellar Engine and ATO-Ready Deployment Blueprints.
+
+**Background:** The Risk Management Framework (RMF) provides a comprehensive,
+flexible, and repeatable seven-step process designed to help organizations
+manage their specific information security and privacy risks. All seven steps
+are essential for the successful implementation of the RMF.
+
+Stellar Engine accelerates Google Cloud deployments for Public Sector customers
+by providing reusable Infrastructure as Code (IaC) and cybersecurity
+documentation. This enables both customers and Independent Software Vendors
+(ISVs) to deploy solutions more quickly and achieve Authorization to Operate
+(ATO) on Google Cloud with greater efficiency.
+
+By leveraging the RMF alongside Stellar Engine’s tools, organizations can
+streamline their security and compliance efforts, ensuring faster, more secure
+deployments in the cloud.
+
+**RMF & Stellar Engine**:
+
+_Red Text indicates MAJOR ARTIFACTS for ATO submission; Blue Text indicates
+AO ACTIONS_
+
+[**Step 0;
+Prepare**](https://csrc.nist.gov/Projects/risk-management/about-rmf/prepare-step)**:**
+Essential activities to **prepare** the organization to manage security and
+privacy risks
+
+- Designate an individual, or individuals, who will be assigned the task of
+ executing the Risk Management Framework.
+ - Roles and responsibilities may be assigned to personnel internal or
+ external to your organization.
+- Create a [risk management
+ strategy](https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-39.pdf)
+ for the organization that articulates your organizational risk tolerance.
+ - Understand and document specific assumptions, constraints, risk
+ tolerances, priorities, and trade-offs.
+ - Make strategic-level decisions on how to manage cybersecurity and
+ privacy risk.
+ - There is no “correct level” of risk tolerance. The degree of risk
+ tolerance is generally based on organizational culture, could be
+ different for different types of losses or compromises, and can be
+ influenced by risk tolerance of executives.
+- Implement a [continuous monitoring
+ strategy](https://nvlpubs.nist.gov/nistpubs/legacy/sp/nistspecialpublication800-137.pdf)
+ for your organization to monitor security and privacy risk posture.
+ - The strategy articulates frequency of control monitoring and how
+ monitoring is to be conducted.
+- Determine the scope of protection for the system and what falls into that
+ scope.
+ - Authorization boundaries establish the scope of systems to be protected,
+ managed, and authorized for operation or use.
+ - [Impact Level 5 Authorization
+ Boundary](https://drive.google.com/drive/folders/1BLtoC59BMUueC9uwtWjBymJuQg2ku2m3)
+ - [FedRAMP High Authorization
+ Boundary](https://drive.google.com/drive/folders/16zsUdmzq_fK1BRFCrjgxsFRM23YVhjBa)
+- Regularly assess the security and privacy risks at the organization level
+ and system level. Update risk assessment results on an on-going basis.
+
+[**Step 1;
+Categorize**](https://csrc.nist.gov/Projects/risk-management/about-rmf/categorize-step):
+**Categorize** the system and information processed, stored, and transmitted
+based on the impact analysis
+
+- [Categorize](https://csrc.nist.gov/pubs/sp/800/60/v1/r1/final) each system
+ based on the impact to the organization if the confidentiality,
+ availability, or integrity were to become compromised.
+ - [FIPS 199](https://drive.google.com/drive/u/0/folders/1CMLkSzmdJTtld5-R7Ttks2uSGvv79Ikr)
+ - [System Security Plan
+ Templates](https://drive.google.com/drive/u/0/folders/1qV0Pu9EDnCEJpk0iKDodU_69BN96hEVr)
+
+[**Step 2;
+Select**](https://csrc.nist.gov/Projects/risk-management/about-rmf/select-step):
+**Select** the set of NIST SP 800-53 controls to protect the system based on
+risk assessment(s)
+
+- Now that you have categorized the systems and assets, select the appropriate
+ controls needed for protection.
+ - The control baseline is a set of controls you can implement to meet
+ strategic, legal, regulatory, or contractual security and privacy
+ requirements and manage risk.
+ - What security and privacy controls are needed to satisfy the
+ organization’s security and privacy requirements and to adequately
+ manage risk?
+ - For our initial selection of controls, should we use a baseline
+ (pre-defined) control selection approach, or should we select our own
+ controls?
+- After selecting an appropriate control baseline, tailor the controls to
+ address the specific security and privacy requirements for the organization.
+- Develop and implement a system-level strategy for monitoring control
+ effectiveness.
+ - This strategy defines how changes to the system and environment of
+ operation are to be monitored, how risk assessments are conducted, and
+ the reporting requirements.
+ - How effective are the controls we have implemented? What is the
+ frequency in which the controls are monitored?
+ - [Security Control Traceability Matrix (SCTM)
+ Templates](https://drive.google.com/drive/u/0/folders/13SahM7cIrE_jeA2G103yhGfpeXlBaZIq)
+ (IL5 H-H-X and FedRAMP High Baselines)
+ - [Policies and Procedures
+ Templates](https://drive.google.com/drive/u/0/folders/1qnZ0N2BrRd8fF5u4h06H2Bbd4TxvLJvl)
+ (IL5 HHX and FedRAMP High Baselines)
+
+[**Step 3;
+Implement**](https://csrc.nist.gov/Projects/risk-management/about-rmf/implement-step):
+**Implement** the controls and document how controls are deployed
+
+- Now that you have categorized systems by their risks and selected
+  appropriate controls, it is time to implement them.
+ - Have the security and privacy controls been implemented or is there an
+ implementation plan in place?
+- Update security and privacy plans to document necessary changes.
+ - It’s not always feasible to implement controls as planned. Document
+ necessary revisions that reflect how the control is implemented.
+
+[**Step 4;
+Assess**](https://csrc.nist.gov/Projects/risk-management/about-rmf/assess-step):
+**Assess** to determine if the controls are in place, operating as intended, and
+producing the desired results
+
+- Select an individual or team responsible for conducting a control
+ assessment.
+ - Organizations can conduct self-assessments of controls or obtain the
+ services of an independent assessor.
+- Develop, review, and approve plans to assess implemented controls.
+- Once plans are approved, conduct control assessments using the assessment
+ plans.
+- Prepare an assessment report documenting the findings and recommendations,
+ such as plans for correcting deficiencies.
+- Prepare the plan of action and milestones, which details remediation plans
+ based on the findings and recommendations of the assessment report.
+
+[**Step 5;
+Authorize**](https://csrc.nist.gov/Projects/risk-management/about-rmf/authorize-step):
+Senior official makes a risk-based decision to **authorize** the system (to
+operate). Authorizing Officials (AOs) are executive-level leaders with demanding
+schedules, which is why they typically rely on a team for information system
+security. Each AO has a unique perspective on risk tolerance and while they are
+not always technical subject matter experts, they are experts in the business or
+mission area. To engage effectively, it’s important to translate security
+controls in a way that aligns with and supports the success of their mission.
+
+- Assemble the authorization package and submit it to the authorizing official
+ for an authorization decision.
+ - If security and privacy controls are being implemented by an external
+ provider, ensure the provider makes available the information needed for
+ your organization to make risk-based decisions.
+- The authorizing official analyzes the information in the authorization
+ package and finalizes the determination of risk to the organization.
+- The authorizing official issues an authorization decision for the
+ information system, indicating whether the system is authorized to operate
+ or not.
+
+[**Step 6;
+Monitor**](https://csrc.nist.gov/Projects/risk-management/about-rmf/monitor-step):
+Continuously **monitor** control implementation and risks to the system
+
+- Monitor the system and environment of operation for changes that impact
+ security and privacy.
+- Using the results of the ongoing monitoring activities, risk assessments,
+ and outstanding items in plans of action and milestones, determine the
+ appropriate risk response and implement.
+- Maintain ongoing communication with organizational leadership to convey the
+ current security and privacy posture of the organization.
+
+**Additional Artifacts for an ATO:**
+
+- [Privacy Impact Assessment
+ (PIA)](https://drive.google.com/drive/u/0/folders/18N1nBCZOaV2peJI4cEDJ0KSNyTo3s32o)
+- [PII Confidentiality Impact Level
+ (PCIL)](https://drive.google.com/drive/u/0/folders/18N1nBCZOaV2peJI4cEDJ0KSNyTo3s32o)
+- [System of Records Notice
+ (SORN)](https://drive.google.com/drive/u/0/folders/18N1nBCZOaV2peJI4cEDJ0KSNyTo3s32o)
+- [Hardware
+ List](https://drive.google.com/drive/u/0/folders/1fH-PDmfeJxf7b8BNkJVW39EHtUhYImUm)
+- [Software
+ List](https://drive.google.com/drive/u/0/folders/1fH-PDmfeJxf7b8BNkJVW39EHtUhYImUm)
+- [Ports, Protocols, and Services Management
+ (PPSM)](https://drive.google.com/drive/u/0/folders/1cvnyqHlwMINoxU2sZ61OQRWOKetSauuF)
+
+**Things to Consider:**
+
+- Team members should be U.S. Citizens.
+- In addition to documentation, you may be required to ensure compliance with
+ STIGs and ACAS scans.
+- ATOs are for a set time frame with a max of 3 years, but that does not mean
+ the work stops; packages will need to be maintained throughout the lifecycle
+ of the ATO. Without continuous updates, there will be a herculean effort to
+ make updates to address major changes, security requirement updates, or
+ CVEs.
+- An ATO with one agency/program does not necessarily transfer to another
+ agency program. ATOs are agency specific, however, there is potential for
+ reciprocity. Many AOs will accept reciprocity if the application, system, or
+ component of the system was authorized by another government official,
+ especially within the same agency or within DoD.
+
+**Example Work Breakdown Structure (WBS) for ATO:**
+
+| WBS #: | Action: |
+| :---------------------- | :--------------------------------------------------------------------------------------------------------------------- |
+| 1.0 | Conduct kick-off meetings with stakeholders, including the Authorizing Official and their team. |
+| 1.1                     | Roadmap with schedule, milestones, and responsibilities for the process                                                 |
+| 1.2 | Obtain access to system accounts (networks, eMASS, etc.). |
+| 1.3 | Build ATO Deliverables |
+| 1.3.1 | Build Initial Artifacts: Architecture Diagram; HW/SW List; System Security Plan |
+| 1.3.2 | Determine categorization per FIPS 199 |
+| 1.3.3 | Select controls based upon NIST SP 800-53 rev 4 or rev 5 |
+| 1.3.4 | Assist team with implementation of controls |
+| 1.3.5 | Write policies and procedures (Configuration Management, Incident Response Plan, Continuous Monitoring Strategy, etc.) |
+| 1.3.6 | Perform self-assessment, draft POA&M |
+| 1.3.7 | Submit package to AO, answer questions, and provide support as needed |
+| 1.4 | AO awards ATO |
+
+**References**
+
+- [NIST SP 800-37 rev 2](https://csrc.nist.gov/pubs/sp/800/37/r2/final), _Risk
+ Management Framework for Information Systems and Organizations: A System
+ Life Cycle Approach for Security and Privacy_
+  - Describes the RMF and provides guidelines for applying the RMF to
+    information systems and organizations
+- [Federal Information Processing Standards
+ (FIPS) 199](https://csrc.nist.gov/pubs/fips/199/final), _Standards for
+ Security Categorization of Federal Information and Information Systems_
+ - Standard for categorizing information systems according to concerns for
+ confidentiality, integrity, and availability. Used with SP 800-60, Guide
+ for Mapping Types of Information and Information Systems to Security
+ Categories
+- [FIPS 200](https://csrc.nist.gov/pubs/fips/200/final), _Minimum Security
+ Requirements for Federal Information and Information Systems_
+ - Provides a risk-based process for selecting the security controls
+ necessary to satisfy the minimum requirements for information and an
+ information system
+- NIST SP 800-53 [rev 4](https://csrc.nist.gov/pubs/sp/800/53/r4/upd3/final)
+ and [rev 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final), _Security
+ and Privacy Controls for Federal Information Systems and Organizations_
+ - Catalog of security and privacy controls for information systems and
+ organizations to protect against a diverse set of threats and risks.
+- [Stellar Engine Technical Design
+ Document](https://docs.google.com/document/d/15WMwslyCrkmuI7EutGBd7YXH3K8P3KrwzLOGcv-W4t8/edit?resourcekey=0-mjoA_PGM2MkIMPpr75SQbQ&tab=t.0)
+
diff --git a/docs/scc-deployment-guide.md b/docs/scc-deployment-guide.md
new file mode 100644
index 000000000..e76e9f887
--- /dev/null
+++ b/docs/scc-deployment-guide.md
@@ -0,0 +1,74 @@
+# Stellar Engine: Security Command Center (SCC) Deployment Guide
+
+Procedures to deploy Security Command Center into a Stellar Engine deployment
+
+## 1.0 - Getting Started
+
+This document follows the information and steps as outlined in the following Google Cloud Security Command Center articles to enable SCC for a Stellar Engine Deployment.
+
+- [Security Command Center Overview](https://cloud.google.com/security-command-center/docs/security-command-center-overview)
+- [Overview of Organization-Level Activation](https://cloud.google.com/security-command-center/docs/activate-scc-overview#overview_of_organization-level_activation)
+- [Activate the Security Command Center Standard/Premium Tier for an Organization](https://cloud.google.com/security-command-center/docs/activate-scc-for-an-organization)
+
+## 2.0 - Grant IAM Permissions
+
+Grant the appropriate IAM roles to set up the Security Command Center service.
+
+1. Navigate to [IAM & Admin](https://console.cloud.google.com/iam-admin/iam)
+2. In the **Project Selector**, select your organization
+3. If not already added, add your user to the following GCP groups that have the appropriate IAM roles already assigned:
+
+**Resources**
+- [Set Up Permissions](https://cloud.google.com/security-command-center/docs/activate-scc-for-an-organization#set_up_permissions)
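
For scripted setups, equivalent grants can be sketched with `gcloud`. This is an assumption-laden sketch, not part of the official activation guide: the organization ID, group address, and role list are placeholders (see the linked Set Up Permissions page for the authoritative roles), and `DRY_RUN=1` (the default) only prints the commands.

```shell
# Dry-run sketch of granting SCC setup roles at the organization level.
ORG_ID="${ORG_ID:-123456789012}"                        # hypothetical org ID
GROUP="${GROUP:-group:gcp-security-admins@example.com}" # hypothetical group
DRY_RUN="${DRY_RUN:-1}"

# Echo the command in dry-run mode, execute it otherwise.
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

grant_scc_roles() {
  for role in roles/securitycenter.admin \
              roles/resourcemanager.organizationAdmin; do
    run gcloud organizations add-iam-policy-binding "$ORG_ID" \
      --member="$GROUP" --role="$role"
  done
}

grant_scc_roles
```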
+
+## 3.0 - Modify Resource Service Usage Policy
+
+**Note**: This step is for evaluating SCC within an IL5 environment since SCC is not yet approved for the IL5 compliance regime.
+
+Enable the Security Command Center APIs for the **StellarEngine-** folder under the Restrict Resource Service Usage Organizational Policy.
+
+1. Navigate to [Organizational Policies - Restrict Resource Service Usage](https://console.cloud.google.com/iam-admin/orgpolicies/gcp-restrictServiceUsage)
+2. In the **Project Selector**, select the **StellarEngine-** folder
+3. Click **Manage Policy**
+4. Do not modify the radio buttons; the following should already be selected:
+ - **Policy Source**: Override Parent’s Policy
+ - **Policy Enforcement**: Replace
+
+**IMPORTANT**: Ensure that **Replace** is selected instead of **Merge with Parent**
+
+5. Scroll down to the **Rules** section and expand the **Allow** list
+6. Scroll down to the bottom and click **Add Value** twice
+7. Copy and paste the following APIs into each of the text fields:
+ - `securitycenter.googleapis.com`
+ - `securitycentermanagement.googleapis.com`
+8. Click **Done**
+9. Click **Set Policy**
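
The console steps above can also be expressed declaratively. The following sketch (with a hypothetical folder ID) generates a policy file for `gcloud org-policies set-policy`; note that with **Replace**, the file must carry your folder's full existing allow list, which is elided here.

```shell
# Sketch: declarative Restrict Resource Service Usage policy for the folder.
FOLDER_ID="${FOLDER_ID:-000000000000}"  # hypothetical StellarEngine folder ID

# Emit the policy document; extend allowedValues with your existing list.
make_policy() {
  cat <<EOF
name: folders/${FOLDER_ID}/policies/gcp.restrictServiceUsage
spec:
  inheritFromParent: false
  rules:
    - values:
        allowedValues:
          # ...existing allowed services go here...
          - securitycenter.googleapis.com
          - securitycentermanagement.googleapis.com
EOF
}

make_policy > restrict-service-usage.yaml
# Apply with: gcloud org-policies set-policy restrict-service-usage.yaml
```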
+
+## 4.0 - Activate Security Command Center
+
+1. Navigate to [Security Command Center](https://console.cloud.google.com/marketplace/product/google-cloud-platform/cloud-security-command-center-premium)
+2. In the **Project Selector**, select your organization
+3. Click **Go to Security Command Center**
+4. Click **Get Security Command Center**
+5. Select a tier
+ - We have been evaluating the Premium tier
+6. Click **Next**
+7. Accept the default services. All services are enabled by default.
+8. Click **Next**
+9. Under Data Residency, select **Enable Data Residency**
+ - **IMPORTANT: Data Residency is required for FedRAMP High compliance**
+10. Select the **Grant Roles Automatically** radio button
+11. Click **Grant Roles**
+12. Click **Next** once the roles have successfully been granted.
+13. Click **Finish** once you have reached the **Ready to Complete Setup** screen
+14. Navigate to [Compliance Center](https://console.cloud.google.com/security/command-center/compliance)
+15. In the **Project Selector**, select your organization
+16. Wait about **24 hours** for the scans to complete and for the reports to be generated
+
+## Appendix
+
+### Resources
+- [Security Command Center Overview](https://cloud.google.com/security-command-center/docs/security-command-center-overview)
+- [Overview of Organization-Level Activation](https://cloud.google.com/security-command-center/docs/activate-scc-overview#overview_of_organization-level_activation)
+- [Activate the Security Command Center Standard/Premium Tier for an Organization](https://cloud.google.com/security-command-center/docs/activate-scc-for-an-organization)
+- [Set Up Permissions](https://cloud.google.com/security-command-center/docs/activate-scc-for-an-organization#set_up_permissions)
diff --git a/docs/security-best-practices.md b/docs/security-best-practices.md
new file mode 100644
index 000000000..5b7e91ef6
--- /dev/null
+++ b/docs/security-best-practices.md
@@ -0,0 +1,537 @@
+# SE Security Best Practices Guide
+
+**Created Date:** Nov 21, 2024
+
+**Google POC(s):** stellar-engine@google.com
+
+**Version:** 1.0.2
+
+**Recent changes**: Updated formatting, TOC, and added
+essentialcontacts.managed.allowedContactDomains information
+
+**Purpose:** The purpose of this document is to outline recommended actions and
+procedures to help organizations effectively protect their systems, data, and
+infrastructure from cyber threats, while using Stellar Engine.
+
+**Background:** Stellar Engine is designed to facilitate rapid deployment and
+operation of services within a secure Google Cloud environment, specifically
+targeting Department of Defense (DoD) Impact Level 5 (IL5) and FedRAMP High
+Authorization to Operate (ATO) requirements. Leveraging Infrastructure as Code
+(IaC), Stellar Engine utilizes bootstrap scripts to provision a baseline
+environment within an Assured Workloads folder. This foundational environment
+allows users to selectively deploy Google Cloud and approved third-party
+services based on their specific needs, offering flexibility while maintaining a
+standardized security posture.
+
+To proactively identify and mitigate potential security vulnerabilities within
+Stellar Engine, Mandiant conducted a penetration test. This assessment focused
+on emulating real-world attacker tactics, techniques, and procedures (TTPs) from
+the perspective of an authenticated user on a Google Compute Engine instance.
+The primary objective was to attempt privilege escalation and lateral movement
+within the environment.
+
+Mandiant’s testing encompassed 18 key Google Cloud resources, aiming to identify
+and close any security gaps that could impede the attainment of an ATO. The
+findings and recommendations presented in this Security Best Practices guide are
+derived from the penetration test report and aim to enhance Stellar Engine’s
+ability to prevent, detect, and contain potential threats. These best practices
+will assist in bolstering the security posture of Stellar Engine and contribute
+to a successful ATO.
+
+# Identity and Access Management
+
+Stellar Engine's Identity and Access Management (IAM) security best practices
+prioritize least privilege and Infrastructure-as-Code (IaC) using Terraform.
+Human access to create or modify cloud resources is limited to tightly
+controlled development environments. The Cloud Resource Manager service account
+must be used for terraform apply actions within the CI/CD pipeline.
+
+Access must be granted to _groups_, not _users_, based on curated,
+least-privilege roles. Each microservice must utilize a unique service account
+with minimal necessary permissions. Downloadable service account keys and
+automatic role grants to default service accounts must be disabled.
+
+Role groups, mirroring development and administrative functions, must manage
+access, following a gcp-X-${tenant}-${role}@X.gov naming convention. Initial
+memberships are defined in Terraform, with ongoing management handled by
+existing IAM systems or Terraform.
+
+While both individual Google Cloud accounts (preferred) and service accounts can
+execute the bootstrap phase, an individual account (with gcp_org_admins group
+membership) must be used. This account shall be tightly controlled and must be
+disabled when not in use. Specific organization and folder-level roles are
+required for bootstrap execution, including Organization Admin/Viewer,
+Organization Policy Admin, Billing Admin/User, Folder Creator, Access Context
+Manager Admin/User, and Security Admin.
+
+The ISV must follow the principle of least privilege and configure IAM roles
+that align with the needs of the system.
+
+## _Human Access_
+
+- Access must be granted to groups, not individual users.
+- Access will be granted based on a minimalized set of curated roles.
+
+## _Machine Access_
+
+- Individual Service Accounts will be defined for each microservice.
+- Downloadable Service Account keys will not be used, and their creation
+  should be disabled by organization policy.
+- Access will be granted based on the principle of least privilege, with only
+  the functionality necessary for the microservice.
+- The iam.automaticIamGrantsForDefaultServiceAccounts organization policy
+  should be enforced to disable automatic role grants to default service
+  accounts; this removes the Editor role from the default service accounts.
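
Both organization policies above can be enforced from the CLI. A minimal sketch, assuming Organization Policy Administrator rights and a placeholder organization ID; `DRY_RUN=1` (the default) only prints the commands.

```shell
# Dry-run sketch of enforcing the service-account hardening policies.
ORG_ID="${ORG_ID:-123456789012}"   # hypothetical organization ID
DRY_RUN="${DRY_RUN:-1}"

# Echo the command in dry-run mode, execute it otherwise.
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

harden_default_service_accounts() {
  # Block creation of downloadable service account keys.
  run gcloud resource-manager org-policies enable-enforce \
    iam.disableServiceAccountKeyCreation --organization="$ORG_ID"
  # Stop automatic Editor grants to the default service accounts.
  run gcloud resource-manager org-policies enable-enforce \
    iam.automaticIamGrantsForDefaultServiceAccounts --organization="$ORG_ID"
}

harden_default_service_accounts
```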
+
+## _Application Specific Groups_
+
+While Stellar Engine manages a core set of role groups, it's important to
+acknowledge that other platforms and tenant-specific environments may require
+additional role groups not explicitly defined within Stellar Engine. These
+externally managed groups, such as those found in a secured data warehouse
+(e.g., Data Analyst, Data Engineer, Network Administrator, Security
+Administrator, Security Analyst), are outside the immediate scope of Stellar
+Engine. However, understanding their potential overlap and interaction with
+Stellar Engine's role management is crucial for a holistic view. Future
+integrations or extensions may consider mapping or accommodating these external
+groups.
+
+## User Specific IAM
+
+- Navigate to
+- In the console search bar type and select **_IAM_**
+
+- Select the **_Allow_** and **_View By Principals_**
+- Click **_Grant Access_**
+
+- Insert the email address into the **_New principals_** field
+- Assign the appropriate roles as needed. **Ensure you select roles based on
+  the principle of least privilege.**
+
+- Click **_SAVE_** when all necessary roles have been selected.
+
+## Group IAM
+
+- Navigate to
+- In the console search bar type and select **Groups**.
+
+You will see pre-configured IAM groups that are part of Stellar Engine by
+default.
+
+- By clicking on the three dots of a particular group, you can add users to
+ the specified group
+
+- To create additional groups, click **_Create_**
+
+- Fill out **Group Name**, **Group email address**, _and_ **Group
+ description**.
+- Click **_SAVE_**
+- Navigate to **_IAM_**
+
+- Select the **_Allow_** and **_View By Principals_**
+- Click **_Grant Access_**
+
+- Insert the new group email address into the **_New principals_** field
+- Assign the appropriate roles as needed. **Ensure you select roles based on
+  the principle of least privilege.**
+
+- Click **_SAVE_** when all necessary roles have been selected.
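
The same group-based grant can be made from the CLI. This sketch uses placeholder project, group, and role values (choose a curated least-privilege role for your case); `DRY_RUN=1` (the default) only prints the command.

```shell
# Dry-run sketch of binding a role to a group (never a user) on a project.
PROJECT_ID="${PROJECT_ID:-my-project}"          # hypothetical project
GROUP="${GROUP:-group:gcp-devops@example.com}"  # hypothetical group
ROLE="${ROLE:-roles/viewer}"                    # least-privilege role of choice
DRY_RUN="${DRY_RUN:-1}"

# Echo the command in dry-run mode, execute it otherwise.
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

grant_group_role() {
  run gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member="$GROUP" --role="$ROLE"
}

grant_group_role
```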
+
+**References**
+
+**_NIST SP 800-53 revision 5, Security and Privacy Controls for Federal
+Information Systems and Organizations_**
+
+- **SA-8(14) Security and Privacy Engineering Principles | Least Privilege**
+ - Implement the security design principle of least privilege
+
+# Group Permission Viewing Restrictions
+
+In most organizations a healthy amount of opacity is desired. Most
+organizations will want to disable general viewership of group members. For
+example, an organization may wish to restrict developers' general access to
+the membership list of a super-admin group.
+
+- With a Super User account, visit
+  [https://admin.google.com](https://admin.google.com)
+- Via the dropdown menu, go to Directory > Groups
+- Select the group whose membership visibility you wish to restrict
+- Access Settings > Edit (hover over the access type and click the pencil)
+- Change the radio button to Restricted.
+
+# Multi Factor Authentication
+
+Multi-Factor Authentication (MFA) must be enforced for all users (both
+privileged and non-privileged) to significantly enhance account security. While
+not enforced by default in Stellar Engine due to logistical reasons, enabling
+MFA is crucial for IL5 compliance and aligns with best practices recommended by
+Google and NIST. Without MFA, compromised user credentials (obtained via methods
+like password spraying or social engineering) grant an attacker unrestricted
+access to internal resources, potentially facilitating lateral movement.
+
+The ISV must implement MFA when creating user accounts.
+
+- With a Super User account, visit
+- Check the box **_Allow users to turn on 2-Step Verification_**
+- Select **_Enforcement On_**
+- Configure **_New user enrollment period_**, **_Frequency_**, and
+ **_Methods_**, as needed
+
+- Click **_SAVE_**
+
+**References**
+
+**_NIST SP 800-53 revision 5, Security and Privacy Controls for Federal
+Information Systems and Organizations_**
+
+- **IA-2(1) Identification and Authentication (organizational users) |
+ Multi-Factor Authentication for Privileged Accounts**
+ - Implement multi-factor authentication for access to privileged accounts
+- **IA-2(2) Identification and Authentication (organizational users) |
+ Multi-Factor Authentication for Non-Privileged Accounts**
+ - Implement multi-factor authentication for access to non-privileged
+ accounts
+
+**_NIST SP 800-63-3, Digital Identity Guidelines_**
+
+# GCP Cloud Shell Enabled
+
+Cloud Shell is not currently supported at IL2, IL4, and IL5 and should be
+disabled. For a list of compliant products, see
+[Supported products](https://cloud.google.com/assured-workloads/docs/supported-products).
+
+Cloud Shell provides an opportunity for execution within the GCP Environment. An
+attacker can leverage this resource to execute a payload and gain persistence in
+the environment. The Cloud Shell instance is not a Compute Engine instance
+hosted by the GCP customer, but by Google. Although the compromised instance
+isn't directly involved in critical projects, it lets attackers proxy traffic
+through Google and bypass system-level logging on the host.
+
+The ISV is responsible for disabling GCP Cloud Shell access.
+
+- With a Super User account, visit
+
+
+
+
+- Scroll down and click **_Google Cloud Platform_**
+- Click **Cloud Shell Access Settings**
+
+- Uncheck **_Allow access to Cloud Shell_**
+- Click **_SAVE_**
+
+
+**References**
+
+**_Google Cloud Shell Documentation_**
+
+- **_Disable or reset Cloud Shell_**
+
+# Detection, Alerting, and Logging; Security Information and Event Management is Not Segmented
+
+Google Cloud collects several types of logs for auditing and monitoring:
+
+- **Audit Logs**: Record actions within the cloud environment. There are three
+ types:
+ - **Admin Activity**: Logs changes to resource configurations (e.g.,
+ creating VMs, changing permissions).
+ - **Data Access**: Logs reads of resource configuration and metadata, as
+ well as user actions that create, modify, or read user data.
+ - **System Event**: Logs changes made by Google's systems, not directly by
+ users.
+- **Other Logs**: Provide network and access information:
+  - **VPC Flow Logs**: Sample network traffic to and from your VMs. Useful
+    for network monitoring and security analysis.
+  - **Firewall Rule Logs**: Show how firewall rules are working and how many
+    connections they affect.
+  - **Access Transparency Logs**: Detail actions taken by Google staff when
+    they access your environment.
+
+The ISV must bring a SIEM/SOAR solution that lives outside the project in
+order to meet organizational and compliance requirements.
+
+The ISV shall implement their own SIEM to proactively monitor their software’s
+infrastructure and the usage patterns within the software. This SIEM system may
+integrate with existing SIEM tools, but the ISV shall maintain its own dedicated
+instance for comprehensive internal security oversight. The SIEM system must
+provide real-time alerts and notifications based on configured rules and
+thresholds, enabling the ISV to promptly respond to potential threats or
+anomalies.
+
+The ISV must segment the SIEM in another project and on a separate VPC from
+where it is collecting data.
+
+**References**
+
+**_NIST SP 800-53 revision 5, Security and Privacy Controls for Federal
+Information Systems and Organizations_**
+
+- **Audit and Accountability (AU)**
+- **Assessment, Authorization, and Monitoring (CA)**
+- **System and information Integrity (SI)**
+
+# Data Exfiltration Possible from Cloud Storage
+
+Multi-Factor Authentication (MFA) must be enforced for all users (both
+privileged and non-privileged) to significantly enhance account security. While
+not enforced by default in Stellar Engine due to logistical reasons, enabling
+MFA is crucial for IL5 compliance and aligns with best practices recommended by
+Google and NIST. Without MFA, compromised user credentials (obtained via methods
+like password spraying or social engineering) grant an attacker unrestricted
+access to internal resources, potentially facilitating lateral movement.
+
+ISV must implement MFA when creating user accounts.
+
+Referring to **_Multi-Factor Authentication_**
+
+While implementing robust security measures, Stellar Engine shall utilize
+Context-Aware Access to restrict resource access based on user context.
+Additionally, the implementation of VPC Service Controls on data storage
+buckets is strongly recommended for enhanced data security. However, it is
+important to note that Identity-Aware Proxy (IAP) is not compliant with IL5,
+and VPC Service Controls are currently unavailable within an IL5 environment.
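+
+Where Context-Aware Access is available, an access level can be defined with
+Access Context Manager and referenced from IAM bindings or VPC Service
+Controls perimeters. A minimal sketch, assuming a placeholder access policy
+ID and IP range:
+
+```hcl
+# Illustrative only: an access level that matches requests from a
+# corporate network range.
+resource "google_access_context_manager_access_level" "corp_network" {
+  parent = "accessPolicies/0123456789"
+  name   = "accessPolicies/0123456789/accessLevels/corp_network"
+  title  = "corp_network"
+
+  basic {
+    conditions {
+      ip_subnetworks = ["203.0.113.0/24"]
+    }
+  }
+}
+```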
+
+**References**
+
+**_NIST SP 800-53 revision 5, Security and Privacy Controls for Federal
+Information Systems and Organizations_**
+
+- **IA-2(1) Identification and Authentication (organizational users) |
+ Multi-Factor Authentication for Privileged Accounts**
+ - Implement multi-factor authentication for access to privileged accounts
+- **IA-2(2) Identification and Authentication (organizational users) |
+ Multi-Factor Authentication for Non-Privileged Accounts**
+ - Implement multi-factor authentication for access to non-privileged
+ accounts
+
+**_NIST SP 800-63-3, Digital Identity Guidelines_**
+
+**_Context Aware Access_**
+
+
+**_VPC Service Controls_**
+
+- [https://cloud.google.com/vpc-service-controls/docs/overview#how-vpc-service-controls-works](https://cloud.google.com/vpc-service-controls/docs/overview#how-vpc-service-controls-works)
+
+# Essential Contacts
+
+Essential Contacts, a core service within Google Cloud, is designed to provide
+timely and relevant notifications to designated personnel regarding critical
+aspects of resources and projects. This service allows organizations to assign
+specific individuals or groups to receive alerts on a wide range of topics,
+including billing, security, technical matters, and legal obligations. Properly
+leveraging Essential Contacts is essential for mitigating risks, ensuring
+compliance, and fostering a proactive approach to maintaining a secure and
+well-managed Google Cloud environment.
+
+ISV must configure essential contacts.
+
+## Managing Essential Contacts Domain Restrictions
+
+To enhance security, Google Cloud may enforce the
+`essentialcontacts.managed.allowedContactDomains` organization policy
+constraint. This policy restricts the email domains that can be used when
+adding [Essential
+Contacts](https://cloud.google.com/resource-manager/docs/managing-notification-contacts)
+for important notifications.
+
+By default, for organizations created on or after June 26, 2025, the
+organization's own domain is automatically included in the allowed list.
+However, you might find that the list of allowed domains is empty or does not
+include all necessary domains. This can occur if the organization was created
+before this date or if an Infrastructure as Code (IaC) tool like Terraform is
+managing and applying a stricter version of the policy.
+
+If a domain is not on the allowed list, you will be blocked from adding contacts
+with email addresses from that domain.
+
+**Action**:
+
+1. Verify the current state of the
+   `essentialcontacts.managed.allowedContactDomains` policy for your
+   organization.
+2. If necessary, update the policy to add any required contact domains. Each
+   domain entry must be prefixed with an "@" symbol (e.g., @myorgdomain.com,
+   @partnerdomain.com).
+3. **Important**: If you use Terraform, Latchkey, or another IaC tool to manage
+ organization policies, ensure you make these changes within your IaC
+ configuration files. Manual changes made via the Cloud Console or gcloud may
+ be overwritten by your IaC automation.
+
+This ensures that only email addresses from approved domains can be designated
+as essential contacts, while allowing you to configure the domains your
+organization trusts.
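+
+As a sketch, the older list-constraint form of this policy
+(`essentialcontacts.allowedContactDomains`) can be managed in Terraform as
+follows; the organization ID and domains are placeholders:
+
+```hcl
+# Illustrative only: restrict Essential Contacts to approved domains.
+resource "google_org_policy_policy" "contact_domains" {
+  name   = "organizations/123456789012/policies/essentialcontacts.allowedContactDomains"
+  parent = "organizations/123456789012"
+
+  spec {
+    rules {
+      values {
+        allowed_values = ["@myorgdomain.com", "@partnerdomain.com"]
+      }
+    }
+  }
+}
+```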
+
+## Add Essential Contacts
+
+- Navigate to
+- Select the appropriate organization or project where Essential Contacts will
+ be configured. Essential Contacts configured at the organizational level
+ will be inherited by sub-folders and projects but are able to be overridden
+ at lower levels
+
+- In the console search bar type and select **_Essential Contacts_**
+
+- Click **_Add Contact_**
+
+- Enter the email address of the individual or group to configure
+ notifications.
+- Select the Notification Categories to send corresponding notifications for:
+  - Suspension: messages related to imminent suspension
+  - Security: security/privacy issues, notifications, and vulnerabilities
+  - Technical: technical events and issues, such as outages, errors, and bugs
+  - Billing: billing and payment notifications, price updates, errors,
+    credits
+  - Legal: enforcement actions, regulatory compliance, government notices
+  - Product Updates: new versions, product terms updates, deprecations
+  - All: all notifications from every other category
+- Click **_SAVE_**
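+
+The same contact can also be managed declaratively; a minimal sketch,
+assuming a placeholder project and group address:
+
+```hcl
+# Illustrative only: register a security contact at the project level.
+resource "google_essential_contacts_contact" "security" {
+  parent                              = "projects/my-project-id"
+  email                               = "security-team@myorgdomain.com"
+  language_tag                        = "en-US"
+  notification_category_subscriptions = ["SECURITY"]
+}
+```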
+
+## Manage Existing Contacts
+
+- Click to **_edit_** the Essential Contact for the applicable
+  **_Notification Category_**
+
+- Click to **_EDIT_** the Essential Contact or click to **_DELETE_** the
+  Essential Contact
+
+**References**
+
+**_Managing Contacts for Notifications_**
+
+
+# Data Security
+
+Google provides many protections to GCP customers; however, security of
+workloads running in GCP is a shared responsibility.
+
+Customers are responsible for the following aspects of their application
+security:
+
+## Data
+
+Google encrypts all data communication channels that it uses to transmit data
+between services; customers are responsible for ensuring that the transmission
+of data is facilitated over an encrypted channel.
+
+Google encrypts all data on storage devices to prevent anyone with physical
+access to physical devices from being able to inspect the data contained on
+those devices. Customers can provide their own encryption keys for the
+encryption of Google Compute Engine Persistent Disks and Google Cloud Storage
+buckets.
+
+Data stored within databases is encrypted at the storage level; however,
+additional encryption is advisable at the application level to prevent
+customer users from accessing content and to limit spillage in the event of
+an intrusion.
+
+A customer may load data which may include PII and PCI into BigQuery for
+analysis. Customers are responsible for being aware of and abiding by any
+regulations regarding the use and storage of this data and are responsible for
+developing their own aggregation capabilities.
+
+## Cloud Key Management Service (KMS)
+
+Cloud KMS is a global, cloud-hosted key management service that lets
+customers manage encryption keys for cloud services the same way they would
+on-premises.
+
+Cloud KMS is a good solution if a customer needs to encrypt data at the
+application level or manage their own encryption keys for compliance or
+regulatory reasons.
+
+- **Key Rotation:** Regular rotation of the encryption key is encouraged.
+  Regular rotation will limit the amount of data protected by a single key.
+  Automatic rotation can be configured on a user-defined schedule by using
+  _gcloud_ or the _GCP Console_.
+- **Separation of Duties:** Cloud KMS should be run in its own project
+  without an owner at the project level, and instead be managed by an Org
+  Admin. The Org Admin is not able to manage or use keys, but they are able
+  to set IAM policies to restrict who has permissions for key management and
+  usage. Additionally, the ability to manage Cloud KMS should have role
+  separation from the ability to perform encryption and decryption
+  operations. Any user with management access should not be able to decrypt
+  data.
+- **Additional Authenticated Data (AAD):** It is recommended to use AAD as an
+  additional integrity check to help protect your data from a confused deputy
+  attack. Additional authenticated data is a string that is passed to Cloud
+  KMS as part of an encrypt or decrypt API call. Cloud KMS cannot decrypt
+  ciphertext unless the same AAD value is used for both encryption and
+  decryption. By default, an empty string is used for the AAD value.
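+
+The key rotation and separation-of-duties recommendations above can be
+expressed in Terraform; the project, location, and service account below are
+placeholders:
+
+```hcl
+# Illustrative only: a key ring and key with automatic 90-day rotation.
+resource "google_kms_key_ring" "keyring" {
+  project  = "kms-project-id"
+  name     = "app-keyring"
+  location = "us-east4"
+}
+
+resource "google_kms_crypto_key" "key" {
+  name            = "app-key"
+  key_ring        = google_kms_key_ring.keyring.id
+  rotation_period = "7776000s" # 90 days
+}
+
+# Separation of duties: grant encrypt/decrypt to the workload identity,
+# separately from key management roles.
+resource "google_kms_crypto_key_iam_member" "encrypter_decrypter" {
+  crypto_key_id = google_kms_crypto_key.key.id
+  role          = "roles/cloudkms.cryptoKeyEncrypterDecrypter"
+  member        = "serviceAccount:app-sa@kms-project-id.iam.gserviceaccount.com"
+}
+```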
+
+## Google Cloud Storage
+
+Google Cloud Storage identifies buckets and objects by their names. While Access
+Control Lists (ACLs) prevent unauthorized actions, anyone can attempt requests
+using bucket or object names. Even a failed request reveals whether a bucket or
+object exists. This is a potential security concern, as observing error
+responses can expose the existence of buckets and objects to unauthorized
+parties. Furthermore, bucket names themselves might hint at the data they
+contain, potentially leading to information leaks. If you're concerned about
+protecting the privacy of your bucket and object names, consider taking
+appropriate precautions, such as:
+
+- Choosing bucket and object names that are difficult to guess
+ - For example, a bucket named _stellar-engine-mybucket-gxl3_ is random
+ enough that unauthorized third parties cannot feasibly guess it or
+ enumerate other bucket names from it.
+- Ensure your default object ACLs meet your requirements before uploading
+ objects to a bucket. Setting them correctly beforehand can save significant
+ time compared to updating individual object ACLs later.
+- A straightforward and effective security practice is to segregate private
+ and public data into separate buckets, clearly labeling those intended for
+ public access.
+ - For example, _stellar-engine-mybucket-public-3vxa_.
+- **_Best Practice_**: Carefully evaluate the access control settings for each
+ Cloud Storage object before writing data. Public read access grants
+ universal read permissions and should only be used when deliberately
+ exposing data to the public internet. Once an object is made publicly
+ readable, it can be copied and disseminated, making it virtually impossible
+ to regain exclusive control over the content.
+- **_Best Practice_**: Avoid granting public write access to Cloud Storage
+ buckets. While publicly writable buckets may appear convenient, the security
+ risks are substantial. They can be easily exploited for malicious purposes,
+ including the distribution of illegal content, viruses, and malware. Bucket
+ owners bear full legal and financial responsibility for all content stored
+ within their buckets, regardless of its origin.
+- **_Best Practice_**: Utilize Signed URLs to securely share Cloud Storage
+ content with users who do not have Google accounts. Signed URLs offer a
+ controlled method for granting temporary, authenticated access to specific
+ objects without requiring users to directly authenticate with Cloud Storage.
+ Users retain control over the access type (read, write, delete) and its
+ duration.
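+
+Several of these practices can be enforced at bucket creation time; a minimal
+sketch reusing the hard-to-guess name from the example above:
+
+```hcl
+# Illustrative only: a private bucket with uniform bucket-level access
+# and public access prevention enforced.
+resource "google_storage_bucket" "private" {
+  name                        = "stellar-engine-mybucket-gxl3"
+  location                    = "US-EAST4"
+  uniform_bucket_level_access = true
+  public_access_prevention    = "enforced"
+}
+```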
+
+## Data Residency
+
+**_Best Practice_**: Implement a comprehensive data residency strategy
+leveraging the Organization Policy Service and Cloud IAM.
+
+- **Define Required Regions**: Clearly define the regions where your data is
+ permitted to reside, based on regulatory requirements and business needs.
+- **Implement Organization Policy**: Use the Organization Policy Service's
+ "resource locations" constraint to restrict the creation of new resources to
+ the approved regions. Apply this policy at the organization, folder, or
+ project level as appropriate.
+- **Integrate with Cloud IAM**: Configure Cloud IAM to control service usage,
+ preventing users from inadvertently deploying resources in non-compliant
+ regions.
+- **Audit and Monitor**: Regularly audit your Google Cloud environment to
+ ensure ongoing compliance with data residency policies.
+- **Understand Limitations**: Be aware that the "resource locations"
+ constraint only applies to newly created resources. Existing resources
+ created before the policy was implemented will continue to function in their
+ existing locations.
+
+This multi-faceted approach helps ensure your data at rest remains within the
+intended geographic boundaries.
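+
+The "resource locations" constraint from the steps above can be sketched as
+follows; the organization ID and value group are placeholders to adapt to
+your approved regions:
+
+```hcl
+# Illustrative only: restrict new resources to US locations.
+resource "google_org_policy_policy" "resource_locations" {
+  name   = "organizations/123456789012/policies/gcp.resourceLocations"
+  parent = "organizations/123456789012"
+
+  spec {
+    rules {
+      values {
+        allowed_values = ["in:us-locations"]
+      }
+    }
+  }
+}
+```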
diff --git a/docs/tdd.md b/docs/tdd.md
new file mode 100644
index 000000000..4014e2b4e
--- /dev/null
+++ b/docs/tdd.md
@@ -0,0 +1,1665 @@
+# **Stellar Engine**
+
+Technical Design Document
+
+Date updated: December 5, 2025
+
+| Created: | |
+| :------------------------------------ | :-------------------------------------------------------------- |
+| Updated: | |
+| Version: | v3.0 |
+| Most recent changes: | Updates to reflect new architecture; Administrative corrections |
+
+
+_This document is intended to be a technical working document to help manage
+the delivery of a project and is provided for illustrative purposes only. The
+activities and goals serve as guidelines and additional detail, and do not
+supersede any legal terms or conditions as defined in the customer’s written
+contract with Google._
+
+# 1.0 Purpose
+
+Extending [Cloud Foundation
+Fabric](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric), Stellar
+Engine provides compliant Google Cloud infrastructure and opinionated
+architecture patterns for supported [Assured
+Workloads](https://cloud.google.com/security/products/assured-workloads?hl=en)
+compliance regimes, including Google best practices and cybersecurity
+documentation to aid in the Federal Authorization to Operate (ATO) process.
+Stellar Engine is designed as an accelerator for technology teams supporting
+Google Cloud customer and partner engagements.
+
+Specifically, Stellar Engine:
+
+- Templatizes and automates the initial GCP foundations phases of a GPS
+ Professional Services Organization (PSO) customer engagement
+- Provides a common base on which to build GPS reference architectures and
+ solutions
+- Supports PSO customers with an Impact Level 5 (IL5) compliant Infrastructure
+ as Code (IaC) framework that they can adopt
+- Provides PSO the ability to extend the codebase to be compliant with other
+ Federal Government programs in future engagements
+- Eases the use of GCP and Terraform for GPS customers
+
+This document is not intended to inform users on Security Best Practices. For
+information on post deployment recommendations, please see the [Security Best
+Practices Guide
+(SBPG)](https://docs.google.com/document/d/1uv62Fqg73r9oJNP-NPZebpzoBom8rOgLoHkiMZPutbo/edit?tab=t.0#heading=h.gjdgxs).
+
+# 2.0 Executive Summary
+
+The supplied structure and code is intended to form a starting point for
+building your own foundation with pragmatic defaults that you can customize to
+meet your own requirements. Currently, all code is deployed manually as we
+determine a path forward that embraces automation – such as GitHub Actions,
+GitLab CI/CD, or Cloud Build – and is compliant with established IL5
+regulations.
+
+A root folder is created at the top of the organization to hold all projects
+and resources. From the root folder, tenants can be created in an Assured
+Workloads folder with a configurable compliance regime, such as FedRAMP High
+or IL5, based on the needs of the user. Each deployment initially creates a
+development environment, with the intention that users migrate to a
+production environment after the development environment has been fully built
+and tested.
+
+## 2.1 Overview
+
+Stellar Engine is provisioned through a series of bootstrap scripts to create a
+baseline environment within an Assured Workloads folder. Once the environment is
+provisioned, users can pick and choose what services to deploy based on their
+specific use case (see Section 3.4 for more details). In addition to Google
+Cloud services, the IaC provides users the flexibility to provision third-party
+offerings to operate within the Stellar Engine environment.
+
+## 2.2 Project Architecture
+
+In the _example.com_ architecture, a series of projects reside under the
+organization node, which contains resources that are used across the
+_example.com_ organization. These projects, detailed in the table below, provide
+various enterprise functions and are created through the infrastructure
+deployment pipeline.
+
+| Project | Description |
+| :------------------- | :-------------------------------------------------------------------------------------------------------------------------------- |
+| org-iac-core-0 | Contains the deployment pipeline that's used to build out the foundation components of the organization. This project should be highly restricted. |
+| org-audit-logs-0 | Provides a destination for log sinks and detective controls. |
+| org-billing-export-0 | Contains a BigQuery dataset with the organization's billing exports. (optional) |
+
+# 3.0 Cloud Foundation
+
+## 3.1 Project Organization
+
+The following policies are implemented at the organization level via
+Terraform. They are applied using a combination of Google-managed defaults
+and custom constraints defined in YAML files (`data/org-policies/` and
+`data/custom-org-policies/`).
+
+### Network Security & Isolation:
+
+- compute.vmExternalIpAccess: Deny All. VMs cannot be created with external IP
+ addresses.
+- compute.skipDefaultNetworkCreation: Enforced. Skips creation of the default
+ network in new projects.
+- compute.restrictProtocolForwardingCreationForTypes: Restricted to INTERNAL.
+ New forwarding rules must use internal IP addresses.
+- compute.restrictLoadBalancerCreationForTypes: Restricted to INTERNAL. Only
+ internal load balancers are allowed.
+- compute.disableGlobalLoadBalancing: Enforced. Global load balancing is
+ disabled.
+- compute.disableGlobalCloudArmorPolicy: Enforced. Global Cloud Armor policies
+ are disabled.
+- compute.disableGlobalSelfManagedSslCertificate: Enforced. Global
+ self-managed SSL certificates are disabled.
+- compute.restrictCloudNATUsage: Restricted to organization resources.
+- compute.restrictDedicatedInterconnectUsage: Restricted to organization
+ resources.
+- compute.restrictPartnerInterconnectUsage: Restricted to organization
+ resources.
+- compute.restrictVpcPeering: Restricted to organization resources and
+ specific allowed folders.
+- compute.setNewProjectDefaultToZonalDNSOnly: Enforced. Newly created projects
+ use Zonal DNS by default.
+- sql.restrictAuthorizedNetworks: Enforced. Prevents adding authorized
+ networks for unproxied Cloud SQL access.
+- sql.restrictPublicIp: Enforced. Restricts public IP addresses on Cloud SQL
+ instances.
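+
+As an example of how these constraints are expressed, the
+compute.vmExternalIpAccess deny-all policy above might be written as the
+following sketch (the organization ID is a placeholder; Stellar Engine itself
+applies these from YAML files):
+
+```hcl
+# Illustrative only: deny external IP addresses on all VMs.
+resource "google_org_policy_policy" "no_external_ip" {
+  name   = "organizations/123456789012/policies/compute.vmExternalIpAccess"
+  parent = "organizations/123456789012"
+
+  spec {
+    rules {
+      deny_all = "TRUE"
+    }
+  }
+}
+```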
+
+### Data Security & Encryption:
+
+- custom.kmsRotation: Enforced. Requires proper rotation period for KMS keys
+ (Custom Constraint).
+- storage.uniformBucketLevelAccess: Enforced. Buckets must use uniform
+ bucket-level access.
+- storage.publicAccessPrevention: Enforced. Prevents public access to Cloud
+ Storage buckets.
+- storage.secureHttpTransport: Enforced. Requires HTTPS for Cloud Storage.
+- appengine.disableCodeDownload: Enforced. Disables downloading source code
+ from App Engine.
+- bigquery.disableBQOmniAWS: Enforced. Disables BigQuery Omni for AWS.
+- bigquery.disableBQOmniAzure: Enforced. Disables BigQuery Omni for Azure.
+
+### IAM & Service Accounts:
+
+- iam.managed.disableServiceAccountKeyCreation: Enforced.
+- iam.managed.disableServiceAccountKeyUpload: Enforced.
+- iam.managed.preventPrivilegedBasicRolesForDefaultServiceAccounts: Enforced.
+- iam.automaticIamGrantsForDefaultServiceAccounts: Enforced. Prevents
+ automatic role grants for default service accounts.
+- iam.serviceAccountKeyExposureResponse: DISABLE_KEY. Automatically disables
+ exposed keys.
+- iam.allowedPolicyMemberDomains: Restricted to the customer ID and allowed
+ domains.
+- essentialcontacts.managed.allowedContactDomains: Restricted to allowed
+ domains.
+- essentialcontacts.allowedContactDomains: Restricted to the organization's
+ domain.
+
+### Serverless Security (Cloud Run & Functions):
+
+- cloudfunctions.allowedIngressSettings: ALLOW_INTERNAL_AND_GCLB. Restricts
+ function ingress.
+- cloudfunctions.allowedVpcConnectorEgressSettings: ALL_TRAFFIC. Forces all
+ traffic through VPC connector.
+- run.allowedIngress: internal-and-cloud-load-balancing. Restricts Cloud Run
+ ingress.
+- run.allowedVPCEgress: all-traffic. Forces all Cloud Run traffic through VPC
+ connector.
+- cloudbuild.allowedIntegrations: Restricted to github.com.
+
+### Resource Management:
+
+- resourcemanager.allowedImportSources: Restricted to the organization.
+- resourcemanager.allowedExportDestinations: Restricted to the organization.
+- resourcemanager.accessBoundaries: Restricted to the organization (Cloud
+ Console visibility).
+
+The purpose of this step is to create the resources below. The names of the
+folders can be modified in the `1-resman` folders configuration.
+
+**Folders:**
+
+- Root level folders
+  - Folder representing an organization root (e.g., Organization Business
+    Unit)
+- First level folders
+ - Common Services
+ - Test
+ - Integration
+ - Production
+- Second level folders
+ - Network - Downstream of Common Services. Network hub folder, contains
+ all the networking projects.
+ - Security - Downstream of Common Services. Centralized security services
+ folder. Contains IAC Core, Audit Logs, and Billing Export projects.
+ - Tenant - Downstream of Test/Integration/Prod folders.
+- Third Level Folders
+ - Tenant Core - Contains Tenant IAC Core project
+ - Tenant Main - Contains Tenant Main Project
+
+The structure below is an example of a nested layout conforming to the proposed
+cloud project organization.
+
+Example Organization
+
+- Organization Root Folder
+  - Network-IL5
+    - Production
+      - prod-net-landing-0
+      - prod-net-spoke-0
+    - Development
+      - dev-net-landing-0
+      - dev-net-spoke-0
+  - Security
+    - prod-security-core-0
+    - workstream-a-iac-core-0
+  - Workstream Tenant
+    - Project-A
+    - Project-B
+  - Workstream-B-IL4
+    - Workstream Core
+      - workstream-b-iac-core-0
+    - Workstream Tenant
+      - Project-C
+      - Project-D
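+
+A layout like this maps directly onto nested folder resources; a minimal
+sketch for the first two levels, with a placeholder organization ID:
+
+```hcl
+# Illustrative only: nested folders matching the example layout.
+resource "google_folder" "root" {
+  display_name = "Organization Root Folder"
+  parent       = "organizations/123456789012"
+}
+
+resource "google_folder" "network" {
+  display_name = "Network-IL5"
+  parent       = google_folder.root.name
+}
+```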
+
+## 3.2 IAM
+
+### IAM Principles
+
+These are the cybersecurity design principles that guide the IAM settings.
+
+- IAM Policy is defined by Infrastructure-as-code (IaC) and enforced by Google
+ IAM. The IaC used to define this policy is reviewed and submitted using
+ Terraform. Using this code:
+ - No human should have permissions to create or modify cloud resources in
+ any environment other than a development environment. Even in a
+ development environment, these permissions should be tightly controlled.
+ - The Cloud Resource Manager access required to execute Terraform code
+ will be assigned to a unique service account defined by Stellar Engine.
+ - This service account will only be used by the CI/CD pipeline for
+ terraform apply actions.
+ - **Note:** Broader permissions may be given to development projects to
+ accelerate the rate of development. This should only be done under
+ extreme caution, and permissions should not propagate to any environment
+ other than the development environment (e.g., testing, staging, or
+ production).
+- Human access
+ - Access must be granted to groups, not individual users.
+ - Access will be granted based on a minimalized set of curated roles.
+ - Access is granted based on the principle of least privilege, with only
+ the minimum amount of access granted to perform a function.
+- Machine access
+ - Individual Service Accounts will be defined for each microservice.
+ - Downloadable Service Account keys will not be used and their creation
+ should be disabled by organization policy.
+ - Access will be granted based on the principle of least privilege, with
+ only necessary functionality granted for the microservice.
+  - Disable automatic role grants to default service accounts
+    (`iam.automaticIamGrantsForDefaultServiceAccounts`) should be enabled as
+    an organization policy; this removes the Editor role from the default
+    service accounts.
+
+The IAM model used in GCP is illustrated in the diagram below:
+
+GCP Pre-Defined Roles will be used, custom roles are not recommended due to
+lifecycle management burdens.
+
+### Role Groups
+
+Role Groups are created corresponding to the various development and
+administrative roles needed to build and maintain the application.
+
+- Roles are identified by development and administrative teams.
+- Groups are created for each role and owned by X. The bootstrap Terraform
+ service account is an owner of each of these groups.
+ - Group naming convention: **gcp-X-${tenant}-${role}@X.gov**
+- Initial role group memberships needed for system provisioning are checked
+  into Terraform code and applied by the bootstrap Terraform service
+  accounts. Ongoing role group membership management should be implemented
+  with customers' existing IAM systems, or should continue to use the
+  Terraform IaC method.
+
+#### GCP Initial Role Groups
+
+The initial _role groups_ created during the GCP organization set up and their
+assigned _roles_ at the _organization level_ are:
+
+| Role Group | Description | Assigned Roles @ Org Level |
+| :---------------------- | :----------------------------------------------------------------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| gcp-organization-admins | Org admins have access to administer all resources belonging to the organization | Billing Account User<br>Support Account Admin<br>Organization Role Admin<br>Organization Policy Admin<br>Folder Admin<br>Organization Admin |
+| gcp-billing-admins | Set up billing accounts and monitor their usage | Billing Account Administrator<br>Billing Account Creator<br>Organization Viewer |
+| gcp-vpc-network-admins | Create networks, subnets, firewall rules, and network devices such as routers, VPNs, and load balancers | Compute Network Admin<br>Compute Security Admin<br>Compute Shared VPC Admin<br>Folder Viewer |
+| gcp-logging-admins | Have access to all features of Logging | Logging Admin |
+| gcp-logging-viewers | Have read-only access to a specific subset of logs ingested into Logging | Logging Viewer |
+| gcp-monitoring-admins | Have access to use and configure all features of Cloud Monitoring | Monitoring Admin |
+| gcp-security-admins | Establish and manage security policies for the entire organization, including access management and organization constraint policies | Compute Viewer<br>Kubernetes Engine Viewer<br>Organization Role Viewer<br>Security Reviewer<br>Logs Configuration Writer<br>Private Logs Viewer<br>Organization Policy Administrator<br>Folder IAM Admin<br>Security Center Admin |
+| gcp-developers | Design, develop, and test applications | No default roles at the organization level |
+| gcp-devops | Create or manage end-to-end pipelines that support CI/CD, monitoring, and system provisioning | Folder Viewer<br>Organization Viewer\*<br>Organization Policy Viewer\*<br>Logging Viewer\* |
+
+Not all of the above role groups are mandatory, and one GCP group can be
+mapped to multiple role groups.
+
+#### Additional Role Groups to Role Mapping
+
+Stellar Engine adds additional role groups and role mappings at the top node
+level (which could be the Organization or a Folder, depending on the customer
+environment) and project levels for custom groups.
+
+| Group | Description |
+| :---------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| gcp-organization-admins | Administer all organization resources. Assign this role only to your most trusted users. |
+| gcp-billing-admins | Set up billing accounts and monitor usage. |
+| gcp-vpc-network-admins | Create networks, subnets, firewall rules, and network devices such as Cloud Router, Cloud VPN, and load balancers. |
+| gcp-logging-admins | Use all Cloud Logging features. |
+| gcp-logging-viewers | Read-only access to a subset of logs. |
+| gcp-monitoring-admins | Use and configure all features of Cloud Monitoring. |
+| gcp-security-admins | Establish and manage security policies for the entire organization, including access management and organization constraint policies. See the Google Cloud enterprise foundations blueprint for more information about planning your Google Cloud security infrastructure. |
+| gcp-developers | Design, code, and test applications. |
+| gcp-devops | Create or manage end-to-end pipelines that support continuous integration and delivery, monitoring, and system provisioning. |
+
+#### Application Specific Groups
+
+There could be additional groups with role bindings for each tenant and/or
+service platforms. This is beyond the scope of Stellar Engine but worth
+mentioning here so that we can have a complete picture of the role group
+management.
+
+For example, a secured data warehouse blueprint defines the following groups:
+
+- Data analyst group
+- Data engineer group
+- Network administrator group
+- Security administrator group
+- Security analyst group
+
+Some above groups can be mapped to existing groups which Stellar Engine defines,
+but some new groups need to be created and managed.
+
+### Service Accounts
+
+Multiple service accounts are used across Stellar Engine. This is to maintain
+principles of least privilege and ensure service accounts are used for a
+singular purpose, so as not to make them overly permissive.
+
+Additional restrictions can be set to service accounts, including:
+
+- Disable automatic role grants to default service accounts\*
+- Disable service account creation
+- Disable service account key creation\*
+- Disable service account key upload\*
+- Disable attachment of service accounts to resources in other projects
+- Restrict removal of project liens when service accounts are used across
+ projects
+
+Policies with (\*) are recommended.
+
+#### Initial Service Account in bootstrap phase
+
+A system administrator with an individual GCP account can run the bootstrap
+phase, or they can impersonate a service account to do so. The preferred and
+suggested method is to use an individual account with gcp_org_admins group
+membership, which ensures the required privileges are in place. This account
+should be tightly controlled and disabled when not in use.
+
+To run the bootstrap phase using the service account, grant the following roles
+outside of Terraform:
+
+- Organization Admin of the GCP Organization if the root node is the
+ Organization itself.
+- Organization Policy Admin of the GCP Organization, to manage organization
+ policies.
+- Billing Admin of the Billing Account, or at minimum the Billing User role,
+ to create projects.
+- Folder Creator, to create folders and projects.
+- Access Context Manager Admin, to create VPC SC policies.
+- Assured Workloads Admin, to create assured workloads folders.
+
+The minimum set of roles needed to run the bootstrap phase in a given assured
+workloads folder is:
+
+- Organization Viewer, to query organization-level resources.
+- Organization Policy Admin, to manage organization policies.
+- Billing User of the Billing Account, to create new projects.
+- Folder Creator, to create new folders and projects.
+- Access Context Manager Admin, to create VPC SC policies.
+- Security Admin, to manage Security Command Center and security events.
+
+### IAM roles
+
+Our IAM bindings for the Stellar Engine projects are taken from the Cloud
+Foundations Fabric guidelines. They are as follows:
+
+#### Organization _\[organization \#0\]_
+
+| members | roles |
+| :--- | :--- |
+| dev-resman-pf-0<br>serviceAccount | roles/orgpolicy.policyAdmin +• |
+| prod-resman-net-0<br>serviceAccount | roles/compute.orgFirewallPolicyAdmin +<br>roles/compute.xpnAdmin + |
+| prod-resman-pf-0<br>serviceAccount | roles/orgpolicy.policyAdmin +• |
+| prod-resman-sec-0<br>serviceAccount | roles/cloudasset.viewer +<br>roles/accesscontextmanager.policyAdmin + |
+
+#### Folder _data platform/development_
+
+| members | roles |
+| :--- | :--- |
+| dev-resman-dp-0<br>serviceAccount | organizations/[organization #0]/roles/serviceProjectNetworkAdmin<br>roles/logging.admin<br>roles/owner<br>roles/resourcemanager.folderAdmin<br>roles/resourcemanager.projectCreator |
+| dev-resman-dp-0r<br>serviceAccount | roles/resourcemanager.folderViewer<br>roles/viewer |
+
+#### Folder _data platform/production_
+
+| members | roles |
+| :--- | :--- |
+| prod-resman-dp-0<br>serviceAccount | organizations/[organization #0]/roles/serviceProjectNetworkAdmin<br>roles/logging.admin<br>roles/owner<br>roles/resourcemanager.folderAdmin<br>roles/resourcemanager.projectCreator |
+| prod-resman-dp-0r<br>serviceAccount | roles/resourcemanager.folderViewer<br>roles/viewer |
+
+#### Folder _gke/development_
+
+| members | roles |
+| :--- | :--- |
+| dev-resman-gke-0<br>serviceAccount | roles/compute.xpnAdmin<br>roles/logging.admin<br>roles/owner<br>roles/resourcemanager.folderAdmin<br>roles/resourcemanager.projectCreator |
+| dev-resman-gke-0r<br>serviceAccount | roles/resourcemanager.folderViewer<br>roles/viewer |
+
+#### Folder _gke/production_
+
+| members | roles |
+| :--- | :--- |
+| prod-resman-gke-0<br>serviceAccount | roles/compute.xpnAdmin<br>roles/logging.admin<br>roles/owner<br>roles/resourcemanager.folderAdmin<br>roles/resourcemanager.projectCreator |
+| prod-resman-gke-0r<br>serviceAccount | roles/resourcemanager.folderViewer<br>roles/viewer |
+
+#### Folder _networking_
+
+| members | roles |
+| :--- | :--- |
+| gcp-network-admins<br>group | roles/editor |
+| prod-resman-net-0<br>serviceAccount | roles/compute.xpnAdmin<br>roles/logging.admin<br>roles/owner<br>roles/resourcemanager.folderAdmin<br>roles/resourcemanager.projectCreator |
+| prod-resman-net-0r<br>serviceAccount | roles/resourcemanager.folderViewer<br>roles/viewer |
+
+#### Folder _networking/development_
+
+| members | roles |
+| :--- | :--- |
+| dev-resman-dp-0<br>serviceAccount | organizations/[organization #0]/roles/serviceProjectNetworkAdmin |
+| dev-resman-dp-0r<br>serviceAccount | roles/compute.networkViewer |
+| dev-resman-gke-0<br>serviceAccount | organizations/[organization #0]/roles/serviceProjectNetworkAdmin |
+| dev-resman-gke-0r<br>serviceAccount | roles/compute.networkViewer |
+| dev-resman-pf-0<br>serviceAccount | organizations/[organization #0]/roles/serviceProjectNetworkAdmin |
+| dev-resman-pf-0r<br>serviceAccount | roles/compute.networkViewer |
+
+#### Folder _networking/production_
+
+| members | roles |
+| :--- | :--- |
+| prod-resman-dp-0<br>serviceAccount | organizations/[organization #0]/roles/serviceProjectNetworkAdmin |
+| prod-resman-dp-0r<br>serviceAccount | roles/compute.networkViewer |
+| prod-resman-gke-0<br>serviceAccount | organizations/[organization #0]/roles/serviceProjectNetworkAdmin |
+| prod-resman-gke-0r<br>serviceAccount | roles/compute.networkViewer |
+| prod-resman-pf-0<br>serviceAccount | organizations/[organization #0]/roles/serviceProjectNetworkAdmin |
+| prod-resman-pf-0r<br>serviceAccount | roles/compute.networkViewer |
+
+#### Project _prod-iac-core-0_
+
+| members | roles |
+| :--- | :--- |
+| dev-resman-dp-0<br>serviceAccount | roles/serviceusage.serviceUsageConsumer + |
+| dev-resman-dp-0r<br>serviceAccount | roles/serviceusage.serviceUsageConsumer + |
+| dev-resman-gke-0<br>serviceAccount | roles/serviceusage.serviceUsageConsumer + |
+| dev-resman-gke-0r<br>serviceAccount | roles/serviceusage.serviceUsageConsumer + |
+| dev-resman-pf-0<br>serviceAccount | roles/serviceusage.serviceUsageConsumer + |
+| dev-resman-pf-0r<br>serviceAccount | roles/serviceusage.serviceUsageConsumer + |
+| dev-resman-sbox-0<br>serviceAccount | roles/serviceusage.serviceUsageConsumer + |
+| prod-resman-dp-0r<br>serviceAccount | roles/serviceusage.serviceUsageConsumer + |
+| prod-resman-gke-0<br>serviceAccount | roles/serviceusage.serviceUsageConsumer + |
+| prod-resman-gke-0r<br>serviceAccount | roles/serviceusage.serviceUsageConsumer + |
+| prod-resman-net-0<br>serviceAccount | roles/serviceusage.serviceUsageConsumer + |
+| prod-resman-net-0r<br>serviceAccount | roles/serviceusage.serviceUsageConsumer + |
+| prod-resman-net-1<br>serviceAccount | roles/logging.logWriter + |
+| prod-resman-net-1r<br>serviceAccount | roles/logging.logWriter + |
+| prod-resman-pf-0<br>serviceAccount | roles/serviceusage.serviceUsageConsumer + |
+| prod-resman-pf-0r<br>serviceAccount | roles/serviceusage.serviceUsageConsumer + |
+| prod-resman-sec-0<br>serviceAccount | roles/serviceusage.serviceUsageConsumer + |
+| prod-resman-sec-0r<br>serviceAccount | roles/serviceusage.serviceUsageConsumer + |
+| prod-resman-sec-1<br>serviceAccount | roles/logging.logWriter + |
+| prod-resman-sec-1r<br>serviceAccount | roles/logging.logWriter + |
+| prod-resman-teams-0<br>serviceAccount | roles/serviceusage.serviceUsageConsumer + |
+| prod-resman-test-3-0<br>serviceAccount | roles/serviceusage.serviceUsageConsumer + |
+
+**Legend: + additive, \* conditional**
+
+IAM roles have been taken from the Cloud Foundations Fabric documentation.
+
+## 3.3 Code Control
+
+### GitHub Projects
+
+Code has been forked from the open-source Cloud Foundations Fabric (CFF)
+GitHub repository.
+The Stellar Engine project is fully open source and is maintained on GitHub.
+
+[Cloud Foundations Fabric
+GitHub](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric)
+
+[Google Public Sector Stellar Engine
+GitHub](https://github.com/google/stellar-engine)
+
+### Branches
+
+Our development process follows a trunk-based branching strategy: there is one
+main branch, and developers create their own feature branches, which are merged
+back into the main branch upon completion. For more information, see
+[Trunk-Based
+Development](https://www.atlassian.com/continuous-delivery/continuous-integration/trunk-based-development).
+Our branching strategy will follow Cloud Foundations Fabric's branching strategy
+upon reintroduction of our project's repository into the larger repository.
+
+### Validation
+
+Currently, development happens on feature and bug-fix branches. When work is
+complete, a [pull request
+(PR)](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/about-pull-requests) can be opened
+targeting the main branch. After two reviewers have submitted comments and
+their recommendations have been adjudicated (which can be an iterative process),
+the feature branch is merged into the main branch.
+
+## 3.4 Infrastructure-as-Code
+
+Stellar Engine Terraform configuration is based on the deployment process from
+[Google Cloud Foundation
+Fabric](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric).
+Successful deployment requires prerequisite stages, numbered 0 through 3, to be
+completed before the deployment is ready for further development. The project's
+state is managed by a single backend, which can hold multiple state files.
+
+- Terraform configuration is stored in an infrastructure code repository.
+ Repository access is limited to infrastructure administrators.
+- Once the initial bootstrap environment is created by infrastructure admins,
+ all configuration changes are gated by a code review in the infrastructure
+ repository.
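+
+As a sketch of how a single backend can hold multiple state files, a stage
+might declare a GCS backend with a per-stage prefix (the bucket name and prefix
+below are assumptions, not the actual Stellar Engine values):
+
+```hcl
+terraform {
+  backend "gcs" {
+    bucket = "prod-iac-core-0-tf-state" # shared state bucket (assumed name)
+    prefix = "stages/1-resman"          # distinct state file per stage
+  }
+}
+```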
+
+### File Structure
+
+Initial deployment/bootstrapping is achieved via the code in the
+**fast/stages** or **fast/stages-aw** folders. The **modules** folder contains a
+collection of portable Terraform modules designed to fast-track the deployment
+of individual GCP services. The **blueprints** folder contains use cases of
+these modules, with each use case named by its subfolder (for example, the
+**il5** folder is designed for deployments within an IL5 environment). A subset
+of the file structure containing IaC is listed below.
+
+- **blueprints**
+ - **fedramp-high**
+ - **access-context-manager**
+ - **app-engine**
+ - **artifact-registry**
+ - **bastion-pattern**
+ - **bcap**
+ - **beyondcorp**
+ - **bigtable**
+ - **bq-project**
+ - **cloud-armor**
+ - **cloud-composer-environment**
+ - **cloud-functions**
+ - **cloud-ids**
+ - **cloud-run**
+ - **cloud-scheduler-job**
+ - **cloud-spanner**
+ - **cloud-translation**
+ - **cloud-workstations**
+ - **cnap**
+ - **compute-engine**
+ - **dataflow**
+ - **datafusion**
+ - **dataproc-cluster**
+ - **datastore**
+ - **document-ai**
+ - **firestore**
+ - **gcs-project**
+ - **gemini-enterprise**
+ - **gitlab**
+ - **gke**
+ - **gke-hardened**
+ - **kms-project**
+ - **network-connectivity-center**
+ - **postgresql**
+ - **private-service-connect**
+ - **pub-sub-project**
+ - **secret-manager**
+ - **shielded-vm-project**
+ - **vertex-mlops**
+ - **workflows**
+ - **il5**
+ - **artifact-registry**
+ - **bastion-pattern**
+ - **bcap**
+ - **bigquery**
+ - **bq-project**
+ - **compute-engine**
+ - **dataflow**
+ - **gcs-project**
+ - **gke**
+ - **gke-hardened**
+ - **kms-project**
+ - **postgresql**
+ - **private-service-connect**
+ - **pub-sub-project**
+ - **shielded-vm-project**
+- **fast**
+ - **stages-aw**
+ - **0-bootstrap**
+ - **1-resman**
+ - **2-networking-a-fedramp-high**
+ - **2-networking-b-il5-ngfw**
+ - **3-security**
+- **modules**
+ - **access-context-manager**
+ - **alloydb**
+ - **analytics-hub**
+ - **api-gateway**
+ - **apigee**
+ - **app-engine**
+ - **artifact-registry**
+ - **beyondcorp**
+ - **biglake-catalog**
+ - **bigquery-dataset**
+ - **bigtable-instance**
+ - **billing-account**
+ - **binauthz**
+ - **certificate-authority-service**
+ - **certificate-manager**
+ - **cis-log-alerts**
+ - **cis-log-metrics**
+ - **cloud-config-container**
+ - **cloud-function-v1**
+ - **cloud-function-v2**
+ - **cloud-identity-group**
+ - **cloud-run**
+ - **cloud-run-v2**
+ - **cloud-run-v2-se**
+ - **cloud-scheduler**
+ - **cloudsql-instance**
+ - **compute-mig**
+ - **compute-vm**
+ - **container-registry**
+ - **data-catalog-policy-tag**
+ - **data-catalog-tag**
+ - **data-catalog-tag-template**
+ - **dataform-repository**
+ - **datafusion**
+ - **datafusion-se**
+ - **dataplex**
+ - **dataplex-datascan**
+ - **dataproc**
+ - **dns**
+ - **dns-response-policy**
+ - **endpoints**
+ - **firestore**
+ - **folder**
+ - **gcs**
+ - **gcve-private-cloud**
+ - **gke-cluster-autopilot**
+ - **gke-cluster-standard**
+ - **gke-cluster-standard-se**
+ - **gke-hub**
+ - **gke-nodepool**
+ - **iam-service-account**
+ - **intrusion-detection-system**
+ - **kms**
+ - **logging-bucket**
+ - **looker-core**
+ - **ncc-spoke-ra**
+ - **net-address**
+ - **net-cloudnat**
+ - **net-firewall-policy**
+ - **net-ipsec-over-interconnect**
+ - **net-lb-app-ext**
+ - **net-lb-app-ext-regional**
+ - **net-lb-app-int**
+ - **net-lb-app-int-cross-region**
+ - **net-lb-ext**
+ - **net-lb-int**
+ - **net-lb-proxy-int**
+ - **net-swp**
+ - **net-vlan-attachment**
+ - **net-vpc**
+ - **net-vpc-firewall**
+ - **net-vpc-peering**
+ - **net-vpn-dynamic**
+ - **net-vpn-ha**
+ - **net-vpn-static**
+ - **organization**
+ - **organization-se**
+ - **private-service-connect**
+ - **project**
+ - **project-factory**
+ - **projects-data-source**
+ - **pubsub**
+ - **secret-manager**
+ - **secure-source-manager-instance**
+ - **service-directory**
+ - **source-repository**
+ - **spanner-instance**
+ - **spanner-instance-se**
+ - **vpc-sc**
+ - **workflows**
+ - **workstation-cluster**
+
+## 3.5 Infrastructure-as-Code Principles
+
+### Terraform Configuration
+
+Terraform configuration is partitioned into stand-alone, per-environment
+configuration modules, and additional customer tenants can be configured per
+environment. To facilitate rapid iteration and collaboration across tenants,
+configuration is kept relatively static: each combination of tenant and
+environment has a dedicated configuration module, and each configuration module
+relies on Terraform locals that reside in the same file.
+
+This approach leads to a lot of repetition but minimizes the opportunity for
+changes in one tenant or environment to impact any other.
+
+Example:
+
+```hcl
+# project-specific locals
+locals {
+ step_terraform_sa = [
+ "serviceAccount:${google_service_account.terraform-env-sa["bootstrap"].email}",
+ "serviceAccount:${google_service_account.terraform-env-sa["org"].email}",
+ "serviceAccount:${google_service_account.terraform-env-sa["env"].email}",
+ "serviceAccount:${google_service_account.terraform-env-sa["net"].email}",
+ "serviceAccount:${google_service_account.terraform-env-sa["proj"].email}",
+ ]
+ org_project_creators = distinct(concat(var.org_project_creators, local.step_terraform_sa))
+ parent = var.parent_folder != "" ? "folders/${var.parent_folder}" : "organizations/${var.org_id}"
+ org_admins_org_iam_permissions = var.org_policy_admin_role == true ? [
+ "roles/orgpolicy.policyAdmin", "roles/resourcemanager.organizationAdmin", "roles/billing.user"
+ ] : ["roles/resourcemanager.organizationAdmin", "roles/billing.user"]
+ group_org_admins = var.groups.create_groups ? var.groups.required_groups.group_org_admins : var.group_org_admins
+ group_billing_admins = var.groups.create_groups ? var.groups.required_groups.group_billing_admins : var.group_billing_admins
+}
+
+resource "google_folder" "bootstrap" {
+ display_name = "${var.folder_prefix}-bootstrap"
+ parent = local.parent
+}
+```
+
+## 3.6 Networking
+
+
+### Flow Diagram
+
+Google Cloud offers a robust architecture for managing Virtual Private Clouds
+(VPCs) and facilitating communication among them using VPC Network Peering.
+[Shared VPC](https://cloud.google.com/vpc/docs/shared-vpc) is a networking
+construct that significantly reduces the amount of complexity in network design.
+With Shared VPC, network policy and control for all networking resources are
+centralized and easier to manage. Service project departments can configure and
+manage non-network resources, enabling a clear separation of responsibilities
+for different teams in the organization.
+
+Resources in Shared VPC networks can communicate with each other securely and
+efficiently across project boundaries using internal IP addresses. You can
+manage shared network resources—such as subnets, routes, and firewalls—from a
+central host project, so you can enforce consistent network policies across
+projects.
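+
+As an illustrative sketch (both project IDs are hypothetical), enabling Shared
+VPC and attaching a service project takes two resources:
+
+```hcl
+# Designate the central project that owns the Shared VPC network.
+resource "google_compute_shared_vpc_host_project" "host" {
+  project = "prod-net-landing-0"
+}
+
+# Attach a workload project so it can consume subnets from the host.
+resource "google_compute_shared_vpc_service_project" "workload" {
+  host_project    = google_compute_shared_vpc_host_project.host.project
+  service_project = "prod-workload-0"
+}
+```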
+
+As shown in the diagram below, the example.com architecture uses two Shared VPC
+networks, base and restricted, as the default networking construct for each
+environment. Each Shared VPC network is contained within a single project. The
+base VPC network is used for deploying services that contain non-sensitive data,
+and the restricted VPC network uses [VPC Service
+Controls](https://cloud.google.com/vpc-service-controls) to limit access to
+services that contain sensitive data.
+
+You can implement the model described in the preceding section independently for
+each of the four environments (common, development, non-production, and
+production). This model provides the highest level of network segmentation
+between environments.
+
+For the above scenario, all environments can directly communicate with shared
+resources in the common environment hub. The common environment can host tooling
+that requires connectivity to other environments, like CI/CD infrastructure,
+directories, and security and configuration management tools. As with the
+previous independent Shared VPC model, the hub-and-spoke scenario is also
+composed of base and restricted VPC networks. A base Shared VPC hub connects the
+base Shared VPC network spokes in development, non-production, and production,
+while the restricted Shared VPC hub connects the restricted Shared VPC network
+spokes in these same environments. The choice between base and restricted Shared
+VPC networks also depends on whether VPC Service Controls are required. For
+workloads with strong data exfiltration mitigation requirements, the
+hub-and-spoke associated with the restricted Shared VPC networks is preferred.
+
+### VPCs
+
+A VPC is a virtual network dedicated to a user's Google Cloud resources. It
+provides isolation from other networks and allows customization of IP address
+ranges, subnets, and routing tables. Consider whether using a VPC to control
+traffic and access is appropriate within the customer's GCP organization.
+
+#### Google VPC Network Peering
+
+VPC Network Peering allows VPCs within the same project or across different
+projects to communicate securely and privately using internal IP addresses,
+without needing to traverse the public internet. Peering connections do not
+require any additional gateways or routers; traffic remains within Google's
+backbone network, ensuring low latency and high reliability.
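+
+A peering sketch, assuming `hub` and `spoke_dev` networks are defined elsewhere
+in the configuration (both names are hypothetical); note that a peering must be
+created from each side before traffic can flow:
+
+```hcl
+resource "google_compute_network_peering" "hub_to_spoke_dev" {
+  name         = "hub-to-spoke-dev"
+  network      = google_compute_network.hub.self_link
+  peer_network = google_compute_network.spoke_dev.self_link
+}
+
+resource "google_compute_network_peering" "spoke_dev_to_hub" {
+  name         = "spoke-dev-to-hub"
+  network      = google_compute_network.spoke_dev.self_link
+  peer_network = google_compute_network.hub.self_link
+}
+```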
+
+### Subnet Allocations
+
+Subnet allocation in Google Cloud involves dividing the IP address range of a
+Virtual Private Cloud (VPC) into smaller segments called subnets. These subnets
+are then assigned to specific regions within GCP where resources such as
+virtual machine (VM) instances, Kubernetes clusters, and other services can be
+deployed. Our subnet allocation paradigm supports eight organizations, with
+eight tenants per organization and eight projects per tenant, for a maximum of
+512 projects.
+
+The table below shows the four /25 subnet allocations for Project 1, whose
+10.200.2.0/23 CIDR provides 512 IP addresses.
+
+| Subnet Address | Range of Addresses | Usable IPs | Hosts |
+| :--- | :--- | :--- | :--- |
+| 10.200.2.0/25 | 10.200.2.0 - 10.200.2.127 | 10.200.2.1 - 10.200.2.126 | 126 |
+| 10.200.2.128/25 | 10.200.2.128 - 10.200.2.255 | 10.200.2.129 - 10.200.2.254 | 126 |
+| 10.200.3.0/25 | 10.200.3.0 - 10.200.3.127 | 10.200.3.1 - 10.200.3.126 | 126 |
+| 10.200.3.128/25 | 10.200.3.128 - 10.200.3.255 | 10.200.3.129 - 10.200.3.254 | 126 |
+
+Subnets are logical partitions within a VPC that define IP address ranges for
+resources deployed in specific geographic locations. In Google Cloud, each
+subnet is associated with a single region. It is recommended to allocate a
+subnet for each application. For example, a Google Kubernetes Engine (GKE)
+cluster requires a subnet and two secondary ranges (one for pods and one for
+services). You can add the subnet in the environments section of the Terraform
+networking stage and reference it when configuring your cluster. When creating
+a VPC, you define a primary IP address range (CIDR block) in IPv4 format
+(e.g., 10.200.2.0/23). This range determines the total number of IP addresses
+available for allocation across all subnets within the VPC.
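+
+A per-application subnet of the kind described above can be sketched as follows
+(the name, project, region, and secondary ranges are assumptions for
+illustration, not values from the Stellar Engine stages):
+
+```hcl
+resource "google_compute_subnetwork" "gke_dev" {
+  name                     = "gke-dev-0"
+  project                  = "prod-net-landing-0" # Shared VPC host project
+  network                  = "dev-shared-vpc"
+  region                   = "us-east4"
+  ip_cidr_range            = "10.200.2.0/25" # primary range for nodes
+  private_ip_google_access = true
+
+  # Secondary ranges consumed by a GKE cluster.
+  secondary_ip_range {
+    range_name    = "pods"
+    ip_cidr_range = "100.64.0.0/17"
+  }
+  secondary_ip_range {
+    range_name    = "services"
+    ip_cidr_range = "100.64.128.0/22"
+  }
+}
+```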
+
+When creating subnets within a VPC, you specify a subnet IP address range that
+is a subset of the VPC's primary IP range. For example, if your VPC has the
+range 10.200.2.0/23, you might create subnets with ranges
+like 10.200.2.0/25, 10.200.2.128/25, 10.200.3.0/25, and 10.200.3.128/25.
+
+The following table shows the distribution of CIDR ranges across the
+organization, tenants, and projects.
+
+| Component | CIDR | Notes |
+| :--- | :--- | :--- |
+| Organization (All Tenants) | 10.200.0.0/16 | 8 Tenants per Organization |
+| Tenant 1 | 10.200.0.0/19 | 8 projects per tenant |
+| Project 1 | 10.200.2.0/23 | |
+| Project 2 | 10.200.4.0/23 | |
+| Project 3 | 10.200.6.0/23 | |
+| Project 4 | 10.200.8.0/23 | |
+| Project 5 | 10.200.10.0/23 | |
+
+Use the table above to assign CIDRs when creating new projects; the
+10.200.0.0/16 range covers the entire organization.
+
+### Private Google Access
+
+Access to Google-managed services (e.g. App Engine, Cloud SQL, Cloud Functions)
+will be routed through internal network space using [Private Google
+Access](https://cloud.google.com/vpc/docs/private-google-access). Access from
+Google-managed services to the VPC will be routed through internal network space
+using [Serverless VPC
+Access](https://cloud.google.com/vpc/docs/serverless-vpc-access). Private
+Google Access is enabled on the subnets.
+
+### Interservice Communications
+
+Direct network connections between microservices will be routed within the VPC
+using [Internal
+Load Balancing](https://cloud.google.com/load-balancing/docs/l7-internal) or
+[Private Google Access](https://cloud.google.com/vpc/docs/private-google-access)
+for managed services. Interservice message queuing will utilize [Cloud
+Pub/Sub](https://cloud.google.com/pubsub) for asynchronous delivery.
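+
+A minimal Pub/Sub pairing for asynchronous interservice delivery might look
+like this (topic, subscription, and project names are hypothetical):
+
+```hcl
+resource "google_pubsub_topic" "orders" {
+  name    = "orders-events"
+  project = "prod-workload-0"
+}
+
+resource "google_pubsub_subscription" "orders_worker" {
+  name                 = "orders-worker"
+  project              = "prod-workload-0"
+  topic                = google_pubsub_topic.orders.id
+  ack_deadline_seconds = 30 # redelivery window for unacknowledged messages
+}
+```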
+
+### VPC Firewall Rules
+
+By default, no ingress traffic is allowed and all egress traffic is allowed.
+Ingress/egress firewall policies can be defined in the 2-networking stage.
+Google Cloud VPC firewall rules are used to control traffic in a VPC network;
+they are defined at the network (VPC) level and are stateful. Firewall rules
+are applied in priority order: the first rule that matches the traffic criteria
+(source IP, destination IP, protocol, port, etc.) is applied, and subsequent
+rules are not evaluated. Rules can specify which protocols (TCP, UDP, ICMP,
+etc.) and ports (such as 80 for HTTP or 443 for HTTPS) are allowed.
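+
+For illustration, an ingress rule permitting HTTPS from the organization range
+to instances tagged `web` could be written as follows (the network name and tag
+are assumptions); all other inbound traffic stays denied by default:
+
+```hcl
+resource "google_compute_firewall" "allow_https_internal" {
+  name      = "allow-https-internal"
+  network   = "dev-shared-vpc"
+  direction = "INGRESS"
+  priority  = 1000 # lower numbers are evaluated first
+
+  allow {
+    protocol = "tcp"
+    ports    = ["443"]
+  }
+
+  source_ranges = ["10.200.0.0/16"]
+  target_tags   = ["web"]
+}
+```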
+
+### Hub and Spoke architecture
+
+Within Google Cloud, VPC Network Peering is used to connect VPCs within or
+between projects in order to implement the Hub and Spoke architecture. "Hub and
+Spoke" is a networking design pattern in which Virtual Private Clouds (VPCs)
+are arranged around a centralized hub. The hub VPC provides a centralized
+networking hub housing shared resources and services, and may also host shared
+services accessed by several spoke VPCs, such as logging, security, or
+monitoring tools.
+
+The spoke VPCs are separate VPC networks connected to the hub VPC. Each spoke
+VPC represents a distinct environment, such as development, testing, or
+production. Spoke VPCs contain the application-specific resources and
+workloads. They are isolated from each other and communicate with each other
+through the hub VPC.
+
+#### Hub and Spoke Architecture with Compliance Overlays
+
+Stellar Engine implements a robust Hub and Spoke network architecture,
+specifically tailored to meet rigorous compliance requirements (FedRAMP High and
+IL5). In this model, centralized Hub VPCs host shared networking resources,
+connectivity gateways (Cloud VPN, Interconnect), and security appliances, while
+Spoke VPCs host isolated workload environments (e.g., Development, Production).
+
+Connectivity between the Hub and Spokes is established using VPC Network
+Peering, ensuring low-latency, high-bandwidth communication. Shared VPC is
+utilized throughout to centralize network administration within the Hub project
+while allowing workload owners to consume network resources in their respective
+Spoke projects.
+
+Both supported patterns implement a Virtual Datacenter Security Stack (VDSS)
+topology with distinct security appliance strategies:
+
+- Landing VPC (Trust): The internal hub that connects to Spoke VPCs and
+ on-premises networks.
+- DMZ VPC (Untrust): The external-facing hub that handles Internet
+ ingress/egress.
+
+Supported Implementations:
+
+#### FedRAMP High Pattern (2-networking-a-fedramp-high):
+
+- Topology: VDSS (Landing + DMZ).
+- Perimeter Security: Uses Network Virtual Appliances (NVAs).
+- Implementation: Deploys a generic NVA cluster (based on simple-nva) to
+ handle routing and basic traffic filtering between Trust and Untrust zones.
+
+#### IL5 Pattern (2-networking-b-il5-ngfw):
+
+- Topology: VDSS (Landing + DMZ).
+- Perimeter Security: Uses Next-Generation Firewalls (NGFWs).
+- Implementation: Deploys a scalable cluster of Palo Alto Networks VM-Series
+ firewalls to provide advanced Layer 7 inspection, intrusion prevention
+ (IPS), and granular application-aware filtering required for IL5 compliance.
+
+# 4.0 Assured Workloads
+
+Assured workloads in GCP are a set of features that help you ensure the
+performance, availability, and security of your applications. Assured Workloads,
+using FedRAMP Moderate by default, is enabled on the Assured Workloads folder.
+All sub folders below the Assured Workloads folder will inherit policies and
+constraints according to the compliance program selected.
+
+To support compliance with data residency requirements, Google Cloud provides
+you the ability to restrict the regions where data at rest can be stored.
+
+During the Assured Workloads setup, you create an environment and select your
+compliance regime. When you create resources in the environment, Assured
+Workloads restricts the regions you can select for those resources based on the
+compliance program chosen, using Organization Policies.
+
+Projects in Stellar Engine are described via the terraform.tfvars file in
+the 1-resman stage, which is also where a new tenant folder with Assured
+Workloads is configured. An example configuration for a tenant folder with an
+IL5 Assured Workloads environment is as follows:
+
+```hcl
+tenants = {
+ dino-runner = {
+ admin_principal = "group:gcp-devops@dino-runner.cloud"
+ descriptive_name = "dino-runner"
+ compliance = {
+ regime = "IL5"
+ location = "us"
+ }
+ locations = {
+ gcs = "us-east4"
+ kms = "us-east4"
+ }
+ }
+}
+```
+
+**Note: If the compliance field is not set, the tenant folder will be generated
+without an Assured Workloads configuration.** For a complete list of available
+compliance regimes, please refer to the [Terraform
+documentation](https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/assured_workloads_workload#compliance_regime).
+
+# 5.0 Information Security
+
+The Stellar Engine information security strategy relies primarily on controlled
+access to resources and securing data.
+
+## 5.1 Access Control
+
+### Account Types
+
+**Users -** Users are created and managed through Google Identity Platform or
+Google Workspace. See the Identity Providers section below for more information.
+
+**Service Accounts -** A service account is a special kind of account typically
+used by an application or compute workload rather than a human. Its email
+address, which is unique to the account, identifies a service account.
+
+In Google Cloud, there are several different types of service accounts:
+
+- **User-managed service accounts:** Service accounts that you create and
+ manage. These service accounts are often used as identities for workloads.
+- **Default service accounts:** User-managed service accounts that are created
+ automatically when you enable certain Google Cloud services. You are
+ responsible for managing these service accounts.
+- **Google-managed service accounts:** Google-created and Google-managed
+ service accounts that enable services to access resources on your behalf.
+
+Impersonation is typically used to temporarily grant a user elevated access,
+because it allows the user to assume the roles and permissions assigned to the
+service account. A principal can use service account impersonation to run
+commands as a service account; however, a principal can't use service account
+impersonation to access the Google Cloud console. Within Stellar Engine,
+impersonation is set via Terraform and used to create projects and resources.
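+
+As a sketch, provider-level impersonation is configured with the
+`impersonate_service_account` argument (the service account email below is
+hypothetical):
+
+```hcl
+provider "google" {
+  # All API calls run as this account rather than the operator's identity.
+  impersonate_service_account = "prod-resman-net-0@prod-iac-core-0.iam.gserviceaccount.com"
+}
+```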
+
+### Identity Providers
+
+Identity can be provided through [Google Identity
+Platform](https://cloud.google.com/identity-platform) or [Cloud
+Identity](https://cloud.google.com/identity).
+
+Google Identity Platform is a customer identity and access management (CIAM)
+platform. It helps organizations add identity and access management
+functionality to their applications, protect user accounts, and scale with
+confidence on Google Cloud.
+
+Cloud Identity is an Identity as a Service (IDaaS) solution that allows you to
+centrally manage users and groups who can access Google Cloud and Google
+Workspace resources. It is the same identity service that powers Google
+Workspace and can also be used as an IdP for third-party applications (it
+supports SAML and LDAP applications).
+
+### Multi-Factor Authentication (MFA)
+
+MFA enforcement is a best practice when administering users. Google.com accounts
+always require hardware-based multi-factor authentication. Google believes
+enabling MFA is the best way to protect accounts from phishing and recommends
+partners and customers always enable it.
+
+**Note:** By default, Stellar Engine does not enforce MFA for logistical
+reasons; however, in order to be fully compliant with IL5, the user _must_
+enforce MFA in their environment.
+
+### RBAC
+
+Role-based Access Control is defined in the IAM Section of this document.
+
+## 5.2 Data Security
+
+### GCP Platform security
+
+Google provides [many protections](https://cloud.google.com/security/) to GCP
+customers; however, security of workloads running in GCP is a shared
+responsibility. All of the decisions reflected in this section are recommended
+for the customer to implement.
+
+Customers are responsible for the following aspects of their applications'
+security. It should be noted that the following list is not intended to be
+exhaustive, and care should be exercised on a case-by-case basis when deploying
+and configuring resources in GCP.
+
+### Data
+
+Google encrypts all data communication channels that it uses to transmit data
+between services. Customers are responsible for ensuring that the transmission
+of data is facilitated over an encrypted channel.
+
+Google encrypts all data on storage devices to prevent anyone with physical
+access to physical devices from being able to inspect the data contained on
+those devices. Customers can provide their own encryption keys for the
+encryption of Google Compute Engine Persistent Disks and Google Cloud Storage
+buckets.
+
+Data stored within databases is encrypted at the storage level; however,
+additional encryption is advisable at the application level to prevent customer
+users from accessing content and to limit spillage in the event of intrusion.
+
+A customer may load data which may include PII and PCI into BigQuery for
+analysis. Customers are responsible for being aware of and abiding by any
+regulations regarding the use and storage of this data and are responsible for
+developing their own aggregation capabilities.
+
+### Cloud KMS
+
+If a customer needs to encrypt data at the application level or manage their own
+encryption keys for compliance or regulatory reasons, they should consider
+[Cloud Key Management Service (KMS)](https://cloud.google.com/kms/). Cloud KMS
+is a global cloud-hosted key management service that lets you manage encryption
+for your cloud services the same way you do on-premises.
+
+#### Best Practices
+
+- Key rotation - Regular rotation of the encryption key is encouraged. This
+  limits the amount of data protected by a single key. Automatic rotation can
+  be configured on a user-defined schedule by using _gcloud_ or the GCP
+  Console.
+- Separation of duties - Cloud KMS should be run in its own project without an
+  owner at the project level, managed instead by an Org Admin. The Org Admin
+  cannot manage or use keys, but can set IAM policies to restrict who has
+  permissions for key management and usage. Additionally, the ability to
+  manage Cloud KMS should have role separation from the ability to perform
+  encryption and decryption operations: any user with management access should
+  not be able to decrypt data.
+
+- Additional authenticated data (AAD) - We recommend AAD as an additional
+ integrity check as it can help protect your data from a [confused deputy
+ attack](https://en.wikipedia.org/wiki/Confused_deputy_problem). Additional
+ authenticated data is a string that you pass to Cloud KMS as part of an
+ encrypt or decrypt API call. Cloud KMS cannot decrypt ciphertext unless the
+  same AAD value is used for both encryption and decryption. By default, an
+  empty string is used for the AAD value.
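+
+The key rotation practice above can be sketched in Terraform (key ring, key
+name, and location are hypothetical):
+
+```hcl
+# Hypothetical sketch: a key ring and a key with automatic 90-day rotation.
+resource "google_kms_key_ring" "example" {
+  name     = "example-keyring" # hypothetical name
+  location = "us-east4"
+}
+
+resource "google_kms_crypto_key" "example" {
+  name            = "example-key"
+  key_ring        = google_kms_key_ring.example.id
+  rotation_period = "7776000s" # 90 days, expressed in seconds
+}
+```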
+
+### Google Cloud Storage
+
+Cloud Storage requests refer to buckets and objects by their names. As a result,
+even though ACLs will prevent unauthorized third parties from operating on
+buckets or objects, a third party can attempt requests with bucket or object
+names and determine their existence by observing the error responses. From the
+bucket name it might be possible to infer information present within the bucket
+itself, which might lead to leaks. If you are concerned about the privacy of
+your bucket or object names, you should take appropriate precautions, such as:
+
+- Choosing bucket and object names that are difficult to guess. For example, a
+ bucket named stellar-engine-mybucket-gxl3 is random enough that unauthorized
+ third parties cannot feasibly guess it or enumerate other bucket names from
+ it.
+- Before adding objects to a bucket, check that the [default object
+ ACLs](https://cloud.google.com/storage/docs/access-control#default) are set
+ to your requirements first. This could save you a lot of time updating ACLs
+ for individual objects.
+- Bucket and object ACLs are independent of each other, which means that the
+ ACLs on a bucket do not affect the ACLs on objects inside that bucket. It is
+ possible for a user without permissions for a bucket to have permissions for
+ an object inside the bucket. For example, you can create a bucket such that
+ only GroupA is granted permission to list the objects in the bucket, but
+ then upload an object into that bucket that allows GroupB READ access to the
+ object. GroupB will be able to read the object, but will not be able to view
+ the contents of the bucket or perform bucket-related tasks.
+- A simple yet effective approach is to keep private and public data separate
+ in different buckets and to label public buckets clearly, such as
+ stellar-engine-mybucket-public-3vxa.
+- The Cloud Storage access control system includes the ability to specify that
+ objects are publicly readable. Make sure you intend for any objects you
+ write with this permission to be public. Once "published", data on the
+ Internet can be copied to many places, so it's effectively impossible to
+ regain read control over an object written with this permission.
+- The Cloud Storage access control system includes the ability to specify that
+ buckets are publicly writable. While configuring a bucket this way can be
+ convenient for various purposes, we recommend against using this permission
+ - it can be abused for distributing illegal content, viruses, and other
+ malware, and the bucket owner is legally and financially responsible for the
+ content stored in their buckets.
+- If you need to make content available securely to users who don't have
+ Google accounts, we recommend you use [signed
+ URLs](https://cloud.google.com/storage/docs/access-control/signed-urls). For
+ example, with signed URLs you can provide a link to an object and your
+ application's customers do not need to authenticate with Cloud Storage to
+ access the object. When you create a signed URL you control the type (read,
+ write, delete) and duration of access.
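+
+Several of the precautions above can be expressed in Terraform. A hedged
+sketch, reusing the hard-to-guess bucket name from the example above:
+
+```hcl
+# Hypothetical sketch: a private bucket with uniform bucket-level access
+# (no per-object ACLs) and public access prevention enforced.
+resource "google_storage_bucket" "private" {
+  name                        = "stellar-engine-mybucket-gxl3" # hard-to-guess name
+  location                    = "US"
+  uniform_bucket_level_access = true
+  public_access_prevention    = "enforced"
+}
+```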
+
+### Data Residency
+
+Google Cloud offers you the ability to control where your data is stored. When a
+customer chooses to configure resources in a particular location, Google stores
+that data at rest only in the selected region. For that purpose we recommend
+using [Organization Policy
+constraints](https://cloud.google.com/resource-manager/docs/organization-policy/overview)
+which can be applied at the organization, folder, or project level. Customers
+can limit the physical location of a new resource with the Organization Policy
+Service [resource locations
+constraint](https://cloud.google.com/resource-manager/docs/organization-policy/defining-locations).
+When coupled with Cloud IAM configuration to enable or disable services for sets
+of users, you can prevent your employees from accidentally storing data in the
+wrong Google Cloud region.
+
+After you define resource locations, this limitation will apply only to
+newly-created resources. Resources you created before setting the resource
+locations constraint will continue to exist and perform their function.
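+
+A minimal Terraform sketch of the resource locations constraint applied at the
+organization level (the organization ID is hypothetical):
+
+```hcl
+# Hypothetical sketch: restrict new resources to US locations.
+resource "google_organization_policy" "resource_locations" {
+  org_id     = "123456789012" # hypothetical org ID
+  constraint = "constraints/gcp.resourceLocations"
+
+  list_policy {
+    allow {
+      values = ["in:us-locations"] # predefined value group for US regions
+    }
+  }
+}
+```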
+
+### Encryption-at-Rest
+
+All data stored in Google Cloud is encrypted at the storage level with AES-256,
+using
+[Google-managed](https://cloud.google.com/storage/docs/encryption/default-keys)
+data encryption keys
+([DEK](https://cloud.google.com/security/encryption/default-encryption#key_management)).
+Google uses a common cryptographic library which incorporates a FIPS 140-2
+validated module,
+[BoringCrypto](https://csrc.nist.gov/projects/cryptographic-module-validation-program/Certificate/3318).
+
+### Encryption-in-Transit
+
+Microservices will primarily use [Cloud
+Pub/Sub](https://cloud.google.com/pubsub/docs/encryption) and REST transmission
+methods within the project system. Both of these protocols leverage HTTPS.
+
+### TLS Version
+
+All web services utilizing Transport Layer Security are required to support
+version 1.2 or higher.
+
+The following two ciphers are disabled when using TLS.
+
+- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (0xc013)
+- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (0xc014)
+
+The organization policy constraint `gcp.restrictTLSVersion` can be used to
+restrict the TLS versions.
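+
+A hedged Terraform sketch of the gcp.restrictTLSVersion constraint denying TLS
+1.0 and 1.1 (the organization ID is hypothetical):
+
+```hcl
+# Hypothetical sketch: deny TLS 1.0 and 1.1 across the organization.
+resource "google_organization_policy" "restrict_tls" {
+  org_id     = "123456789012" # hypothetical org ID
+  constraint = "constraints/gcp.restrictTLSVersion"
+
+  list_policy {
+    deny {
+      values = ["TLS_VERSION_1", "TLS_VERSION_1_1"]
+    }
+  }
+}
+```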
+
+## 5.3 Logging & Auditing
+
+### Audit Logs
+
+The following are all [audit logs](https://cloud.google.com/logging/docs/audit)
+that are collected and stored within Google Cloud:
+
+**Activity Logs** - Admin Activity audit logs contain log entries for API calls
+or other actions that modify the configuration or metadata of resources. For
+example, these logs record when users create VM instances or change Identity and
+Access Management permissions.
+
+**Data Access Logs** - Data Access audit logs contain API calls that read the
+configuration or metadata of resources, as well as user-driven API calls that
+create, modify, or read user-provided resource data.
+
+**System Event Logs** - System Event audit logs contain log entries for Google
+Cloud actions that modify the configuration of resources. System Event audit
+logs are generated by Google systems; they aren't driven by direct user action.
+
+### Other Logging
+
+**VPC Flow Logs** - VPC Flow Logs record a sample of network flows sent from and
+received by VM instances, including instances used as GKE nodes. These logs can
+be used for network monitoring, forensics, real-time security analysis, and
+expense optimization.
+
+**Firewall Rule Logs** - Firewall Rules Logging lets you audit, verify, and
+analyze the effects of your firewall rules. For example, you can determine if a
+firewall rule designed to deny traffic is functioning as intended. Firewall
+Rules Logging is also useful if you need to determine how many connections are
+affected by a given firewall rule.
+
+**Access Transparency Logs** - Access Transparency logs include data about
+Google staff activity, including:
+
+- Actions by the Support team that you may have requested by phone
+- Basic engineering investigations into your support requests
+- Other investigations made for valid business purposes, such as recovering
+ from an outage
+
+### Log Destinations
+
+Audit logs and other logs do not expire and are sent to the following
+destinations:
+
+- BigQuery
+- Storage
+- Pub/Sub
+
+When the log destination is in a different project, the log sink's writer
+identity service account must be granted permission to write to the
+destination. If VPC Service Controls or other additional restrictions apply,
+the writer identity must be granted access as well.
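+
+This can be sketched in Terraform as follows (project, sink, and bucket names
+are hypothetical; the bucket lives in a different project):
+
+```hcl
+# Hypothetical sketch: a project-level sink to a bucket in another project,
+# with the sink's unique writer identity granted write access.
+resource "google_logging_project_sink" "audit" {
+  name                   = "audit-sink"                                  # hypothetical
+  project                = "example-src-project"
+  destination            = "storage.googleapis.com/example-central-logs" # hypothetical bucket
+  filter                 = "logName:\"cloudaudit.googleapis.com\""
+  unique_writer_identity = true
+}
+
+resource "google_storage_bucket_iam_member" "sink_writer" {
+  bucket = "example-central-logs"
+  role   = "roles/storage.objectCreator"
+  member = google_logging_project_sink.audit.writer_identity
+}
+```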
+
+## 5.4 Monitoring
+
+### Projects
+
+To access Cloud Monitoring for each environment, a host project is created to
+hold the dashboards and alerts. The folders and monitoring host projects are
+listed in the table below.
+
+| Folder | Monitoring Project |
+| :------- | :----------------------- |
+| Security | ``-dev-sec-core-0 |
+| Security | ``-prod-sec-core-0 |
+
+### Groups
+
+A group named gcp-monitoring-admins is created during the bootstrap process (See
+3.2 IAM).
+
+### Alerts
+
+Alerts can be created based on events and log metrics. Alerting gives timely
+awareness to problems in your cloud applications so you can resolve the problems
+quickly. Within Cloud Monitoring, an alerting policy describes the circumstances
+under which you want to be alerted and how you want to be notified.
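+
+An alerting policy can be sketched in Terraform; the metric, display names, and
+threshold here are hypothetical illustrations, not part of the Stellar Engine
+baseline:
+
+```hcl
+# Hypothetical sketch: alert whenever a log-based metric counting IAM
+# changes reports any events.
+resource "google_monitoring_alert_policy" "iam_changes" {
+  display_name = "IAM policy changes" # hypothetical
+  combiner     = "OR"
+
+  conditions {
+    display_name = "IAM change count above threshold"
+    condition_threshold {
+      filter          = "metric.type=\"logging.googleapis.com/user/iam_changes\" AND resource.type=\"global\""
+      comparison      = "COMPARISON_GT"
+      threshold_value = 0
+      duration        = "0s"
+    }
+  }
+
+  notification_channels = [] # add channel IDs for email, Pub/Sub, etc.
+}
+```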
+
+### Dashboards
+
+Cloud Monitoring automatically installs a dashboard when you create a resource
+in a Google Cloud project. These dashboards display metrics and general
+information about a single Google Cloud service. Custom dashboards are
+dashboards that you create or install. Unlike dashboards for Google Cloud
+services and those for your supported integrations, custom dashboards let you
+view and analyze data from different sources in the same context. For example,
+you can create a dashboard that displays metric data, alerting policies, and log
+entries.
+
+## 5.5 Security Boundaries
+
+In the above diagram, the dotted red line represents the intended Authorization
+Boundary of a Stellar Engine deployment.
+
+### Cloud Organization Policy
+
+The customer’s GCP Cloud Organization may inherit its organizational policy
+restrictions. If a resource node has set `inheritFromParent = true`, then the
+effective policy of the parent resource is inherited, merged, and reconciled to
+evaluate the resulting effective policy. If a resource hierarchy node has a
+policy that includes `inheritFromParent = false`, it doesn't inherit the
+organization policy from its parent. Instead, the node inherits the constraint's
+default behavior unless you set a policy with allowed or denied values.
+
+### Project Layout
+
+All cross-project permission grants are controlled by Cloud IAM and defined in
+Terraform.
+
+Details about the project layout are documented in the Cloud Project
+Organization section of this document. As a part of the hub and spoke network
+architecture, a default VPC service control perimeter is created around the
+project which hosts the restricted shared VPC.
+
+### VPC Service Controls
+
+VPC Service Controls mitigate the risk of data exfiltration from GCP services
+by defining controls such as perimeters that protect the resources and data of
+explicitly specified services. Adaptive access control can also be enforced
+based on IP range or device trust (BeyondCorp) for GCP resource access from
+outside privileged networks.
+
+A VPC Service Controls perimeter ensures that data in most GCP services cannot
+exit the perimeter to an unrecognized network, even for principals with
+appropriate IAM credentials such as a user account or service account.
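+
+A minimal perimeter can be sketched in Terraform (the access policy ID, project
+number, and perimeter name are hypothetical):
+
+```hcl
+# Hypothetical sketch: a perimeter restricting Cloud Storage for one project.
+resource "google_access_context_manager_service_perimeter" "restricted" {
+  parent = "accessPolicies/000111222" # hypothetical access policy ID
+  name   = "accessPolicies/000111222/servicePerimeters/restricted_vpc"
+  title  = "restricted_vpc"
+
+  status {
+    resources           = ["projects/111111111111"] # hypothetical project number
+    restricted_services = ["storage.googleapis.com"]
+  }
+}
+```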
+
+## 5.6 Network Security
+
+### Network Control
+
+The customer project may be managed through the Shared VPC network. Each service
+project may have one or more subnets provisioned for its use by the customer.
+Service accounts can be granted
+[compute.networkUser](https://cloud.google.com/iam/docs/job-functions/networking)
+permissions within the specific subnet for each project to allow IP addresses to
+be created.
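+
+Granting this subnet-level permission can be sketched in Terraform (project,
+region, subnet, and service account names are hypothetical):
+
+```hcl
+# Hypothetical sketch: grant a service project's workload service account
+# the networkUser role on a single shared subnet.
+resource "google_compute_subnetwork_iam_member" "subnet_user" {
+  project    = "example-host-project" # Shared VPC host project, hypothetical
+  region     = "us-east4"
+  subnetwork = "example-subnet"
+  role       = "roles/compute.networkUser"
+  member     = "serviceAccount:app@example-svc-project.iam.gserviceaccount.com"
+}
+```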
+
+### Firewall Rules
+
+Firewall rules are set at the Shared VPC level and may be administered by the
+customer or authorized personnel. By default, no ingress traffic is allowed and
+all egress traffic is allowed. Ingress/egress policies can be defined at the
+environment (development, non-production, and production) level.
+
+### DNS
+
+For DNS, Stellar Engine uses Google Cloud DNS, a first-party service that was
+recently provisionally approved.
+
+## 5.7 Customer Security Practices
+
+The structure of the cloud projects, and method by which the application is
+developed and maintained, minimizes risk and allows for disparate teams to work
+using independent release schedules. Security is enforced through a separation
+of projects and IAM controls.
+
+### Security Command Center
+
+**Note: SCC is currently in the roadmap for IL5 but is not approved at this
+time.**
+
+Status: Manual Deployment Required
+
+Deployment Guide: [Stellar Engine - SCC Deployment Guide](./scc-deployment-guide.md)
+
+[Security Command Center
+(SCC)](https://cloud.google.com/security-command-center/) is a security and data
+risk database for GCP. It unifies assets, resources, policies, IAM,
+recommendations, and security/risk specific annotations in one place. These
+include:
+
+- Asset discovery and inventory with the Cloud Asset Inventory API
+- Sensitive data scanning for PII: GCS Buckets (note the
+ [regions](https://cloud.google.com/dlp/docs/locations) of Cloud DLP
+ deployment and possible regulatory declarations for offshore data analysis
+ needed)
+- Web app vulnerability scanning - XSS, insecure libraries
+- Notifications: Pub/Sub events on new/deleted/modified assets discovered
+
+Customers can optionally use Security Command Center with a Premium-level
+license for their organization, which leverages built-in security sources
+including:
+
+- [Event Threat
+ Detection](https://cloud.google.com/event-threat-detection/docs/)
+- [Cloud Anomaly
+ Detection](https://cloud.google.com/security-command-center/docs/how-to-view-vulnerabilities-threats#anomaly_detection)
+- [Security Health
+ Analytics](https://cloud.google.com/security-command-center/docs/how-to-manage-security-health-analytics)
+  - CIS Security Software
+    [Certified](https://www.cisecurity.org/partner/google-inc/) for the CIS
+    Google Cloud Platform Foundation Benchmark, v1.0.0, Levels 1 & 2.
+
+Additional compliance mappings are included for reference in [Security Health
+Analytics
+findings](https://cloud.google.com/security-command-center/docs/concepts-security-health-analytics-findings)
+but are neither provided nor reviewed by the OWASP Foundation. Customers should
+review the guiding documentation of OWASP Top Ten, National Institute of
+Standards and Technology 800-53 (NIST 800-53), and International Organization
+for Standardization 27001 (ISO 27001) for how to check violations manually.
+
+Google also recommends using the list as a reference for Google Cloud security
+controls.
+
+**Google Recommendation:**
+
+- Continuous Review: Regularly review risk reports and insights presented in
+  the SCC dashboard.
+- Automation: Automate risk scanning using the Cloud Asset API and implement
+  custom detection logic (e.g., via Cloud Functions). Feed the results of
+  custom security analysis back into SCC through the Findings API.
+- SIEM Integration: Configure Pub/Sub exports for finding notifications to
+  integrate with external SIEM systems.
+
+### Firewall Rules
+
+**Requirements:**
+
+- Default deny for ingress
+- Firewall rules to be managed using IaC
+- Need to be able to target specific resources, e.g. a node pool within a GKE
+  cluster
+
+**Design Constraints:**
+
+**Assumptions:**
+
+Each VPC network implements a distributed virtual firewall. Configure firewall
+rules that allow or deny traffic to and from the resources attached to the VPC,
+including Compute Engine VM instances and GKE clusters.
+
+Firewall rules are applied at the VPC level, so they help provide effective
+protection and traffic control regardless of the operating system your instances
+use. The firewall is stateful, which means that for flows that are permitted,
+return traffic is automatically allowed.
+
+Firewall rules are specific to a particular VPC network. The rules allow you to
+specify the type of traffic, such as ports and protocols, and the source or
+destination of the traffic, including IP addresses, subnets, tags, and service
+accounts. For example, you can create an ingress rule to allow any VM instance
+associated with a particular service account to accept TCP traffic on port 80
+that originated from a specific source IP address or CIDR range. Once created,
+firewall rules cannot be renamed, so choose a naming convention that supports
+your operational needs.
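+
+The ingress example described above can be sketched in Terraform (the rule,
+network, and service account names are hypothetical):
+
+```hcl
+# Hypothetical sketch: allow TCP/80 from a specific CIDR range to VMs
+# running as a particular service account.
+resource "google_compute_firewall" "allow_http" {
+  name      = "allow-http-from-office" # cannot be renamed after creation
+  network   = "example-shared-vpc"     # hypothetical network name
+  direction = "INGRESS"
+
+  allow {
+    protocol = "tcp"
+    ports    = ["80"]
+  }
+
+  source_ranges           = ["203.0.113.0/24"] # documentation range
+  target_service_accounts = ["web@example-svc-project.iam.gserviceaccount.com"]
+}
+```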
+
+**Figure:** Firewall rules assignment options
+
+Each VPC automatically includes default and implied firewall rules:
+
+- **Implied egress rule:** An egress rule whose action is ALLOW, destination
+ is 0.0.0.0/0, and priority is the lowest possible (65535) lets any instance
+ send traffic to any destination, except for traffic
+ [blocked](https://cloud.google.com/vpc/docs/firewalls#blockedtraffic) by
+ GCP. Outbound access may be restricted by creating a higher priority
+ firewall rule.
+- **Implied deny ingress rule:** An ingress rule whose action is DENY, source
+ is 0.0.0.0/0, and priority is the lowest possible (65535) protects all
+ instances by blocking incoming traffic to them. Incoming access may be
+ allowed by a higher priority rule.
+
+The implied rules cannot be removed, but they have the lowest possible
+priorities. Rules you create can override them as long as your rules have higher
+priorities (less than 65535).
+
+Firewall Rules Logging allows you to audit, verify, and analyze the effects of
+your firewall rules. For example, you can determine if a firewall rule designed
+to deny traffic is functioning as intended. Firewall Rules Logging is also
+useful if you need to determine how many connections are affected by a given
+firewall rule.
+
+You enable Firewall Rules Logging individually for each firewall rule whose
+connections you need to log. Firewall Rules Logging is an option for any
+firewall rule, regardless of the action (allow or deny) or direction (ingress or
+egress) of the rule. Firewall Rules Logging is useful if you need to determine
+the effectiveness of a firewall rule and how many connections are affected by a
+given firewall rule. For information about viewing logs, see [Using Firewall
+Rules Logging](https://cloud.google.com/vpc/docs/using-firewall-rules-logging).
+
+When you enable logging for a firewall rule, Google Cloud creates an entry
+called a connection record each time the rule allows or denies traffic. Each
+connection record contains the source and destination IP addresses, the protocol
+and ports, date and time, and a reference to the firewall rule that applied to
+the traffic. You can view these records in Cloud Logging, and you can export
+logs to any destination that Cloud Logging export supports.
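+
+Enabling logging on a rule is a per-rule `log_config` setting, sketched here on
+a hypothetical egress-deny rule (names and priority are illustrative):
+
+```hcl
+# Hypothetical sketch: an explicit egress-deny rule with logging enabled,
+# overriding the implied allow-egress rule.
+resource "google_compute_firewall" "deny_egress_logged" {
+  name      = "deny-all-egress"   # hypothetical
+  network   = "example-shared-vpc"
+  direction = "EGRESS"
+  priority  = 65000               # higher priority than the implied 65535
+
+  deny {
+    protocol = "all"
+  }
+
+  destination_ranges = ["0.0.0.0/0"]
+
+  log_config {
+    metadata = "INCLUDE_ALL_METADATA"
+  }
+}
+```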
+
+In addition to firewall rules per VPC there is also the ability to create
+Hierarchical firewall policies which let you create and enforce a consistent
+firewall policy across the organization. You can assign hierarchical firewall
+policies to the organization as a whole or to individual folders.
+
+Hierarchical firewall policies are containers for firewall rules that can
+explicitly deny or allow connections. In addition, hierarchical firewall policy
+rules can delegate evaluation to lower-level policies or VPC network firewall
+rules if desired. Lower-level rules cannot override a rule from a higher place
+in the resource hierarchy. This lets organization-wide admins manage critical
+firewall rules in one place.
+
+All rules associated with the organization node are evaluated, followed by those
+of the first level of folders, and so on. However with Shared VPC the evaluation
+follows the resource path of the Shared VPC host project, not the service
+project. Hierarchical firewall policy rules can be targeted to specific VPC
+networks and VMs by using target resources. This lets you create exceptions for
+groups of VMs.
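+
+A hierarchical firewall policy can be sketched in Terraform as follows (the
+organization ID, folder ID, and policy names are hypothetical):
+
+```hcl
+# Hypothetical sketch: an org-level hierarchical firewall policy that
+# denies inbound TCP/22 everywhere, associated with a single folder.
+resource "google_compute_firewall_policy" "org_policy" {
+  parent      = "organizations/123456789012" # hypothetical org ID
+  short_name  = "org-baseline"
+  description = "Organization-wide baseline rules"
+}
+
+resource "google_compute_firewall_policy_rule" "deny_ssh" {
+  firewall_policy = google_compute_firewall_policy.org_policy.id
+  priority        = 1000
+  direction       = "INGRESS"
+  action          = "deny"
+
+  match {
+    src_ip_ranges = ["0.0.0.0/0"]
+    layer4_configs {
+      ip_protocol = "tcp"
+      ports       = ["22"]
+    }
+  }
+}
+
+resource "google_compute_firewall_policy_association" "folder" {
+  name              = "org-baseline-assoc"
+  attachment_target = "folders/987654321098" # hypothetical folder ID
+  firewall_policy   = google_compute_firewall_policy.org_policy.id
+}
+```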
+
+**Google Recommendations:**
+
+- Create firewall rules leveraging service accounts as the source or target
+  wherever possible, as this gives application teams more autonomy to scale
+  their resources without requiring additional firewall changes. In addition,
+  service accounts are specific to projects and can only be changed on VMs by
+  stopping and starting them.
+- Limit the use of firewall rules using tags, as they can be invoked by simply
+  adding a network tag to a VM and are not specific to any project.
+- Where more general firewall rules are required, use a specific subnet or a
+  summarized IP CIDR range to reduce the complexity of the rules.
+- To improve security posture, create an egress-deny rule with a higher
+  priority than the implied rules to ensure that both ingress and egress
+  traffic is managed.
+- Define a standard naming convention for firewall rules and make use of
+  description metadata to allow those reviewing rules to better understand the
+  intent or history of the rule.
+- Firewall rules created by GCP service accounts for the purpose of running
+  services inside customers' VPCs, e.g. Google Kubernetes Engine, should only
+  be changed in accordance with the documentation for the service. Any changes
+  outside of this may result in service failure and/or invalidation of SLAs.
+- Enable Firewall Rules Logging to allow the audit, verification, and analysis
+  of the effects of your firewall rules.
+- Leverage the Network Intelligence Firewall Insights service, which provides
+  visibility into firewall usage and detects firewall configuration issues.
+  Related insights and metrics are also integrated into the Google Cloud
+  Console for the Virtual Private Cloud (VPC) firewall.
+- Manage custom firewall rules and configuration centrally, using
+  Infrastructure as Code and Terraform. This gives development teams the
+  ability to manage their rulesets, which are approved as part of a CI/CD
+  process by appropriate parties, with built-in auditability and traceability.
+- Investigate hierarchical firewall policies to centralize general firewall
+  rules across the organization and/or environments. At the time of writing
+  this feature is in preview.
+
+### Data Management
+
+#### Data Loss Prevention (DLP)
+
+To validate and understand where applications in Kubernetes might be logging
+PII, Google recommends using Cloud Data Loss Prevention (DLP) to regularly
+scan logs and generate a report detailing the presence of potentially
+sensitive data.
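
As an illustration of the kind of scan-and-report workflow DLP performs, the
sketch below scans log lines using regex stand-ins for two common DLP
infoTypes (`EMAIL_ADDRESS`, `US_SOCIAL_SECURITY_NUMBER`). This is a local
illustration only, not a substitute for the DLP API, which matches far more
infoTypes with likelihood scoring.

```python
import re

# Regex stand-ins for two DLP infoTypes; illustration only.
PATTERNS = {
    "EMAIL_ADDRESS": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "US_SOCIAL_SECURITY_NUMBER": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_log_lines(lines):
    """Return a report mapping info_type -> list of (line_no, match)."""
    report = {}
    for line_no, line in enumerate(lines, start=1):
        for info_type, pattern in PATTERNS.items():
            for match in pattern.findall(line):
                report.setdefault(info_type, []).append((line_no, match))
    return report

logs = [
    "user login ok for alice@example.com",
    "payment declined, ssn on file 123-45-6789",
    "healthcheck 200 OK",
]
print(scan_log_lines(logs))
# {'EMAIL_ADDRESS': [(1, 'alice@example.com')], 'US_SOCIAL_SECURITY_NUMBER': [(2, '123-45-6789')]}
```

A real deployment would instead send log content to the DLP `inspectContent`
API (or run a DLP job over a logging sink) and export the findings report.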
+
+## 5.8 Vulnerability Management
+
+Vulnerability management is currently a work in progress, as service
+availability is limited in the IL5 enclave. The section below details our
+end-state goal for vulnerability management.
+
+### Security Command Center
+
+Security Command Center’s Premium tier includes [Rapid Vulnerability
+Detection](https://cloud.google.com/security-command-center/docs/concepts-rapid-vulnerability-detection-overview#overview),
+a built-in service that detects critical vulnerabilities across several scan
+targets.
+
+### Supported Scan Targets
+
+- Compute Engine
+- Cloud Load Balancing (External)
+- Google Kubernetes Engine ingress
+- Cloud Run
+- App Engine
+
+### Scans
+
+Rapid Vulnerability Detection runs managed scans that
+[detect](https://cloud.google.com/security-command-center/docs/concepts-rapid-vulnerability-detection-overview#scan_findings_and_remediations):
+
+- Weak credentials
+- Exposed interfaces
+- Incomplete software installations
+- Vulnerable software
+- Exposed administrator user interfaces
+
+When enabled, Security Command Center automatically configures and manages the
+scans; there is no need to provide target URLs or start scans manually.
+Rapid Vulnerability Detection uses [Cloud Asset
+Inventory](https://cloud.google.com/asset-inventory/docs/overview#features) to
+retrieve information about new VMs and applications in projects and runs scans
+once a week to find public endpoints and detect vulnerabilities.
+
+### Enablement
+
+[Rapid Vulnerability
+Detection](https://cloud.google.com/security-command-center/docs/how-to-use-rapid-vulnerability-detection#enabling)
+can be enabled via the Google Cloud console on the Services page.
+
+Scans start automatically within 24 hours of first enabling Rapid
+Vulnerability Detection. After the first scan, Rapid Vulnerability Detection
+runs managed scans weekly.
+
+#### Test
+
+For testing, the open-source [Tsunami Security Scanner
+testbed](https://github.com/google/tsunami-security-scanner-testbed/tree/master/truepositives/secure),
+available on GitHub, can be used to generate findings for known
+vulnerabilities.
+
+#### Review
+
+Findings contain detected vulnerabilities and information about affected
+projects. Vulnerabilities are reported for projects, not specific scan targets
+(endpoints and application software) or VMs contained within projects.
+
+Findings can be viewed in the Security Command Center dashboard or retrieved
+via the [Security Command Center
+API](https://cloud.google.com/security-command-center/docs/how-to-api-list-findings#list_all_findings).
+
+#### Display
+
+Rapid Vulnerability Detection generates [vulnerability
+findings](https://cloud.google.com/security-command-center/docs/concepts-vulnerabilities-findings)
+that are available in Security Command Center. When enabled in Security
+Command Center, integrated services such as VM Manager also generate
+vulnerability findings.
+
+All findings for a port or IP address can be displayed in the Google Cloud
+console under
+[Findings](https://cloud.google.com/security-command-center/docs/how-to-use-rapid-vulnerability-detection#display-findings-for-port-or-ip)
+or retrieved via the [Security Command Center
+API](https://cloud.google.com/security-command-center/docs/how-to-api-list-findings#list_all_findings).
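
When filtering findings programmatically, a filter expression is composed from
Finding attributes such as `state`, `category`, and `resourceName` (where `:`
is the substring operator). The sketch below is a hypothetical helper (not
part of any Google SDK) that builds such an expression; the category name and
IP address shown are illustrative examples.

```python
# Hypothetical helper to compose a Security Command Center findings filter.
# `state`, `category`, and `resourceName` are standard Finding attributes
# in the SCC filter syntax.
def scc_filter(category=None, state="ACTIVE", resource_substring=None):
    clauses = [f'state="{state}"']
    if category:
        clauses.append(f'category="{category}"')
    if resource_substring:
        # `:` performs substring matching in SCC filter expressions.
        clauses.append(f'resourceName:"{resource_substring}"')
    return " AND ".join(clauses)

# Example: active weak-credential findings mentioning a given external IP.
print(scc_filter(category="WEAK_CREDENTIALS", resource_substring="203.0.113.7"))
# state="ACTIVE" AND category="WEAK_CREDENTIALS" AND resourceName:"203.0.113.7"
```

The resulting string can be passed to the `ListFindings` API `filter`
parameter, or to `gcloud scc findings list` via its `--filter` flag.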
diff --git a/fast/stages-aw/0-bootstrap/README.md b/fast/stages-aw/0-bootstrap/README.md
index 0d9275d21..93423fe18 100644
--- a/fast/stages-aw/0-bootstrap/README.md
+++ b/fast/stages-aw/0-bootstrap/README.md
@@ -336,8 +336,6 @@ The remaining configuration is manual, as it regards the repositories themselves
- create a key pair
- create a [deploy key](https://docs.github.com/en/developers/overview/managing-deploy-keys#deploy-keys) in the modules repository with the public key
- create a `CICD_MODULES_KEY` secret with the private key in each of the repositories that need to access modules (for Gitlab, please Base64 encode the private key for masking)
- - for Gitlab
- - TODO
- for Source Repositories
- assign the reader role to the CI/CD service accounts
- create one repository for each stage