When setting a provider's source to the Artifactory registry mirror, I receive this validation error:
The “source” attribute must be in the format “[hostname/][namespace/]name”
How can I bypass the validation?
The Artifactory API URL is:
https://instance-name.company-name.com/artifactory/api/terraform/virtual-repository-name/providers/
Example source I've tried (although I haven't been able to confirm it works properly, due to the validation error):
instance-name.company-name.com/virtual-repository-name/newrelic/newrelic
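For context, here is a sketch of the `required_providers` block I'm trying (the version constraint is a placeholder); note that the attempted source has four slash-separated segments, which is more than the `[hostname/][namespace/]name` format allows:

```hcl
terraform {
  required_providers {
    newrelic = {
      # Fails validation: four segments instead of [hostname/][namespace/]name
      source  = "instance-name.company-name.com/virtual-repository-name/newrelic/newrelic"
      version = "~> 3.0" # placeholder constraint
    }
  }
}
```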
| Argument | Required? | Description |
| -- | -- | -- |
| region | Required | The region for the data center for which your New Relic account is configured. The `NEW_RELIC_REGION` environment variable can also be used. Valid values are `US` or `EU`. |
https://github.com/newrelic/terraform-provider-newrelic/blob/7a3df88c8b33ccce2d2ea5d763f20d6f8e127b40/newrelic/provider.go#L62
Because the source code sets a default of `US`, the argument is not required.
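For example, a provider block that omits region falls back to the `US` default (a minimal sketch; the credential variables are placeholders):

```hcl
provider "newrelic" {
  # region omitted on purpose – the provider defaults it to "US"
  account_id = var.new_relic_account_id # placeholder
  api_key    = var.new_relic_api_key    # placeholder
}
```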
Seeing this on `3.12.0`.
Tried `data`:
│ Error: Invalid data source
│
│ on main.tf line 1, in data "newrelic_notification_destination" "slack":
│ 1: data "newrelic_notification_destination" "slack" {
│
│ The provider newrelic/newrelic does not support data source
│ "newrelic_notification_destination".
│
│ Did you intend to use the managed resource type
│ "newrelic_notification_destination"? If so, declare this using a "resource"
│ block instead of a "data" block.
Bumps json5 from 1.0.1 to 1.0.2.
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language

You can disable automated security fix PRs for this repo from the Security Alerts page.
Bump json5 from 1.0.1 to 1.0.2 (#31)
Bumps json5 from 1.0.1 to 1.0.2.
updated-dependencies:
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Right, it wouldn't be explicit, which is confusing. Maybe a note in the docs is enough.
Agree with @zeffron that this is just an edge case in your validation not taking Slack destinations into account. I like the idea of allowing empty when `type === "SLACK"` and updating the docs to recommend this:
resource "newrelic_notification_destination" "slack" {
  type = "SLACK"
}
Extra credit: if there were a way to automatically prevent destroy, that might be useful?
Are you sure you're properly creating the `terraform.tfstate` files? Without them, there's no way to know the output.
Related: #2179
`remote_state` works too:
remote_state {
  backend = "local"
  config = {
    path = "${get_parent_terragrunt_dir()}/${path_relative_to_include()}/terraform.tfstate"
  }
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite"
  }
}
IMO this (local backends) should be documented.
Note: I linked to this issue in a SO answer.
Related: #2179
For now, here's my workaround:
resource "newrelic_notification_destination" "slack" {
lifecycle {
# attributes are only being set to pass validation (not actually used)
# https://github.com/newrelic/terraform-provider-newrelic/issues/2025
ignore_changes = all
# destination requires manual import & therefore should not be destroyed
prevent_destroy = true
}
type = "SLACK"
name = ""
property {
key = ""
value = ""
}
}
Note my suggestion of `prevent_destroy` - may want to document this too.
Docs: https://developer.hashicorp.com/terraform/language/meta-arguments/lifecycle
Can you clarify the proper way to import the Slack destination? Instructions say:
- Add an empty resource to your terraform file
- Run import command
Are we expected to copy the state values into the resource definition like OP did? My guess is that we leave the block empty and it gets the data from state. The issue is that the current destination validation requires `type`, `name`, and `property` - should those actually not be required when the destination is Slack?
Either way, I think some clarity in the above linked documentation would be helpful.
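For reference, my reading of those two steps as a sketch (the destination ID is a placeholder, and as noted above the near-empty block currently trips the validation):

```hcl
# 1. Add an (almost) empty resource to the Terraform file
resource "newrelic_notification_destination" "slack" {
  type = "SLACK"
}
```

```sh
# 2. Import the existing destination into that address
terraform import newrelic_notification_destination.slack <destination_id>
```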
`remote_state` (simpler) works for me with the absolute path (as suggested):
remote_state {
  backend = "local"
  config = {
    path = "${get_parent_terragrunt_dir()}/${path_relative_to_include()}/terraform.tfstate"
  }
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite"
  }
}
The issue is that the path in the generated `backend.tf` is specific to my machine (e.g. `/Users/JBallin/...`). I suppose I could gitignore `backend.tf` and my teammates could generate their own during init.
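For illustration, the generated file ends up looking roughly like this, which is where the machine-specific absolute path shows up (a sketch):

```hcl
# backend.tf – generated by terragrunt during init
terraform {
  backend "local" {
    path = "/Users/JBallin/.../terraform.tfstate"
  }
}
```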
I think there should be additional documentation about local backends, because I believe this issue shows that it won't work out of the box (I expected that not configuring a backend would just work). I know the guidance is to use remote backends, but I wanted to first see how it worked locally and consider managing it in git as an MVP with the understanding that only one developer deploys at a time.
Another workaround is to use `locals` to store parts of the string as variables, which can also improve readability.
Before
inputs = {
  url = "https://mystorageaccount.blob.core.windows.net/system/Microsoft.Compute/Images/windows-2016-datacenter/packer-osDisk.8c22742f-d22d-4f1a-babd-7712381c413e.vhd"
}
After
locals {
  domain = "https://mystorageaccount.blob.core.windows.net"
  file   = "packer-osDisk.8c22742f-d22d-4f1a-babd-7712381c413e.vhd"
  path   = "system/Microsoft.Compute/Images/windows-2016-datacenter"
}

inputs = {
  url = "${local.domain}/${local.path}/${local.file}"
}
We also don't install `git` in our image, so I would like to be able to pass whatever data is needed via CLI/env vars when using `npx`.
I was thinking that a workaround would be to copy the source files over to the Jenkins workspace (where we have `git`) and then run the CLI from there (where we don't have `node`).
The available options are listed here. While `codacyApiBaseUrl` is supported as an alternative to setting `CODACY_API_BASE_URL`, `codacyReporterVersion` is missing as an alternative to setting `CODACY_REPORTER_VERSION`.
I prefer to be consistent with using the CLI arguments (stylistically), instead of having some of them be controlled via env vars.
Current
export foo=1
report --bar 2
Desired
report --foo 1 --bar 2
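Concretely, with the option names from above (the reporter-version flag is hypothetical since that's exactly the option that's missing, the other flag spelling is assumed from the option name, and all values are placeholders):

```sh
# current: the reporter version can only come from an env var
export CODACY_REPORTER_VERSION=<version>
report --codacyApiBaseUrl <url>

# desired: everything passed as CLI arguments (hypothetical flag)
report --codacyApiBaseUrl <url> --codacyReporterVersion <version>
```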